Research
Despite their name, current Massively Multiplayer Online Games (MMOGs) do not
scale well. They rely on centralised client/server architectures, which impose
a limit on the maximum number of players (avatars) and resources that can
coexist in any given virtual world. One of the main reasons for this limitation
is the common belief that fully decentralised solutions inhibit cheating
prevention. The purpose of the MARS-P2P project is to dispel this belief by
designing and implementing a P2P gaming architecture that monitors the game at
least as efficiently and as securely as a centralised architecture. Our
approach delegates game refereeing to the player nodes. A reputation system
assesses node honesty in order to discard corrupt referees and malicious
players, whilst an independent multi-agent system challenges nodes with fake
game requests so as to accelerate the reputation-building process.
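As a rough illustration of how probe challenges can speed up referee assessment, the sketch below combines feedback from real rulings with agent-issued fake requests whose correct verdict is known in advance. All class names, weights and thresholds here are hypothetical and are not taken from the MARS-P2P implementation.

```python
# Illustrative sketch only: reputation updates combining real refereeing
# outcomes with probe challenges (fake requests whose verdict is known).

class Referee:
    def __init__(self, node_id, score=0.5):
        self.node_id = node_id
        self.score = score          # reputation in [0, 1]

    def update(self, correct, weight=0.05):
        # Exponential moving average; probes may use a larger weight to
        # accelerate convergence of the reputation estimate.
        target = 1.0 if correct else 0.0
        self.score += weight * (target - self.score)

def handle_probe(referee, probe_verdict, expected_verdict):
    # A probe is a fake game request issued by the monitoring agents;
    # its expected outcome is known, so the ruling can be checked directly.
    referee.update(correct=(probe_verdict == expected_verdict), weight=0.2)
    return referee.score

TRUST_THRESHOLD = 0.3   # below this, the referee is discarded (assumed value)

def is_trusted(referee):
    return referee.score >= TRUST_THRESHOLD
```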
I am the leader of this project, which obtained internal funding for the year 2012 from
the Laboratoire d'Informatique de Paris 6 and a 3-year PhD scholarship (2012-2015)
from the French Ministry of Higher Education and Research.
Big Data management and processing are usually handled in a static manner. Static cluster- or grid-based solutions are ill-suited for tackling Dynamic Big Data workflows, where new data is produced continuously. Our overall objective in this project is to design and implement a large-scale distributed framework for the management and processing of Dynamic Big Data. We focus our research on two scientific challenges: placement and processing. One of the most challenging problems is to place the new data coming from a huge workflow. We explore new techniques for mapping huge dynamic flows of data onto a large-scale distributed system; in particular, these techniques ought to promote the locality of distributed computations. With respect to processing, our objective is to propose innovative solutions for processing a continuous flow of big data in a large-scale distributed system. To this effect, we identify properties that are common to distributed programming paradigms, and then integrate these properties into the design of a framework that takes into account the locality of the data flow and ensures reliable convergence of the data processing.
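The placement idea can be illustrated with a minimal, purely hypothetical sketch (the names and hashing scheme are assumptions, not the project's actual algorithm): new items are routed to the node that already hosts data from the same workflow, so that subsequent processing stays local.

```python
# Minimal sketch of locality-aware placement (hypothetical, for illustration):
# items of a workflow stick to the node that already owns that workflow's data,
# falling back to a deterministic hash-based assignment for new workflows.

import hashlib

class LocalityPlacer:
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.owner = {}                 # workflow key -> node

    def _hash_node(self, key):
        digest = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        return self.nodes[digest % len(self.nodes)]

    def place(self, workflow_key, item):
        # Prefer the node already holding this workflow's data (locality),
        # otherwise pick one deterministically by hashing the key.
        node = self.owner.setdefault(workflow_key, self._hash_node(workflow_key))
        return node, item

placer = LocalityPlacer(["node-a", "node-b", "node-c"])
print(placer.place("sensor-42", b"new reading"))
```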
I am the leader of the Inria Associate Team that embodies this project, in the context of a cooperation with the Universidad Tecnica Federico Santa Maria in Valparaiso (Chile) and the Universidad de Santiago de Chile. This project also obtained joint Inria/CONICYT funding for a 3-year PhD scholarship (2013-2016).
The GEMS project proposes a new approach towards filtering and processing the
ever-growing quantity of data published from mobile devices before it even
reaches the Internet. We intend to tackle this issue by syndicating geocentric
data on the fly, as it is published by mobile device owners. By circumscribing
data to the zone where it is published, we believe it is possible to extract
information that is both trustworthy and relevant to a majority of users.
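A minimal sketch of the geocentric syndication idea, under assumptions of our own (the grid-cell size, class and function names are hypothetical, not the GEMS design): published items are bucketed by the zone they originate from and aggregated there, so only zone-level summaries need to travel beyond the local area.

```python
# Illustrative sketch, not the GEMS implementation: group items published by
# mobile devices into coarse geographic cells and aggregate within each cell.

from collections import defaultdict

CELL_SIZE = 0.01   # degrees; roughly a neighbourhood-sized zone (assumption)

def cell_of(lat, lon):
    # Snap a coordinate to its containing grid cell.
    return (round(lat // CELL_SIZE), round(lon // CELL_SIZE))

class GeoSyndicator:
    def __init__(self):
        self.cells = defaultdict(list)

    def publish(self, lat, lon, payload):
        self.cells[cell_of(lat, lon)].append(payload)

    def summarise(self, lat, lon):
        # Aggregate on the fly: only data published inside the same zone
        # contributes to what is forwarded to the wider Internet.
        items = self.cells[cell_of(lat, lon)]
        return {"zone": cell_of(lat, lon), "reports": len(items)}
```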
This project obtained internal funding for the year 2012 from
the Laboratoire d'Informatique de Paris 6.
In peer-to-peer networks, trusted third parties (TTPs) are useful for
certification purposes, for preventing malicious behaviours and for
monitoring processes, among other things. In these environments, traditional
centralised approaches towards TTPs generate bottlenecks and single points
of failure/attack; distributed solutions must address issues of dynamism,
heterogeneity, and scalability. The purpose of this work is to design a
system that builds and maintains a community of the most trustworthy nodes
in a DHT-based peer-to-peer network. The resulting system must be scalable
and decentralised, allowing nodes to build sets of reputable peers efficiently
in order to constitute collaborative TTPs, and it must cope well with
malicious behaviours.
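The following sketch illustrates the general idea of building a collaborative TTP from local reputation scores; the scoring rule, quorum size and names are hypothetical and do not reflect the project's actual protocol.

```python
# Minimal sketch: each node keeps local reputation scores for the peers it has
# interacted with and selects the top-k most reputable ones as a collaborative
# trusted third party.

import heapq

class TrustView:
    def __init__(self, quorum_size=5):
        self.scores = {}            # peer id -> reputation score
        self.quorum_size = quorum_size

    def report(self, peer, honest):
        # Simple additive feedback; a real system would weight and age reports.
        self.scores[peer] = self.scores.get(peer, 0.0) + (1.0 if honest else -2.0)

    def trusted_quorum(self):
        # The k highest-scoring peers form the collaborative TTP.
        return heapq.nlargest(self.quorum_size, self.scores, key=self.scores.get)
```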
This project (2008-2011) was funded by INRIA and CONICYT
(the Chilean national research agency); it was a joint research effort between REGAL
and the University Federico Santa Maria in Valparaiso, Chile.
In the context of this project, I co-supervised a PhD student who defended her
thesis in November 2011.
Distributed applications are very sensitive to host or process
failures. This is all the more true for multi-agent systems, which are
likely to deploy multitudes of agents across a great number of locations.
However, fault tolerance involves costly mechanisms; it is thus
advisable to apply it wisely. The DARX project investigated the dynamic
adaptation of fault tolerance within multi-agent platforms. The aim of
this research was twofold: (a) to provide effective methods for ensuring
fail-proof multi-agent computations, and (b) to develop a framework for
the design of scalable applications, in terms of the number of hosts as
well as the number of processes/agents.
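A toy sketch of what applying fault tolerance "wisely" can look like, with an entirely hypothetical policy (DARX's real strategies and interfaces differ): the replication strategy and the number of replicas are derived from an agent's estimated criticality and the observed failure rate.

```python
# Purely illustrative policy: critical agents receive more replicas, or a more
# expensive replication strategy, than expendable ones.

def replication_policy(criticality, failure_rate):
    """Pick a strategy and replica count from an agent's estimated criticality
    (0..1) and the observed host failure rate (0..1)."""
    if criticality < 0.3:
        return ("none", 1)
    replicas = 2 + int(round(3 * criticality * failure_rate))
    strategy = "active" if criticality > 0.8 else "passive"
    return (strategy, replicas)

# Example: a highly critical agent on flaky hosts gets active replication.
print(replication_policy(criticality=0.9, failure_rate=0.5))   # ('active', 3)
```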
This project (2007-2011) was funded by the French national research agency ANR
(ACI SETIN) and was a joint research effort between INRIA, LIP6 and LIRMM.
I was assistant coordinator of this project and leader of the INRIA team.
The primary objective of the DDEFCON project was to design a middleware
for the dependable deployment of massively parallel, cooperative
applications over open environments such as the Internet. In practical
terms, we developed a middleware prototype which makes it possible to run
massively parallel computations in a fully decentralised manner, and
hence takes the obstacles mentioned above into account. Our
work comprised three interdependent research efforts: (a) the study of the
replication of cooperative software tasks within a P2P overlay, (b) the
development of a secure runtime environment over a heterogeneous
network, and (c) the design of a language for the dependable deployment of
code.
I was the leader of this project, which obtained internal funding during the year 2008 from
the Laboratoire d'Informatique de Paris 6.
The MapReduce (MR) programming model/architecture makes it possible to process
huge data sets efficiently. The most popular platform, Hadoop, adopts a master/slave
architecture that fits very well on top of computing grids and clouds.
While there are many solutions that introduce crash recovery for Hadoop,
to the best of our knowledge the issue of malicious nodes remains to be
addressed. FTH-GRID injects a simple task replication scheme, along with
a result comparison mechanism, into Hadoop. Voting out inconsistent
results makes it possible to detect corrupt outputs as well as potentially
malicious nodes.
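The voting principle can be sketched as follows (plain Python rather than Hadoop or FTH-GRID code; function and node names are illustrative): each task runs on several nodes, the outputs are compared, and nodes that disagree with the majority are flagged as potentially malicious.

```python
# Minimal sketch of replicated-task voting.

from collections import Counter

def vote(results):
    """results: mapping node_id -> task output. Returns (accepted_output,
    suspicious_nodes); accepted_output is None if no majority exists."""
    counts = Counter(results.values())
    output, hits = counts.most_common(1)[0]
    if hits <= len(results) // 2:
        return None, set(results)            # no majority: rerun the task
    suspects = {node for node, out in results.items() if out != output}
    return output, suspects

# Three replicas of the same map task: node-c returned a corrupt output.
print(vote({"node-a": "42", "node-b": "42", "node-c": "17"}))
# -> ('42', {'node-c'})
```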
This project (2008-2010) was a joint research
effort between the INRIA/LIP6 REGAL team and LASIGE, and obtained an allocation
from the EGIDE European fund. I was the leader of the REGAL team on this
project.