![](PDF/Hiding.jpg)
slides
all slides in a zip
Programme
- 9h30: welcome
- 10h-10h45: Kostas Chatzikokolakis, Measuring Information Leakage using Generalized Gain Functions
This talk introduces g-leakage, a rich generalization of the
min-entropy model of quantitative information flow. In g-leakage, the
benefit that an adversary derives from a certain guess about a secret
is specified using a gain function g. Gain functions allow a wide
variety of operational scenarios to be modeled, including those where
the adversary benefits from guessing a value close to the secret,
guessing a part of the secret, guessing a property of the secret, or
guessing the secret within some number of tries. I will discuss
important properties of g-leakage, including bounds between
min-capacity, g-capacity, and Shannon capacity. Moreover, I will
discuss a connection between a strong leakage ordering on two
channels, C1 and C2, and the possibility of factoring C1 into C2C3,
for some C3. Based on this connection, I will propose a generalization
of the Lattice of Information from deterministic to probabilistic
channels.
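As a concrete illustration of these definitions (my own sketch, not code from the talk), prior and posterior g-vulnerability and multiplicative g-leakage can be computed directly for finite priors, channels, and gain functions; the dictionary encodings below are choices of this example, not notation from the paper.

```python
from math import log2

def g_vulnerability(prior, gain):
    # Prior g-vulnerability V_g(pi) = max_w sum_x pi[x] * g(w, x):
    # the best expected gain the adversary can achieve in one guess.
    return max(sum(prior[x] * gx[x] for x in prior)
               for gx in gain.values())

def posterior_g_vulnerability(prior, channel, gain):
    # V_g(pi, C) = sum_y max_w sum_x pi[x] * C[x][y] * g(w, x):
    # the adversary guesses after seeing the channel output y.
    outputs = {y for row in channel.values() for y in row}
    return sum(max(sum(prior[x] * channel[x].get(y, 0.0) * gx[x]
                       for x in prior)
                   for gx in gain.values())
               for y in outputs)

def g_leakage(prior, channel, gain):
    # Multiplicative g-leakage, in bits.
    return log2(posterior_g_vulnerability(prior, channel, gain)
                / g_vulnerability(prior, gain))

# With the identity gain function, g-leakage reduces to min-entropy
# leakage: a perfect channel on two equally likely secrets leaks 1 bit.
prior = {0: 0.5, 1: 0.5}
channel = {0: {"a": 1.0}, 1: {"b": 1.0}}
identity_gain = {0: {0: 1.0, 1: 0.0}, 1: {0: 0.0, 1: 1.0}}
```

Other operational scenarios from the abstract (guessing a part or property of the secret, or within k tries) only change the `gain` table, not the computation.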
- 10h45-11h45: Annabelle McIver, Abstract Entropies for Abstract Channels
Communication channels are measured for
leakage by comparing the entropy of the prior distribution (on inputs)
with the conditional entropy on the posterior distributions (induced
by observed outputs). A wide variety of entropies can be used for
this, and recently the "gain function" approach [1] has been shown to
be a generalisation of many of them. In that context we have recently
generalised channels to "abstract channels" that retain only the
essential information needed to evaluate those entropies [2].
In this talk I will go further, showing how gain functions themselves
are special cases of something slightly more general [3]: they are
(duals of) concave, continuous functions on spaces of
distributions. Defining continuity for a function on distributions
requires (e.g.) a metric between distributions, and it is no surprise
that the Kantorovich metric works well. In particular, the K-distance
between distributions turns out to be related to the maximum leakage
over all abstract entropies (thus bounding the maximum leakage wrt
gain functions, since they are a special case).
I will outline the benefits this more general viewpoint brings, its
relation to the (Giry) monad approach to computation, and the
opportunities this offers for further research.
[1] Measuring Information Leakage using Generalized Gain Functions
(M. Alvim, K. Chatzikokolakis, C. Palamidessi, G. Smith), CSF 2012
[2] Abstract channels and their robust information-leakage ordering
(A. McIver, C. Morgan, G. Smith, B. Espinoza, L. Meinicke), POST 2014
(under review)
[3] A Kantorovich-monadic powerdomain for information hiding, with
probability and nondeterminism (A. McIver, L. Meinicke, C. Morgan),
LICS 2012
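For intuition about the Kantorovich metric mentioned in the abstract, here is a minimal sketch of my own (not from the talk, whose setting is far more general): in one dimension the Kantorovich (Wasserstein-1) distance between two finite distributions reduces to the classical integral of the absolute difference of their CDFs.

```python
def kantorovich_1d(mu, nu):
    # Kantorovich (Wasserstein-1) distance between two finite
    # distributions on points of the real line, using the identity
    # W1(mu, nu) = integral over t of |F_mu(t) - F_nu(t)| dt.
    points = sorted(set(mu) | set(nu))
    distance, cdf_gap = 0.0, 0.0
    for a, b in zip(points, points[1:]):
        cdf_gap += mu.get(a, 0.0) - nu.get(a, 0.0)
        distance += abs(cdf_gap) * (b - a)
    return distance

# Moving mass 0.5 from 0 to 1 and mass 0.5 from 2 to 1 costs 0.5 + 0.5.
d = kantorovich_1d({0: 0.5, 2: 0.5}, {1: 1.0})
```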
- 11h45-12h30: Nicolas Bordenabe, Location privacy protection through
an optimal and differentially private obfuscation mechanism
The growing popularity of location-based systems, allowing
unknown/untrusted servers to easily collect huge amounts of
information regarding users' location, has recently started raising
serious privacy concerns. In this work, we study a formal notion of
location privacy that protects the user's exact location, while
allowing approximate information - typically needed to obtain a
certain desired service - to be released. This notion is derived from
a generalized version of the well-known concept of differential
privacy, and therefore has the property of being independent from the
adversary's prior knowledge. Moreover, we show a method to obtain an
obfuscation mechanism that satisfies our location privacy definition
while achieving optimal utility. However, since this method is in
general computationally demanding, we also present an alternative and
more efficient approach to obtain a mechanism with the same privacy
guarantees and approximately optimal utility. Finally, we perform a
detailed evaluation using traces of real users from the GeoLife
dataset. We evaluate the privacy guarantees of our mechanism in
comparison with other state-of-the-art approaches, and the effects of
the approximation method in terms of utility and efficiency.
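The generalized differential-privacy notion alluded to above is geo-indistinguishability; as a sketch of mine (not the authors' code), its defining condition can be checked directly for a finite mechanism, where `mechanism` maps each location to a distribution over reported points:

```python
from math import dist, exp

def is_geo_indistinguishable(mechanism, epsilon, locations):
    # epsilon-geo-indistinguishability for a finite mechanism K:
    # for all locations x, x' and every reported point y,
    #   K(x)(y) <= exp(epsilon * d(x, x')) * K(x')(y),
    # which makes the guarantee independent of the adversary's prior.
    for x1 in locations:
        for x2 in locations:
            bound = exp(epsilon * dist(x1, x2))
            for y, p in mechanism[x1].items():
                if p > bound * mechanism[x2].get(y, 0.0) + 1e-12:
                    return False
    return True

# A mechanism that ignores the location satisfies the condition for any
# epsilon; one that reports the exact location violates it.
uniform = {(0, 0): {"y1": 0.5, "y2": 0.5}, (1, 0): {"y1": 0.5, "y2": 0.5}}
leaky = {(0, 0): {"y1": 1.0}, (1, 0): {"y2": 1.0}}
```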
- 12h30-14h: lunch
- 14h-14h45: Stefan Schwoon, Optimal Constructions for Active Diagnosis
This talk presents joint work with Stefan Haar, Serge Haddad, and
Tarek Melliti, to be presented at FSTTCS 2013. The task of diagnosis
consists in detecting, without ambiguity, the occurrence of faults in a
partially observed system. Active diagnosis means to control the
system in such a way that unambiguous detection of faults is always
possible (within a bounded delay). Solutions have already been
proposed for the active diagnosis problem, but their complexity
remains to be improved. We propose an improved method for active
diagnosis based on automata theory and prove that it is optimal with
respect to several criteria. We also construct a memory-optimal
diagnoser whose fault-detection delay is at most twice the minimal
delay, whereas the memory required for a diagnoser with optimal delay
may be significantly larger.
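To make the diagnosis setting concrete, here is a small sketch of my own (far simpler than the automata-theoretic constructions of the paper) of one step of a classical diagnoser: it tracks a belief of (state, fault-seen) pairs and updates it on each observable event.

```python
def diagnoser_step(belief, event, trans, unobservable, fault):
    # One step of a (much simplified) diagnoser.  A belief is a set of
    # (state, fault_seen) pairs; trans maps (state, event) to a list of
    # successor states.  Close the belief under unobservable moves,
    # fire the observable event, then close again.
    def uo_closure(b):
        frontier, closed = list(b), set(b)
        while frontier:
            s, f = frontier.pop()
            for (src, e), dsts in trans.items():
                if src == s and e in unobservable:
                    for d in dsts:
                        nxt = (d, f or e == fault)
                        if nxt not in closed:
                            closed.add(nxt)
                            frontier.append(nxt)
        return closed
    fired = {(d, f) for (s, f) in uo_closure(belief)
             for d in trans.get((s, event), [])}
    return uo_closure(fired)

def verdict(belief):
    # "F" if every explanation of the observations contains the fault,
    # "N" if none does, otherwise the diagnoser is still ambiguous.
    tags = {f for _, f in belief}
    return "F" if tags == {True} else "N" if tags == {False} else "ambiguous"

# Fault 'f' and event 'u' are unobservable: after observing "a" the
# diagnoser is still ambiguous, after "a b" the fault is certain.
trans = {(0, 'f'): [1], (0, 'u'): [2], (1, 'a'): [3], (2, 'a'): [4], (3, 'b'): [5]}
after_a = diagnoser_step({(0, False)}, 'a', trans, {'f', 'u'}, 'f')
after_ab = diagnoser_step(after_a, 'b', trans, {'f', 'u'}, 'f')
```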
- 14h45-15h30: Serge Haddad, Active diagnosis for probabilistic systems
Joint work with Nathalie Bertrand, Eric Fabre, Stefan Haar and Loïc Hélouët.
The diagnosis problem amounts to deciding whether some specific
"fault" event occurred or not in a system, given the observations
collected on a run of this system. This system is then diagnosable
if the fault can always be detected, and the active diagnosis
problem consists in controlling the system in order to ensure its
diagnosability. We consider here a stochastic framework for this
problem: once a control is selected, the system becomes a stochastic
process. In this setting, the active diagnosis problem consists in
deciding whether there exists some observation-based strategy that
makes the system diagnosable with probability one. We prove that
this problem is EXPTIME-complete, and that the active diagnosis
strategies are belief-based. The safe active diagnosis problem is
similar, but aims at enforcing diagnosability while preserving a
positive probability to non faulty runs, i.e. without enforcing the
occurrence of a fault. We prove that this problem requires non
belief-based strategies, and that it is undecidable. However, it
belongs to NEXPTIME when restricted to belief-based strategies. Our
work also refines the decidability/undecidability frontier for
verification problems on partially observed Markov decision
processes.
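Since the active diagnosis strategies are belief-based, a one-function sketch of the Bayesian belief update in a partially observed stochastic system may help; the transition encoding here is an assumption of this illustration, and in the qualitative results of the paper beliefs are supports (sets of states) rather than full distributions.

```python
def belief_update(belief, obs, trans):
    # Bayesian belief update for a partially observed stochastic system.
    # trans[s] is a list of (next_state, observation, probability)
    # triples.  Returns the posterior over states after seeing obs, or
    # None if the observation has probability zero under the belief.
    post = {}
    for s, p in belief.items():
        for s2, o, q in trans.get(s, []):
            if o == obs:
                post[s2] = post.get(s2, 0.0) + p * q
    total = sum(post.values())
    return {s: w / total for s, w in post.items()} if total else None

# From state 0, observing 'a' rules out the transition labelled 'b'
# and renormalises the two 'a'-successors.
post = belief_update({0: 1.0}, 'a',
                     {0: [(1, 'a', 0.5), (2, 'a', 0.25), (0, 'b', 0.25)]})
```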
- 15h30-16h: coffee break
- 16h-16h45: John Mullins, Opacity with Orwellian observers and Intransitive Non-interference
Opacity is a general language-theoretic scheme which may be
instantiated to several security properties of a system. A behavior of
a system is opaque if a passive attacker can never deduce its
occurrence from the system observation. Instead of considering the
case of static observability where the set of observable events is
fixed off-line or dynamic observability where the set of observable
events changes over time depending on the history of the trace, we
consider Orwellian partial observability where unobservable events
are not revealed unless some downgrading event occurs in the future
of the trace. We show how to check that a regular secret is opaque for
a regular language L w.r.t. an Orwellian projection, whereas the
problem has been proved undecidable for regular languages w.r.t.
Orwellian observation functions. We finally illustrate the relevance of our results
by proving the equivalence between the opacity property of regular
secrets w.r.t. Orwellian projection and the intransitive
non-interference property.
- 16h45-17h30: Hervé Marchand, Supervisory Control for Opacity
Given a labelled transition system partially observed by an
attacker, and a regular predicate over the runs of the system,
enforcing opacity of the secret means computing a supervisory
controller such that an attacker who observes a run of the controlled
system cannot ascertain that the trace of this run belongs to the
secret. During this talk we will present the conditions under which
such a controller can actually be computed, and extend these results to
the case where the controller has to enforce the opacity of a secret
for a family of (possibly infinite) systems specified by a Modal
Transition System.
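As a toy illustration of supervisory control for opacity (my own naive sketch over finite run sets, not the construction presented in the talk), a supervisor may disable controllable events in order to remove runs whose observations reveal the secret:

```python
def enforce_opacity(runs, secret, project, controllable):
    # Naive supervisor over a finite set of runs: while some secret
    # run's observation is not covered by any non-secret run, disable
    # it (remove the run), provided its last event is controllable.
    # Returns the controlled run set, or None if control cannot help.
    def obs(run):
        return tuple(o for o in (project(e) for e in run) if o is not None)
    runs = set(runs)
    while True:
        covered = {obs(r) for r in runs if not secret(r)}
        revealing = [r for r in runs if secret(r) and obs(r) not in covered]
        if not revealing:
            return runs
        removable = [r for r in revealing if r and r[-1] in controllable]
        if not removable:
            return None
        runs -= set(removable)

# The run ('h', 'c') would reveal the secret event 'h'; disabling the
# controllable event 'c' leaves an opaque controlled system.
controlled = enforce_opacity({('h', 'c'), ('l',)}, lambda r: 'h' in r,
                             lambda e: e, {'c'})
```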