TCSA Day 2011
We are happy to invite you to the third annual
Theoretical Computer Science Amsterdam (TCSA) Day
which will take place on
The TCSA Day is intended to be an annual event, taking place in the Fall, to alternate with
the national NVTI Theory Day that takes place in Spring.
The event is organized jointly by CWI, UvA, and VU, and its aim is to foster cooperation among the various TCS researchers in and around Amsterdam.
Its Steering Committee consists of Jan Bergstra (UvA), Harry Buhrman (CWI, UvA), and Wan Fokkink (VU).
The Organizing Committee consists of
Alban Ponse (UvA),
Femke van Raamsdonk (VU),
Leen Torenvliet (UvA),
and Ronald de Wolf (CWI, UvA).
The 2009 edition took place at the VU, and the 2010 edition took place at CWI.
The programme consists of five talks by researchers from CWI, UvA, and VU.
There is no need to register. Tea and coffee will be provided. Lunch is not organized, but there is a cafeteria in the UvA building and another at the CWI building across the street.
Schedule
- 9:30-10:15 Krzysztof Apt (CWI, UvA): Strategic games: social optima and Nash equilibria
- 10:15-10:45 coffee/tea break
- 10:45-11:30 Ela Krepska (VU): BioCheck: Proving termination for biological systems
- 11:30-12:15 Alban Ponse (UvA): Short-Circuit Logic
- 12:15-13:15 lunch (not organized)
- 13:15-14:00 Jurgen Vinju (CWI, UvA): On the Theory of Controlled Experiments in Software Engineering
- 14:00-14:45 Alexandru Baltag (UvA): Learning by Iterated Belief Revision
Since the programme ends at 14:45, there is plenty of time for those who also want to attend Yde Venema's inaugural lecture, which starts at 16:00 in the Aula of the UvA.
Abstracts
- 9:30-10:15 Krzysztof Apt (CWI, UvA): Strategic games: social optima and Nash equilibria
Abstract:
One of the main issues in the area of strategic games is the tension
between the notions of a social optimum and a Nash equilibrium. We
provide a very brief introduction to strategic games focusing on this matter.
This will lead to a discussion of the concepts of price of anarchy and
price of stability and of the recent notion of the selfishness level
proposed by Apt and Schaefer.
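As a quick, invented illustration of this tension (not part of the talk), the following Python sketch enumerates the pure Nash equilibria and the social optimum of a small Prisoner's-Dilemma-style game and computes the resulting price of anarchy and price of stability; the payoff values are made up for the example.

    from itertools import product

    # Payoffs of a two-player game (a Prisoner's Dilemma variant):
    # payoffs[(s1, s2)] = (payoff of player 1, payoff of player 2),
    # with strategy 0 = cooperate and 1 = defect.
    payoffs = {
        (0, 0): (3, 3),
        (0, 1): (0, 4),
        (1, 0): (4, 0),
        (1, 1): (1, 1),
    }
    strategies = (0, 1)

    def is_nash(profile):
        # A profile is a pure Nash equilibrium if no player can gain
        # by unilaterally switching to another strategy.
        for player in (0, 1):
            for alt in strategies:
                deviation = list(profile)
                deviation[player] = alt
                if payoffs[tuple(deviation)][player] > payoffs[profile][player]:
                    return False
        return True

    def welfare(profile):
        return sum(payoffs[profile])

    profiles = list(product(strategies, repeat=2))
    nash = [p for p in profiles if is_nash(p)]
    optimum = max(profiles, key=welfare)

    # Price of anarchy: optimal welfare over the welfare of the worst Nash
    # equilibrium; price of stability uses the best equilibrium instead.
    poa = welfare(optimum) / min(welfare(p) for p in nash)
    pos = welfare(optimum) / max(welfare(p) for p in nash)

    print(nash, optimum, poa, pos)   # [(1, 1)] (0, 0) 3.0 3.0

Here mutual defection is the only Nash equilibrium while mutual cooperation is the social optimum, which is exactly the kind of gap that the price of anarchy and price of stability quantify.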
- 10:45-11:30 Ela Krepska (VU): BioCheck: Proving termination for biological systems
Abstract:
We describe an efficient procedure for proving stabilization
of biological systems modeled as qualitative networks or genetic
regulatory networks. For scalability, our procedure uses modular
proof techniques, where state-space exploration is applied only
locally to small pieces of the system rather than the entire
system as a whole. Our procedure exploits the observation that,
in practice, the form of modular proofs can be restricted to a very
limited set. For completeness, our technique falls back on a
non-compositional counterexample search. Using our new procedure,
we have solved a number of challenging published examples,
including: a 3-D model of the mammalian epidermis; a model of
metabolic networks operating in type-2 diabetes; a model of fate
determination of vulval precursor cells in the C. elegans worm;
and a model of pair-rule regulation during segmentation in the
Drosophila embryo. Our results show many orders of magnitude
speedup in cases where previous stabilization proving techniques
were known to succeed, and new results in cases where tools had
previously failed.
During the talk I will show a demo of the prototype BioCheck tool.
Paper reference:
B. Cook, J. Fisher, E. Krepska, and N. Piterman. Proving stabilization of
biological systems. In R. Jhala and D. Schmidt (eds.), Proc. 12th International
Conference on Verification, Model Checking, and Abstract Interpretation
(VMCAI'11), LNCS vol. 6538, pp. 134-149, 2011.
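To make the notion of stabilization concrete, here is a toy, brute-force Python sketch (invented for this page; it is not the BioCheck procedure, whose modular proofs avoid exactly this kind of global state-space exploration): a tiny qualitative network stabilizes if every execution reaches one and the same fixpoint.

    from itertools import product

    # A made-up qualitative network over three variables with levels 0..2,
    # updated synchronously: a is driven towards its maximum level, b is
    # inhibited by a, and c simply follows b.
    LEVELS = (0, 1, 2)

    def step(state):
        a, b, c = state
        return (min(a + 1, 2), 2 - a, b)

    def check_stabilization():
        # The network stabilizes iff it has a unique fixpoint and every
        # initial state eventually reaches it.
        fixpoints = {s for s in product(LEVELS, repeat=3) if step(s) == s}
        if len(fixpoints) != 1:
            return None
        (target,) = fixpoints
        for init in product(LEVELS, repeat=3):
            state, seen = init, set()
            while state != target:
                if state in seen:        # a cycle that avoids the fixpoint
                    return None
                seen.add(state)
                state = step(state)
        return target

    print(check_stabilization())         # (2, 0, 0): the network stabilizes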
- 11:30-12:15 Alban Ponse (UvA): Short-Circuit Logic
Abstract:
Short-circuit evaluation denotes the semantics of
propositional connectives in which the second argument is only
evaluated if the first argument does not suffice to determine the
value of the expression. In programming, short-circuit evaluation
makes it possible to model side effects, as for example in the
conditional template "if (x:=x+1 && x==2) then ...", where evaluation
of the first conjunct has a side effect on the evaluation of the
second conjunct. Short-circuit evaluation characterizes propositional
logic (PL) if no side effects can occur.
We introduce short-circuit logic (SCL) as a generic name for variants
of PL defined by short-circuit evaluation. We use left-sequential
conjunction as the primitive connective that prescribes short-circuit
evaluation and define left-sequential disjunction by a sequential
version of De Morgan's laws. We discuss different SCLs that stepwise
identify more propositional statements: in FSCL (free SCL), the least
identifying SCL we consider, evaluation can be defined by functions
on sequences of atoms (atomic propositions) and an evaluation can be
such that different occurrences of the same atom in a propositional
statement yield different evaluation values. We argue that FSCL is
the logic that applies to conditions as used in common imperative
programming languages, and show that the left-sequential versions of
many laws of PL do not hold in FSCL (such as commutativity,
idempotence, distributivity and absorption). We discuss various
characterizations of FSCL and of more identifying SCLs. For more
information see our paper at arXiv:1010.3674.
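As a concrete rendering of the conditional template above (a sketch using Python's left-sequential, short-circuiting "and" in place of &&; it is not taken from the paper), the following snippet shows the side effect at work and how already commutativity fails:

    # Python rendering of "if (x:=x+1 && x==2) then ...":
    # the first conjunct increments x as a side effect.
    x = 1
    if (x := x + 1) and x == 2:
        print("branch taken, x =", x)      # branch taken, x = 2

    # With the conjuncts swapped, the test x == 2 is evaluated before the
    # increment, so from x = 1 the branch is not taken and x is unchanged.
    x = 1
    if x == 2 and (x := x + 1):
        print("branch taken")
    else:
        print("branch not taken, x =", x)  # branch not taken, x = 1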
- 13:15-14:00 Jurgen Vinju (CWI, UvA): On the Theory of Controlled Experiments in Software Engineering
Abstract:
We have recently introduced an empirical research method in
Software Engineering. Our goal is to relate software quality
attributes to software design choices, for example to be able to
precisely explain trade-offs between run-time performance and
source-code maintainability in real software. The key to the method is
the construction and verification of semantics preserving
source-to-source program transformations. Each such transformation
changes existing software systems, modifying many instances of the
use of one design choice to an alternative design choice. By isolating
a design choice in this manner we can study its effect among the
plethora of other factors that influence software quality.
In this talk I will motivate our research questions and the research method
by reporting on our first study that applies it. The talk opens a discussion
on the interplay between theoretical and empirical methods in Computer
Science. There are a number of questions that need resolving.
- 14:00-14:45 Alexandru Baltag (UvA): Learning by Iterated Belief Revision
Abstract:
We consider a generalized version of the central problem of Computational
Learning Theory: given a set S of possible states of a system (the system is
assumed to be in exactly one of these states, called the "actual state"),
together with a family O of state properties (thought of as those properties
that are potentially observable), an agent observing an infinite incoming
stream of successive inputs (consisting of observable properties of
the actual state) has the task of learning which of the possible states is
the actual one. The "agent" (who can simply be identified here
with her learning method) produces successive hypotheses ("beliefs"), and
is said to learn the actual state iff she "identifies it in the limit":
i.e. her beliefs stabilize on the correct state after some finite (but
unbounded) number of observations. If this is guaranteed to happen no matter
which of the states in S is the actual one, then the system (S,O) is said to
be learnable by this agent (or learning method). A system is said simply to
be ``learnable" if it is learnable by some agent. An agent is said to be
universal if it is ``as good as any other agent": i.e. every learnable
system is learnable by this particular learning method.
First, we use a known characterization of learnability to show that this
concept is equivalent to a topological "Separation" property of the set of
possible worlds (whose strength lies in between the Separation conditions
T_0 and T_1). Second, we characterize learnability in Game-Theoretic terms.
Third, we consider learning methods that are computationally and logically
natural, namely the ones generated by standard (iterated) Belief-Revision
methods: the qualitative methods that are common in the AGM tradition of
Belief Revision Theory (such as conditioning, lexicographic revision,
minimal revision and other methods for updating the standard qualitative
models for belief-revision, usually given either in terms of a plausibility
preorder or in terms of a family of nested spheres), as well as the most
common probabilistic revision methods (Bayesian conditioning and its various
extensions to generalized settings such as conditional probabilistic
spaces). We consider the question of whether any of these widely
used belief-revision procedures is universal, as a learning method. The good
news is that there are well-known belief revision methods that are
universal: in particular, conditioning and lexicographic revision are
universal. These methods have the great advantage that they are
``history-free" (the successive beliefs depending at each stage only on the
beliefs at the previous stage and on the new input) and ``weakly
conservative" (the beliefs do not change if the new information was already
believed). The bad news is that, in order to achieve this universality, we
have to give up all the standard (probabilistic or qualitative) settings
for belief-revision, considering instead appropriate generalizations of
these settings. Moreover, conservatism has its limits: none of the even
``more conservative" revision methods, such as the so-called ``minimal
revision" (prefered by BT researchers because it minimizes the change of the
plausibility order) is universal. The bad news become worse if one insists
on requiring that the learning method is actually ``computable": it follows
from some learning-theoretic results of Dana Angluin that no computable
method which is consistent with the AGM postulates can be universal (among
the computable methods).
These results reinforce and extend the conclusions reached in previous work
by Kelly (and also in work by Baltag and Smets): there exists an inherent
tension between the AGM requirement of minimizing belief change and the
drive to maximize learning power. The tension cannot be relaxed by moving to
probabilistic settings. Nevertheless, to finish on a more optimistic note, I
sketch a natural way to modify the above belief-revision methods and obtain
a (non-conservative, but still history-free) universal computable learning
method. I also study the case when the new inputs may include errors, and
show that, depending on the fairness assumptions on the errors, one can
either still learn the truth by lexicographic revision (but not by simple
conditioning) or else learning is achieved by a combination of voting and
individual upgrades by the members of a growing population of agents.
Joint work with N. Gierasimczuk and S. Smets.
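As a toy illustration of identification in the limit by the simplest of these methods, conditioning (an invented three-state example, not from the talk), the Python sketch below intersects the learner's current belief with each observed property; the beliefs stabilize on the actual state:

    STATES = {"s1", "s2", "s3"}

    # Observable properties, each given as the set of states satisfying it.
    OBSERVABLES = {
        "p": {"s1", "s2"},
        "q": {"s2", "s3"},
        "r": {"s2"},
    }

    def conditioning(stream):
        # History-free learner: the next belief depends only on the previous
        # belief and on the new observation.
        belief = set(STATES)
        for obs in stream:
            belief &= OBSERVABLES[obs]
            yield set(belief)

    # A stream of true observations about the actual state s2; here a finite
    # prefix of the infinite stream already suffices.
    for belief in conditioning(["p", "q", "p", "r"]):
        print(belief)
    # {'s1', 's2'}, {'s2'}, {'s2'}, {'s2'}: the beliefs stabilize on s2,
    # so the actual state is identified in the limit.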
This page is maintained by Ronald de Wolf. Last update: Oct 24, 2011