I present new results and further developments of my work on modeling communication and learning in distributed systems.
Epistemic programs are programs for jointly updating all the agents' states of knowledge (or "belief") about the current system. They can be intuitively understood as computing the effect of exchanges of information ("communication") between agents. Such programs are recursively built from basic epistemic actions (e.g. public broadcasting, or various more "secret" forms of communication), using standard program constructors. In some contexts (such as security analysis), it is even natural to consider programs that induce false "knowledge", i.e. wrong beliefs, in some agents.
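For illustration, one possible shape for the syntax of such programs is sketched below; the particular choice of basic actions and of regular program constructors here is only an assumption made for the sake of the example, not the exact syntax presented in the paper:

\[
\pi \;::=\; \alpha \;\mid\; \mathsf{skip} \;\mid\; \pi_1 \,;\, \pi_2 \;\mid\; \pi_1 \cup \pi_2 \;\mid\; \pi^{*}
\]

where \(\alpha\) ranges over basic epistemic actions, for instance a public announcement \(!\varphi\) of a sentence \(\varphi\) or a "private" announcement of \(\varphi\) to a subgroup of the agents, and \(;\), \(\cup\), \({}^{*}\) are the standard constructors of sequential composition, non-deterministic choice and iteration.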
I give examples, present a general syntax and semantics for such programs, define non-standard (epistemic) program constructors (such as "learning a given program"), and give a general algorithm for updating information. I introduce a (complete and decidable) propositional Hoare logic for epistemic programs, which can be used to reason about informational changes, and in particular to prove the partial correctness of communication strategies with respect to given epistemic goals (and given epistemic preconditions). I apply this logic to some well-known epistemic puzzles, as well as to some attack strategies on security protocols. Finally, I present an extension of the Hoare logic to a much more expressive dynamic logic of epistemic programs, which turns out to be undecidable, although still useful in some contexts.
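To give a flavor of the partial-correctness assertions involved, here is an illustrative Hoare triple, stated in assumed notation (not necessarily that of the paper) in which \(p\) is an atomic (non-epistemic) fact, \(!p\) is the program publicly announcing \(p\), and \(C_A\, p\) says that \(p\) is common knowledge among the group \(A\) of all agents:

\[
\{\, p \,\} \;\; !p \;\; \{\, C_A\, p \,\}
\]

that is, if \(p\) holds beforehand (the epistemic precondition), then after a truthful public announcement of \(p\) the fact \(p\) becomes common knowledge (the epistemic goal).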