SAVCBS, FroCoS, and FTP: A trip report
Post by Radu Grigore:
I spent the last couple of weeks attending three workshops: SAVCBS, FroCoS, and FTP. The keyword for SAVCBS is components; many people on the program committee, however, are formal methods people. Both FroCoS and FTP are about theorem proving. FroCoS focuses on integration: many articles were about how to put together interactive and automatic theorem provers. FTP is less popular and may even disappear. As I understand it, the papers have historically been very theory oriented, and that is why more applied people are reluctant to submit. Although the program committee is made up of theory people, they seem to be open to applications. If you are working on a theorem prover, or if you are using one, have a look at future editions of this workshop.
The following is a selection of my notes. It includes details only for presentations that were either particularly clear or related to my work.
1 SAVCBS
Short papers first. These have really preliminary results. In fact, some can be viewed as position papers.
Specification of trustworthy real-time reactive systems. This is a paper about design methodology. It seems to be similar to what Vali and Oana are doing, but it is not my cup of tea. The basic idea is to have a formal specification of the requirements, generate automata from them, and model check them. It sounds a bit like magic.
Components, objects, and contracts. The goal here is to design a (formal) language able to express the kind of things that appear in legal contracts. The authors would also like to treat legal contracts like components, but it is unclear why and how. The language and its semantics would be quite cool, in my opinion. It could be used, for example, to teach computer scientists how to read all that legal gibberish.
A concept for dynamic wiring of components. The idea is to introduce some kind of behavioural subtyping. This implies a specification language and tools to check for subtyping. The goal is to statically determine whether it is safe to replace one component with another. I pointed out that in previous work on dynamic upgrades, state was a major problem, and it seems to be completely neglected here. (Remember that this is preliminary work.)
Game-based safety checking with Mage. The presentation was very thin (aka `high-level') and full of cute pictures. (People who know me can infer from this that I could barely stay in the room.)
The next papers were supposed to present results that have some technical meat.
Effective verification of systems with a dynamic number of components. First of all, the title is misleading. The paper is not about plugging components in and out of a working system. Instead, it introduces a temporal logic that can be used to reason about classes of (transition) systems parametrized by the number of components. I didn't quite get the point.
Plan-directed architectural change for autonomous systems. In my opinion the presentation was particularly bad and managed to obscure a lot of what seemed good work. All I got is that a plan seems to be something like a very high-level program that can be used, for example, to guide a robot.
Reachability analysis for annotated code. Mikoláš's presentation went well. I'll have a series of posts about reachability on my blog.
Faithful mapping of model classes to mathematical structures. This talk by Ádám Darvas was very good. For a brief description I shall again refer you to my blog. I'll just say here that I highly recommend that JML people read this article.
Proof transforming compilation of programs with abrupt termination. This presentation contained some cute Java quizzes that stumped both me and Mikoláš. Here's the question I got wrong: What does the following program do for the call f(1)?
static int f(int x) throws Exception {
  int r = 0;
  while (true)
    try {
      if (x > 0) { throw new Exception(); }
    } finally {
      ++r;
      break;
    }
  ++r;
  return r;
}
The article presents some formal Java semantics that capture the interplay of the try ⋯ finally, break, and while constructs. Moreover, it discusses a bit how to transform proofs about such programs at source level into proofs at bytecode level. Again, all JML people should read this article.
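In case you want to check the answer yourself, here is the quiz wrapped in a runnable class (the class name and the main method are mine, not from the paper). Spoiler in the comments: the break in the finally block discards the pending exception and exits the loop, so f(1) returns 2 and throws nothing.

public class Quiz {
  static int f(int x) throws Exception {
    int r = 0;
    while (true)
      try {
        if (x > 0) { throw new Exception(); }
      } finally {
        ++r;   // r == 1: the finally block runs while the exception is pending
        break; // the break discards the pending exception and exits the loop
      }
    ++r;       // r == 2: reachable because the loop was exited by the break
    return r;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(f(1)); // prints 2; no exception reaches the caller
  }
}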
Playing with time in publish-subscribe using a domain specific model checker. I couldn't follow the details of this presentation. The message seemed to be that they customized the model checker Bogor to more efficiently deal with a bunch of components communicating using a publish-subscribe bus.
On timed components and their abstraction. I am a fan of covering the basics in a presentation, but this guy went overboard by covering only what timed automata are.
2 FroCoS
I'll start with the invited talks. These generally cover a lot of ground and give a historical perspective on a whole specialized area. The presenter is one of the key people in that area but does not restrict himself (or herself) to his (or her) own work.
From KSAT to delayed theory combination: Exploiting DPLL outside the SAT domain. SAT solvers are very good. That's why many people transform their problems into satisfiability problems and then use a SAT solver. Three particular places where this was done are reasoning in modal logics, writing Satisfiability Modulo Theories (SMT) solvers, and writing SMT solvers for combined theories. The talk was given by one of the MathSAT people.
In modal logics the basic algorithm is to use the SAT solver to find a model, check its consistency with the modal logic (by using tableaux rules), rinse and repeat. Essentially the SAT solver is used to enumerate models (assignments of values to constants) in an organized way.
Something very similar happens in SMT provers. The SAT solver finds a model, a decision procedure checks whether it is consistent with the theory and, if not, gives an explanation; rinse and repeat. It's hard to write a (correct) decision procedure for a big theory. That's why people prefer to write decision procedures for fragments of the theory and combine them. Nelson–Oppen is the traditional way of doing so; the facts exchanged by the decision procedures are equalities. In delayed theory combination (DTC) the SAT solver manages the communication between the decision procedures thru Boolean variables that stand for those equalities.
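The same find-a-candidate-model, check-it, refine loop appears in both the modal and the SMT case. Here is a deliberately naive sketch of it that I put together afterwards. Everything below is made up for illustration: a brute-force enumerator stands in for the SAT solver, the three atoms and the toy theory solver are invented, and real solvers like MathSAT are of course far more sophisticated.

import java.util.*;

// Toy lazy-SMT loop: a "SAT solver" proposes Boolean models of the propositional
// skeleton, a theory solver either accepts the model or returns an explanation
// of the inconsistency, and the explanation is turned into a blocking clause.
public class LazySmtSketch {

  // Atoms 0, 1, 2 stand for the (made-up) theory literals "x < 0", "x > 0", "x == 0".
  static final String[] ATOMS = { "x < 0", "x > 0", "x == 0" };

  interface TheorySolver {
    // Returns null if the assignment is consistent with the theory, otherwise
    // the indices of a subset of true atoms that is inconsistent (the explanation).
    Set<Integer> check(boolean[] assignment);
  }

  public static void main(String[] args) {
    // Propositional skeleton: assert atom 0 and atom 1 ("x < 0" and "x > 0").
    // A clause is an array of literals: i+1 means atom i, -(i+1) its negation.
    List<int[]> clauses = new ArrayList<>();
    clauses.add(new int[] { 1 });
    clauses.add(new int[] { 2 });

    // Toy theory solver: any two of the three atoms contradict each other.
    TheorySolver theory = assignment -> {
      for (int i = 0; i < ATOMS.length; ++i)
        for (int j = i + 1; j < ATOMS.length; ++j)
          if (assignment[i] && assignment[j])
            return new HashSet<>(Arrays.asList(i, j));
      return null;
    };

    boolean[] model = solve(clauses, ATOMS.length, theory);
    System.out.println(model == null ? "unsat" : "sat");
  }

  static boolean[] solve(List<int[]> clauses, int numAtoms, TheorySolver theory) {
    while (true) {
      boolean[] model = findPropositionalModel(clauses, numAtoms);
      if (model == null) return null;       // propositionally unsat: give up
      Set<Integer> conflict = theory.check(model);
      if (conflict == null) return model;   // theory agrees: genuine model
      // Rinse and repeat: block the explanation so this conflict cannot recur.
      int[] blocking = new int[conflict.size()];
      int k = 0;
      for (int atom : conflict) blocking[k++] = -(atom + 1);
      clauses.add(blocking);
    }
  }

  // Stand-in for a real SAT solver: brute force over all 2^n assignments.
  static boolean[] findPropositionalModel(List<int[]> clauses, int n) {
    for (int bits = 0; bits < (1 << n); ++bits) {
      boolean[] a = new boolean[n];
      for (int i = 0; i < n; ++i) a[i] = ((bits >> i) & 1) != 0;
      if (satisfies(clauses, a)) return a;
    }
    return null;
  }

  static boolean satisfies(List<int[]> clauses, boolean[] a) {
    for (int[] clause : clauses) {
      boolean sat = false;
      for (int lit : clause) sat |= lit > 0 ? a[lit - 1] : !a[-lit - 1];
      if (!sat) return false;
    }
    return true;
  }
}

On the toy input ("x < 0" and "x > 0" asserted together) the first candidate model is rejected by the theory, the blocking clause makes the skeleton propositionally unsatisfiable, and the program prints "unsat".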
Architecting solvers for Satisfiability Modulo Theories: Nelson–Oppen with DPLL. A formal description of the architecture of SMT solvers has good educational value (it made things much clearer for me!) and it can be used to reason about the solver. These guys at Intel described their SMT solver as a set of rules that can be interpreted. They also used Isabelle to prove various properties of the rules. Of course, the interpreted version is way too slow. They are implementing the `real' thing in OCaml and will release it on SourceForge in the coming months. The intended application areas include hardware verification, embedded-systems verification, and device-driver verification.
I highly recommend this paper, for example, to those who want to understand the essence of Nelson–Oppen or who want to start working on a (possibly existing) SMT solver.
Temporalising logics: 15 years after. This talk was much too theoretical for me. All I could understand is that adding time to various logics puts the satisfiability problem in various classes of (huge) complexity. My impression is that in an SMT solver anything quadratic is already dubious. Something whose complexity is given by the Ackermann function is ridiculous. Anyway, I'm happy someone (else) is looking into these fundamental issues.
Those were all the FroCoS-only invited talks. Now let's move on to the normal presentations.
Visibly pushdown languages and term rewriting. This is a study of a certain set of (tree-accepting) automata.
A compressing translation from propositional resolution to natural deduction. This work tackles a specific problem that appears in integrating automated and interactive theorem provers. A SAT solver can give a resolution proof that a formula is UNSAT. An interactive theorem prover usually has a small proof checking engine. Everything goes thru there. It would be a bit strange to use externally a SAT solver, which is a big chunk of C code, without running its result thru the trusted kernel. But proof checkers are usually designed to handle proofs by natural deduction. (This basically means that the proof is a tree whose structure around each node matches one of the given rules.) This paper presents an efficient way of solving the impedance mismatch. Checking the proof still takes longer than finding it, so there is room for improvement. But I will definitely read this article carefully because it uses Generalized Suffix Trees applied to theorem proving!
Combining algorithms for deciding knowledge in security protocols. I think it was about how to combine equational theories.
RoCTL: A temporal logic of robustness. These guys added two operators to CTL so that you can say ◯φ (in all possible futures φ holds) and ▴φ (in all futures with one more failure φ holds). A problem posed by the presenter is whether RoCTL is elementary. Here is an example to give you an idea of what can be easily expressed in this logic: “You must help your neighbor. If you will not help your neighbor you have to warn him.” (The guys who presented Components, objects, and contracts at SAVCBS should probably take a look at RoCTL.)
Temporal logics with capacity constraints. This was a rather practical presentation that I enjoyed. If you want to say in LTL that ≤2 of a1, a2, a3, a4 are true, then you can add some syntactic sugar and then use (¬a1∧¬a2) ∨ (¬a1∧¬a3) ∨ ⋯ to do the verification. The problem is that in practice this doesn't work. You need to put the treatment of such constraints into the model checker. They presented an algorithm (but not an implementation!) for doing so. I just hate it when I hear that "we just need a student to implement it." Let me remind you that SOFTWARE IS HARD.
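For the record, the full expansion in this small case (my expansion, not the paper's) is

(¬a1∧¬a2) ∨ (¬a1∧¬a3) ∨ (¬a1∧¬a4) ∨ (¬a2∧¬a3) ∨ (¬a2∧¬a4) ∨ (¬a3∧¬a4),

that is, six disjuncts just for "≤2 out of 4"; the number of disjuncts grows like a binomial coefficient as the constraints get bigger, which is exactly why the naive desugaring is hopeless on realistic instances.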
Languages modulo normalisation. No idea what this was about.
Combining classical and intuitionistic implications. The talk was about finding axioms to add to the classical and intuitionistic axioms such that deduction theorems still hold and such that classical and intuitionistic implications remain essentially distinct operations. Then it continued with how to find models for a logic given by such a set of axioms.
Proving termination using recursive path orders and SAT solving. I'd say that the presenter, Peter Schneider-Kamp, is a hacker (the good kind). He is working on an automated program verification environment (AProVE). At this point AProVE tests whether programs terminate or not. You can go ahead and type in a Haskell program in the web-interface and see what happens. It's quite impressive. The overall idea is to translate the program into a term rewriting system (TRS) which terminates iff the program terminates. Then the TRS is analyzed using some fancy techniques that I do not (yet) understand. The talk was about an improvement of these fancy techniques that makes the tool a lot faster.
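To give a flavour of the translation, here is a toy example of my own (not from the talk, and it glosses over how conditions are actually handled). A function like

static int f(int x) { return x > 0 ? f(x - 1) : 0; }

corresponds, over Peano-style naturals, to the rewrite rules

f(s(x)) → f(x)
f(0) → 0

and a recursive path order orients both rules, since s(x) is strictly greater than its subterm x, so the TRS terminates and with it the function.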
The good news is that they are considering using BoogiePL as input in order to handle imperative programs, and I'm developing an open source front-end for that language. I'd be quite happy if they became my users.
Noetherianity and combination problems. I believe this was about extending Nelson–Oppen to the case where theories are not disjoint but obey some more general constraint called `noetherianity'.
3 FroCoS and FTP
The joint session of FroCoS and FTP had one invited talk and three normal talks. Let's start with the invited one.
Hierarchical and modular reasoning in complex theories: The case of local theory extension. The message of this talk for me was that in theory it is possible to organize some theories (pun intended) into a hierarchical structure and use that hierarchical structure to guide the instantiations in an SMT solver. The really nice part is that you can guarantee completeness with such an approach. (Sounds like a nice experiment for Michał.)
Now the regular presentations.
Combining proof producing decision procedures. This paper formalizes (part of) what is done in SMT solvers nowadays. The main result is a data structure, the explanation graph, that can be returned by decision procedures instead of conflict sets. This data structure has nice compositionality properties and ensures some kind of minimality (which might not work well in practice for all we know). I have an idea of how this data structure could be slightly improved (and I told Christophe Ringeissen about it). I'll have to investigate if it works.
Towards an automatic analysis of web service security. It must be my dislike of `new technologies' that put me to sleep during this presentation. Apologies.
Certification of automated termination proofs. Coccinelle is a Coq library for expressing termination problems. CiME is a program that tries to find termination proofs; it was enhanced to produce proofs expressed in Coccinelle that can be checked by Coq. Checking the proof takes about four times as long as producing it, and the presenter said this is `normal' and that doing the proof check with something other than Coq would not help. I wonder if he'll be surprised when Michał finally publishes what he did here.
4 FTP
The invited talks.
The KeY tool. Unfortunately I missed the first half of this talk. The KeY tool translates Java programs into a complicated logic. The first step is to do symbolic execution. The result of this is that all strange constructs in the logic are eliminated and what remains is first-order logic. This sounds a lot like VC generation, except that I got the impression that it is more interactive. For example, the user can say if a method call should be handled by inlining the body of the method or by using the method's spec. In fact, one of the main ideas held by the presenter is that “interaction is important to handle complex programs.” The presentation then went on to cover one way of integrating interactive and automated theorem provers. In this setting automated theorem provers are trusted (without checking).
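To make the inline-or-use-the-spec choice concrete, here is a made-up example of mine (not from the talk), with contracts written JML-style:

public class Fees {
  //@ requires x > Integer.MIN_VALUE;
  //@ ensures \result >= 0;
  static int abs(int x) { return x < 0 ? -x : x; }

  //@ requires delta > Integer.MIN_VALUE;
  //@ ensures \result >= 0;
  static int fee(int delta) {
    // During symbolic execution the call to abs can be handled either by
    // inlining its body or by its contract: assert abs's precondition,
    // then assume only that the result is >= 0. Here the contract alone
    // is enough to prove fee's postcondition.
    return abs(delta) / 2;
  }
}

Inlining would of course also work here, but it does not scale to deep call chains, which is where the interaction mentioned above comes in.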
Applying first-order theorem provers in formal software safety certification. At NASA people have problems that are hard to express in C. It's much easier to design a domain-specific language (DSL) that physicists and mathematicians can use, so that finding their bugs doesn't have to wait for the postdoc who implements the algorithm in C to tell the researcher that it is crap. Using a DSL keeps postdocs out of the loop, and researchers can work at a high level with the computer to ensure that their ideas are correct. Now, a compiler for a DSL that outputs C code (in a subset of the language) is a huge beast, and it is quite difficult to ensure that it is correct. So the next best thing is to verify that the produced programs (one by one) have a set of desired properties (also checked one by one). What they did is very similar to ESC/Java, with three differences:
- the annotations are generated from the DSL (like the code)
- properties are generated one by one (like producing a VC for checking that null pointers are not dereferenced, one for checking that there is no out-of-bounds array access, etc.)
- the theorem prover they used is resolution-based (Vampire)
The lessons they've learned show (mostly) that they should have used an SMT prover instead of Vampire:
- The prover is not wrong. Bugs are usually elsewhere.
- TPTP ≠ Real World.
- Need controlled simplification. Doing some simple simplifications (!) before starting the actual proof goes a long way.
- Need axiom selection. (hint, hint: We have a huge background predicate. Also, Georg used the same trick when playing with Kleene algebras. See below.)
- Need built-in theory support.
Now the normal talks.
Induction in sequent calculus modulo. Didn't understand anything.
Extending Poitin to handle explicit quantification. This presentation was given by someone from Dublin City University (!). Unfortunately, all I could understand is that they are using program transformation to do interactive theorem proving. Somehow.
Developing modal tableaux and resolution methods via first-order resolution. This talk was over my head. The presenter talked about how to find deduction calculi for modal logics.
Reasoning automatically about termination and refinement. This was a really nice talk. It started from the Bachmair–Dershowitz theorem: "Termination of the union of two rewrite systems can be separated into termination of the individual systems if one rewrite system quasicommutes over the other." This can be proven for more general (abstract) structures than TRSs. A few years ago the presenter managed to give an ugly proof that this theorem applies to Kleene algebras. Now he used an automated theorem prover and managed to extract a much nicer proof. His work consisted of playing with adding and removing known results about Kleene algebras as axioms. Too many axioms made the theorem prover diverge, too few made it produce counterexamples. He also showed that Kleene modules are better candidates because, apart from the informal motivation, the theorem prover was able to derive the said theorem without him doing anything. (PS: Kleene algebras are structures introduced by Conway.)
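For reference, here is the definition as I remember it (so take it with a grain of salt, it is not from the talk): R quasicommutes over S when every S-step followed by an R-step can be rearranged to begin with an R-step, that is, if s rewrites to t by S and t rewrites to u by R, then s rewrites by R to some v from which u is reachable using steps of R ∪ S. The theorem then says that R ∪ S terminates iff both R and S terminate.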
ProofBuilder. This was a system presentation. ProofBuilder is a tool used to teach students what a proof is. It was designed to cover the proof styles found in common Discrete Mathematics textbooks.
Labeled tableaux for distributed temporal logic. DTL is a logic meant to be used to reason about properties of security protocols. Existing arguments in this logic are semantics-based. The contribution is a calculus (for a subset of the logic used in practice) and an algorithm (based on Bellman–Ford) to extract models. It is unknown whether DTL is decidable.
What first-order theorem provers do for monodic temporal logic. The presentation Ullrich gave was great for someone like me. First-order temporal logic (FOTL) is not semi-decidable. (There is no complete axiomatization of it.) So theoreticians try to find interesting fragments. The one presented restricts the number of free variables that can appear under temporal operators (to at most one). They proved that everything can be put in a certain normal form and that resolution is sound and complete. The architecture of the prover is there, waiting for an implementation.
A graph-based strategy for the selection of hypotheses. These guys assume that the theorem prover query has a specific form. (It is not yet clear to me how important this restriction is.) Then they construct some dependency graphs that lead to a heuristic ordering of how relevant the `hypotheses' are for proving the `goals'. Here `hypotheses' and `goals' have a specific meaning related to the specific form I mentioned above. Then they try to prove the goal using only the most relevant hypotheses, and gradually keep adding hypotheses if needed. The idea is that this process might succeed where the naive technique of always using all hypotheses might diverge. Again, I don't know what the impact on performance is. The idea, however, is quite powerful for handling really hard verification conditions. (Maybe this kind of thing should be included in the theorem prover.)
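The gradual strategy is easy to picture as a loop. Here is a rough sketch, entirely my own reconstruction under made-up names, not the authors' algorithm; in particular the relevance ordering, which in the paper comes from the dependency graphs, is simply assumed to be given.

import java.util.*;

// Sketch of "try the most relevant hypotheses first, add more on failure".
// The Prover interface, the Result type, and the doubling policy are all
// hypothetical; the point is only the shape of the loop.
public class IncrementalHypothesisSelection {

  enum Result { PROVED, UNKNOWN }

  interface Prover {
    Result prove(String goal, List<String> hypotheses, long timeoutMillis);
  }

  // hypothesesByRelevance is assumed to be sorted, most relevant first.
  static Result proveIncrementally(Prover prover, String goal,
                                   List<String> hypothesesByRelevance,
                                   long timeoutMillis) {
    int n = hypothesesByRelevance.size();
    for (int k = 1; ; k *= 2) {
      List<String> slice = hypothesesByRelevance.subList(0, Math.min(k, n));
      if (prover.prove(goal, slice, timeoutMillis) == Result.PROVED)
        return Result.PROVED;
      if (k >= n)
        return Result.UNKNOWN; // even all hypotheses were not enough (or it timed out)
      // Otherwise retry with a larger slice; feeding everything at once is what
      // tends to make the prover diverge, which this loop tries to avoid.
    }
  }

  public static void main(String[] args) {
    // Toy prover: succeeds as soon as the one crucial hypothesis is in the slice.
    Prover toy = (goal, hyps, timeout) ->
        hyps.contains("h3") ? Result.PROVED : Result.UNKNOWN;
    List<String> ranked = Arrays.asList("h1", "h2", "h3", "h4", "h5");
    System.out.println(proveIncrementally(toy, "some goal", ranked, 1000));
  }
}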
Redundancy in the geometric resolution calculus. Skolem was interested in geometry, which explains the historical name of this calculus. The idea is that if logical formulas are restricted in a certain way reasoning can be done differently. In fact this was the motivation of the presenter: let's try something new. As I understand, the approach is reasonably good at producing models, but not that good at finding proofs.
Edit and Verify. I was unhappy with my presentation. Anyway, Silvio Ranise gave me a cool idea of how to apply this stuff. More details about what Edit and Verify is will follow on my blog. In the meantime you can have a look at the paper.
Labels: rgrig, trip report
1 Comment:
"All I could understand is that adding time to various logics puts the satisfiability problem in various classes of (huge) complexity. My impression is that in a SMT solver anything quadratic is already dubious."
Well, the SAT itself is NP hard, Simplex is exponential, bitvectors are at least NP hard. The real question is how it works in practice, and the problem with theoretical results, like classes of complexity for different logics is that they are theoretical ;-) Therefore they give no indication of ,,works in practice''.