This page documents KIV proofs for the 3 challenges of the VerifyThis Competition at the Formal Methods (FM) conference in Paris 2012. Solutions for challenge 1 (first part) and challenge 3 were worked out at the competition. Challenge 2 was solved after the competition: it took about 2 hours of work to figure out the core idea of what is going on (function #fz) and 2 days to translate the idea into a mechanized KIV proof.

Project overview lcp

Specification lcp contains the program definition and the correctness theorem.

It is proved with loop invariant

```
n0 < # ar ∧ n1 < # ar
→ n0 + m ≤ # ar
∧ n1 + m ≤ # ar
∧ (∀ n. n < m → ar[n0 + n] = ar[n1 + n])
```

and with loop variant

`# ar - m`
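
The loop, its invariant, and its variant can be sketched as follows (a Python sketch, not the KIV program; function name and signature are assumptions):

```python
def lcp(ar, n0, n1):
    """Length of the longest common prefix of the suffixes of ar
    starting at indices n0 and n1 (a sketch of the verified program)."""
    m = 0
    # Invariant: n0 + m <= len(ar), n1 + m <= len(ar),
    #            and ar[n0 + k] == ar[n1 + k] for all k < m.
    # Variant:   len(ar) - m decreases with every iteration.
    while n0 + m < len(ar) and n1 + m < len(ar) and ar[n0 + m] == ar[n1 + m]:
        m += 1
    return m

print(lcp([1, 2, 3, 1, 2, 4], 0, 3))  # common prefix [1, 2] -> 2
```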

__Update 30.08.2013:__ We incorporated the suggestion of one reviewer to specify and verify a generic procedure for sorting.

Project overview suffixarray

The algebraic function *lcp* is defined in specification lcp.

Specification suffix contains the definition of the lexicographical order < on arrays and the function *suffix*(*ar*, *i*)
yielding the suffix starting at index *i*. Furthermore, some basic theorems about them are proven.

In the specification sorted, the predicates *sorted*(*ar, n0, n1,* ⊑) and *sorted*(*ar,* ⊑) are defined; they
state that an array (or a range of it) is sorted according to an order ⊑ on *elem*. The two main theorems
are: 1) swapping two elements maintains *sorted* and 2) two consecutive sorted ranges can be merged into a single sorted range.
The implementation of the procedure sort and the invariants can be found in specification sort.
The correctness theorem and its proof are here.

The suffix array *sar* is considered valid with respect to the array *ar*
if the predicate *valid(sar, ar)* specified here
holds. It states that the elements of the suffix array form the multiset {0,...,n-1} and
that the suffixes represented by the indices stored in the suffix array are sorted according to the lexicographical order on arrays.
The multiset {0,...,n-1} is specified recursively as *bag< n*. The function
*ar.elems* (and *ar elems n*) calculate the multiset of elements in the
array (up to the index *n*).
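
Read executably, the validity predicate amounts to the following (a Python sketch; the names *suffix* and *valid* mirror the specification, while the encoding as Python lists and Python's lexicographic list comparison are assumptions):

```python
def suffix(ar, i):
    """The suffix of ar starting at index i."""
    return ar[i:]

def valid(sar, ar):
    """sar is a permutation of {0,...,n-1} and the suffixes it indexes
    appear in lexicographic order (sketch of valid(sar, ar))."""
    n = len(ar)
    perm_ok = sorted(sar) == list(range(n))        # multiset {0,...,n-1}
    order_ok = all(suffix(ar, sar[i]) <= suffix(ar, sar[i + 1])
                   for i in range(n - 1))          # lexicographic order
    return perm_ok and order_ok
```

For `ar = [1, 0, 1]` the sorted suffixes are `[0, 1]`, `[1]`, `[1, 0, 1]`, so `valid([1, 2, 0], [1, 0, 1])` holds.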

The algorithm for the suffix array construction is specified in suffix-array-ops. The main correctness theorem states that after a call to create_suffixarray the suffix array is valid.

The predicates *rs*(*ar, n0, n1, n*) and *lrs*(*ar, n0, n1, n*) state that a repeated resp. longest repeated substring of length
*n* starts at index *n0* in *ar*, with the witness suffix starting at *n1*. They are defined in specification lrs.

Specification lrs-ops contains the definition of the algorithm to compute the longest repeated substring and the definition of the loop invariant. The main lemma for the correctness proof states that the longest common prefix of neighboring suffixes in the lexicographical order is at least as long as that of suffixes which are further apart. With this lemma, the main complication during the proof of the correctness theorem for the longest repeated substring algorithm is that one needs to switch from indices in the original array to indices in the suffix array.
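
The scan over neighboring suffixes can be sketched like this (Python; the naive suffix-array construction and lcp below are stand-ins for the verified procedures, and the result convention, position plus length, is an assumption):

```python
def lcp_len(xs, ys):
    """Length of the longest common prefix of two sequences."""
    m = 0
    while m < len(xs) and m < len(ys) and xs[m] == ys[m]:
        m += 1
    return m

def longest_repeated_substring(ar):
    """By the main lemma, the longest repeated substring shows up as the
    lcp of some pair of *neighboring* suffixes in the suffix array."""
    n = len(ar)
    sar = sorted(range(n), key=lambda i: ar[i:])   # naive suffix array
    best_len, best_pos = 0, 0
    for k in range(n - 1):
        l = lcp_len(ar[sar[k]:], ar[sar[k + 1]:])
        if l > best_len:
            best_len, best_pos = l, sar[k]
    return best_pos, best_len

print(longest_repeated_substring(list("banana")))  # "ana" at index 3 -> (3, 3)
```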

We prove the iterative version of the algorithm (which seems much simpler than the recursive one). The solution consists of the two specifications split-finalzeros and prefixsum explained below, and an instance array-of-ints imported from the library; the development graph gives an overview of the structure of the specifications.

This specification contains the main idea for the verification. The function #fz(n) calculates the number of final zeros in the bit representation of an array index n + 1. lead(n), which is always an odd number, is the result of cutting away the final zero bits of n + 1. Each n + 1 can be uniquely represented as 2^#fz(n) * lead(n) (lemmas split-ok and split-unique-2).
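
A Python sketch of the two functions and the split property (#fz is written fz here; the algebraic definitions in KIV are axiomatic, not this loop code):

```python
def fz(n):
    """Number of final (least significant) zero bits of n + 1."""
    m, k = n + 1, 0
    while m % 2 == 0:
        m //= 2
        k += 1
    return k

def lead(n):
    """n + 1 with its final zero bits cut away; always odd."""
    m = n + 1
    while m % 2 == 0:
        m //= 2
    return m

# split-ok: each n + 1 decomposes as 2^fz(n) * lead(n)
assert all(n + 1 == 2 ** fz(n) * lead(n) for n in range(200))
```

For example, fz(7) = 3 and lead(7) = 1, since 7 + 1 = 8 = 2^3 * 1.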

Remark: As an afterthought, the number of final zero bits of n + 1 is also the number of final one bits of the original n. The function #fz is used in both invariants of the algorithm:

- In upsweep, at level d (i.e. when space = 2^d), ar[m] contains the sum from index m - 2^min(d,#fz(m)) to m (both inclusive). Therefore, at the end of upsweep the array contains the sum from m - 2^#fz(m) to m.
- In downsweep at level d, ar[m] contains the sum from 0 to m - 2^(d + 1) (again both inclusive) if #fz(m) ≤ d, else the old value from upsweep.
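
The two phases correspond to the classic work-efficient exclusive scan; a Python sketch follows (loop structure and names are assumptions; the array length is assumed to be a power of two, and the zero assignment sits in the main routine as described):

```python
def upsweep(ar):
    """Combine pairs at growing distances; builds partial sums bottom-up."""
    n, space = len(ar), 1
    while space < n:
        for m in range(2 * space - 1, n, 2 * space):  # inner loop (UPSWEEPH)
            ar[m] += ar[m - space]
        space *= 2

def downsweep(ar):
    """Push partial sums back down at shrinking distances."""
    n, space = len(ar), len(ar) // 2
    while space >= 1:
        for m in range(2 * space - 1, n, 2 * space):  # inner loop (DOWNSWEEPH)
            left = ar[m - space]
            ar[m - space] = ar[m]
            ar[m] += left
        space //= 2

def prefixsum(ar):
    upsweep(ar)
    ar[-1] = 0        # zero assignment, moved here from downsweep
    downsweep(ar)

xs = [3, 1, 7, 0, 4, 1, 6, 3]
prefixsum(xs)
print(xs)  # exclusive prefix sums: [0, 3, 4, 11, 11, 15, 16, 22]
```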

This specification contains:

- declarations for the PREFIXSUM algorithm translated from the original Java. The assignment of 0 to the last array element was moved from downsweep to the main routine, since it does not logically belong to the downsweep. Auxiliary procedures were added for the inner loops to improve proof modularity and readability.
- Definitions of the pre- and postcondition of PREFIXSUM (predicates pre and post), as used in the main theorem PREFIXSUM-thm. postu is the postcondition of UPSWEEP. invu, invuh, invd, invdh are the invariants for the while loops in UPSWEEP, UPSWEEPH, DOWNSWEEP, DOWNSWEEPH.
- All the necessary proofs. The pre/postconditions are used in the proofs of the respective theorems (…-thm). The main theorem is PREFIXSUM-thm.

The procedure to get the full proofs done was as follows:

We started by defining all conditions that connect the invariants as simplifier rules (lemmas pre-invu, invu-invuh, invuh-invu, invu-postu, postu-invd, invd-invdh, invdh-invd, invd-post), except the conditions that actually show that invdh and invuh are invariant in the inner loops.

With these definitions, the proofs of the theorems PREFIXSUM-thm, UPSWEEP-thm, UPSWEEPH-thm, DOWNSWEEP-thm and DOWNSWEEPH-thm, stating total correctness of the main program and its subroutines, were done first. They are almost automatic using the simplifier rules, and nearly independent of how the axioms define the predicates invu/d(h) (except for small dependencies for the simple termination conditions). Only two open goals remain: the two inner loop conditions in DOWNSWEEPH-thm and UPSWEEPH-thm.

Then the simplifier rules were proved themselves. This caused some corrections in the defining axioms for invu/d(h). When all the simplifier rules were finally provable, all the bugs in the invariants were gone, except that invuh had the weaker condition n ≠ 0 instead of the stronger odd(n).

This final error was corrected when at last doing the main difficult parts of the proofs: the invariance of invuh and invdh in DOWNSWEEPH-thm and UPSWEEPH-thm. These are difficult, since they involve laws about 2^n and #fz. Crucial lemmas are e.g. finz-bigger, finz-add-bigger and finz-bounded-05. These had to be found and applied manually.

Project overview del-min

The solution consists of two main parts:

- A proof that the procedure `search_tree_delete_min` deletes the leftmost element from the tree.
- Proofs that the leftmost element is actually minimal, is removed from the tree, and that the result is still a binary search tree.

The final theorem combines these results.

We assume a strict total order on the elements in the trees (defined here) and garbage collection.

The solution was completed almost in time during the competition.

The solution is based on separation logic, formalized as a set of higher order operators (specifications heap-sep and maplet).

An abstraction predicate to algebraic trees is defined here.

Algebraic functions t.leftmost and t.butleftmost are defined recursively.
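
A sketch of the two recursive functions on tuple-encoded trees (a hypothetical (left, value, right) encoding with None for the empty tree, not KIV's algebraic datatype):

```python
def leftmost(t):
    """Smallest element of a binary search tree: descend left to the end."""
    left, val, right = t
    return val if left is None else leftmost(left)

def butleftmost(t):
    """The tree with its leftmost element removed."""
    left, val, right = t
    if left is None:
        return right
    return (butleftmost(left), val, right)
```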

The program consists of a while loop and finally some destructive modifications to the data structure. The proof goes by induction over (the size of) the algebraic tree (red node 58). The induction hypothesis is

`⟨ loop; modification ⟩ post`

i.e., the hypothesis holds for the loop *and* the remaining program.

- The base case corresponds to loop exit (left branch of node 57); thus we have to deal with the modification.
- The recursive case needs to execute the loop once, thereby unfolding the tree abstraction in step 49. The remaining iterations (+ the remaining program) are covered by the induction hypothesis, which is applied (manually) in node 47.
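
The loop-plus-modification shape of the program can be sketched imperatively (Python, with a hypothetical Node class standing in for heap cells; the verified program works on a separation-logic heap, not Python objects):

```python
class Node:
    """A heap cell of the search tree (hypothetical encoding)."""
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def search_tree_delete_min(root):
    """Delete the leftmost (minimal) element destructively.
    Returns the deleted value and the new root."""
    if root.left is None:
        return root.val, root.right
    parent, cur = root, root.left
    # the while loop: descend to the leftmost node
    while cur.left is not None:
        parent, cur = cur, cur.left
    # the destructive modification: unlink the leftmost node
    parent.left = cur.right
    return cur.val, root
```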

The predicate bst characterizes binary search trees: elements in the left subtree are strictly less than the root, conversely for the right subtree.
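
A recursive Python sketch of such a predicate on tuple-encoded trees (the (left, value, right) encoding and the bounds-based formulation are assumptions made for illustration):

```python
def bst(t, lo=None, hi=None):
    """Binary search tree check: every element lies strictly between
    the bounds inherited from its ancestors."""
    if t is None:
        return True
    left, val, right = t
    if lo is not None and not lo < val:
        return False
    if hi is not None and not val < hi:
        return False
    return bst(left, lo, val) and bst(right, val, hi)
```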

Theorems about the algebraic tree are proven by (structural) induction. They rely on a strengthened characterization of binary search trees and some helper lemmas.

We have also specified insertion (without rebalancing) and proven that it matches the corresponding algebraic operation `insert`.

Allocation of a new node is done by the procedure `mknode` (which gets a suitable contract). The proof for insertion uses induction on the algebraic tree as well (node 99) and has to consider more cases (as one may descend into the left or right subtree), but is otherwise not difficult.

A specification of the operations insert and delete for ordered sets is here. The abstraction from trees to sets is defined here, and the standard data refinement proof obligations are shown for `delete` and `insert`. A couple of helper lemmas translate the extensional definition of the abstraction from trees to sets into a recursive one, and show commutation of the tree modifications with the set modifications. A useful lemma from the library is min-q.

__Update May 5, 2015:__ We now have a solution using the magic wand operator, based on

S. Blom, M. Huisman: *Witnessing the elimination of magic wands*.

VerifyThis 2012 Special Issue, Software Tools for Technology Transfer, Springer, 2015.

You can find the proof here. The invariant (step 18) states that the modification on the current subtree (spatially) implies the modification on the whole tree

```
∃ t0.
tr(cur, t0.butleftmost) -* tr(root, t.butleftmost)
```

where `t0` represents the current algebraic abstraction. (Note that `cur` is named `r2` by KIV, and `root` is `r`.)