# VerifyThis Competition 2021: KIV Solutions

This website documents the KIV solutions for the VerifyThis competition held at ETAPS 2021 in Luxembourg, Luxembourg (virtual event). All proofs for challenge 2 were done during the competition; the solutions for challenges 1 and 3 were completed after the event.

KIV projects always link to an overview that shows the dependency graph of specifications. Clicking on a specification gives its text, together with links to a theorem base summary (which lists the names of theorems) and a theorem base listing (which shows the full sequent for each theorem). From there, individual theorems and the full details of each proof are accessible.

## Challenge 1: Lexicographic Permutations

Challenge 1 (Challenge 1 - Lexicographic Permutations ⤓ (PDF)) provided a code snippet, to be verified, for generating all permutations of the elements of an array. It should be shown that the permutations are created completely and in lexicographic order (in addition to properties like termination and bounds checking). For this, we defined a lexicographic ordering on such arrays in our main specification challenge1. It is based on the oarray-sort specification from our library, so we could fall back on predicates for sorted arrays as well as the predicate "perm" from specification list-perm, which states whether two lists are permutations of each other. This was possible because the arrays used are based on a list definition, as seen in the Project Overview. Furthermore, we specified what it means for a permutation to be the "next" one in the lexicographic order and to be the "last", i.e. maximal one. Thus, corresponding postconditions for the routine next# could be formulated. To show completeness, we introduced a predicate "min" describing the smallest permutation with respect to the lexicographic order.

The main proof obligation is permut-correct for routine permut# and next-proper-permutation for next#. All loop invariants and terms for termination are placed into the code of the routines.

A challenge in proving next# was to show that the resulting array is indeed the next permutation. An important lemma is that the question whether the output is the next permutation of the input can be reduced to the longest differing suffix, i.e. a common prefix can be neglected. Finally, three main cases are distinguished for how a permutation P could look such that it lies between the suffixes of input and output (in the following referred to as I and O):

• I and P agree on the first element. Because I and P are permutations of each other, the remaining elements of P must form a greater permutation than those of I. But this contradicts the fact that the algorithm has just chosen the end piece of I to be a maximal permutation with respect to the lexicographic order.
• P and O agree on the first element. Then the remaining part of P must be a smaller permutation. This is also a contradiction, because the remaining part of O is sorted in increasing order, and such a sequence is a minimal permutation.
• The first element of P is greater than that of I and smaller than that of O. Thus, such an element would have to exist in the remaining part. However, the algorithm has chosen the next-largest element of the rest of I as the first element of O, so no element can exist in between.

The proof for the routine permut# relies mainly on the lemma next-proper-permutation for routine next#. The remaining work was mainly to find formalizations for the properties to be shown.
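To make the argument concrete, here is a hypothetical Python transliteration (our own code, not the verified snippet): `next_permutation` corresponds to next#, and repeatedly calling it starting from the minimal permutation enumerates all permutations in lexicographic order, as permut# does.

```python
def next_permutation(a):
    """Transform list a in place into its lexicographic successor.
    Returns False if a is already the last (descending) permutation."""
    i = len(a) - 2
    # skip the maximal (descending) end piece of the array
    while i >= 0 and a[i] >= a[i + 1]:
        i -= 1
    if i < 0:
        return False  # a is the maximal permutation
    # pick the next-largest element of the suffix as the new first element
    j = len(a) - 1
    while a[j] <= a[i]:
        j -= 1
    a[i], a[j] = a[j], a[i]
    # the suffix is still descending; reversing it yields the minimal
    # permutation of its elements
    a[i + 1:] = reversed(a[i + 1:])
    return True

def permutations(a):
    """Yield all permutations of the elements of a in lexicographic order."""
    a = sorted(a)  # start with the "min" permutation
    yield list(a)
    while next_permutation(a):
        yield list(a)
```

The three contradiction cases above correspond to the two searches in `next_permutation`: the swapped-in element is the least suffix element greater than the pivot, and the reversed suffix is a minimal permutation.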

## Challenge 2: DLL to BST

The task definition can be downloaded from ETH Zürich: Challenge 2 - DLL to BST ⤓ (PDF)

We were able to solve all tasks of Challenge 2 during the competition because we could use our library for Separation Logic, as seen in the Project Overview. Here the heap is defined as an algebraic data type over references `r` and objects `o`, with operations for allocation (`h ++ r`), deallocation (`h -- r`), and reading or updating objects at certain locations (`h[r]` and `h[r, o]`, respectively). The specification heap-sep defines the common Separation Logic operators, like the separating conjunction and the separating implication, as higher-order predicates over the heap. The library also provides a heap-based definition of binary trees using generic nodes with left and right pointers. We reused these nodes for our definition of doubly-linked lists. The predicate `dll(r, x)(h)` abstracts the content of a heap `h` to an algebraic list `x` starting from reference `r`. An auxiliary predicate `dlls(r, r0, r1, x)(h)` is used to describe list segments in the heap; the references `r0` and `r1` point to the nodes to the left and right of the segment. Thus, `dlls(r, null, null, x)` is equivalent to `dll(r, x)`.

Having abstractions of the heap content to algebraic lists and to algebraic trees makes it possible to formulate the required properties on these algebraic data types. The specification linked-tree-list defines the predicates `ordered≤(t)` and `balanced(t)` that in combination ensure that the algebraic tree `t` is a balanced search tree. An additional function `t.toList` computes an algebraic list for a tree `t` that contains the same elements as `t` and maintains the order of `t`.
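For illustration, the three algebraic notions can be transliterated into executable Python (a sketch with names of our choosing; KIV defines them axiomatically in linked-tree-list):

```python
class Leaf:
    """Empty algebraic tree."""
    pass

class Branch:
    """Algebraic tree node with left subtree, value, and right subtree."""
    def __init__(self, left, value, right):
        self.left, self.value, self.right = left, value, right

def to_list(t):
    """In-order traversal: the analogue of t.toList."""
    if isinstance(t, Leaf):
        return []
    return to_list(t.left) + [t.value] + to_list(t.right)

def ordered(t):
    """ordered≤(t): the in-order listing is sorted ascending."""
    xs = to_list(t)
    return all(x <= y for x, y in zip(xs, xs[1:]))

def height(t):
    return 0 if isinstance(t, Leaf) else 1 + max(height(t.left), height(t.right))

def balanced(t):
    """balanced(t): subtree heights differ by at most one, recursively."""
    if isinstance(t, Leaf):
        return True
    return (abs(height(t.left) - height(t.right)) <= 1
            and balanced(t.left) and balanced(t.right))
```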

The algorithm for the construction of the binary search tree is implemented in the specification DLL2BST with the procedure `dll_to_bst#`. The auxiliary procedures `size#` and `dll_to_bst_rec#` are implemented in this specification as well. Note that we used natural numbers instead of integers in the declarations since negative values are not relevant for the task.
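The underlying construction is the classic in-order algorithm. A hypothetical Python sketch (the `Node` class and names are ours; the verified procedures are size#, dll_to_bst_rec#, and dll_to_bst#) could look as follows:

```python
class Node:
    """One generic node: as a list node, left/right are prev/next;
    after conversion they are the left/right subtrees."""
    def __init__(self, value):
        self.left = self.right = None
        self.value = value

def size(head):
    """Recursively count the nodes of the list starting at head."""
    return 0 if head is None else 1 + size(head.right)

def dll_to_bst_rec(head, n):
    """Rebuild the first n list nodes into a balanced BST, in place.
    Returns (root, rest): the tree root and the list after those n nodes."""
    if n == 0:
        return None, head
    left, mid = dll_to_bst_rec(head, n // 2)        # first half -> left subtree
    right, rest = dll_to_bst_rec(mid.right, n - n // 2 - 1)
    mid.left, mid.right = left, right               # middle node becomes root
    return mid, rest

def dll_to_bst(head):
    """Convert a whole sorted doubly-linked list into a balanced BST."""
    root, _ = dll_to_bst_rec(head, size(head))
    return root
```

If the input list is sorted, the in-order traversal of the result equals the original list, which is exactly the `t.toList = firstn(n, x)` part of the contract described below.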

First, we proved the correctness of the `size#` procedure with the lemma dll-size-correct, which states that if we have a valid doubly-linked list in the heap representing an algebraic list `x`, the result in `count` is the length `# x` of the list. The proof is by induction over the length of the list and was straightforward; however, it was necessary to generalize the lemma by allowing a list segment separated from an arbitrary heap predicate `hP`.

The main effort was to prove the contract for `dll_to_bst_rec#`. We have to ensure that, given a valid list segment `x` in the heap, the execution of `dll_to_bst_rec#` yields a heap that contains the list segment `x` without its first `n` elements (`restn(n, x)`) and a balanced tree `t` containing the first `n` elements of `x` (`t.toList = firstn(n, x)`). The contract can be proven by induction over `n`, which again requires generalization to a heap with an additional separated heap predicate `hP`. After applying the induction hypothesis for both recursive calls, it mainly remains to show that both subtrees and the new root node form a combined tree, see the following goal:

```
...
(right0 => mknode(left0, b, root0) * tr(left0, t) * tr(root0, t0) * ...)(h2)
...
⊦ ∃ t. (tr(right0, t) * ...)(h2) ...
```
This is done with a few interactions by choosing the algebraic tree `mkbranch(t, b, t0)` for `t` and applying some lemmas from the Separation Logic library. The balanced and same-elements properties for the combined tree are proven automatically using lemmas from linked-tree-list.

Finally, the main proof obligation dll_to_bst_correct can be proven by simply applying the contracts of the two auxiliary procedures:

```
dll(head, x)(h), ordered≤(x)
⊦ ∃ t. tr(root, t)(h) ∧ t.toList = x ∧ ordered≤(t) ∧ balanced(t)
```
Note that we did not have to prove in dll_to_bst_rec_correct that the resulting tree is `ordered≤`. Instead, `ordered≤(t)` follows directly from `ordered≤(x)` and `t.toList = x` (see this lemma).

We also solved the additional task and implemented an iterative version of `size#`: `size_iter#`. The correctness proof dll-size-iter-correct could be performed analogously using induction over the length of the list and the correctness of the corresponding main procedure `dll_to_bst_iter#` is again proven by using the contracts dll-size-iter-correct and dll_to_bst_rec_correct, see dll_to_bst_iter_correct.
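The iterative variant is a simple counting loop; a hypothetical Python sketch (the `Node` class and names are ours), with the invariant used in the inductive proof noted as a comment:

```python
class Node:
    """Minimal list node: right is the next pointer."""
    def __init__(self, value, right=None):
        self.value = value
        self.right = right

def size_iter(head):
    """Iteratively count the nodes of the list starting at head."""
    count = 0
    node = head
    # invariant: count = number of nodes strictly before node
    while node is not None:
        count += 1
        node = node.right
    return count
```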

## Challenge 3: ShearSort

A description of the challenge can be found at: Challenge 3 - ShearSort ⤓ (PDF) (external link to ETH Zürich)

A sequential version of this challenge (with an optimization) was solved offline. (Update: an extension to a concurrent version is now available too.) Two steps are crucial for finding a solution: First, it is necessary to work out how the 0-1 principle is applied to the algorithm. This is done by fixing an arbitrary element i and marking all elements as 0 or 1 depending on whether they are less than i, or greater than or equal to i. Second, the crucial argument why one row sort followed by one column sort halves the number of dirty rows must be understood and then formalized. Our formalization (see this link for an overview) uses the following main specifications and predicates:
• Specifications matrix, matrix+array, and rwocol-upd specify (two-dimensional) matrices and operations on them, e.g. "#r mx" is the number of rows of mx, "mx[m, n, a]" is the modified matrix mx where the element in row m, column n has been updated to a, "mx.toArr" concatenates the rows of the matrix into one array, and "mx.toArc" does the same column-wise.
• The basic predicate used is count(a, p) from specification filter-count, which counts the number of elements of a that satisfy predicate p. Since this is a generally useful predicate, its specification has been put into the library. It is used with p = "λ j. j < i" to count the number of elements less than i (i.e. zeros) and with p = "λ j. ¬ j < i" to count the number of ones. A "clean0row" is a row where the latter count is zero, i.e. where all elements are less than i.
• Predicates "#clean0rows" and "#clean1rows" from the toplevel specification ShearSort count the number of consecutive clean rows from the bottom/top with respect to element i. "#dirtyrows" counts the other rows in the middle of the matrix, and "mx.dirtyrows≤(bound)" states that there are at most bound dirty rows. Initially, the predicates #clean0rows/#clean1rows were specified to count all rows that contain only 0/1 (not just the consecutive ones at the bottom/top), but with this definition the main lemma "every iteration (consisting of sorting rows and columns) halves #dirtyrows" is not provable.
• "scolsorted(mx, n, mx0)" specifies that matrix mx is a permutation of matrix mx0 where the first n columns have been sorted (in ascending order), while the remaining ones are identical. The index n is needed while sorting the columns; after the loop it is equal to "#c mx".
• Similarly, "srowsorted(mx, n, mx0)" specifies that the first n rows of mx0 have been snake-sorted in mx: even rows are sorted in ascending order, odd rows in descending order (where being sorted in descending order is described simply as the reversed row being sorted in ascending order). Note that "srowsorted(mx, #r mx, mx)" simply states that all rows are snake-sorted.
• "rowsincreasing(mx, r)" states that all rows below row r are increasing: the elements of row r0 are all smaller than each element of row r1 whenever r0 < r1 < r. In the proof the predicate is always used with r = #r mx; the second argument is needed only for inductive lemmas.
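To make the counting predicates concrete, here is a hypothetical Python reading (the function names, and the convention that row 0 is the bottom of the matrix, are our assumptions):

```python
def count(row, p):
    """count(a, p): number of elements of row satisfying predicate p."""
    return sum(1 for x in row if p(x))

def clean0row(row, i):
    """All elements are "zeros", i.e. less than the fixed element i."""
    return count(row, lambda j: not (j < i)) == 0

def clean1row(row, i):
    """All elements are "ones", i.e. greater than or equal to i."""
    return count(row, lambda j: j < i) == 0

def dirtyrows(mx, i):
    """Rows that are neither in the consecutive clean-0 block at the
    bottom nor in the consecutive clean-1 block at the top."""
    lo = 0
    while lo < len(mx) and clean0row(mx[lo], i):
        lo += 1
    hi = len(mx)
    while hi > lo and clean1row(mx[hi - 1], i):
        hi -= 1
    return hi - lo
```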
The toplevel specification ShearSort contains the algorithm. Sorting a single row/column has not been implemented but is only specified as "choose a sorted result". The invariants can be seen in the annotations:
• The main invariants of the loops that sort rows/columns are "srowsorted(mx,n,mx`)" and "scolsorted(mx, n, mx`)", where mx` denotes the initial matrix before the loop.
• The main loop of shearsort maintains the crucial invariant "mx.dirtyrows≤((#r mx) /↑ (2^n))", where n is the number of iterations the loop has already completed (and "m /↑ n" denotes integer division rounding up). Proving this invariant (and understanding why it holds) is the main difficulty of the proof, which we detail below.
The main proof of the ShearSort algorithm basically reduces to the following lemmas:
• Sorting rows never changes #clean0rows or #clean1rows, as stated by lemmas rowperms-clean0unchanged and rowperms-clean1unchanged. Therefore the number of dirty rows also stays the same when sorting rows.
• The main difficulty in showing that the invariant of shearsort holds is therefore to prove that when a row-sorted matrix mx0 is sorted column-wise, the resulting matrix mx has at most half the dirty rows of mx0. This is asserted by lemma scolsort-dirtyboundhalf. We could not find a precise argument on the internet; our proof essentially follows the proof of [Hagerup16]. The proof is broken down into several lemmas: First, the final matrix mx has a column cmax with a maximal number n of zeros (Lemma exists-max0column proves that every matrix has such a column). Since mx is column-sorted, the 0's in column cmax are in the first n rows (Lemma sortedperm-count-splits), and all rows above n contain 1's only. Since all other columns have fewer 0's (also in low rows), the number of clean 1-rows in mx is #r mx - n, as stated by Lemma max0column-determines-clean1rows. The number of zeros in column cmax of mx0 is n too (column-sorting does not change this number), and since all rows below #clean0rows(mx0) contain 0's only, there must be some number k ≥ 0 of extra rows such that n = #clean0rows(mx0, i) + k. Here k can be at most the number of dirty rows in mx0, since no zero is in a clean 1-row of mx0. The k additional 0's in column cmax are distributed among even and odd rows, so counting zeros in even/odd rows of column cmax of mx0 above #clean0rows(mx0) gives two numbers keven and kodd such that k = keven + kodd. Formally, "countevn(a, p)"/"countodd(a, p)" are used to count the elements of an array (here: a column of the matrix) that have an even/odd index and satisfy predicate p. Since k ≤ #dirtyrows(mx0, i), we get that both keven, kodd ≤ #dirtyrows(mx0, i) /↑ 2 (Lemmas countevn-maxcol0-bounded and countodd-maxcol0-bounded). Since mx0 is snake-row-sorted, all columns left of cmax have at least keven zeros, and all columns right of cmax have at least kodd zeros above #clean0rows(mx0, i) (Lemmas rowsorted-smallercol-evnbounded and rowsorted-biggercol-oddbounded). Therefore every column of mx0 has at least #clean0rows(mx0, i) + min(keven, kodd) zeros. Again, since the number of zeros in a column does not change (Lemma max0column-clean0rowsbounded), the number #clean0rows(mx, i) of clean 0-rows in mx is at least #clean0rows(mx0, i) + min(keven, kodd) too. Combining these facts, the number of clean 0-rows increases by at least min(keven, kodd), while the number of clean 1-rows grows by #dirtyrows(mx0, i) - k. Since k - min(keven, kodd) = max(keven, kodd) ≤ #dirtyrows(mx0, i) /↑ 2, the main lemma scolsort-dirtyboundhalf follows.
• After "log2↑(#r mx)" iterations of row and column sorting (where log2↑(0) = 1, and otherwise log2↑(n) is the logarithm to base 2 rounded up to the next integer), the number of dirty rows has decreased to at most one (see Lemma log2ub-lem). Since this holds for every element i, it can be proved that the matrix then has increasing rows, i.e. "rowsincreasing(mx, #r mx)". This is stated by Lemma dirtyrowsone-rowsincreasing. With increasing rows, sorting all rows in ascending order produces a sorted (not snake-sorted!) matrix, expressed as the postcondition "ordered≤(mx.toArr)" of the total correctness theorem shearsort-correct.
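Since in KIV sorting a single row/column is only specified ("choose a sorted result"), an executable counterpart has to pick some sort. The following Python sketch (our code, using the built-in sort) mirrors the verified structure: log2↑(#r mx) snake-row/column phases, followed by a final ascending row sort.

```python
import math

def shearsort(mx):
    """Sort mx in place so that its row-wise concatenation is ordered."""
    rows = len(mx)
    # log2 rounded up: after this many phases at most one dirty row remains
    phases = math.ceil(math.log2(rows)) if rows > 1 else 0
    for _ in range(phases):
        # snake row sort: even rows ascending, odd rows descending
        for r in range(rows):
            mx[r].sort(reverse=(r % 2 == 1))
        # column sort: every column ascending (small elements at row 0)
        for c in range(len(mx[0])):
            col = sorted(mx[r][c] for r in range(rows))
            for r in range(rows):
                mx[r][c] = col[r]
    # with increasing rows, plain ascending row sorts yield a matrix
    # whose row-wise concatenation (mx.toArr) is sorted
    for r in range(rows):
        mx[r].sort()
```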
The proof effort for this case study was about two weeks.

### Bibliography

[Hagerup16] T. Hagerup. Course notes: Introduction to Parallel Algorithms.
