# VerifyThis Competition 2019: KIV Solutions

This website documents the KIV solutions to the VerifyThis competition held at ETAPS 2019 in Prague, Czech Republic. All proofs were completed after the meeting.

## Challenge 1, part A

We got close to a solution during the meeting; what we had missed was that adjacent monotone sequences are not required to alternate between increasing and decreasing. Correcting the predicates accordingly after the competition made the problem simpler, and we then finished the proof in half an hour. The overview shows that we solved the problem using lists over ordered elements from the library. These already define functions to access and to update the n-th element of a list (written x[n] and x[n,a]). Note that the operations are defined to be partial. This has no effect when the operations are used in predicate logic formulas (predicate logic functions are still total). However, when the functions are called outside their domain within a program, an exception is thrown (programs can define exception handlers). Here, an exception would be thrown if the program called x[n] or x[n,a] with an index out of range, i.e. when # x ≤ n, where # x is the length of the list. An instance of lists over elements with a total order already defines the predicates "ordered≤" for non-strictly and "ordered<" for strictly ordered lists. Many useful lemmas are proved over this specification.

The main specification gives a recursive characterization of cut sequences using a predicate "cutseq". That a subsequence x is decreasing is expressed as "the reversed sequence rev(x) is increasing".

The program uses this predicate and a slight variation "cutseq-h" as the invariant to prove the main theorem (a sequent of the form "pre ⊦ ⟨prog⟩ post" expresses total correctness for deterministic programs; it implies that the program will not throw any exception). The main theorem trivially implies that the cut sequence goes from begin to end, has monotonic, maximal segments (this is exactly the definition of "cutseq"), and is non-empty.
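For intuition, the underlying algorithm (independent of the KIV formalization) can be sketched in Python; the function name, run classification, and all variable names below are our own illustration, not part of the KIV development:

```python
def cutpoints(x):
    """Compute a monotonic cutpoint sequence for the list x.

    Returns indices 0 = c_0 < c_1 < ... < c_k = len(x) such that each
    segment x[c_i:c_(i+1)] is monotone (non-decreasing or decreasing)
    and maximal, i.e. cannot be extended to the right.  Adjacent
    segments need not alternate between increasing and decreasing.
    """
    n = len(x)
    cut = [0]            # the cut sequence always starts at the begin
    i = 0
    while i < n:
        j = i + 1
        if j < n and x[i] <= x[j]:
            # extend a non-decreasing run as far as possible
            while j < n and x[j - 1] <= x[j]:
                j += 1
        else:
            # extend a (strictly) decreasing run as far as possible
            while j < n and x[j - 1] > x[j]:
                j += 1
        cut.append(j)    # j is the next cutpoint
        i = j
    return cut
```

For example, `cutpoints([1, 2, 3, 2, 1])` yields `[0, 3, 5]`: a non-decreasing segment `[1, 2, 3]` followed by a decreasing segment `[2, 1]`.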

## Challenge 2, part A

For us this challenge was significantly harder, since solving it elegantly requires coming up with an intuitive characterization of the stack involved. Also, using the original characterization with 1-based indices is quite error-prone. The overview shows that we again solved the problem using lists with the same operations as in the first challenge.

The main specification defines a function "leftneighbor(x,n)" that computes the left neighbor m of the element at position n (0-based). The function returns 0 if there is no left neighbor, and m + 1 if the left neighbor exists and is at position m (0-based). "leftneighbors" computes the left neighbors of all elements in the list. "first-lower(stack, x, a)" imitates the inner loop: it computes the topmost element of the stack that is lower than "a" (so it is the main function needed in the loop's invariant). The crucial predicate characterizing the stack is "valid"; it is the main formula of the invariant of both loops. It states that if the stack is [a_1, a_2, ..., a_n] (with a_1 being the top of the stack), then each a_(i+1) is the left neighbor of a_i (and the last element a_n has no left neighbor).

With these predicates correctly defined, the proof of total correctness of the program reduces to essentially one lemma (the lemma, node 16, is applied to close goal no. 15): the left neighbor of the element considered when the inner loop is entered is indeed the first one on the stack that is lower. This lemma is hard to prove, since it needs a generalization to be inductively provable.
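For intuition, the stack algorithm the challenge is about, with the encoding described above (0 for "no left neighbor", m + 1 for "left neighbor at position m"), can be sketched in Python; the function and variable names are ours:

```python
def left_neighbors(x):
    """For each position n of x, compute 0 if the element at n has no
    left neighbor, and m + 1 if its left neighbor (the closest
    position m < n with x[m] < x[n]) is at position m.

    Stack invariant (top = last list element): each entry is the left
    neighbor of the entry above it, and the bottommost entry has no
    left neighbor.
    """
    result = []
    stack = []   # positions, forming the chain of left neighbors
    for n, a in enumerate(x):
        # inner loop: pop until the top element is lower than a
        while stack and x[stack[-1]] >= a:
            stack.pop()
        result.append(stack[-1] + 1 if stack else 0)
        stack.append(n)
    return result
```

For example, `left_neighbors([3, 1, 4, 1, 5])` yields `[0, 0, 2, 0, 4]`: the 4 at position 2 has the 1 at position 1 as its left neighbor (encoded as 2), and the final 5 has the 1 at position 3 as its left neighbor (encoded as 4).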

## Challenge 3, part A

Of the three challenges, this was the hardest one for us, since we did not have a library specification of two-dimensional matrices with matrix multiplication available. We tried to use arrays of arrays, but this has the awkward consequence of having to carry around a predicate stating that all inner arrays have the same length. We have now developed such a library specification of two-dimensional arrays, and formalized matrix and vector operations for elements from a ring, with proofs of some standard laws, e.g. that matrix multiplication is associative, or that summation can be split over subsets. Developing these library specifications took around 2-3 days; with the library in place the challenge is now solvable (although solving it within 90 minutes would still be hard).

The overview shows that we solve the challenge by defining a sparse matrix (variable "spmx") as an instance of finite maps, where keys are pairs of natural numbers and values are ring elements. This avoids having to rule out duplicates in a list of triples (as proposed) and allows the elements of the sparse matrix to be processed in any order (as needed by the concurrent version defined in part B). The predicate "okspmx(spmx,r,c)" characterizes sparse matrices that can be converted by "spmx.toMatrix(r,c)" to a matrix with r rows and c columns.

Proving total correctness of the sequential program is then a standard invariance proof, which reduces to one main lemma for showing invariance.

Note that the program is nondeterministic: it chooses an arbitrary key "np" from the sparse matrix, processes the (ring) value "spmx[np]" stored under this key, and removes the key (spmx -- np). Also note that the order of the arguments of the multiplication on the right-hand side of the assignment is reversed compared to what was proposed, since we do not assume multiplication to be commutative.
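The sequential algorithm, with the sparse matrix modelled as a finite map (a Python dict standing in for KIV's finite maps) and the multiplication order as described, can be sketched as follows; the names are ours:

```python
def sparse_mat_vec(spmx, r, x):
    """Multiply a sparse r x len(x) matrix by the vector x.

    spmx maps keys (row, col) to ring elements.  Keys are processed in
    an arbitrary order, and the matrix element is kept on the left of
    the multiplication, so commutativity of * is not needed.
    """
    y = [0] * r
    remaining = dict(spmx)          # the yet unprocessed part
    while remaining:
        key, v = remaining.popitem()  # choose an arbitrary key, remove it
        i, j = key
        y[i] = y[i] + v * x[j]        # matrix element on the left
    return y
```

For example, `sparse_mat_vec({(0, 0): 1, (0, 1): 2, (1, 1): 3}, 2, [4, 5])` yields `[14, 15]`, the product of the dense matrix [[1, 2], [0, 3]] with the vector [4, 5].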

Since the program is nondeterministic, total correctness is now expressed as "pre ⊦ ⟪prog⟫ post" (all runs terminate, do not throw exceptions, and establish the postcondition).

## Challenge 3, part B

Proving partial correctness for an extended concurrent program is done using an experimental translation of concurrent programs to IO automata that we defined recently. The labelled program "CMULT" defined in specification VF3conc is assumed to be run concurrently by a finite set of threads, each identified by a thread id.

Atomic steps of the program go from one label to the next: assignments are atomic, as is the evaluation of tests. An "if* glob = oldval then glob := newval" executes as one atomic step and encodes a compare-and-swap (CAS) instruction: oldval is a local variable that holds the previously loaded content of the global variable glob, and newval is the new value we want to store there, provided no other thread has changed glob since oldval was loaded.
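As an illustration of the CAS semantics only (not of the KIV encoding), a minimal lock-based simulation in Python; the class and function names are our own:

```python
import threading

class CASCell:
    """A cell whose compare-and-swap simulates the atomic step
    'if* glob = oldval then glob := newval'."""

    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        with self._lock:
            return self._value

    def cas(self, oldval, newval):
        """Atomically set the value to newval if it still equals
        oldval; report whether the swap succeeded."""
        with self._lock:
            if self._value == oldval:
                self._value = newval
                return True
            return False

def add_with_cas(cell, delta):
    """Retry loop of the shape used for adding a product to a shared
    result: reload and retry whenever another thread changed the cell
    between the load and the CAS."""
    while True:
        oldval = cell.load()
        if cell.cas(oldval, oldval + delta):
            return
```

The retry loop is the standard pattern for lock-free updates: the CAS only succeeds if no interference happened between loading oldval and the swap.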

The initial sparse matrix is stored in "spmx" and not changed (to have it available in the assertions). "gspmx" is a global variable accessed by all threads that holds the yet unprocessed part. Input "x" and output "y" are also globally accessible. Each thread atomically loads some arbitrary part of "gspmx" into the local variable "lspmx" (this could be implemented by choosing e.g. a row, a column, or a single entry) and removes it from "gspmx". It then processes the part in "lspmx" with the sequential algorithm as before, except that adding the product to the global result "y" uses a CAS. A ghost variable "ghostlspmxf(tid)" makes the local variable "lspmx" of each thread globally visible and accommodates the fact that the result "y" cannot be updated in the same atomic step as "lspmx".

The algorithm is translated to an automaton that interleaves the steps of all threads: only the specification up to "end automaton specification" has to be typed in; the specifications resulting from the translation are shown below it. The state (type "VF3concState") of the automaton consists of a global state containing the variables declared as "state variables", plus a local state and a current program counter for each thread (functions from thread ids to the appropriate type). Steps of the algorithm are split into a precondition and three step functions that update the global state, the local state of the thread, and the pc of the thread, respectively. Every step has an action associated with it. This makes steps deterministic for a given action. It is also useful for proving linearizability by refinement (using the invoke and return actions as visible behavior).

The translation allows assertions to be attached to specific program lines, either directly at a pc (e.g. L4 has the assertion "lspmx ≠ ∅") or for whole ranges of pc values in the section "invariants" (an empty range indicates a global invariant that holds all the time). KIV generates three types of proof obligations (colored green; the generated axioms for the datatypes and step functions are yellow):

• step-conditions correspond to Hoare triples for each step.
• rely-conditions prove that steps of threads other than "tid" obey a rely predicate (defined under "rely condition") for the global state (state variables ending in "0" denote the state after the step). Steps that do not change the global state generate no proof obligation, since the rely must be reflexive.
• stable-conditions ensure that all assertions are stable under the rely (and therefore under steps of other threads). If assertions are the same for several pcs (here: for L2 and L3), the duplicate condition for L3 is not generated.
Together these proof obligations imply that the algorithm upholds the invariant "VF3concInv" (which splits into the global invariant and assertions for each local state). The "initial state" predicate specifies the precondition of the algorithm (a corresponding proof obligation "init" ensures that the invariant holds in initial states). In final states all threads have pc = F, so "ghostlspmxf(tid) = ∅" and "gspmx = ∅" hold (see section "invariants"), and therefore "(U(alltids, ghostlspmxf) ∪ gspmx)" is empty.

Therefore the global invariant "y + (U(alltids, ghostlspmxf) ∪ gspmx).toMatrix(r, # x) * x = spmx.toMatrix(r, # x) * x" simplifies to the desired postcondition "y = spmx.toMatrix(r, # x) * x".

The proof obligations generated do not yet prove the absence of exceptions (generating these conditions is still work in progress). Also no liveness/termination is proved (proving lock-freedom would require significantly more effort).

The proof of partial correctness is similar to the one for the sequential algorithm. Extra effort is needed to show that "gspmx" and the "lspmx" parts of all threads (tracked by "ghostlspmxf(tid)") always store disjoint parts of the matrix. The proof uses the same main lemma as the sequential proof, here in the proof of the step from L6 to L7 (node 25).
