
Preface: Introduction to the Course

Welcome

This electronic book is Volume 6 of the Software Foundations series, which presents the mathematical underpinnings of reliable software.
In this volume, you will learn about the foundations of Separation Logic, a practical approach for the modular verification of imperative programs. In particular, this volume presents the building blocks for constructing a program verification tool. It does not, however, focus on reasoning about data structures and algorithms using Separation Logic. This aspect is covered to some extent by Volume 5 of Software Foundations, which presents Verifiable C, a program logic and proof system for C. For OCaml programs, this aspect will be covered in a yet-to-be-written volume presenting CFML, a tool that builds upon all the techniques presented in this volume.
You are only assumed to understand the material in Software Foundations Volume 1 (Logical Foundations), and the two chapters on Hoare Logic (Hoare and Hoare2) from Software Foundations Volume 2 (Programming Language Foundations). Reading Volume 5 is not a prerequisite. The exposition here is intended for a broad range of readers, from advanced undergraduates to PhD students and researchers.
A large fraction of the contents of this course is also written up in traditional LaTeX-style presentation, in the ICFP'20 article: Separation Logic for Sequential Programs, by Arthur Charguéraud. The long version of this paper is available from: http://www.chargueraud.org/research/2020/seq_seplogic/seq_seplogic.pdf.
This paper includes, in particular, a 5-page historical survey of contributions to mechanized presentations of Separation Logic, featuring 100+ citations. For a broader survey of Separation Logic, we recommend Peter O'Hearn's 2019 survey, which is available from https://dl.acm.org/doi/10.1145/3211968, including the supplemental material linked near the bottom of that page.

Separation Logic

Separation Logic is a program logic: it enables one to establish that a program satisfies its specification. Specifications are expressed using triples of the form {H} t {Q}. Whereas in Hoare logic the precondition H and the postcondition Q describe the whole memory state, in Separation Logic, H and Q describe only a fragment of the memory state. This fragment includes the resources necessary to the execution of t.
A key ingredient of Separation Logic is the frame rule, which enables modular proofs. It is stated as follows.
  { H } t { Q }
  -------------------------
  { H \* H' } t { Q \* H' }
The above rule asserts that if a term t executes correctly with the resources H and produces Q, then the term t admits the same behavior in a larger memory state, described by the union of H with a disjoint component H', producing the postcondition Q extended with that same resource H' unmodified. The star symbol \* denotes the separating conjunction operator of Separation Logic.
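As a concrete instance, anticipating the notation p ~~> n (introduced in chapter Basic) for describing a single memory cell at location p with contents n, framing a specification of an operation that increments the cell at p by an unrelated cell at q gives:

```
  { p ~~> n }  incr p  { p ~~> (n+1) }
  ---------------------------------------------------------  (frame, with H' = q ~~> m)
  { p ~~> n \* q ~~> m }  incr p  { p ~~> (n+1) \* q ~~> m }
```

The cell at q is untouched by the operation, so it appears unchanged on both sides of the derived triple.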
Separation Logic can be exploited in three kinds of tools.
  • Automated proofs: the user provides only the code, and the tool locates sources of potential bugs. A good automated tool provides feedback that, most of the time, is relevant.
  • Semi-automated proofs: the user provides not just the code, but also specifications and invariants. The tool then leverages automated solvers (e.g., SMT solvers) to discharge proof obligations.
  • Interactive proofs: the user provides not just the code and its specifications, but also a detailed proof script justifying the correctness of the code. These proofs may be worked on interactively using a proof assistant such as Coq.
The present course focuses on the third approach, that is, the integration of Separation Logic in an interactive proof assistant. This approach has been successfully put into practice throughout the world, using various proof assistants (Coq, Isabelle/HOL, HOL), targeting different languages (Assembly, C, SML, OCaml, Rust...), and for verifying various kinds of programs, ranging from low-level operating system kernels to high-level data structures and algorithms.

Separation Logic in a proof assistant

The benefits of exploiting Separation Logic in a proof assistant include at least four major points:
  • higher-order logic provides virtually unlimited expressiveness that enables formulating arbitrarily complex specifications and invariants;
  • a proof assistant provides a unified framework for proving both the implementation details of the code and the underlying mathematical results, e.g., results from number theory or graph theory;
  • proof scripts may be easily maintained to reflect changes to the source code;
  • the fact that Separation Logic is formalized in the proof assistant provides high confidence in the correctness of the tool.
Pretty much all the tools that leverage Separation Logic in a proof assistant are constructed following the same schema:
  • A formalization of the syntax and semantics of the source language. This is called a deep embedding of the programming language.
  • A definition of Separation Logic predicates as predicates from higher-order logic. This is called a shallow embedding of the program logic.
  • A definition of Separation Logic triples as a predicate, the statements of the reasoning rules as lemmas, and the proof of these reasoning rules with respect to the semantics.
  • An infrastructure that consists of lemmas, tactics and notation, allowing for the verification of concrete programs to be carried out through relatively concise proof scripts.
  • Applications of this infrastructure to the verification of concrete programs.
The purpose of this course is to explain how to set up such a construction. To that end, we consider in this course the simplest possible variant of Separation Logic, and apply it to a minimalistic imperative programming language. The language essentially consists of a lambda-calculus with references. This language admits a simple semantics and avoids in particular the need to distinguish between stack variables and heap-allocated variables. Advanced chapters from the course explain how to add support for loops, records, arrays, and n-ary functions.

Several Possible Depths of Reading

The material is organized in such a way that it can be easily adapted to the amount of time that the reader is ready to invest in the course.
The course contains 13 chapters, not counting the Preface and Postface. These chapters are organized in 3 major parts, as pictured in the dependency diagram.
  • The short curriculum includes only the first 5 chapters (ranging from chapter Basic to chapter Rules).
  • The medium curriculum includes 3 additional chapters (ranging from chapter WPsem to chapter Wand).
  • The full curriculum includes 5 more chapters (ranging from chapter Partial to chapter Rich).
In addition, each chapter is decomposed into three parts.
  • The First Pass section presents the most important ideas only. The course is designed in such a way that it is possible to read only the First Pass section of every chapter. The reader may be interested in going through all these sections to get the big picture, before revisiting each chapter in more detail.
  • The More Details section presents additional material explaining in more depth the meaning and the consequences of the key results. This section also contains descriptions of the most important proofs. By default, readers would eventually read all this material.
  • The Optional Material section typically contains the remaining proofs, as well as discussions of alternative definitions. The Optional Material sections are all independent from each other. They would typically be of interest to readers who want to understand every detail, readers seeking a deep understanding of a particular notion, and readers looking for answers to specific questions.

List of Chapters

The first two chapters, namely chapters Basic and Repr, give a primer on how to prove imperative programs in Separation Logic, thus focusing on the end user's perspective. All the other chapters focus on the implementor's perspective, explaining how Separation Logic is defined and how a practical verification tool can be constructed.
The list of chapters appears below. The numbering corresponds to teaching units: if the chapters were taught as part of a University course, one could presumably aim to cover one teaching unit per week.
  • (1) Basic: introduction to the core features of Separation Logic, illustrated using short programs manipulating references.
  • (1) Repr: introduction to representation predicates in Separation Logic, in particular for describing mutable lists and trees.
  • (2) Hprop: definition of the core operators of Separation Logic, of Hoare triples, and of Separation Logic triples.
  • (2) Himpl: definition of the entailment relation, statement and proofs of its fundamental properties, and description of the simplification tactic for entailment.
  • (3) Rules: statement and proofs of the reasoning rules of Separation Logic, and example proofs of programs using these rules.
  • (4) WPsem: definition of the semantic notion of weakest precondition, and statement of rules in weakest-precondition style.
  • (4) WPgen: presentation of a function that effectively computes the weakest precondition of a term, independently of its specification.
  • (5) Wand: introduction of the magic wand operator and of the ramified frame rule, and extension of the weakest-precondition generator for handling local function definitions.
  • (5) Affine: description of a generalization of Separation Logic with affine heap predicates, which are useful, in particular, for handling garbage-collected programming languages.
  • (6) Struct: specification of array and record operations, and encoding of these operations using pointer arithmetic.
  • (6) Rich: description of the treatment of additional language constructs, including loops, assertions, and n-ary functions.
  • (7) Nondet: definition of triples for non-deterministic programming languages.
  • (7) Partial: definition of triples for partial correctness only, i.e., for not requiring termination proofs.

Other Distributed Files

The chapters listed above depend on a number of auxiliary files, which the reader does not need to go through, but might be interested in looking at, either by curiosity, or for checking out a specific implementation detail.
  • LibSepReference: a long file that defines the program verification tool that is used in the first two chapters, and whose implementation is discussed throughout the other chapters. Each chapter from the course imports this module, as opposed to importing the chapters that precede it.
  • LibSepVar: a formalization of program variables, together with a bunch of notations for parsing variables.
  • LibSepFmap: a formalization of finite maps, which are used for representing the memory state.
  • LibSepSimpl: a functor that implements a powerful tactic for automatically simplifying entailments in Separation Logic.
  • LibSepMinimal: a minimalistic formalization of a soundness proof for Separation Logic, corresponding to the definitions and proofs presented in the ICFP'20 paper mentioned earlier.
  • All other Lib* files are imports from the TLC library, which is described next.
The TLC library is a collection of general purpose theory and tactics developed over the years by Arthur Charguéraud. The TLC library is exploited in this course to streamline the presentation. TLC provides, in particular, extensions for classical logic and tactics that are particularly well-suited for meta-theory. Prior knowledge of TLC is not required, and all exercises can be completed without using TLC tactics.
The classical logic aspects of TLC are presented at the moment they appear in the course. The TLC tactics are also briefly described upon their first occurrence. Moreover, most of these tactics are presented in the chapter UseTactics of Software Foundations Volume 2 (Programming Language Foundations).

Practicalities

Feedback Welcome

If you intend to use this course either in class or for self-study, the author, Arthur Charguéraud, would love to hear from you. Just knowing in which context the course has been used and how much of the course students were able to cover is very valuable information.
If you plan on providing any non-small amount of feedback, do not hesitate to ask the author to be added as a contributor to the GitHub repository. In particular, please do not hesitate to improve the formulation of the English sentences throughout this volume, as the author is not a native speaker.

Exercises

Each chapter includes numerous exercises. The star rating scheme is described in the Preface of Software Foundations Volume 1 (Logical Foundations).
Disclaimer: the difficulty ratings currently in place are highly speculative! Your feedback is very much welcome.
Disclaimer: the auto-grading system has not been tested for this volume. If you are interested in using auto-grading for this volume, please contact the author.

System Requirements

The Preface of Software Foundations Volume 1 (Logical Foundations) describes how to install Coq. The files you are reading have been tested with Coq 8.12.

Note for CoqIDE Users

CoqIDE typically works better with its asynchronous proof mode disabled. To load all the course files in CoqIDE, use the following command line.
  coqide -async-proofs off -async-proofs-command-error-resilience off -Q . SLF Basic.v &

Recommended Citation Format

If you want to refer to this volume in your own writing, please do so as follows:
   @book{Chargueraud:SF6,
     author    = {Arthur Charguéraud},
     title     = {Separation Logic Foundations},
     series    = {Software Foundations},
     volume    = {6},
     year      = {2021},
     publisher = {Electronic textbook},
     note      = {Version 1.0, \url{http://softwarefoundations.cis.upenn.edu}},
   }

Thanks

The development of the technical infrastructure for the Software Foundations series has been supported, in part, by the National Science Foundation under the NSF Expeditions grant 1521523, The Science of Deep Specification.

Basic: Basic Proofs in Separation Logic

Set Implicit Arguments.
From SLF Require Import LibSepReference.
Import ProgramSyntax DemoPrograms.

Implicit Types n m : int.
Implicit Types p q : loc.

A First Taste

This chapter gives an overview of the basic features of Separation Logic. Those features are illustrated using example programs, which are specified and verified using a particular Separation Logic framework, the construction of which is presented throughout the course.
This chapter introduces the following notions:
  • "Heap predicates", which are used to describe memory states in Separation Logic.
  • "Specification triples", of the form triple t H Q, which relate a term t, a precondition H, and a postcondition Q.
  • "Entailment proof obligations", of the form H ==> H' or Q ===> Q', which assert that a pre- or post-condition is weaker than another one.
  • "Verification proof obligations", of the form PRE H CODE F POST Q, which internally leverage a form of weakest-precondition.
  • Custom proof tactics, called "x-tactics", which are specialized tactics for carrying out the verification proofs.
The "heap predicates" used to describe memory states are presented throughout the chapter. They include:
  • p ~~> n, which describes a memory cell at location p with contents n,
  • \[], which describes an empty state,
  • \[P], which also describes an empty state, and moreover asserts that the proposition P is true,
  • H1 \* H2, which describes a state made of two disjoint parts: H1 and H2,
  • \exists x, H, which is used to quantify variables in postconditions.
All these heap predicates admit the type hprop, which describes predicates over memory states. Technically, hprop is defined as state → Prop.
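As a preview of chapter Hprop, where these definitions are introduced and explained, the core heap predicates can be realized as ordinary higher-order predicates over states. The sketch below follows the definitions used later in the course; Fmap refers to the finite-map library LibSepFmap.

```coq
Definition hprop := state → Prop.  (* state: a finite map from locations to values *)

Definition hempty : hprop :=                   (* written \[] *)
  fun h ⇒ h = Fmap.empty.

Definition hpure (P:Prop) : hprop :=           (* written \[P] *)
  fun h ⇒ h = Fmap.empty ∧ P.

Definition hsingle (p:loc) (v:val) : hprop :=  (* written p ~~> v *)
  fun h ⇒ h = Fmap.single p v.

Definition hstar (H1 H2:hprop) : hprop :=      (* written H1 \* H2 *)
  fun h ⇒ ∃ h1 h2, H1 h1 ∧ H2 h2
                 ∧ Fmap.disjoint h1 h2
                 ∧ h = Fmap.union h1 h2.
```

In particular, observe that the star operator bakes in the disjointness of the two sub-states, a point that matters when reasoning about aliasing later in this chapter.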
The verification of practical programs is carried out using x-tactics, identified by the leading "x" letter in their name. These tactics include:
  • xwp or xtriple to begin a proof,
  • xapp to reason about an application,
  • xval to reason about a return value,
  • xif to reason about a conditional,
  • xsimpl to simplify or prove entailments (H ==> H' or Q ===> Q').
In addition to x-tactics, the proof scripts exploit standard Coq tactics, as well as tactics from the TLC library. The relevant TLC tactics, which are described upon their first use, include:
  • math, which is a variant of lia for proving mathematical goals,
  • induction_wf, which sets up proofs by well-founded induction,
  • gen, which is a shorthand for generalize dependent, a tactic also useful to set up induction principles.
For simplicity, we assume all programs to be written in A-normal form, that is, with all intermediate expressions being named by a let-binding. For each program, we first show its code using OCaml-style syntax, then formally define the code in Coq using an ad-hoc notation system, featuring variable names and operators all prefixed by a quote symbol.
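For example, writing f and g for two arbitrary, hypothetical functions, the nested call f (g x) is not in A-normal form; introducing a let-binding for the intermediate result puts it in that form. In OCaml-style syntax:

```
  f (g x)           (* not in A-normal form: the argument is itself a call *)

  let y = g x in    (* A-normal form: the intermediate result is named *)
  f y
```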

The Increment Function

As a first example, consider the function incr, which increments the contents of a mutable cell that stores an integer. In OCaml syntax, this function is defined as:
    fun p ->
      let n = !p in
      let m = n + 1 in
      p := m
We describe this program in Coq using a custom set of notations for the syntax of imperative programs. (There is no need to learn how to write programs in this ad-hoc syntax: source code is provided for all the programs involved in this course.) The definition for the function incr appears below. This function is a value, so it has, like all values in our framework, the type val.
Definition incr : val :=
  <{ fun 'p ⇒
      let 'n = ! 'p in
      let 'm = 'n + 1 in
      'p := 'm }>.
The quotes that appear in the source code are used to disambiguate between the keywords and variables associated with the source code, and those from the corresponding Coq keywords and variables. The fun keyword should be read like the fun keyword from OCaml.
The specification of incr p, shown below, is expressed using a "Separation Logic triple". A triple is formally expressed by a proposition of the form triple t H Q. By convention, we write the precondition H and the postcondition Q on separate lines.
Lemma triple_incr : ∀ (p:loc) (n:int),
  triple (incr p)
    (p ~~> n)
    (fun _ ⇒ (p ~~> (n+1))).
Here p denotes the address in memory of the reference cell provided as argument to the increment function. In technical vocabulary, p is the "location" of a reference cell. All locations have type loc, thus the argument p of incr has type loc.
In Separation Logic, the "heap predicate" p ~~> n describes a memory state in which the contents of the location p is the value n. In the present example, n denotes an integer value.
The behavior of the operation incr p consists of updating the memory state by incrementing the contents of the cell at location p, so that its new contents are n+1. Thus, the memory state posterior to the increment operation can be described by the heap predicate p ~~> (n+1).
The result value returned by incr p is the unit value, which does not carry any useful information. In the specification of incr, the postcondition is of the form fun _ ⇒ ... to indicate that there is no need to bind a name for the result value.
The general pattern of a specification thus includes:
  • Quantification of the arguments of the functions---here, the variable p.
  • Quantification of the "ghost variables" used to describe the input state---here, the variable n.
  • The application of the predicate triple to the function application incr p, which is the term being specified by the triple.
  • The precondition describing the input state---here, the predicate p ~~> n.
  • The postcondition describing both the output value and the output state. The general pattern is fun r ⇒ H', where r names the result and H' describes the final state. Here, the final state is described by p ~~> (n+1).
Note that we have to write p ~~> (n+1) using parentheses around n+1, because p ~~> n+1 would get parsed as (p ~~> n) + 1.
Our next step is to prove the specification lemma triple_incr which specifies the behavior of the function incr. We conduct the verification proof using x-tactics.
Proof.
xwp begins the verification proof. The proof obligation is displayed using the custom notation PRE H CODE F POST Q. The CODE part does not look very nice, but one should be able to somehow recognize the body of incr. Indeed, if we ignore the details and perform the alpha-renaming from v to n and v0 to m, the CODE section reads like:
              Let' n := (App val_get p) in
              Let' m := (App val_add n 1) in
              App val_set p m
which is somewhat similar to the original source code.
  xwp.
The remainder of the proof performs some form of symbolic execution. One should not attempt to read the full proof obligation at each step, but instead only look at the current state, described by the PRE part (here, p ~~> n), and at the first line only of the CODE part, where one can read the code of the next operation to reason about. Each function call is handled using the tactic xapp.
We reason about the operation !p, which reads the contents of p; this read operation returns the value n.
  xapp.
We reason about the addition operation n+1.
  xapp.
We reason about the update operation p := n+1, thereby updating the state to p ~~> (n+1).
  xapp.
At this stage, the proof obligation takes the form _ ==> _, which requires checking that the final state matches what is claimed in the postcondition. We discharge it using the tactic xsimpl.
  xsimpl.
Qed.
The command below associates the specification lemma triple_incr with the function incr in a hint database, so that if we subsequently verify a program that features a call to incr, the xapp tactic is able to automatically invoke the lemma triple_incr.
Hint Resolve triple_incr : triple.
The proof framework can be used without any knowledge about the implementation of the notation PRE H CODE F POST Q nor about the implementation of the x-tactics. Readers with prior experience in program verification may nevertheless be interested to know that PRE H CODE F POST Q is defined as the entailment H ==> F Q, where F is a form of weakest-precondition that describes the behavior of the code.

A Function with a Return Value

As a second example, let's specify a function that performs simple arithmetic computations. The function, whose code appears below, expects an integer argument n, computes a as n+1, then computes b as n-1, and finally returns a+b. The function thus always returns 2*n.
Definition example_let : val :=
  <{ fun 'n ⇒
      let 'a = 'n + 1 in
      let 'b = 'n - 1 in
      'a + 'b }>.
We specify this function using the triple notation, in the form triple (example_let n) H (fun r ⇒ H'), where r, of type val, denotes the output value.
To denote the fact that the input state is empty, we write \[] in the precondition.
To denote the fact that the output state is empty, we could use \[]. Yet, if we write just fun r ⇒ \[] as the postcondition, we would have said nothing about the output value r produced by a call to example_let. Instead, we would like to specify that the result r is equal to 2*n. To that end, we write the postcondition fun r ⇒ \[r = 2*n], which actually stands for fun (r:val) ⇒ \[r = val_int (2*n)], where the coercion val_int translates the integer value 2*n into the corresponding value of type val from the programming language.
Lemma triple_example_let : ∀ (n:int),
  triple (example_let n)
    \[]
    (fun r ⇒ \[r = 2*n]).
The verification proof script is very similar to the previous one. The x-tactic xapp performs symbolic execution of the code. Ultimately, we need to check that the expression computed, (n + 1) + (n - 1), is equal to the specified result, that is, 2*n. We exploit the TLC tactic math to prove this mathematical result.
Proof.
  xwp. xapp. xapp. xapp. xsimpl. math.
Qed.

The Function quadruple

Consider the function quadruple, which expects an integer n and returns its quadruple, that is, the value 4*n.
Definition quadruple : val :=
  <{ fun 'n ⇒
       let 'm = 'n + 'n in
       'm + 'm }>.

Exercise: 1 star, standard, especially useful (triple_quadruple)

Specify and verify the function quadruple to express that it returns 4*n, following the template of triple_example_let.
(* FILL IN HERE *)

The Function inplace_double

Consider the function inplace_double, which expects a reference to an integer, reads that reference twice, then updates it with the sum of the two values that were read.
Definition inplace_double : val :=
  <{ fun 'p ⇒
       let 'n = !'p in
       let 'm = 'n + 'n in
       'p := 'm }>.

Exercise: 1 star, standard, especially useful (triple_inplace_double)

Specify and verify the function inplace_double, following the template of triple_incr.
(* FILL IN HERE *)

Separation Logic Operators

Increment of Two References

Consider the following function, which expects the addresses of two reference cells, and increments both of them.
Definition incr_two : val :=
  <{ fun 'p 'q ⇒
       incr 'p;
       incr 'q }>.
The specification of this function takes the form triple (incr_two p q) H (fun _ ⇒ H'), where the underscore stands for the result value, of type unit.
The precondition describes two reference cells: p ~~> n and q ~~> m. To assert that the two cells are distinct from each other, we separate their descriptions with the operator \*. This operator is called the "separating conjunction" of Separation Logic, and is also known as the "star" operator. Thus, the precondition is (p ~~> n) \* (q ~~> m), or simply p ~~> n \* q ~~> m.
The postcondition describes the final state as p ~~> (n+1) \* q ~~> (m+1), where the contents of both cells are increased by one unit compared with the precondition.
The specification triple for incr_two is thus as follows.
Lemma triple_incr_two : ∀ (p q:loc) (n m:int),
  triple (incr_two p q)
    (p ~~> n \* q ~~> m)
    (fun _ ⇒ p ~~> (n+1) \* q ~~> (m+1)).
The verification proof follows the usual pattern. Note that, from here on, we use the command Proof using. instead of just Proof., to enable asynchronous proof checking, a feature that allows for faster navigation in scripts when using CoqIDE.
Proof using.
  xwp. xapp. xapp. xsimpl.
Qed.
We register the specification triple_incr_two in the database, to enable reasoning about calls to incr_two.
Hint Resolve triple_incr_two : triple.

Aliased Arguments

The specification triple_incr_two correctly describes calls to the function incr_two when providing it with two distinct reference cells. Yet, it says nothing about a call of the form incr_two p p.
Indeed, in Separation Logic, a state described by p ~~> n cannot be matched against a state described by p ~~> n \* p ~~> n, because the star operator requires its operands to correspond to disjoint pieces of state.
What happens if we nevertheless try to exploit triple_incr_two to reason about a call of the form incr_two p p, that is, with aliased arguments?
Let's find out, by considering the operation aliased_call p, which does execute such a call.
Definition aliased_call : val :=
  <{ fun 'p ⇒
       incr_two 'p 'p }>.
A call to aliased_call p should increase the contents of p by 2. This property can be specified as follows.
Lemma triple_aliased_call : ∀ (p:loc) (n:int),
  triple (aliased_call p)
    (p ~~> n)
    (fun _ ⇒ p ~~> (n+2)).
If we attempt the proof, we get stuck. Observe how xapp reports its failure to make progress.
Proof using.
  xwp. xapp.
Abort.
In the above proof, we get stuck with a proof obligation of the form: \[] ==> (p ~~> ?m) \* _, which requires showing that from an empty state one can extract a reference p ~~> ?m for some integer ?m.
What happened is that when matching the current state p ~~> n against p ~~> ?n \* p ~~> ?m (which corresponds to the precondition of triple_incr_two with q = p), the internal simplification tactic was able to cancel out p ~~> n in both expressions, but then got stuck with matching the empty state against p ~~> ?m.
The issue here is that the specification triple_incr_two is specialized for the case of non-aliased references.
It is possible to state and prove an alternative specification for the function incr_two, to cover the case of aliased arguments. Its precondition mentions only one reference, p ~~> n, and its postcondition asserts that its contents get increased by two units.
This alternative specification can be stated and proved as follows.
Lemma triple_incr_two_aliased : ∀ (p:loc) (n:int),
  triple (incr_two p p)
    (p ~~> n)
    (fun _ ⇒ p ~~> (n+2)).
Proof using.
  xwp. xapp. xapp. xsimpl. math.
Qed.
By exploiting the alternative specification, we are able to verify the specification of aliased_call p, which invokes incr_two p p. In order to indicate to the xapp tactic that it should invoke the lemma triple_incr_two_aliased and not triple_incr_two, we provide that lemma as argument to xapp, by writing xapp triple_incr_two_aliased.
Lemma triple_aliased_call : ∀ (p:loc) (n:int),
  triple (aliased_call p)
    (p ~~> n)
    (fun _ ⇒ p ~~> (n+2)).
Proof using.
  xwp. xapp triple_incr_two_aliased. xsimpl.
Qed.

A Function that Takes Two References and Increments One

Consider the following function, which expects the addresses of two reference cells, and increments only the first one.
Definition incr_first : val :=
  <{ fun 'p 'q ⇒
       incr 'p }>.
We can specify this function by describing its input state as p ~~> n \* q ~~> m, and describing its output state as p ~~> (n+1) \* q ~~> m. Formally:
Lemma triple_incr_first : ∀ (p q:loc) (n m:int),
  triple (incr_first p q)
    (p ~~> n \* q ~~> m)
    (fun _ ⇒ p ~~> (n+1) \* q ~~> m).
Proof using.
  xwp. xapp. xsimpl.
Qed.
Observe, however, that the second reference plays absolutely no role in the execution of the function. In fact, we might equally well have described in the specification only the existence of the reference that the code actually manipulates.
Lemma triple_incr_first' : ∀ (p q:loc) (n:int),
  triple (incr_first p q)
    (p ~~> n)
    (fun _ ⇒ p ~~> (n+1)).
Proof using.
  xwp. xapp. xsimpl.
Qed.
Interestingly, the specification triple_incr_first, which mentions the two references, is derivable from the specification triple_incr_first', which mentions only the first reference.
The proof of this fact uses the tactic xtriple, which turns a specification triple of the form triple t H Q into the form PRE H CODE t POST Q, thereby enabling this proof obligation to be processed by xapp.
Here, we invoke the tactic xapp triple_incr_first', to exploit the specification triple_incr_first'.
Lemma triple_incr_first_derived : ∀ (p q:loc) (n m:int),
  triple (incr_first p q)
    (p ~~> n \* q ~~> m)
    (fun _ ⇒ p ~~> (n+1) \* q ~~> m).
Proof using.
  xtriple. xapp triple_incr_first'. xsimpl.
Qed.
More generally, in Separation Logic, if a specification triple holds, then this triple remains valid when we add the same heap predicate to both the precondition and the postcondition. This is the "frame" principle, a key modularity feature that we'll come back to later on in the course.
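As a preview of chapter Rules, where the frame rule is stated and proved, its Coq statement takes roughly the following form; the lemma triple_incr_first_derived above is an instance of this principle.

```coq
(* Sketch: any heap predicate H' describing state untouched by the
   term t can be added to both the precondition and the postcondition. *)
Parameter triple_frame : ∀ (t:trm) (H:hprop) (Q:val→hprop) (H':hprop),
  triple t H Q →
  triple t (H \* H') (fun v ⇒ Q v \* H').
```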

Transfer from one Reference to Another

Consider the transfer function, whose code appears below.
Definition transfer : val :=
  <{ fun 'p 'q ⇒
       let 'n = !'p in
       let 'm = !'q in
       let 's = 'n + 'm in
       'p := 's;
       'q := 0 }>.

Exercise: 1 star, standard, especially useful (triple_transfer)

State and prove a lemma called triple_transfer specifying the behavior of transfer p q in the case where p and q denote two distinct references.
(* FILL IN HERE *)

Exercise: 1 star, standard, especially useful (triple_transfer_aliased)

State and prove a lemma called triple_transfer_aliased specifying the behavior of transfer when it is applied twice to the same argument. It should take the form triple (transfer p p) H Q.
(* FILL IN HERE *)

Specification of Allocation

Consider the operation ref v, which allocates a memory cell with contents v. How can we specify this operation using a triple?
The precondition of this triple should be the empty heap predicate, written \[], because the allocation can execute in an empty state.
The postcondition should assert that the output value is a pointer p, such that the final state is described by p ~~> v.
It would be tempting to write the postcondition fun p ⇒ p ~~> v. Yet, the triple would be ill-typed, because the postcondition of a triple must be of type val → hprop, and p is an address of type loc.
Instead, we need to write the postcondition in the form fun (r:val) ⇒ H', where r denotes the result value, and somehow we need to assert that r is a value of the form val_loc p, for some location p, where val_loc is the constructor that injects locations into the grammar of program values.
To formally quantify the variable p, we use an existential quantifier for heap predicates, written \exists. The correct postcondition for ref v is fun (r:val) ⇒ \exists (p:loc), \[r = val_loc p] \* (p ~~> v).
The complete statement of the specification appears below. Note that the primitive operation ref v is written ref v in the Coq syntax.
Parameter triple_ref : ∀ (v:val),
  triple <{ ref v }>
    \[]
    (fun r ⇒ \exists p, \[r = val_loc p] \* p ~~> v).
The pattern fun r ⇒ \exists p, \[r = val_loc p] \* H occurs whenever a function returns a pointer. Thus, this pattern appears pervasively. To improve concision, we introduce a specific notation for this pattern, shortening it to funloc p ⇒ H.
Notation "'funloc' p '=>' H" :=
  (fun r ⇒ \exists p, \[r = val_loc p] \* H)
  (at level 200, p ident, format "'funloc' p '=>' H").
Using this notation, the specification triple_ref can be reformulated more concisely, as follows.
Parameter triple_ref' : ∀ (v:val),
  triple <{ ref v }>
    \[]
    (funloc p ⇒ p ~~> v).
Remark: the CFML tool features a technique that generalizes the notation funloc to all return types, by leveraging type-classes. Unfortunately, the use of type-classes involves a number of technicalities that we wish to avoid in this course. For that reason, we employ only the funloc notation, and use existential quantifiers explicitly for other types.

Allocation of a Reference with Greater Contents

Consider the following function, which takes as argument the address p of a memory cell with contents n, allocates a fresh memory cell with contents n+1, then returns the address of that fresh cell.
Definition ref_greater : val :=
  <{ fun 'p
       let 'n = !'p in
       let 'm = 'n + 1 in
       ref 'm }>.
The precondition of ref_greater needs to assert the existence of a cell p ~~> n. The postcondition of ref_greater should assert the existence of two cells, p ~~> n and q ~~> (n+1), where q denotes the location returned by the function. The postcondition is thus written funloc q ⇒ p ~~> n \* q ~~> (n+1), which is a shorthand for fun (r:val) ⇒ \exists q, \[r = val_loc q] \* p ~~> n \* q ~~> (n+1).
The complete specification of ref_greater is:
Lemma triple_ref_greater : ∀ (p:loc) (n:int),
  triple (ref_greater p)
    (p ~~> n)
    (funloc q ⇒ p ~~> n \* q ~~> (n+1)).
Proof using.
  xwp. xapp. xapp. xapp. intros q. xsimpl. auto.
Qed.
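The behavior just specified can be replayed in plain OCaml; the sketch below mirrors ref_greater with native references and is not part of the formal development.

```ocaml
(* Plain OCaml analogue of ref_greater: read n from p, then allocate
   a fresh reference holding n+1. The original cell p is unchanged. *)
let ref_greater p =
  let n = !p in
  let m = n + 1 in
  ref m
```

Calling ref_greater on a reference holding 3 returns a fresh reference holding 4, while the original reference still holds 3, matching the two cells p ~~> n and q ~~> (n+1) in the postcondition.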

Exercise: 2 stars, standard, especially useful (triple_ref_greater_abstract)

State another specification for the function ref_greater, called triple_ref_greater_abstract, with a postcondition that does not reveal the contents of the fresh reference q, but instead only asserts that it is greater than the contents of p. To that end, introduce in the postcondition an existentially quantified variable called m, with m > n.
Then, derive the new specification from the former one, following the proof pattern employed in the proof of triple_incr_first_derived.
(* FILL IN HERE *)

Deallocation in Separation Logic

Separation Logic tracks allocated data. In its simplest form, Separation Logic enforces that all allocated data is eventually deallocated. Technically, the logic is said to be "linear", as opposed to "affine".
Let us illustrate what happens if we forget to deallocate a reference.
Consider the following program, which computes the successor of an integer n by storing it into a reference cell, then incrementing that reference, and finally returning its contents.
Definition succ_using_incr_attempt :=
  <{ fun 'n
       let 'p = ref 'n in
       incr 'p;
       ! 'p }>.
The operation succ_using_incr_attempt n admits an empty precondition, and a postcondition asserting that the final result is n+1. Yet, if we try to prove this specification, we get stuck.
Lemma triple_succ_using_incr_attempt : ∀ (n:int),
  triple (succ_using_incr_attempt n)
    \[]
    (fun r ⇒ \[r = n+1]).
Proof using.
  xwp. xapp. intros p. xapp. xapp. xsimpl. { auto. }
Abort.
In the above proof script, we get stuck with the entailment p ~~> (n+1) ==> \[], which indicates that the current state contains a reference, whereas the postcondition describes an empty state.
We could attempt to patch the specification to account for the left-over reference. This yields a provable specification.
Lemma triple_succ_using_incr_attempt' : ∀ (n:int),
  triple (succ_using_incr_attempt n)
    \[]
    (fun r ⇒ \[r = n+1] \* \exists p, (p ~~> (n+1))).
Proof using.
  xwp. xapp. intros p. xapp. xapp. xsimpl. { auto. }
Qed.
However, while the above specification is provable, it is not especially useful, since the piece of postcondition \exists p, p ~~> (n+1) is of absolutely no use to the caller of the function. Worse, the caller will have its own state polluted with \exists p, p ~~> (n+1) and will have no way to get rid of it apart from incorporating it into its own postcondition.
The right solution is to alter the code to free the reference once it is no longer needed, as shown below. We assume the source language includes a deallocation operation written free p. (This operation does not exist in OCaml, but let us nevertheless continue using OCaml syntax for writing programs.)
Definition succ_using_incr :=
  <{ fun 'n
       let 'p = ref 'n in
       incr 'p;
       let 'x = ! 'p in
       free 'p;
       'x }>.
This program may now be proved correct with respect to the intended specification. Observe in particular the last call to xapp below, which corresponds to the free operation.
The final result is the value of the variable x. To reason about it, we exploit the tactic xval, as illustrated below.
Lemma triple_succ_using_incr : ∀ n,
  triple (succ_using_incr n)
    \[]
    (fun r ⇒ \[r = n+1]).
Proof using.
  xwp. xapp. intros p. xapp. xapp. xapp. xval. xsimpl. auto.
Qed.
Remark: if we verify programs written in a language equipped with a garbage collector (like, e.g., OCaml), we need to tweak the Separation Logic to account for the fact that some heap predicates can be freely discarded from postconditions. This variant of Separation Logic will be described in the chapter Affine.
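Indeed, in plain OCaml the free operation simply does not exist, and the sketch below relies on the garbage collector instead; it is only an executable analogue of succ_using_incr, not part of the formal development.

```ocaml
let incr p = p := !p + 1

(* OCaml analogue of succ_using_incr. There is no free: once the
   function returns, the cell p becomes unreachable and the garbage
   collector reclaims it. *)
let succ_using_incr n =
  let p = ref n in
  incr p;
  let x = !p in
  x
```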

Combined Reading and Freeing of a Reference

The function get_and_free takes as argument the address p of a reference cell. It reads the contents of that cell, frees the cell, and returns its contents.
Definition get_and_free : val :=
  <{ fun 'p
      let 'v = ! 'p in
      free 'p;
      'v }>.

Exercise: 2 stars, standard, especially useful (triple_get_and_free)

Prove the correctness of the function get_and_free.
Lemma triple_get_and_free : ∀ p v,
  triple (get_and_free p)
    (p ~~> v)
    (fun r ⇒ \[r = v]).
Proof using. (* FILL IN HERE *) Admitted.
Hint Resolve triple_get_and_free : triple.

Recursive Functions

Axiomatization of the Mathematical Factorial Function

Our next example consists of a program that evaluates the factorial function. To specify this function, we consider a Coq axiomatization of the mathematical factorial function, named facto.
Module Import Facto.

Parameter facto : int → int.

Parameter facto_init : ∀ n,
  0 ≤ n ≤ 1 →
  facto n = 1.

Parameter facto_step : ∀ n,
  n > 1 →
  facto n = n * (facto (n-1)).

End Facto.
Note that we have purposely not specified the value of facto on negative arguments.
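As a sanity check, here is one concrete OCaml function satisfying both facto_init and facto_step. On negative arguments it happens to return 1, one of the many behaviors that the axioms deliberately leave unconstrained.

```ocaml
(* A concrete realization of the facto axioms: returns 1 for n <= 1,
   and n * facto (n-1) for n > 1. *)
let rec facto n =
  if n <= 1 then 1 else n * facto (n - 1)
```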

A Partial Recursive Function, Without State

In the rest of the chapter, we consider recursive functions that manipulate the state. To gently introduce the necessary techniques for reasoning about recursive functions, we first consider a recursive function that does not involve any mutable state.
The function factorec computes the factorial of its argument.
    let rec factorec n =
      if n ≤ 1 then 1 else n * factorec (n-1)
The corresponding code in A-normal form is slightly more verbose.
Definition factorec : val :=
  <{ fix 'f 'n
       let 'b = 'n ≤ 1 in
       if 'b
         then 1
         else let 'x = 'n - 1 in
              let 'y = 'f 'x in
              'n * 'y }>.
A call factorec n can be specified as follows:
  • the initial state is empty,
  • the final state is empty,
  • the result value r is such that r = facto n, when n ≥ 0.
In case the argument is negative (i.e., n < 0), we have two choices:
  • either we explicitly specify that the result is 1 in this case,
  • or we rule out this possibility by requiring n ≥ 0.
Let us follow the second approach, in order to illustrate the specification of partial functions.
There are two possibilities for expressing the constraint n ≥ 0:
  • either we use as precondition \[n ≥ 0],
  • or we place an assumption (n ≥ 0) → _ to the front of the triple, and use an empty precondition, that is, \[].
The two presentations are totally equivalent. By convention, we follow the second presentation, which tends to improve both the readability of specifications and the conciseness of proof scripts.
The specification of factorec is thus stated as follows.
Lemma triple_factorec : ∀ n,
  n ≥ 0 →
  triple (factorec n)
    \[]
    (fun r ⇒ \[r = facto n]).
Let's walk through the proof script in detail, to see in particular how to set up the induction, how to reason about the recursive call, and how to deal with the precondition n ≥ 0.
Proof using. unfold factorec.
We set up a proof by induction on n to obtain an induction hypothesis for the recursive calls. Recursive calls are made each time on smaller values, and the last recursive call is made on n = 1. The well-founded relation downto 1 captures this recursion pattern. The tactic induction_wf is provided by the TLC library to assist in setting up well-founded inductions. It is exploited as follows.
  intros n. induction_wf IH: (downto 1) n.
Observe the induction hypothesis IH. By unfolding downto as done in the next step, this hypothesis asserts that the specification that we are trying to prove already holds for arguments that are smaller than the current argument n, and that are greater than or equal to 1.
  unfold downto in IH.
We may then begin the interactive verification proof.
  intros Hn. xwp.
We reason about the evaluation of the boolean condition n ≤ 1.
  xapp.
The result of the evaluation of n ≤ 1 in the source program is described by the boolean value isTrue (n ≤ 1), which appears in the CODE section after Ifval. The operation isTrue is provided by the TLC library as a conversion function from Prop to bool. The use of such a conversion function (which leverages classical logic) greatly simplifies the process of automatically performing substitutions after calls to xapp. We next perform the case analysis on the test n ≤ 1.
  xif.
Doing so gives two cases.
In the "then" branch, we can assume n ≤ 1.
  { intros C.
Here, the return value is 1.
    xval. xsimpl.
We check that 1 = facto n when n ≤ 1.
    rewrite facto_init; math. }
In the "else" branch, we can assume n > 1.
  { intros C.
We reason about the evaluation of n-1.
    xapp.
We reason about the recursive call, implicitly exploiting the induction hypothesis IH with n-1.
    xapp.
We justify that the recursive call is indeed made on a smaller argument than the current one, that is, n.
    { math. }
We justify that the recursive call is made to a nonnegative argument, as required by the specification.
    { math. }
We reason about the multiplication n * facto(n-1).
    xapp.
We check that n * facto (n-1) matches facto n.
    xsimpl. rewrite (@facto_step n); math. }
Qed.

A Recursive Function with State

The example of factorec was a warmup. Let's now tackle a recursive function involving mutable state.
The function repeat_incr p m makes m successive calls to incr p. Here, m is assumed to be a nonnegative value.
    let rec repeat_incr p m =
      if m > 0 then (
        incr p;
        repeat_incr p (m - 1)
      )
In the concrete syntax for programs, conditionals without an 'else' branch are written if t1 then t2 end. The keyword end avoids ambiguities in cases where this construct is followed by a semicolon.
Definition repeat_incr : val :=
  <{ fix 'f 'p 'm
       let 'b = 'm > 0 in
       if 'b then
         incr 'p;
         let 'x = 'm - 1 in
         'f 'p 'x
       end }>.
The specification for repeat_incr p requires that the initial state contains a reference p with some integer contents n, that is, p ~~> n. Its postcondition asserts that the resulting state is p ~~> (n+m), which is the result of incrementing the reference p a total of m times. Observe that this postcondition is only valid under the assumption that m ≥ 0.
Lemma triple_repeat_incr : ∀ (m n:int) (p:loc),
  m ≥ 0 →
  triple (repeat_incr p m)
    (p ~~> n)
    (fun _ ⇒ p ~~> (n + m)).

Exercise: 2 stars, standard, especially useful (triple_repeat_incr)

Prove the specification of the function repeat_incr. Hint: the structure of the proof resembles that of triple_factorec.
Proof using. unfold repeat_incr. (* FILL IN HERE *) Admitted.
In the previous examples of recursive functions, the induction was always performed on the first argument quantified in the specification. When the decreasing argument is not the first one, additional manipulations are required for re-generalizing into the goal the variables that may change during the course of the induction. Here is an example illustrating how to deal with such a situation.
Lemma triple_repeat_incr' : ∀ (p:loc) (n m:int),
  m ≥ 0 →
  triple (repeat_incr p m)
    (p ~~> n)
    (fun _ ⇒ p ~~> (n + m)).
Proof using.
  (* First, introduce all variables and hypotheses. *)
  intros p n m Hm.
  (* Next, generalize those that are not constant during the recursion. *)
  gen n Hm.
  (* Then, set up the induction. *)
  induction_wf IH: (downto 0) m. unfold downto in IH.
  (* Finally, re-introduce the generalized hypotheses. *)
  intros.
Abort.

Trying to Prove Incorrect Specifications

We established for repeat_incr p m a specification with the constraint m ≥ 0. What if we had omitted this constraint? Where would we get stuck in the proof?
Clearly, something should break, because when m < 0, the call repeat_incr p m terminates immediately. Thus, when m < 0 the final state is the same as the initial state p ~~> n, and not equal to p ~~> (n + m). Let us investigate how the proof breaks.
Lemma triple_repeat_incr_incorrect : ∀ (p:loc) (n m:int),
  triple (repeat_incr p m)
    (p ~~> n)
    (fun _ ⇒ p ~~> (n + m)).
Proof using.
  intros. revert n. induction_wf IH: (downto 0) m. unfold downto in IH.
  intros. xwp. xapp. xif; intros C.
  { (* In the 'then' branch: m > 0 *)
    xapp. xapp. xapp. { math. } xsimpl. math. }
  { (* In the 'else' branch: m ≤ 0 *)
    xval.
At this point, we are requested to justify that the current state p ~~> n matches the postcondition p ~~> (n + m), which amounts to proving n = n + m.
    xsimpl.
Abort.
When the specification features the assumption m ≥ 0, we can prove this equality because the fact that we are in the else branch means that m ≤ 0, thus m = 0. However, without the assumption m ≥ 0, the value of m could very well be negative.
Note that there exists a valid specification for repeat_incr that does not constrain m but instead specifies that the state always evolves from p ~~> n to p ~~> (n + max 0 m).
The corresponding proof script exploits two properties of the max function.
Lemma max_l : ∀ n m,
  n ≥ m →
  max n m = n.
Proof using. introv M. unfold max. case_if; math. Qed.

Lemma max_r : ∀ n m,
  n ≤ m →
  max n m = m.
Proof using. introv M. unfold max. case_if; math. Qed.
Here is the most general specification for the function repeat_incr.
Lemma triple_repeat_incr' : ∀ (p:loc) (n m:int),
  triple (repeat_incr p m)
    (p ~~> n)
    (fun _ ⇒ p ~~> (n + max 0 m)).
Proof using.
  intros. gen n. induction_wf IH: (downto 0) m.
  xwp. xapp. xif; intros C.
  { xapp. xapp. xapp. { math. }
    xsimpl. repeat rewrite max_r; math. }
  { xval. xsimpl. rewrite max_l; math. }
Qed.
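The n + max 0 m behavior is easy to observe in a plain OCaml replay of repeat_incr; the snippet below is only an executable sketch, not part of the formal development.

```ocaml
let incr p = p := !p + 1

(* Plain OCaml analogue of repeat_incr. When m <= 0 the function
   returns immediately and the reference is left unchanged, which is
   exactly why the general specification involves max 0 m. *)
let rec repeat_incr p m =
  if m > 0 then begin
    incr p;
    repeat_incr p (m - 1)
  end
```

Starting from p holding 5, repeat_incr p 3 yields 8, while repeat_incr p (-2) leaves the contents untouched.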

A Recursive Function Involving two References

Consider the function step_transfer p q, which repeatedly increments a reference p and decrements a reference q, until q reaches zero.
    let rec step_transfer p q =
      if !q > 0 then (
        incr p;
        decr q;
        step_transfer p q
      )
Definition step_transfer :=
  <{ fix 'f 'p 'q
       let 'm = !'q in
       let 'b = 'm > 0 in
       if 'b then
         incr 'p;
         decr 'q;
         'f 'p 'q
       end }>.
The specification of step_transfer is essentially the same as that of the function transfer presented previously, the only difference being that we here assume the contents of q to be nonnegative.
Lemma triple_step_transfer : ∀ p q n m,
  m ≥ 0 →
  triple (step_transfer p q)
    (p ~~> n \* q ~~> m)
    (fun _ ⇒ p ~~> (n + m) \* q ~~> 0).

Exercise: 2 stars, standard, especially useful (triple_step_transfer)

Verify the function step_transfer. Hint: to set up the induction, follow the pattern shown in the proof of triple_repeat_incr'.
Proof using. (* FILL IN HERE *) Admitted.
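To build intuition for the invariant needed in the proof, it may help to execute the function; here is a plain OCaml analogue of step_transfer, given only as an executable sketch.

```ocaml
let incr p = p := !p + 1
let decr p = p := !p - 1

(* Plain OCaml analogue of step_transfer: moves the contents of q
   into p, one unit at a time, until q reaches zero. *)
let rec step_transfer p q =
  if !q > 0 then begin
    incr p;
    decr q;
    step_transfer p q
  end
```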

Historical Notes

The key ideas of Separation Logic were devised by John Reynolds, inspired in part by older work by [Burstall 1972]. Reynolds presented his ideas in lectures given in the fall of 1999. The proposed rules turned out to be unsound, but [Ishtiaq and O'Hearn 2001] noticed a strong relationship with the logic of bunched implications by [O'Hearn and Pym 1999], leading to ideas on how to set up a sound program logic. Soon afterwards, the seminal publications on Separation Logic appeared at the CSL workshop [O'Hearn, Reynolds, and Yang 2001] and at the LICS conference [Reynolds 2002].
The Separation Logic specifications and proof scripts using x-tactics presented in this file are directly adapted from the CFML tool (2010-2020), which is developed mainly by Arthur Charguéraud. The notations for Separation Logic predicates are directly inspired from those introduced in the Ynot project (2006-2008). See chapter Postface for references.

Repr: Representation Predicates

Set Implicit Arguments.
From SLF Require Import LibSepReference.
Import ProgramSyntax DemoPrograms.
From SLF Require Import Basic.
Open Scope liblist_scope.

Implicit Types n m : int.
Implicit Types p q s c : loc.

First Pass

As explained in the preface, the chapters from here on are organized in three parts: a first pass, more details, and optional material.
This chapter introduces the notion of representation predicate, which is typically used for describing mutable data structures. For example, this chapter introduces the heap predicate MList L p to describe a mutable linked list whose head cell is at location p, and whose elements are described by the Coq list L.
This chapter explains how to specify and verify functions that operate on mutable linked lists and mutable trees. It also presents representation predicates for functions that capture local state, such as a counter function.
This chapter assumes a programming language featuring records, and it assumes the Separation Logic to support heap predicates for describing records. For example, p ~~~>`{ head := x; tail := q} describes a record allocated at location p, with a head field storing the value x and a tail field storing the value q. In this course, for simplicity, we implement field names as natural numbers---e.g., head is defined as 0 and tail as 1. Details on the formalization of records are postponed to chapter Struct.
This chapter introduces a few additional tactics:
  • xfun to reason about a function definition.
  • xchange for exploiting transitivity of ==>.

Formalization of the List Representation Predicate

A mutable list cell is a two-cell record, featuring a head field and a tail field.
Definition head : field := 0%nat.
Definition tail : field := 1%nat.
The heap predicate p ~~~>`{ head := x; tail := q} describes a record allocated at location p, with a head field storing x and a tail field storing q.
A mutable list consists of a chain of cells, terminated by null.
The heap predicate MList L p describes a list whose head cell is at location p, and whose elements are described by the list L. This predicate is defined recursively on the structure of L:
  • if L is empty, then p is the null pointer,
  • if L is of the form x::L', then p is not null, the head field of p contains x, and the tail field of p contains a pointer q such that MList L' q describes the tail of the list.
Fixpoint MList (L:list val) (p:loc) : hprop :=
  match L with
  | nil ⇒ \[p = null]
  | x::L' ⇒ \exists q, (p ~~~> `{ head := x; tail := q}) \* (MList L' q)
  end.
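In plain OCaml, one might model such null-terminated chains of cells with the hypothetical types below, where None plays the role of the null pointer; the helper names of_list and to_list are ours, introduced only for illustration.

```ocaml
(* Hypothetical OCaml model of the cells described by MList: a cell
   is a two-field mutable record, and None plays the role of null. *)
type mlist = cell option
and cell = { mutable head : int; mutable tail : mlist }

(* Allocate a chain of fresh cells holding the elements of a list;
   the result corresponds to a heap satisfying MList L p. *)
let rec of_list = function
  | [] -> None
  | x :: l -> Some { head = x; tail = of_list l }

(* Read a chain back into a plain list, recovering the L of MList L p. *)
let rec to_list = function
  | None -> []
  | Some c -> c.head :: to_list c.tail
```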

Alternative Characterizations of MList

The following reformulations are helpful in proofs, allowing us to fold or unfold the definition using a tactic called xchange.
Lemma MList_nil : ∀ p,
  (MList nil p) = \[p = null].
Proof using. auto. Qed.

Lemma MList_cons : ∀ p x L',
  MList (x::L') p =
  \exists q, (p ~~~> `{ head := x; tail := q}) \* (MList L' q).
Proof using. auto. Qed.
Another characterization of MList L p is useful for proofs. Whereas the original definition is by case analysis on whether L is empty, the alternative one is by case analysis on whether p is null:
  • if p is null, then L is empty,
  • otherwise, L decomposes as x::L', the head field of p contains x, and the tail field of p contains a pointer q such that MList L' q describes the tail of the list.
The lemma below is stated using an entailment. The reciprocal entailment is also true, but we do not need it so we do not bother proving it here.
Lemma MList_if : ∀ (p:loc) (L:list val),
      (MList L p)
  ==> (If p = null
        then \[L = nil]
        else \exists x q L', \[L = x::L']
             \* (p ~~~> `{ head := x; tail := q}) \* (MList L' q)).
Proof using.
  (* Let's prove this result by case analysis on L. *)
  intros. destruct L as [|x L'].
  { (* Case L = nil. By definition of MList, we have p = null. *)
    xchange MList_nil. intros M.
    (* We have to justify L = nil, which is trivial.
       The TLC case_if tactic performs a case analysis on the argument
       of the first visible if conditional. *)

    case_if. xsimpl. auto. }
  { (* Case L = x::L'. *)
  (* One possibility is to perform a rewrite operation using MList_cons
     on its first occurrence, then to use the CFML tactic xpull to extract
     the existential quantifiers out of the precondition:
     rewrite MList_cons. xpull. intros q.
     A more efficient approach is to use the dedicated CFML tactic xchange,
     which is specialized for performing updates in the current state. *)

    xchange MList_cons. intros q.
    (* Because a record is allocated at location p, the pointer p
       cannot be null. The lemma hrecord_not_null allows us to exploit
       this property, extracting the hypothesis p ≠ null. We use again
       the tactic case_if to simplify the case analysis. *)

    xchange hrecord_not_null. intros N. case_if.
    (* To conclude, it suffices to correctly instantiate the existential
       quantifiers. The tactic xsimpl is able to guess the appropriate
       instantiations. *)

     xsimpl. auto. }
Qed.

Global Opaque MList.

In-place Concatenation of Two Mutable Lists

The function append shown below expects a nonempty list p1 and a list p2, and it updates p1 in place so that its tail gets extended by the cells from p2.
    let rec append p1 p2 =
      if p1.tail == null
        then p1.tail <- p2
        else append p1.tail p2
Definition append : val :=
  <{ fix 'f 'p1 'p2
       let 'q1 = 'p1`.tail in
       let 'b = ('q1 = null) in
       if 'b
         then 'p1`.tail := 'p2
         else 'f 'q1 'p2 }>.
The append function is specified and verified as follows. The proof pattern is representative of that of many list-manipulating functions, so it is essential that the reader follow through every step of the proof.
Lemma Triple_append : ∀ (L1 L2:list val) (p1 p2:loc),
  p1 ≠ null →
  triple (append p1 p2)
    (MList L1 p1 \* MList L2 p2)
    (fun _ ⇒ MList (L1++L2) p1).
Proof using.
The induction principle provides a hypothesis for the tail of L1, i.e., for the list L1', such that the relation list_sub L1' L1 holds.
  introv K. gen p1. induction_wf IH: list_sub L1. introv N. xwp.
To begin the proof, we reveal the head cell of p1. We let q1 denote the tail of p1, and L1' the tail of L1.
  xchange (MList_if p1). case_if. xpull. intros x q1 L1' →.
We then reason about the case analysis.
  xapp. xapp. xif; intros Cq1.
If q1 is null, then L1' is empty.
  { xchange (MList_if q1). case_if. xpull. intros →.
In this case, we set the pointer, then we fold back the head cell.
    xapp. xchange <- MList_cons. }
Otherwise, if q1 is not null, we reason about the recursive call using the induction hypothesis, then we fold back the head cell.
  { xapp. xchange <- MList_cons. }
Qed.
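The in-place nature of append is easy to observe in plain OCaml, using a hypothetical record model of cells where None plays the role of null (the names mlist, of_list and to_list are ours, introduced for illustration).

```ocaml
type mlist = cell option
and cell = { mutable head : int; mutable tail : mlist }

let rec of_list = function
  | [] -> None
  | x :: l -> Some { head = x; tail = of_list l }

let rec to_list = function
  | None -> []
  | Some c -> c.head :: to_list c.tail

(* In-place append. The first argument is an actual cell, mirroring
   the precondition p1 <> null: walk to the last cell of the first
   list, then redirect its tail to p2. *)
let rec append c1 p2 =
  match c1.tail with
  | None -> c1.tail <- p2
  | Some c -> append c p2
```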

Smart Constructors for Linked Lists

Implicit Types x : val.
This section introduces two smart constructors for linked lists, called mnil and mcons.
The operation mnil() is intended to create an empty list. Its implementation simply returns the value null. Its specification asserts that the return value is a pointer p such that MList nil p holds.
Definition mnil : val :=
  <{ fun 'u
       null }>.

Lemma triple_mnil :
  triple (mnil ())
    \[]
    (funloc p ⇒ MList nil p).
Proof using. xwp. xval. xchanges* <- (MList_nil null). Qed.

Hint Resolve triple_mnil : triple.
Observe that the specification triple_mnil does not mention the null pointer anywhere. This specification may thus be used to specify the behavior of operations on mutable lists without having to reveal low-level implementation details that involve the null pointer.
The operation mcons x q is intended to allocate a fresh list cell, with x in the head field and q in the tail field. The implementation of this operation allocates and initializes a fresh record made of two fields. The allocation operation leverages the New construct, which is notation for an operation called val_new_hrecord_2, which we here view as a primitive operation. (The chapter Struct describes an encoding of this function in terms of the allocation and write operations.)
Definition mcons : val :=
  <{ fun 'x 'q
       `{ head := 'x; tail := 'q } }>.
The operation mcons admits two specifications: one that describes only the fresh record allocated, and one that combines it with a list representation of the form MList L q to produce as postcondition an extended list of the form MList (x::L) p.
The first specification is as follows.
Lemma triple_mcons : ∀ x q,
  triple (mcons x q)
    \[]
    (funloc p ⇒ p ~~~> `{ head := x ; tail := q }).
Proof using. xwp. xapp triple_new_hrecord_2; auto. xsimpl*. Qed.
The second specification is derived from the first.
Lemma triple_mcons' : ∀ L x q,
  triple (mcons x q)
    (MList L q)
    (funloc p ⇒ MList (x::L) p).
Proof using.
  intros. xtriple. xapp triple_mcons.
  intros p. xchange <- MList_cons. xsimpl*.
Qed.

Hint Resolve triple_mcons' : triple.

Copy Function for Lists

The function mcopy takes a mutable linked list and builds an independent copy of it.
This program illustrates the use of the functions mnil and mcons, and gives another example of a proof by induction.
    let rec mcopy p =
      if p == null
        then mnil ()
        else mcons (p.head) (mcopy p.tail)
Definition mcopy : val :=
  <{ fix 'f 'p
       let 'b = ('p = null) in
       if 'b
         then mnil ()
         else
           let 'x = 'p`.head in
           let 'q = 'p`.tail in
           let 'q2 = ('f 'q) in
           mcons 'x 'q2 }>.
The precondition of mcopy requires a linked list MList L p. Its postcondition asserts that the function returns a pointer p' and a list MList L p', in addition to the original list MList L p. The two lists are totally disjoint and independent, as captured by the separating conjunction symbol (the star).
Lemma triple_mcopy : ∀ L p,
  triple (mcopy p)
    (MList L p)
    (funloc p' ⇒ (MList L p) \* (MList L p')).
The proof script is very similar in structure to the previous ones. While playing the script, try to spot the places where:
  • mnil produces an empty list of the form MList nil p',
  • the recursive call produces a list of the form MList L' q',
  • mcons produces a list of the form MList (x::L') p'.
Proof using.
  intros. gen p. induction_wf IH: list_sub L.
  xwp. xapp. xchange MList_if. xif; intros C; case_if; xpull.
  { intros →. xapp. xsimpl*. subst. xchange* <- MList_nil. }
  { intros x q L' →. xapp. xapp. xapp. intros q'.
    xapp. intros p'. xchange <- MList_cons. xsimpl*. }
Qed.

Length Function for Lists

The function mlength takes a mutable linked list and computes its length.
    let rec mlength p =
      if p == null
        then 0
        else 1 + mlength p.tail
Definition mlength : val :=
  <{ fix 'f 'p
       let 'b = ('p = null) in
       if 'b
         then 0
         else (let 'q = 'p`.tail in
               let 'n = 'f 'q in
               'n + 1) }>.

Exercise: 3 stars, standard, especially useful (triple_mlength)

Prove the correctness of the function mlength.
Lemma triple_mlength : ∀ L p,
  triple (mlength p)
    (MList L p)
    (fun r ⇒ \[r = val_int (length L)] \* (MList L p)).
Proof using. (* FILL IN HERE *) Admitted.

Alternative Length Function for Lists

In the previous section, the function mlength computes the length using a recursive function that returns an integer. In this section, we consider an alternative implementation that uses an auxiliary reference cell to keep track of the number of cells traversed so far. The list is traversed recursively, incrementing the contents of the reference once for every cell. The corresponding implementation is shown below. The variable c denotes the location of the auxiliary reference cell.
    let rec listacc c p =
      if p == null
        then ()
        else (incr c; listacc c p.tail)

    let mlength' p =
      let c = ref 0 in
      listacc c p;
      !c
Definition acclength : val :=
  <{ fix 'f 'c 'p
       let 'b = ('p ≠ null) in
       if 'b then
         incr 'c;
         let 'q = 'p`.tail in
         'f 'c 'q
       end }>.

Definition mlength' : val :=
  <{ fun 'p
       let 'c = ref 0 in
       acclength 'c 'p;
       get_and_free 'c }>.

Exercise: 3 stars, standard, especially useful (triple_mlength')

Prove the correctness of the function mlength'. Hint: start by stating a lemma triple_acclength expressing the specification of the recursive function acclength. Then, register that specification for subsequent use by xapp using the command Hint Resolve triple_acclength : triple. Finally, complete the proof of the specification triple_mlength'.
(* FILL IN HERE *)

Lemma triple_mlength' : ∀ L p,
  triple (mlength' p)
    (MList L p)
    (fun r ⇒ \[r = val_int (length L)] \* (MList L p)).
Proof using. (* FILL IN HERE *) Admitted.
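For intuition about the accumulator pattern, here is a plain OCaml replay of acclength and mlength', using a hypothetical record model of cells where None plays the role of null (mlist and of_list are our own illustration helpers).

```ocaml
type mlist = cell option
and cell = { mutable head : int; mutable tail : mlist }

let rec of_list = function
  | [] -> None
  | x :: l -> Some { head = x; tail = of_list l }

let incr c = c := !c + 1

(* Bump the counter c once per traversed cell. *)
let rec acclength c p =
  match p with
  | None -> ()
  | Some cell -> incr c; acclength c cell.tail

(* Under a garbage collector there is no free, so get_and_free
   degenerates to a plain read of the counter. *)
let mlength' p =
  let c = ref 0 in
  acclength c p;
  !c
```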

In-Place Reversal Function for Lists

The function mrev takes as argument a pointer to a mutable list, and returns a pointer to the reversed list, that is, a list with the elements in reverse order. The cells from the input list are reused for constructing the output list---the operation is "in-place". Its implementation appears next.
    let rec mrev_aux p1 p2 =
      if p2 == null
        then p1
        else (let p3 = p2.tail in
              p2.tail <- p1;
              mrev_aux p2 p3)

    let mrev p =
      mrev_aux p null
Definition mrev_aux : val :=
  <{ fix 'f 'p1 'p2
       let 'b = ('p2 = null) in
       if 'b
         then 'p1
         else (
           let 'p3 = 'p2`.tail in
           'p2`.tail := 'p1;
           'f 'p2 'p3) }>.

Definition mrev : val :=
  <{ fun 'p
       mrev_aux null 'p }>.

Exercise: 5 stars, standard, optional (triple_mrev)

Prove the correctness of the functions mrev_aux and mrev.
Hint: start by stating a lemma triple_mrev_aux expressing the specification of the recursive function mrev_aux. Then, register that specification using Hint Resolve triple_mrev_aux : triple. Finally, complete the proof of triple_mrev.
Hint: don't forget to generalize all the necessary variables before applying the well-founded induction principle.
(* FILL IN HERE *)

Lemma triple_mrev : ∀ L p,
  triple (mrev p)
    (MList L p)
    (funloc q ⇒ MList (rev L) q).
Proof using. (* FILL IN HERE *) Admitted.
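As a hint, a plausible specification for mrev_aux is sketched below: p1 points to the already-reversed prefix, and p2 points to the remaining cells. Treat it as a sketch to adapt during the proof.

```coq
(* Sketch: the result list is the reverse of L2 prepended to L1. *)
Lemma triple_mrev_aux : ∀ L1 L2 p1 p2,
  triple (mrev_aux p1 p2)
    (MList L1 p1 \* MList L2 p2)
    (funloc q ⇒ MList (rev L2 ++ L1) q).
```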

More Details

Sized Stack

Module SizedStack.
In this section, we consider the implementation of a mutable stack with constant-time access to its size, i.e., to the number of items it stores. The stack structure consists of a 2-field record, storing a pointer to a mutable linked list, and an integer storing the length of that list. The stack implementation exposes a function to create an empty stack, a function for reading the size, and push and pop methods for operating at the top of the stack.
    type 'a stack = { data : 'a list; size : int }

    let create () =
      { data = nil; size = 0 }

    let sizeof s =
      s.size

    let push s x =
      s.data <- mcons x s.data;
      s.size <- s.size + 1

    let pop s =
      let p = s.data in
      let x = p.head in
      let q = p.tail in
      delete p;
      s.data <- q;
      s.size <- s.size - 1;
      x
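To illustrate the intended behavior of these operations, here is a hypothetical client, written in the same pseudo-OCaml style as above.

```ocaml
    let demo () =
      let s = create () in
      push s 1;                (* stack contents: 1;     size = 1 *)
      push s 2;                (* stack contents: 2, 1;  size = 2 *)
      let x = pop s in         (* x = 2; stack contents: 1 *)
      x + sizeof s             (* returns 2 + 1 = 3 *)
```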
The representation predicate for the stack takes the form Stack L s, where s denotes the location of the record describing the stack, and where L denotes the list of items stored in the stack. The underlying mutable list is described as MList L p, where p is the location stored in the first field of the record. The definition of Stack is as follows.
Definition data : field := 0%nat.
Definition size : field := 1%nat.

Definition Stack (L:list val) (s:loc) : hprop :=
  \∃ p, s ~~~>`{ data := p; size := length L } \* (MList L p).
Observe that the predicate Stack does not expose the address of the mutable list; this address is existentially quantified in the definition. The predicate Stack also does not expose the size of the stack, as this value can be deduced as length L.
Definition create : val :=
  <{ fun 'u ⇒
      `{ data := null; size := 0 } }>.

Lemma triple_create :
  triple (create ())
    \[]
    (funloc s ⇒ Stack nil s).
Proof using.
  xwp. xapp triple_new_hrecord_2; auto. intros s.
  unfold Stack. xsimpl*. xchange* <- (MList_nil null).
Qed.
The sizeof operation returns the contents of the size field of a stack.
Definition sizeof : val :=
  <{ fun 'p ⇒
      'p`.size }>.

Lemma triple_sizeof : ∀ L s,
  triple (sizeof s)
    (Stack L s)
    (fun r ⇒ \[r = length L] \* Stack L s).
Proof using.
  xwp. unfold Stack. xpull. intros p. xapp. xsimpl*.
Qed.
The push operation extends the head of the list, and increments the size field.
Definition push : val :=
  <{ fun 's 'x ⇒
       let 'p = 's`.data in
       let 'p2 = mcons 'x 'p in
       's`.data := 'p2;
       let 'n = 's`.size in
       let 'n2 = 'n + 1 in
       's`.size := 'n2 }>.

Exercise: 3 stars, standard, especially useful (triple_push)

Prove the following specification for the push operation.
Lemma triple_push : ∀ L s x,
  triple (push s x)
    (Stack L s)
    (fun u ⇒ Stack (x::L) s).
Proof using. (* FILL IN HERE *) Admitted.
The pop operation extracts the element at the head of the list, updates the data field to the tail of the list, and decrements the size field.
Definition pop : val :=
  <{ fun 's ⇒
       let 'p = 's`.data in
       let 'x = 'p`.head in
       let 'p2 = 'p`.tail in
       delete 'p;
       's`.data := 'p2;
       let 'n = 's`.size in
       let 'n2 = 'n - 1 in
       's`.size := 'n2;
       'x }>.

Exercise: 4 stars, standard, especially useful (triple_pop)

Prove the following specification for the pop operation.
Lemma triple_pop : ∀ L s,
  L ≠ nil →
  triple (pop s)
    (Stack L s)
    (fun x ⇒ \∃ L', \[L = x::L'] \* Stack L' s).
Proof using. (* FILL IN HERE *) Admitted.
End SizedStack.

Formalization of the Tree Representation Predicate

So far in this chapter, we have presented the representation of mutable lists in Separation Logic. We next consider the generalization to mutable binary trees.
Just like mutable lists are specified with respect to Coq's purely functional lists, mutable binary trees are specified with respect to purely functional trees. For simplicity, let us consider a definition specialized for trees storing integer values in the nodes.
The corresponding inductive definition is as shown below. A node from a logical tree takes the form Node n T1 T2, where n is an integer and T1 and T2 denote the two subtrees. A leaf is represented by the constructor Leaf.
Inductive tree : Type :=
  | Leaf : tree
  | Node : int→tree→tree→tree.

Implicit Types T : tree.
A mutable tree node consists of a three-cell record, featuring a field named "item" storing an integer, and two fields named "left" and "right" storing the pointers to the left and right subtrees, respectively.
Definition item : field := 0%nat.
Definition left : field := 1%nat.
Definition right : field := 2%nat.
The heap predicate p ~~~>`{ item := n; left := p1; right := p2 } describes a record allocated at location p, describing a node storing the integer n, with its two subtrees at locations p1 and p2. An empty tree is represented using the null pointer.
The representation predicate MTree T p, of type hprop, asserts that the mutable tree structure with root at location p describes the logical tree T. The predicate is defined recursively on the structure of T:
  • if T is a Leaf, then p is the null pointer,
  • if T is a node Node n T1 T2, then p is not null, and at location p one finds a record with field contents n, p1 and p2, with MTree T1 p1 and MTree T2 p2 describing the two subtrees.
Fixpoint MTree (T:tree) (p:loc) : hprop :=
  match T with
  | Leaf ⇒ \[p = null]
  | Node n T1 T2 ⇒ \∃ p1 p2,
         (p ~~~>`{ item := n; left := p1; right := p2 })
      \* (MTree T1 p1)
      \* (MTree T2 p2)
  end.
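For example, unfolding this definition on a singleton tree shows that both child pointers must be null:

```coq
(* MTree (Node n Leaf Leaf) p
     = \∃ p1 p2, (p ~~~>`{ item := n; left := p1; right := p2 })
                 \* \[p1 = null] \* \[p2 = null] *)
```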

Alternative Characterization of MTree

Just like for MList, it is very useful for proofs to state three lemmas that paraphrase the definition of MTree. The first two lemmas correspond to folding/unfolding rules for leaves and nodes.
Lemma MTree_Leaf : ∀ p,
  (MTree Leaf p) = \[p = null].
Proof using. auto. Qed.

Lemma MTree_Node : ∀ p n T1 T2,
  (MTree (Node n T1 T2) p) =
  \∃ p1 p2,
       (p ~~~>`{ item := n; left := p1; right := p2 })
    \* (MTree T1 p1) \* (MTree T2 p2).
Proof using. auto. Qed.
The third lemma reformulates MTree T p using a case analysis on whether p is the null pointer. This formulation matches the case analysis typically performed in the code of functions operating on trees.
Lemma MTree_if : ∀ (p:loc) (T:tree),
      (MTree T p)
  ==> (If p = null
        then \[T = Leaf]
        else \∃ n p1 p2 T1 T2, \[T = Node n T1 T2]
             \* (p ~~~>`{ item := n; left := p1; right := p2 })
             \* (MTree T1 p1) \* (MTree T2 p2)).
Proof using.
  intros. destruct T as [|n T1 T2].
  { xchange MTree_Leaf. intros M. case_if. xsimpl*. }
  { xchange MTree_Node. intros p1 p2.
    xchange hrecord_not_null. intros N. case_if. xsimpl*. }
Qed.

Opaque MTree.

Additional Tooling for MTree

For reasoning about recursive functions over trees, it is useful to exploit the well-founded order associated with "immediate subtrees". This order, named tree_sub, can be exploited for a tree T by invoking the tactic induction_wf IH: tree_sub T.
Inductive tree_sub : binary (tree) :=
  | tree_sub_1 : ∀ n T1 T2,
      tree_sub T1 (Node n T1 T2)
  | tree_sub_2 : ∀ n T1 T2,
      tree_sub T2 (Node n T1 T2).

Lemma tree_sub_wf : wf tree_sub.
Proof using.
  intros T. induction T; constructor; intros t' H; inversions~ H.
Qed.

Hint Resolve tree_sub_wf : wf.
For allocating fresh tree nodes as a 3-field record, we introduce the operation mnode n p1 p2, defined and specified as follows.
Definition mnode : val :=
  val_new_hrecord_3 item left right.
The first specification of mnode describes the allocation of a record.
Lemma triple_mnode : ∀ n p1 p2,
  triple (mnode n p1 p2)
    \[]
    (funloc p ⇒ p ~~~> `{ item := n ; left := p1 ; right := p2 }).
Proof using. intros. applys* triple_new_hrecord_3. Qed.
The second specification, derived from the first, asserts that, if provided the description of two subtrees T1 and T2 at locations p1 and p2, the operation mnode n p1 p2 builds, at a fresh location p, a tree described by MTree (Node n T1 T2) p.

Exercise: 2 stars, standard, optional (triple_mnode')

Prove the specification triple_mnode' for node allocation.
Lemma triple_mnode' : ∀ T1 T2 n p1 p2,
  triple (mnode n p1 p2)
    (MTree T1 p1 \* MTree T2 p2)
    (funloc p ⇒ MTree (Node n T1 T2) p).
Proof using. (* FILL IN HERE *) Admitted.

Hint Resolve triple_mnode' : triple.

Tree Copy

The operation tree_copy takes as argument a pointer p to a mutable tree and returns a fresh copy of that tree. It is constructed in a similar way to the function mcopy for lists.
    let rec tree_copy p =
      if p = null
        then null
        else mnode p.item (tree_copy p.left) (tree_copy p.right)
Definition tree_copy :=
  <{ fix 'f 'p ⇒
       let 'b = ('p = null) in
       if 'b then null else (
         let 'n = 'p`.item in
         let 'p1 = 'p`.left in
         let 'p2 = 'p`.right in
         let 'q1 = 'f 'p1 in
         let 'q2 = 'f 'p2 in
         mnode 'n 'q1 'q2
      ) }>.

Exercise: 3 stars, standard, optional (triple_tree_copy)

Prove the specification of tree_copy.
Lemma triple_tree_copy : ∀ p T,
  triple (tree_copy p)
    (MTree T p)
    (funloc q ⇒ (MTree T p) \* (MTree T q)).
Proof using. (* FILL IN HERE *) Admitted.

Computing the Sum of the Items in a Tree

The operation mtreesum takes as argument the location p of a mutable tree, and it returns the sum of all the integers stored in the nodes of that tree. Consider the implementation that traverses the tree and uses an auxiliary reference cell for maintaining the sum of all the items visited so far.
    let rec treeacc c p =
      if p ≠ null then (
        c := !c + p.item;
        treeacc c p.left;
        treeacc c p.right)

    let mtreesum p =
      let c = ref 0 in
      treeacc c p;
      !c
Definition treeacc : val :=
  <{ fix 'f 'c 'p ⇒
       let 'b = ('p ≠ null) in
       if 'b then
         let 'm = ! 'c in
         let 'x = 'p`.item in
         let 'm2 = 'm + 'x in
         'c := 'm2;
         let 'p1 = 'p`.left in
         'f 'c 'p1;
         let 'p2 = 'p`.right in
         'f 'c 'p2
       end }>.

Definition mtreesum : val :=
  <{ fun 'p ⇒
       let 'c = ref 0 in
       treeacc 'c 'p;
       get_and_free 'c }>.
The specification of mtreesum is expressed in terms of the logical operation treesum which computes the sum of the node items stored in a logical tree. This operation is defined in Coq as a fixed point.
Fixpoint treesum (T:tree) : int :=
  match T with
  | Leaf ⇒ 0
  | Node n T1 T2 ⇒ n + treesum T1 + treesum T2
  end.
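For example, unfolding the fixpoint on a small tree:

```coq
(* treesum (Node 1 (Node 2 Leaf Leaf) Leaf)
     = 1 + (2 + 0 + 0) + 0
     = 3 *)
```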

Exercise: 4 stars, standard, optional (triple_mtreesum)

Prove the correctness of the function mtreesum. Hint: start by stating a lemma triple_treeacc, then register it using the command Hint Resolve triple_treeacc : triple. Finally, complete the proof of the specification triple_mtreesum.
(* FILL IN HERE *)

Lemma triple_mtreesum : ∀ T p,
  triple (mtreesum p)
    (MTree T p)
    (fun r ⇒ \[r = treesum T] \* (MTree T p)).
Proof using. (* FILL IN HERE *) Admitted.

Verification of a Counter Function with Local State

In this section, we present another kind of representation predicate: one that describes a function with an internal, mutable state.
Implicit Type f : val.
A counter function is a function f such that, each time it is invoked, it returns the next integer, starting from 1.
The function create_counter produces a fresh counter function. Concretely, if create_counter() produces the counter f, then the first call to f() returns 1, the second call to f() returns 2, the third call to f() returns 3, and so on.
The counter function can be implemented using a reference cell, named p, that stores the integer last returned by the counter. Initially, the contents of the reference p is zero. Each time the counter is queried, the contents of p is increased by one unit, and the new value of the contents is returned to the caller.
    let create_counter () =
       let p = ref 0 in
       (fun () → (incr p; !p))
Definition create_counter : val :=
  <{ fun 'u ⇒
       let 'p = ref 0 in
       (fun_ 'u ⇒ (incr 'p; !'p)) }>.
The remainder of this section presents two sets of specifications, each covering both the specification of a counter function and the specification of a function that creates a fresh counter function.
The first specification is the most direct: it exposes the existence of a reference p. Doing so, however, reveals implementation details about the counter function.
The second specification is more abstract: it successfully hides the description of the internal representation of the counter, and states specifications only in terms of a representation predicate, called IsCounter f n, that exposes only the identity of the counter function and its current value, but not the existence of the reference p.
Let's begin with the simple, direct specification. The proposition CounterSpec f p asserts that the counter function f has its internal state stored in a reference cell at location p, in the sense that invoking f in a state p ~~> m updates the state to p ~~> (m+1) and produces the output value m+1.
Definition CounterSpec (f:val) (p:loc) : Prop :=
   ∀ m, triple (f ())
                    (p ~~> m)
                    (fun r ⇒ \[r = m+1] \* p ~~> (m+1)).
The function create_counter creates a fresh counter. Its precondition is empty. Its postcondition describes the existence of a reference cell p ~~> 0, for an existentially quantified location p, and such that the proposition CounterSpec f p holds. Observe the use of the tactic xfun for reasoning about a function definition.
Lemma triple_create_counter :
  triple (create_counter ())
    \[]
    (fun f ⇒ \∃ p, (p ~~> 0) \* \[CounterSpec f p]).
Proof using.
  xwp. xapp. intros p. xfun. intros f Hf. xsimpl.
  { intros m. applys Hf. xapp. xapp. xsimpl. auto. }
Qed.
Given a counter function described by CounterSpec f p, a call to the function f on a unit argument, in an input state of the form p ~~> n for some n, produces the state p ~~> (n+1) and the output value n+1. Observe how this specification applies to any function f satisfying CounterSpec f p.
Lemma triple_apply_counter : ∀ f p n,
  CounterSpec f p →
  triple (f ())
    (p ~~> n)
    (fun r ⇒ \[r = n+1] \* (p ~~> (n+1))).
Proof using. introv Hf. applys Hf. Qed.
In the proof above, we were able to directly apply the assumption Hf. In general, however, for reasoning about a call to an abstract function f, one first needs to invoke the tactic xtriple.
Lemma triple_apply_counter_alternative_proof : ∀ f p n,
  CounterSpec f p →
  triple (f ())
    (p ~~> n)
    (fun r ⇒ \[r = n+1] \* (p ~~> (n+1))).
Proof using.
  introv Hf. xtriple. lets H: Hf n. xapp. xsimpl. auto.
Qed.
Let's now make the specifications more abstract, by hiding away the description of the reference cells from the client of the specifications.
To that end, we introduce the heap predicate IsCounter f n, which relates a function f, its current value n, and the piece of memory state involved in the implementation of this function. This piece of memory is of the form p ~~> n for some location p, such that CounterSpec f p holds. Here, p gets quantified existentially.
Definition IsCounter (f:val) (n:int) : hprop :=
  \∃ p, p ~~> n \* \[CounterSpec f p].
Using IsCounter, we can reformulate the specification of create_counter in a more abstract and more concise manner, with a postcondition simply asserting that the fresh counter function f, initialized at zero, is described by the heap predicate IsCounter f 0.
Lemma triple_create_counter_abstract :
  triple (create_counter ())
    \[]
    (fun f ⇒ IsCounter f 0).
Proof using. unfold IsCounter. applys triple_create_counter. Qed.
Likewise, we can reformulate the specification of a call to a counter function f. Such a call, under precondition IsCounter f n for some n, admits the postcondition IsCounter f (n+1), and returns n+1.

Exercise: 3 stars, standard, especially useful (triple_apply_counter_abstract)

Prove the abstract specification for a counter function. Hint: use xtriple at some point.
Lemma triple_apply_counter_abstract : ∀ f n,
  triple (f ())
    (IsCounter f n)
    (fun r ⇒ \[r = n+1] \* (IsCounter f (n+1))).
Proof using. (* FILL IN HERE *) Admitted.
Opaque IsCounter.

Optional Material

Specification of a Higher-Order Repeat Operator

As a warm-up before presenting the specifications of higher-order iterators such as iter and fold over mutable lists, we present the specification of a higher-order operator named repeat. A call to repeat f n executes n successive calls to f on the unit value, that is, n calls to f().
    let rec repeat f n =
      if n > 0 then (f(); repeat f (n-1))
Definition repeat : val :=
  <{ fix 'g 'f 'n ⇒
       let 'b = ('n > 0) in
       if 'b then
         'f ();
         let 'n2 = 'n - 1 in
         'g 'f 'n2
       end }>.
For simplicity, let us assume for now that n ≥ 0. The specification of repeat f n is expressed in terms of an invariant, named I, describing the state in-between every two calls to f. Initially, we assume the state to satisfy I 0. We assume that, for every index i in the range from 0 (inclusive) to n (exclusive), a call f() takes the state from one satisfying I i to one satisfying I (i+1). The specification below asserts that, under these assumptions, after the n calls to f(), the final state satisfies I n.
The specification takes the form:
    n ≥ 0 →
    Hypothesis_on_f →
    triple (repeat f n)
      (I 0)
      (fun u ⇒ I n).
where the hypothesis on f captures the following specification:
    0 ≤ i < n →
    triple (f ())
      (I i)
      (fun u ⇒ I (i+1))
The complete specification of repeat f n appears below.
Lemma triple_repeat : ∀ (I:int→hprop) (f:val) (n:int),
  n ≥ 0 →
  (∀ i, 0 ≤ i < n →
    triple (f ())
      (I i)
      (fun u ⇒ I (i+1))) →
  triple (repeat f n)
    (I 0)
    (fun u ⇒ I n).
To establish that specification, we carry out a proof by induction over the judgment:
    triple (repeat f m)
      (I (n-m))
      (fun u ⇒ I n).
which asserts that, when there remain m calls to perform, the current state is described by I (n-m), that is, the current index i is equal to n-m.
Proof using.
  introv Hn Hf.
  cuts G: (∀ m, 0 ≤ m ≤ n →
    triple (repeat f m)
      (I (n-m))
      (fun u ⇒ I n)).
  { applys_eq G. { fequals. math. } { math. } }
  intros m. induction_wf IH: (downto 0) m. intros Hm.
  xwp. xapp. xif; intros C.
  { xapp. { math. } xapp. xapp. { math. } { math. }
    math_rewrite ((n - m) + 1 = n - (m - 1)). xsimpl. }
  { xval. math_rewrite (n - m = n). xsimpl. }
Qed.

Specification of an Iterator on Mutable Lists

The operation miter takes as argument a function f and a pointer p to a mutable list, and invokes f on each of the items stored in that list.
    let rec miter f p =
      if p ≠ null then (f p.head; miter f p.tail)
Definition miter : val :=
  <{ fix 'g 'f 'p ⇒
       let 'b = ('p ≠ null) in
       if 'b then
         let 'x = 'p`.head in
         'f 'x;
         let 'q = 'p`.tail in
         'g 'f 'q
       end }>.
The specification of miter follows the same structure as that of the function repeat from the previous section, with two main differences. The first is that the invariant is expressed not in terms of an index i ranging from 0 to n, but in terms of a prefix of the list L being traversed, ranging from nil to the full list L. The second is that the operation miter f p requires in its precondition, in addition to I nil, the description of the mutable list MList L p. This predicate is returned in the postcondition, unchanged, reflecting the fact that the iteration does not alter the list.

Exercise: 5 stars, standard, especially useful (triple_miter)

Prove the correctness of triple_miter.
Lemma triple_miter : ∀ (I:list val→hprop) L (f:val) p,
  (∀ x L1 L2, L = L1++x::L2 →
    triple (f x)
      (I L1)
      (fun u ⇒ I (L1++(x::nil)))) →
  triple (miter f p)
    (MList L p \* I nil)
    (fun u ⇒ MList L p \* I L).
Proof using. (* FILL IN HERE *) Admitted.
For exploiting the specification triple_miter to reason about a call to miter, it is necessary to provide an invariant I of type list val→hprop, that is, of the form fun (K:list val) ⇒ .... This invariant, which cannot be inferred automatically, should describe the state at the point where the iteration has traversed a prefix K of the list L.
Concretely, for reasoning about a call to miter, one should exploit the tactic xapp (triple_miter (fun K ⇒ ...)).

Computing the Length of a Mutable List using an Iterator

We can compute the length of a mutable list by iterating over that list with a function that increments a reference cell once for every item.
    let mlength_using_miter p =
      let c = ref 0 in
      miter (fun x ⇒ incr c) p;
      !c

Exercise: 4 stars, standard, especially useful (triple_mlength_using_miter)

Prove the correctness of mlength_using_miter. Hint: use xfun. intros f Hf for reasoning about the function definition, then use xapp for reasoning about a call to f by inlining its code.
Definition mlength_using_miter : val :=
  <{ fun 'p ⇒
       let 'c = ref 0 in
       let 'f = (fun_ 'x ⇒ incr 'c) in
       miter 'f 'p;
       get_and_free 'c }>.

Lemma triple_mlength_using_miter : ∀ p L,
  triple (mlength_using_miter p)
    (MList L p)
    (fun r ⇒ \[r = length L] \* MList L p).
Proof using. (* FILL IN HERE *) Admitted.
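As a hint for the exercise above, one natural choice of invariant asserts that, after the traversal of a prefix K of the list, the reference cell stores the length of K. With c denoting the location introduced by ref 0, the call to miter may then be handled as follows (a sketch, to be adapted):

```coq
(* Invariant: the cell c stores the length of the visited prefix K. *)
xapp (triple_miter (fun (K:list val) ⇒ c ~~> (length K))).
```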

A Continuation-Passing-Style, In-Place Concatenation Function

The following program was proposed in the original article on Separation Logic by John Reynolds as a challenge for verification.
This function, called cps_append, is similar to the function append presented previously: it also performs in-place concatenation of two lists. It differs in that it is implemented using a recursive, continuation-passing style function to perform the work.
The presentation of cps_append p1 p2 is also slightly different: this operation returns a pointer p3 that describes the head of the result of the concatenation, and it works even if p1 corresponds to an empty list.
The code of cps_append involves an auxiliary function cps_append_aux. This code appears at first like a puzzle: it takes a good drawing and several minutes to figure out how it works. Observe in particular how the recursive call is invoked as part of the continuation.
    let rec cps_append_aux p1 p2 k =
      if p1 == null
        then k p2
        else cps_append_aux p1.tail p2 (fun r ⇒ (p1.tail <- r); k p1)

    let cps_append p1 p2 =
      cps_append_aux p1 p2 (fun rr)
Definition cps_append_aux : val :=
  <{ fix 'f 'p1 'p2 'k ⇒
       let 'b = ('p1 = null) in
       if 'b
         then 'k 'p2
         else
           let 'q1 = 'p1`.tail in
           let 'k2 = (fun_ 'r ⇒ ('p1`.tail := 'r; 'k 'p1)) in
           'f 'q1 'p2 'k2 }>.

Definition cps_append : val :=
  <{ fun 'p1 'p2 ⇒
      let 'f = (fun_ 'r ⇒ 'r) in
      cps_append_aux 'p1 'p2 'f }>.

Exercise: 5 stars, standard, optional (triple_cps_append)

Specify and verify cps_append_aux, then verify cps_append.
(* FILL IN HERE *)
The main function cps_append simply invokes cps_append_aux with a trivial continuation.
Lemma triple_cps_append : ∀ (L1 L2:list val) (p1 p2:loc),
  triple (cps_append p1 p2)
    (MList L1 p1 \* MList L2 p2)
    (funloc p3 ⇒ MList (L1++L2) p3).
Proof using. (* FILL IN HERE *) Admitted.
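As a hint for the exercise, one possible specification for cps_append_aux quantifies over the behavior of the continuation k: if k, applied to a pointer to the concatenated list, admits postcondition Q, then so does the whole call. Treat this statement as a sketch; H denotes the rest of the state, threaded through the continuation.

```coq
Lemma triple_cps_append_aux : ∀ H Q (L1 L2:list val) (p1 p2:loc) (k:val),
  (∀ (p3:loc), triple (k p3) (MList (L1++L2) p3 \* H) Q) →
  triple (cps_append_aux p1 p2 k)
    (MList L1 p1 \* MList L2 p2 \* H)
    Q.
```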
This concludes the formal verification of Reynolds' verification challenge.

Historical Notes

The representation predicate for lists appears in the seminal papers on Separation Logic: the notes by Reynolds from the summer 1999, updated the next summer [Reynolds 2000], and the publication by [O’Hearn, Reynolds, and Yang 2001].
Most presentations of Separation Logic target partial correctness, whereas this chapter targets total correctness. The specifications of recursive functions are established using the built-in induction mechanism offered by Coq.
The specification of higher-order iterators requires higher-order Separation Logic. Being embedded in the higher-order logic of Coq, the Separation Logic that we work with is inherently higher-order. Further information on the history of higher-order Separation Logic for higher-order programs may be found in section 10.2 of http://www.chargueraud.org/research/2020/seq_seplogic/seq_seplogic.pdf.

Hprop: Heap Predicates

Set Implicit Arguments.
From SLF Require Export LibSepReference.
Import ProgramSyntax.
Tweak to simplify the use of definitions and lemmas from LibSepFmap.v.
Arguments Fmap.single {A} {B}.
Arguments Fmap.union {A} {B}.
Arguments Fmap.disjoint {A} {B}.
Arguments Fmap.union_comm_of_disjoint {A} {B}.
Arguments Fmap.union_empty_l {A} {B}.
Arguments Fmap.union_empty_r {A} {B}.
Arguments Fmap.union_assoc {A} {B}.
Arguments Fmap.disjoint_empty_l {A} {B}.
Arguments Fmap.disjoint_empty_r {A} {B}.
Arguments Fmap.disjoint_union_eq_l {A} {B}.
Arguments Fmap.disjoint_union_eq_r {A} {B}.

First Pass

In the programming language that we consider, a concrete memory state is described as a finite map from locations to values.
  • A location has type loc.
  • A value has type val.
  • A state has type state.
Details will be presented in the chapter Rules.
To help distinguish between full states and pieces of state, we let the type heap be a synonym for state, with the intention of representing only a piece of state. Throughout the course, we write s for a full memory state (of type state), and we write h for a piece of memory state (of type heap).
In Separation Logic, a piece of state is described by a "heap predicate", i.e., a predicate over heaps. A heap predicate has type hprop, defined as heap→Prop, which is equivalent to state→Prop.
By convention, throughout the course:
  • H denotes a heap predicate of type hprop; it describes a piece of state,
  • Q denotes a postcondition, of type val→hprop; it describes both a result value and a piece of state. Observe that val→hprop is equivalent to val→state→Prop.
This chapter presents the definition of the key heap predicate operators from Separation Logic:
  • \[] denotes the empty heap predicate,
  • \[P] denotes a pure fact,
  • p ~~> v denotes a singleton heap,
  • H1 \* H2 denotes the separating conjunction,
  • Q1 \*+ H2 denotes the separating conjunction, between a postcondition and a heap predicate,
  • \∃ x, H denotes an existential.
This chapter also introduces the formal definition of triples:
  • a Hoare triple, written hoare t H Q, features a precondition H and a postcondition Q that describe the whole memory state in which the execution of the term t takes place.
  • a Separation Logic triple, written triple t H Q, features a pre- and a postcondition that describes only the piece of the memory state in which the execution of the term t takes place.
This chapter and the following ones exploit a few additional TLC tactics to enable concise proofs.
  • applys is an enhanced version of eapply.
  • applys_eq is a variant of applys that enables matching the arguments of the predicate that appears in the goal "up to equality" rather than "up to conversion".
  • specializes is an enhanced version of specialize.
  • lets and forwards are forward-chaining tactics that enable instantiating a lemma.
What these tactics do should be fairly intuitive where they are used. Note that all exercises can be carried out without using TLC tactics. For details, the chapter UseTactics.v from the "Programming Language Foundations" volume explains the behavior of these tactics.

Syntax and Semantics

We assume an imperative programming language with a formal semantics. We do not need to know the details of the language constructs for now. All we need to know is that there exists:
  • a type of terms, written trm,
  • a type of values, written val,
  • a type of states, written state (i.e., finite map from loc to val),
  • a big-step evaluation judgment, written eval s1 t s2 v, asserting that, starting from state s1, the evaluation of the term t terminates in the state s2 and produces the value v.
        Check eval : state→trm→state→val→Prop.
The corresponding definitions are described in the chapter Rules.
At this point, we don't need to know the exact grammar of terms and values. Let's just give one example to make things concrete. Consider the function: fun x ⇒ if x then 0 else 1.
In the language that we consider, it can be written in raw syntax as follows.
Definition example_val : val :=
  val_fun "x" (trm_if (trm_var "x") (trm_val (val_int 0)) (trm_val (val_int 1))).
Thanks to a set of coercions and notation, this term can be written in a somewhat more readable way, as follows.
Definition example_val' : trm :=
  <{ fun "x" ⇒
       if "x" then 0 else 1 }>.

Description of the State

Locations, of type loc, denote the addresses of allocated objects. Locations are a particular kind of values.
A state is a finite map from locations to values.
The file LibSepFmap.v provides a self-contained formalization of finite maps, but we do not need to know about the details.
Definition state : Type := fmap loc val.
By convention, we use the type state to describe a full state of memory, and introduce the type heap to describe just a piece of state.
Definition heap : Type := state.
In particular, the library LibSepFmap.v, whose module name is abbreviated as Fmap, exports the following definitions.
  • Fmap.empty denotes the empty state,
  • Fmap.single p v denotes a singleton state, that is, a unique cell at location p with contents v,
  • Fmap.union h1 h2 denotes the union of the two states h1 and h2.
  • Fmap.disjoint h1 h2 asserts that h1 and h2 have disjoint domains.
The types of these definitions are as follows.
    Check Fmap.empty : heap.
    Check Fmap.single : loc→val→heap.
    Check Fmap.union : heap→heap→heap.
    Check Fmap.disjoint : heap→heap→Prop.
Note that the union operation is commutative only when its two arguments have disjoint domains. Throughout the Separation Logic course, we will only consider disjoint unions, for which commutativity holds.

Heap Predicates

In Separation Logic, the state is described using "heap predicates". A heap predicate is a predicate over a piece of state. Let hprop denote the type of heap predicates.
Definition hprop := heap→Prop.
Thereafter, let H range over heap predicates.
Implicit Type H : hprop.
An essential aspect of Separation Logic is that all heap predicates defined and used in practice are built using a few basic combinators. In other words, except for the definition of the combinators themselves, we will never define a custom heap predicate directly as a function of the state.
We next describe the most important combinators of Separation Logic.
The hempty predicate, usually written \[], characterizes an empty state.
Definition hempty : hprop :=
  fun (h:heap) ⇒ (h = Fmap.empty).

Notation "\[]" := (hempty) (at level 0).
The pure fact predicate, written \[P], characterizes an empty state and moreover asserts that the proposition P is true.
Definition hpure (P:Prop) : hprop :=
  fun (h:heap) ⇒ (h = Fmap.empty) ∧ P.

Notation "\[ P ]" := (hpure P) (at level 0, format "\[ P ]").
The singleton heap predicate, written p ~~> v, characterizes a state with a single allocated cell, at location p, storing the value v.
Definition hsingle (p:loc) (v:val) : hprop :=
  fun (h:heap) ⇒ (h = Fmap.single p v).

Notation "p '~~>' v" := (hsingle p v) (at level 32).
The "separating conjunction", written H1 \* H2, characterizes a state that can be decomposed into two disjoint parts, one that satisfies H1, and another that satisfies H2. In the definition below, the two parts are named h1 and h2.
Definition hstar (H1 H2 : hprop) : hprop :=
  fun (h:heap) ⇒ ∃ h1 h2, H1 h1
                              ∧ H2 h2
                              ∧ Fmap.disjoint h1 h2
                              ∧ h = Fmap.union h1 h2.

Notation "H1 '\*' H2" := (hstar H1 H2) (at level 41, right associativity).
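As a quick illustration of how these combinators compose, here is a hedged example (the name example_pred is ours; it assumes the usual coercion from int to val provided by the framework):

```coq
(* A heap predicate describing two cells plus a pure side condition:
   a cell at p storing n, a cell at q storing n+1, and the fact n > 0.
   The star implicitly asserts that p and q are distinct locations. *)
Definition example_pred (p q:loc) (n:int) : hprop :=
  (p ~~> n) \* (q ~~> (n+1)) \* \[n > 0].
```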
The existential quantifier for heap predicates, written \exists x, H, characterizes a heap that satisfies H for some x. The variable x has type A, for some arbitrary type A.
The notation \exists x, H stands for hexists (fun x ⇒ H). The generalized notation \exists x1 ... xn, H is also available.
The definition of hexists is a bit technical. It is not essential to master it at this point. Additional explanations are provided near the end of this chapter.
Definition hexists (A:Type) (J:A→hprop) : hprop :=
  fun (h:heap) ⇒ ∃ x, J x h.

Notation "'\exists' x1 .. xn , H" :=
  (hexists (fun x1 ⇒ .. (hexists (fun xn ⇒ H)) ..))
  (at level 39, x1 binder, H at level 50, right associativity,
   format "'[' '\exists' '/ ' x1 .. xn , '/ ' H ']'").
All the definitions above are eventually turned Opaque, after the appropriate introduction and elimination lemmas are established for them. Thus, at some point it is no longer possible to invoke, say, unfold hstar. Opacity ensures that the proofs constructed do not depend on the details of how these definitions of heap predicates are set up.

Extensionality for Heap Predicates

To work in Separation Logic, it is extremely convenient to be able to state equalities between heap predicates. For example, in the next chapter, we will establish the associativity property for the star operator, that is:
Parameter hstar_assoc : ∀ H1 H2 H3,
  (H1 \* H2) \* H3 = H1 \* (H2 \* H3).
How can we prove a goal of the form H1 = H2 when H1 and H2 have type hprop, that is, heap → Prop?
Intuitively, H1 and H2 are equal if and only if they characterize exactly the same set of heaps, that is, if ∀ (h:heap), H1 h ↔ H2 h.
This reasoning principle, a specific form of extensionality property, is not available by default in Coq. But we can safely assume it if we extend the logic of Coq with a standard axiom called "predicate extensionality".
Axiom predicate_extensionality : ∀ (A:Type) (P Q:A→Prop),
  (∀ x, P x ↔ Q x) →
  P = Q.
By specializing P and Q above to the type hprop, we obtain exactly the desired extensionality principle.
Lemma hprop_eq : ∀ (H1 H2:hprop),
  (∀ (h:heap), H1 h ↔ H2 h) →
  H1 = H2.
Proof using. applys predicate_extensionality. Qed.

Type and Syntax for Postconditions

A postcondition characterizes both an output value and an output state. In Separation Logic, a postcondition is thus a relation of type val → state → Prop, which is equivalent to val → hprop.
Thereafter, we let Q range over postconditions.
Implicit Type Q : val→hprop.
One common operation is to augment a postcondition with a piece of state. This operation is described by the operator Q \*+ H, which is just a convenient notation for fun x ⇒ (Q x \* H).
Notation "Q \*+ H" := (fun x ⇒ hstar (Q x) H) (at level 40).
Intuitively, in order to prove that two postconditions Q1 and Q2 are equal, it suffices to show that the heap predicates Q1 v and Q2 v (both of type hprop) are equal for any value v.
Again, the extensionality property that we need is not built-in to Coq. We need the axiom called "functional extensionality", stated next.
Axiom functional_extensionality : ∀ A B (f g:A→B),
  (∀ x, f x = g x) →
  f = g.
The desired equality property for postconditions follows directly from that axiom.
Lemma qprop_eq : ∀ (Q1 Q2:val→hprop),
  (∀ (v:val), Q1 v = Q2 v) →
  Q1 = Q2.
Proof using. applys functional_extensionality. Qed.

Separation Logic Triples and the Frame Rule

A Separation Logic triple is a generalization of a Hoare triple that integrates built-in support for an essential rule called "the frame rule". Before we give the definition of a Separation Logic triple, let us first give the definition of a Hoare triple and state the much-desired frame rule.
A (total correctness) Hoare triple, written {H} t {Q} on paper, and here written hoare t H Q, asserts that starting from a state s satisfying the precondition H, the term t evaluates to a value v and to a state s' that, together, satisfy the postcondition Q. It is formalized in Coq as shown below.
Definition hoare (t:trm) (H:hprop) (Q:val→hprop) : Prop :=
  ∀ (s:state), H s →
  ∃ (s':state) (v:val), eval s t s' v ∧ Q v s'.
Note that Q has type val→hprop, thus Q v has type hprop. Recall that hprop = heap→Prop. Thus Q v s' has type Prop.
The frame rule asserts that if one can derive a specification of the form triple H t Q for a term t, then one should be able to automatically derive triple (H \* H') t (Q \*+ H') for any H'.
Intuitively, if t executes safely in a heap H, it should behave similarly in any extension of H with a disjoint part H'. Moreover, its evaluation should leave this piece of state H' unmodified throughout the execution of t.
The following definition of a Separation Logic triple builds upon that of a Hoare triple by "baking in" the frame rule.
Definition triple (t:trm) (H:hprop) (Q:val→hprop) : Prop :=
  ∀ (H':hprop), hoare t (H \* H') (Q \*+ H').
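To see the definition in action, observe that a Separation Logic triple specializes to a Hoare triple over any extended state, simply by instantiating the universally quantified predicate H'. A minimal sketch (the lemma name triple_inst is ours):

```coq
(* Instantiating H' with a concrete predicate immediately yields
   a Hoare triple over the correspondingly extended state. *)
Lemma triple_inst : ∀ t H Q H',
  triple t H Q →
  hoare t (H \* H') (Q \*+ H').
Proof using. introv M. applys M. Qed.
```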
This definition inherently satisfies the frame rule, as we show below. The proof only exploits the associativity of the star operator.
Lemma triple_frame : ∀ t H Q H',
  triple t H Q →
  triple t (H \* H') (Q \*+ H').
Proof using.
  introv M. unfold triple in *. rename H' into H1. intros H2.
  specializes M (H1 \* H2).
  (* M matches the goal up to rewriting for associativity. *)
  applys_eq M.
  { rewrite hstar_assoc. auto. }
  { applys functional_extensionality. intros v. rewrite hstar_assoc. auto. }
Qed.

Example of a Triple: the Increment Function

Recall the function incr introduced in the chapter Basic.
Parameter incr : val.
An application of this function, written incr p, is technically a term of the form trm_app (trm_val incr) (trm_val (val_loc p)), where trm_val injects values into the grammar of terms, and val_loc injects locations into the grammar of values.
The abbreviation incr p parses correctly because trm_app, trm_val, and val_loc are registered as coercions. Let us check this claim with Coq.
Lemma incr_applied : ∀ (p:loc) (n:int),
    trm_app (trm_val incr) (trm_val (val_loc p))
  = incr p.
Proof using. reflexivity. Qed.
The operation incr p is specified using a triple as shown below.
Parameter triple_incr : ∀ (p:loc) (n:int),
  triple (trm_app incr p)
    (p ~~> n)
    (fun r ⇒ p ~~> (n+1)).

More Details

Example Applications of the Frame Rule

The frame rule asserts that a triple remains true in any extended heap.
Calling incr p in a state where the memory consists of two cells, one at location p storing an integer n and one at location q storing an integer m, has the effect of updating the contents of the cell at p to n+1, while leaving the contents of q unmodified.
Lemma triple_incr_2 : ∀ (p q:loc) (n m:int),
  triple (incr p)
    ((p ~~> n) \* (q ~~> m))
    (fun _ ⇒ (p ~~> (n+1)) \* (q ~~> m)).
The above specification lemma is derivable from the specification lemma triple_incr by applying the frame rule to augment both the precondition and the postcondition with q ~~> m.
Proof using.
  intros. lets M: triple_incr p n.
  lets N: triple_frame (q ~~> m) M. apply N.
Qed.
Here, we have framed on q ~~> m, but we could similarly frame on any heap predicate H, as captured by the following specification lemma.
Parameter triple_incr_3 : ∀ (p:loc) (n:int) (H:hprop),
  triple (incr p)
    ((p ~~> n) \* H)
    (fun _ ⇒ (p ~~> (n+1)) \* H).
Remark: in practice, we always prefer writing "small-footprint specifications", such as triple_incr, that describe the minimal piece of state necessary for the function to execute. Indeed, other specifications that describe a larger piece of state can be derived by application of the frame rule.

Power of the Frame Rule with Respect to Allocation

Consider the specification lemma for an allocation operation. This rule states that, starting from the empty heap, one obtains a single memory cell at some location p with contents v.
Parameter triple_ref : ∀ (v:val),
  triple (val_ref v)
    \[]
    (funloc p ⇒ p ~~> v).
Applying the frame rule to the above specification, and to another memory cell, say l' ~~> v', we obtain:
Parameter triple_ref_with_frame : ∀ (l':loc) (v':val) (v:val),
  triple (val_ref v)
    (l' ~~> v')
    (funloc p ⇒ p ~~> v \* l' ~~> v').
This derived specification captures the fact that the newly allocated cell at address p is distinct from the previously allocated cell at address l'.
More generally, through the frame rule, we can derive that any piece of freshly allocated data is distinct from any piece of previously existing data.
This independence principle is extremely powerful. It is an inherent strength of Separation Logic.

Notation for Heap Union

Thereafter, to improve readability of statements in proofs, we introduce the following notation for heap union.
Notation "h1 \u h2" := (Fmap.union h1 h2) (at level 37, right associativity).

Introduction and Inversion Lemmas for Basic Operators

The following lemmas help getting a better understanding of the meaning of the Separation Logic combinators. For each operator, we present one introduction lemma and one inversion lemma.
Implicit Types P : Prop.
Implicit Types v : val.
The introduction lemmas show how to prove goals of the form H h, for various forms of the heap predicate H.
Lemma hempty_intro :
  \[] Fmap.empty.
Proof using. hnf. auto. Qed.

Lemma hpure_intro : ∀ P,
  P →
  \[P] Fmap.empty.
Proof using. introv M. hnf. auto. Qed.

Lemma hsingle_intro : ∀ p v,
  (p ~~> v) (Fmap.single p v).
Proof using. intros. hnf. auto. Qed.

Lemma hstar_intro : ∀ H1 H2 h1 h2,
  H1 h1 →
  H2 h2 →
  Fmap.disjoint h1 h2 →
  (H1 \* H2) (h1 \u h2).
Proof using. intros. exists* h1 h2. Qed.

Lemma hexists_intro : ∀ A (x:A) (J:A→hprop) h,
  J x h →
  (\exists x, J x) h.
Proof using. introv M. hnf. eauto. Qed.
The inversion lemmas show how to extract information from hypotheses of the form H h, for various forms of the heap predicate H.
Lemma hempty_inv : ∀ h,
  \[] h →
  h = Fmap.empty.
Proof using. introv M. hnf in M. auto. Qed.

Lemma hpure_inv : ∀ P h,
  \[P] h →
  P ∧ h = Fmap.empty.
Proof using. introv M. hnf in M. autos*. Qed.

Lemma hsingle_inv: ∀ p v h,
  (p ~~> v) h →
  h = Fmap.single p v.
Proof using. introv M. hnf in M. auto. Qed.

Lemma hstar_inv : ∀ H1 H2 h,
  (H1 \* H2) h →
  ∃ h1 h2, H1 h1 ∧ H2 h2 ∧ Fmap.disjoint h1 h2 ∧ h = h1 \u h2.
Proof using. introv M. hnf in M. eauto. Qed.

Lemma hexists_inv : ∀ A (J:A→hprop) h,
  (\exists x, J x) h →
  ∃ x, J x h.
Proof using. introv M. hnf in M. eauto. Qed.
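As a small exercise of the introduction lemmas above, one can check that the union of two singleton heaps at distinct locations satisfies the corresponding star. This is only a sketch (the lemma name is ours); it assumes a lemma Fmap.disjoint_single_single from LibSepFmap asserting that singleton maps at distinct locations are disjoint.

```coq
(* Hypothetical example combining hsingle_intro and hstar_intro.
   Fmap.disjoint_single_single is assumed available in LibSepFmap. *)
Lemma hstar_two_singles : ∀ (p q:loc) (v w:val),
  p <> q →
  ((p ~~> v) \* (q ~~> w)) (Fmap.single p v \u Fmap.single q w).
Proof using.
  intros. applys hstar_intro.
  { applys hsingle_intro. }
  { applys hsingle_intro. }
  { applys Fmap.disjoint_single_single. auto. }
Qed.
```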

Exercise: 4 stars, standard, especially useful (hstar_hpure_l)

Prove that a heap h satisfies \[P] \* H if and only if P is true and h satisfies H. The proof requires two lemmas on finite maps from LibSepFmap.v:
    Check Fmap.union_empty_l : ∀ h,
      Fmap.empty \u h = h.

    Check Fmap.disjoint_empty_l : ∀ h,
      Fmap.disjoint Fmap.empty h.
Hint: begin the proof by applying propositional_extensionality.
Lemma hstar_hpure_l : ∀ P H h,
  (\[P] \* H) h = (P ∧ H h).
Proof using. (* FILL IN HERE *) Admitted.

Optional Material

Alternative, Equivalent Definitions for Separation Logic Triples

We have previously defined triple on top of hoare, with the help of the separating conjunction operator, as: ∀ (H':hprop), hoare t (H \* H') (Q \*+ H'). In what follows, we give an equivalent characterization, expressed directly in terms of heaps and heap unions.
The alternative definition of triple t H Q asserts that if h1 satisfies the precondition H and h2 describes the rest of the state, then the evaluation of t produces a value v in a final state that can be decomposed into a part h1' and the part h2 left unchanged, in such a way that v and h1' together satisfy the postcondition Q. Formally:
Definition triple_lowlevel (t:trm) (H:hprop) (Q:val→hprop) : Prop :=
  ∀ h1 h2,
  Fmap.disjoint h1 h2 →
  H h1 →
  ∃ h1' v,
       Fmap.disjoint h1' h2
    ∧ eval (h1 \u h2) t (h1' \u h2) v
    ∧ Q v h1'.
Let us establish the equivalence between this alternative definition of triple and the original one.

Exercise: 4 stars, standard, especially useful (triple_iff_triple_lowlevel)

Prove the equivalence between triple and triple_lowlevel. Warning: this is probably a very challenging exercise.
Lemma triple_iff_triple_lowlevel : ∀ t H Q,
  triple t H Q ↔ triple_lowlevel t H Q.
Proof using. (* FILL IN HERE *) Admitted.

Alternative Definitions for Heap Predicates

In what follows, we discuss alternative, equivalent definitions for the fundamental heap predicates. We state these equivalences as equalities of the form H1 = H2. Recall that lemma hprop_eq enables deriving such equalities by invoking predicate extensionality.
The empty heap predicate \[] is equivalent to the pure fact predicate \[True].
Lemma hempty_eq_hpure_true :
  \[] = \[True].
Proof using.
  unfold hempty, hpure. apply hprop_eq. intros h. iff Hh.
  { auto. }
  { jauto. }
Qed.
Thus, hempty could be defined in terms of hpure, as hpure True, written \[True].
Definition hempty' : hprop :=
  \[True].
The pure fact predicate \[P] is equivalent to the existential quantification over a proof of P in the empty heap, that is, to the heap predicate \exists (p:P), \[].
Lemma hpure_eq_hexists_proof : ∀ P,
  \[P] = (\exists (p:P), \[]).
Proof using.
  unfold hempty, hpure, hexists. intros P.
  apply hprop_eq. intros h. iff Hh.
  { destruct Hh as (E&p). exists p. auto. }
  { destruct Hh as (p&E). auto. }
Qed.
Thus, hpure could be defined in terms of hexists and hempty, as hexists (fun (p:P) ⇒ hempty), also written \exists (p:P), \[].
Definition hpure' (P:Prop) : hprop :=
  \exists (p:P), \[].
It is useful to minimize the number of combinators, both for elegance and to reduce the proof effort.
Since we cannot do without hexists, we have a choice between considering either hpure or hempty as primitive, and the other one as derived. The predicate hempty is simpler and appears as more fundamental.
Hence, in the subsequent chapters (and in the CFML tool), we define hpure in terms of hexists and hempty, like in the definition of hpure' shown above. In other words, we assume the definition:
  Definition hpure (P:Prop) : hprop :=
    \exists (p:P), \[].
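As a sanity check, the derived definition validates the same introduction lemma as the primitive one. Here is a hedged sketch (the lemma name hpure'_intro is ours), using hpure' from above together with hempty_intro from earlier in this chapter:

```coq
(* The proof HP : P serves as the witness for the existential quantifier. *)
Lemma hpure'_intro : ∀ P,
  P →
  (hpure' P) Fmap.empty.
Proof using.
  introv HP. unfold hpure', hexists. exists HP. applys hempty_intro.
Qed.
```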

Additional Explanations for the Definition of \exists

The heap predicate \exists (n:int), p ~~> (val_int n) characterizes a state that contains a single memory cell, at address p, storing the integer value n, for "some" (unspecified) integer n.
  Parameter (p:loc).
  Check (\exists (n:int), p ~~> (val_int n)) : hprop.
The type of \exists, which operates on hprop, is very similar to that of exists, which operates on Prop.
The notation exists x, P stands for ex (fun x ⇒ P), where ex has the following type:
    Check ex : ∀ A : Type, (A→Prop) → Prop.
Likewise, \exists x, H stands for hexists (fun x ⇒ H), where hexists has the following type:
    Check hexists : ∀ A : Type, (A→hprop) → hprop.
Remark: the notation for \exists is directly adapted from that of exists, which supports quantification over an arbitrary number of variables, and is defined in Coq.Init.Logic as follows.
    Notation "'exists' x .. y , p" := (ex (fun x ⇒ .. (ex (fun y ⇒ p)) ..))
      (at level 200, x binder, right associativity,
       format "'[' 'exists' '/ ' x .. y , '/ ' p ']'").

Formulation of the Extensionality Axioms

Module Extensionality.
To establish extensionality of entailment, we have used the predicate extensionality axiom. In fact, this axiom can be derived by combining the axiom of "functional extensionality" with another one called "propositional extensionality".
The axiom of "propositional extensionality" asserts that two propositions that are logically equivalent (in the sense that they imply each other) can be considered equal.
Axiom propositional_extensionality : ∀ (P Q:Prop),
  (P ↔ Q) →
  P = Q.
The axiom of "functional extensionality" asserts that two functions are equal if they produce equal results on every argument.
Axiom functional_extensionality : ∀ A B (f g:A→B),
  (∀ x, f x = g x) →
  f = g.

Exercise: 1 star, standard, especially useful (predicate_extensionality_derived)

Using the two axioms propositional_extensionality and functional_extensionality, show how to derive predicate_extensionality.
Lemma predicate_extensionality_derived : ∀ A (P Q:A→Prop),
  (∀ x, P x ↔ Q x) →
  P = Q.
Proof using. (* FILL IN HERE *) Admitted.
End Extensionality.

Historical Notes

In this chapter, we defined the predicate triple t H Q for Separation Logic triples on top of the predicate hoare t H Q for Hoare triples, by quantifying universally on a heap predicate H', which describes the "rest of the world". This technique, known as the "baked-in frame rule", was introduced by [Birkedal, Torp-Smith and Yang 2006], who developed the first Separation Logic for a higher-order programming language. It was later employed successfully in numerous formalizations of Separation Logic.
Compared to the use of a "low-level" definition of Separation Logic triples such as the predicate triple_lowlevel, which quantifies over disjoint pieces of heaps, the "high-level" definition that bakes in the frame rule leads to more elegant, simpler proofs.

Himpl: Heap Entailment

Foundations of Separation Logic
Chapter: "Himpl".
Author: Arthur Charguéraud. License: CC-by 4.0.
Set Implicit Arguments.
From SLF Require LibSepReference.
From SLF Require Export Hprop.
Implicit Types
Implicit Types P : Prop.
Implicit Types H : hprop.
Implicit Types Q : val→hprop.

First Pass

In the previous chapter, we have introduced the key heap predicate operators, and we have defined the notion of Separation Logic triple.
Before we can state and prove reasoning rules for establishing triples, we need to introduce the "entailment relation". This relation, written H1 ==> H2, asserts that any heap that satisfies H1 also satisfies H2.
We also need to extend the entailment relation to postconditions. We write Q1 ===> Q2 to assert that, for any result value v, the entailment Q1 v ==> Q2 v holds.
The two entailment relations appear in the statement of the rule of consequence, which admits the same statement in Separation Logic as in Hoare logic. It asserts that the precondition can be strengthened and the postcondition can be weakened in a specification triple.
    Lemma triple_conseq : ∀ t H Q H' Q',
      triple t H' Q' →
      H ==> H' →
      Q' ===> Q →
      triple t H Q.
This chapter presents:
  • the formal definition of the entailment relations,
  • the fundamental properties of the Separation Logic operators: these properties are expressed either as entailments, or as equalities, which denote symmetric entailments,
  • the 4 structural rules of Separation Logic: the rule of consequence, the frame rule (which can be combined with the rule of consequence), and the extraction rules for pure facts and for quantifiers,
  • the tactics xsimpl and xchange that are critically useful for manipulating entailments in practice,
  • (optional) details on how to prove the fundamental properties and the structural rules.

Definition of Entailment

The "entailment relationship" H1 ==> H2 asserts that any heap h that satisfies the heap predicate H1 also satisfies the heap predicate H2.
Definition himpl (H1 H2:hprop) : Prop :=
  ∀ (h:heap), H1 h → H2 h.

Notation "H1 ==> H2" := (himpl H1 H2) (at level 55).
H1 ==> H2 captures the fact that H1 is a stronger precondition than H2, in the sense that it is more restrictive.
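For instance, a singleton heap predicate entails the weaker claim that some value is stored at that location. A minimal sketch (the lemma name is ours), relying on hexists_intro from the previous chapter:

```coq
(* The concrete value v is the witness for the existential on the right. *)
Lemma himpl_example_exists : ∀ (p:loc) (v:val),
  (p ~~> v) ==> (\exists w, p ~~> w).
Proof using. intros. intros h K. applys hexists_intro K. Qed.
```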
As we show next, the entailment relation is reflexive, transitive, and antisymmetric. It thus forms an order relation on heap predicates.
Lemma himpl_refl : ∀ H,
  H ==> H.
Proof using. intros h. hnf. auto. Qed.

Lemma himpl_trans : ∀ H2 H1 H3,
  (H1 ==> H2) →
  (H2 ==> H3) →
  (H1 ==> H3).
Proof using. introv M1 M2. intros h H1h. eauto. Qed.

Exercise: 1 star, standard, especially useful (himpl_antisym)

Prove the antisymmetry of entailment result shown below using extensionality for heap predicates, as captured by lemma predicate_extensionality (or lemma hprop_eq) introduced in the previous chapter (Hprop).
Lemma himpl_antisym : ∀ H1 H2,
  (H1 ==> H2) →
  (H2 ==> H1) →
  H1 = H2.
Proof using. (* FILL IN HERE *) Admitted.
Remark: as the proof scripts show, the fact that entailment on hprop constitutes an order relation is a direct consequence of the fact that implication on Prop, that is, →, is an order relation on Prop (when assuming the propositional extensionality axiom).
The lemma himpl_antisym may, for example, be used to establish commutativity of separating conjunction: (H1 \* H2) = (H2 \* H1) by proving that each side entails the other side, that is, by proving (H1 \* H2) ==> (H2 \* H1) and (H2 \* H1) ==> (H1 \* H2).

Entailment for Postconditions

The entailment ==> relates heap predicates. It is used to capture that a precondition "entails" another one. We need a similar judgment to assert that a postcondition "entails" another one.
For that purpose, we introduce Q1 ===> Q2, which asserts that for any value v, the heap predicate Q1 v entails Q2 v.
Definition qimpl (Q1 Q2:val→hprop) : Prop :=
  ∀ (v:val), Q1 v ==> Q2 v.

Notation "Q1 ===> Q2" := (qimpl Q1 Q2) (at level 55).
Remark: equivalently, Q1 ===> Q2 holds if for any value v and any heap h, the proposition Q1 v h implies Q2 v h.
Entailment on postconditions also forms an order relation: it is reflexive, transitive, and antisymmetric.
Lemma qimpl_refl : ∀ Q,
  Q ===> Q.
Proof using. intros Q v. applys himpl_refl. Qed.

Lemma qimpl_trans : ∀ Q2 Q1 Q3,
  (Q1 ===> Q2) →
  (Q2 ===> Q3) →
  (Q1 ===> Q3).
Proof using. introv M1 M2. intros v. applys himpl_trans; eauto. Qed.

Lemma qimpl_antisym : ∀ Q1 Q2,
  (Q1 ===> Q2) →
  (Q2 ===> Q1) →
  (Q1 = Q2).
Proof using.
  introv M1 M2. apply functional_extensionality.
  intros v. applys himpl_antisym; eauto.
Qed.
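As a small illustration (the lemma name is ours), an entailment between postconditions can be established pointwise, by first introducing the value v and then reasoning as for an ordinary entailment; here we reuse hexists_intro from the previous chapter:

```coq
(* A postcondition asserting a precise contents entails one that merely
   asserts the existence of some contents. *)
Lemma qimpl_example : ∀ (p:loc) (n:int),
  (fun (_:val) ⇒ p ~~> n) ===> (fun (_:val) ⇒ \exists m, p ~~> m).
Proof using. intros. intros v h K. applys hexists_intro K. Qed.
```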

Fundamental Properties of Separation Logic Operators

The 5 fundamental properties of Separation Logic operators are described next. Many other properties are derivable from those.
(1) The star operator is associative.
Parameter hstar_assoc : ∀ H1 H2 H3,
  (H1 \* H2) \* H3 = H1 \* (H2 \* H3).
(2) The star operator is commutative.
Parameter hstar_comm : ∀ H1 H2,
   H1 \* H2 = H2 \* H1.
(3) The empty heap predicate is a neutral element for the star operator. Because star is commutative, it is equivalent to state that hempty is a left or a right neutral for hstar. We choose, arbitrarily, to state the left-neutral property.
Parameter hstar_hempty_l : ∀ H,
  \[] \* H = H.
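The right-neutral property need not be stated separately: it is derivable from commutativity together with the left-neutral property, a first example of how these fundamental properties combine. A short sketch:

```coq
(* hstar_hempty_r follows by rewriting with commutativity,
   then applying the left-neutral property. *)
Lemma hstar_hempty_r : ∀ H,
  H \* \[] = H.
Proof using. intros. rewrite hstar_comm. applys hstar_hempty_l. Qed.
```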
(4) Existentials can be "extruded" out of stars, that is: (\exists x, J x) \* H is equivalent to \exists x, (J x \* H).
Parameter hstar_hexists : ∀ A (J:A→hprop) H,
  (\exists x, J x) \* H = \exists x, (J x \* H).
(5) The star operator is "monotone" with respect to entailment, meaning that if H1 ==> H1' then (H1 \* H2) ==> (H1' \* H2).
Viewed the other way around, if we have to prove the entailment relation (H1 \* H2) ==> (H1' \* H2), we can "cancel out" H2 on both sides. In this view, the monotonicity property is a sort of "frame rule for the entailment relation".
Here again, due to commutativity of star, it suffices to state the left version of the monotonicity property.
Parameter himpl_frame_l : ∀ H2 H1 H1',
  H1 ==> H1' →
  (H1 \* H2) ==> (H1' \* H2).

Exercise: 1 star, standard, especially useful (hstar_comm_assoc)

The commutativity and associativity results combine into one result that is sometimes convenient to exploit in proofs.
Lemma hstar_comm_assoc : ∀ H1 H2 H3,
  H1 \* H2 \* H3 = H2 \* H1 \* H3.
Proof using. (* FILL IN HERE *) Admitted.

Exercise: 1 star, standard, especially useful (himpl_frame_r)

The monotonicity property of the star operator w.r.t. entailment can also be stated in a symmetric fashion, as shown next. Prove this result. Hint: exploit the transitivity of entailment (himpl_trans) and the asymmetric monotonicity result (himpl_frame_l).
Lemma himpl_frame_r : ∀ H1 H2 H2',
  H2 ==> H2' →
  (H1 \* H2) ==> (H1 \* H2').
Proof using. (* FILL IN HERE *) Admitted.

Exercise: 1 star, standard, especially useful (himpl_frame_lr)

The two monotonicity properties can be combined so as to allow weakening both sides of a star at once, as shown next. Prove this result. Hint: exploit the transitivity of entailment (himpl_trans) together with himpl_frame_l and himpl_frame_r.
Lemma himpl_frame_lr : ∀ H1 H1' H2 H2',
  H1 ==> H1' →
  H2 ==> H2' →
  (H1 \* H2) ==> (H1' \* H2').
Proof using. (* FILL IN HERE *) Admitted.

Introduction and Elimination Rules w.r.t. Entailments

The rules for introducing and eliminating pure facts and existential quantifiers in entailments are essential. They are presented next.
Consider an entailment of the form H ==> (\[P] \* H'). To establish this entailment, one must prove that P is true, and that H entails H'.

Exercise: 2 stars, standard, especially useful (himpl_hstar_hpure_r).

Prove the rule himpl_hstar_hpure_r. Hint: recall from Hprop the lemma hstar_hpure_l, which asserts the equality (\[P] \* H) h = (P ∧ H h).
Lemma himpl_hstar_hpure_r : ∀ P H H',
  P →
  (H ==> H') →
  H ==> (\[P] \* H').
Proof using. (* FILL IN HERE *) Admitted.
Reciprocally, consider an entailment of the form (\[P] \* H) ==> H'. To establish this entailment, one must prove that H entails H' under the assumption that P is true.
Indeed, the former proposition asserts that if a heap h satisfies H and that P is true, then h satisfies H', while the latter asserts that if P is true, then if a heap h satisfies H it also satisfies H'.
The "extraction rule for pure facts in the left of an entailment" captures the property that the pure fact \[P] can be extracted into the Coq context. It is stated as follows.

Exercise: 2 stars, standard, especially useful (himpl_hstar_hpure_l).

Prove the rule himpl_hstar_hpure_l.
Lemma himpl_hstar_hpure_l : ∀ (P:Prop) (H H':hprop),
  (P → H ==> H') →
  (\[P] \* H) ==> H'.
Proof using. (* FILL IN HERE *) Admitted.
Consider an entailment of the form H ==> (\exists x, J x), where x has some type A and J has type A→hprop. To establish this entailment, one must exhibit a value for x for which H entails J x.

Exercise: 2 stars, standard, especially useful (himpl_hexists_r).

Prove the rule himpl_hexists_r.
Lemma himpl_hexists_r : ∀ A (x:A) H J,
  (H ==> J x) →
  H ==> (\exists x, J x).
Proof using. (* FILL IN HERE *) Admitted.
Reciprocally, consider an entailment (\exists x, (J x)) ==> H. To establish this entailment, one has to prove that, whatever the value of x, the predicate J x entails H.
Indeed, the former proposition asserts that if a heap h satisfies J x for some x, then h satisfies H, while the latter asserts that, for any x, if a heap h satisfies J x, then h satisfies H.
Observe how the existential quantifier on the left of the entailment becomes a universal quantifier outside of the arrow.
The "extraction rule for existentials in the left of an entailment" captures the property that existentials can be extracted into the Coq context. It is stated as follows.

Exercise: 2 stars, standard, especially useful (himpl_hexists_l).

Prove the rule himpl_hexists_l.
Lemma himpl_hexists_l : ∀ (A:Type) (H:hprop) (J:A→hprop),
  (∀ x, J x ==> H) →
  (\exists x, J x) ==> H.
Proof using. (* FILL IN HERE *) Admitted.

Extracting Information from Heap Predicates

We next present an example showing how entailments can be used to state lemmas for extracting information from particular heap predicates. We show that a heap predicate of the form (p ~~> v1) \* (p ~~> v2), which describes two "disjoint" cells both "at location p", entails a contradiction.
Indeed, such a state cannot exist. The underlying contradiction is formally captured by the following entailment relation, which concludes False.
Lemma hstar_hsingle_same_loc : ∀ (p:loc) (v1 v2:val),
  (p ~~> v1) \* (p ~~> v2) ==> \[False].
The proof of this result exploits a result on finite maps. Essentially, the domain of a singleton map that binds a location p to some value is the singleton set \{p}, thus such a singleton map cannot be disjoint from another singleton map that binds the same location p.
    Check disjoint_single_single_same_inv : ∀ (p:loc) (v1 v2:val),
      Fmap.disjoint (Fmap.single p v1) (Fmap.single p v2) →
      False.
Using this lemma, we can prove hstar_hsingle_same_loc by unfolding the definition of hstar to reveal the contradiction on the disjointness assumption.
Proof using.
  intros. unfold hsingle. intros h (h1&h2&E1&E2&D&E). false.
  subst. applys Fmap.disjoint_single_single_same_inv D.
Qed.
More generally, a heap predicate of the form H \* H is suspicious in Separation Logic. In (the simplest variant of) Separation Logic, such a predicate can only be satisfied if H covers no memory cell at all, that is, if H is a pure predicate of the form \[P] for some proposition P.

Consequence, Frame, and their Combination

The rule of consequence in Separation Logic is similar to that in Hoare logic.
Parameter triple_conseq : ∀ t H Q H' Q',
  triple t H' Q' →
  H ==> H' →
  Q' ===> Q →
  triple t H Q.
Recall the frame rule introduced in the previous chapter.
Parameter triple_frame : ∀ t H Q H',
  triple t H Q →
  triple t (H \* H') (Q \*+ H').
Observe that, stated as such, it is very unlikely for the frame rule to be applicable in practice, because the precondition must be exactly of the form H \* H' and the postcondition exactly of the form Q \*+ H', for some H'. For example, the frame rule would not apply to a proof obligation of the form triple t (H' \* H) (Q \*+ H'), simply because H' \* H does not match H \* H'.
This limitation of the frame rule can be addressed by combining the frame rule with the rule of consequence into a single rule, called the "consequence-frame rule". This rule, shown below, enables deriving a triple from another triple, without any restriction on the exact shape of the precondition and postcondition of the two triples involved.
Lemma triple_conseq_frame : ∀ H2 H1 Q1 t H Q,
  triple t H1 Q1 →
  H ==> H1 \* H2 →
  Q1 \*+ H2 ===> Q →
  triple t H Q.

Exercise: 1 star, standard, especially useful (triple_conseq_frame)

Prove the combined consequence-frame rule.
Proof using. (* FILL IN HERE *) Admitted.

The Extraction Rules for Triples

A judgment of the form triple t (\[P] \* H) Q is equivalent to P → triple t H Q. This structural rule is called triple_hpure and formalized as shown below. It captures the extraction of pure facts out of the precondition of a triple, in a similar way as himpl_hstar_hpure_l for entailments.
Parameter triple_hpure : ∀ t (P:Prop) H Q,
  (P → triple t H Q) →
  triple t (\[P] \* H) Q.
A judgment of the form triple t (\exists x, J x) Q is equivalent to ∀ x, triple t (J x) Q. This structural rule is called triple_hexists and formalized as shown below. It captures the extraction of an existential quantifier out of the precondition of a triple, in a similar way as himpl_hexists_l for entailments.
Parameter triple_hexists : ∀ t (A:Type) (J:A→hprop) Q,
  (∀ x, triple t (J x) Q) →
  triple t (\exists x, J x) Q.
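To illustrate, here is a hedged sketch of how triple_hpure is used in practice. It assumes that the function incr and its specification triple_incr from the previous chapter are in scope; the lemma name is ours.

```coq
(* The pure fact n > 0 is extracted into the Coq context before the
   specification of incr is applied; here the fact is simply discarded. *)
Lemma triple_incr_hpure : ∀ (p:loc) (n:int),
  triple (incr p)
    (\[n > 0] \* (p ~~> n))
    (fun _ ⇒ p ~~> (n+1)).
Proof using.
  intros. applys triple_hpure. intros _. applys triple_incr.
Qed.
```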

More Details

Module XsimplTactic.
Import LibSepReference.
Notation "'hprop''" := (Hprop.hprop).

Identifying True and False Entailments

Module CaseStudy.

Implicit Types p q : loc.
Implicit Types n m : int.

End CaseStudy.

Proving Entailments by Hand

Module EntailmentProofs.
Implicit Types p : loc.
Implicit Types n : int.
Proving an entailment by hand is generally a tedious task. This is why most Separation Logic based frameworks include an automated tactic for simplifying entailments. In this course, the relevant tactic is named xsimpl. Further in this chapter, we describe by means of examples the behavior of this tactic. But in order to best appreciate what the tactic provides and best understand how it works, it is very useful to complete a few proofs by hand.

Exercise: 3 stars, standard, optional (himpl_example_1)

Prove the example entailment below. Hint: exploit hstar_comm, hstar_assoc, or hstar_comm_assoc which combines the two, and exploit himpl_frame_l or himpl_frame_r to cancel out matching pieces.
Lemma himpl_example_1 : ∀ p1 p2 p3 p4,
      p1 ~~> 6 \* p2 ~~> 7 \* p3 ~~> 8 \* p4 ~~> 9
  ==> p4 ~~> 9 \* p3 ~~> 8 \* p2 ~~> 7 \* p1 ~~> 6.
Proof using. (* FILL IN HERE *) Admitted.

Exercise: 3 stars, standard, optional (himpl_example_2)

Prove the example entailment below. Hint: use himpl_hstar_hpure_r to extract pure facts, once they appear at the head of the left-hand side of the entailment.
Lemma himpl_example_2 : ∀ p1 p2 p3 n,
      p1 ~~> 6 \* \[n > 0] \* p2 ~~> 7 \* \[n < 0]
  ==> p3 ~~> 8.
Proof using. (* FILL IN HERE *) Admitted.

Exercise: 3 stars, standard, optional (himpl_example_3)

Prove the example entailment below. Hint: use lemmas among himpl_hexists_r, himpl_hexists_l, himpl_hstar_hpure_r and himpl_hstar_hpure_l to deal with pure facts and quantifiers.
Lemma himpl_example_3 : forall p,
      (\exists n, p ~~> n \* \[n > 0])
  ==> (\exists n, \[n > 1] \* p ~~> (n-1)).
Proof using. (* FILL IN HERE *) Admitted.
End EntailmentProofs.

The xsimpl Tactic

Performing simplifications of entailments by hand is an extremely tedious task. Fortunately, it can be automated using a specialized Coq tactic. This tactic, called xsimpl, applies to an entailment and implements a 3-step process:
  1. extract pure facts and existential quantifiers from the LHS,
  2. cancel out equal predicates occurring in both the LHS and the RHS,
  3. generate subgoals for the pure facts occurring in the RHS, and instantiate the existential quantifiers from the RHS (using either unification variables or user-provided hints).
These steps are detailed and illustrated next.
The tactic xpull is a degraded version of xsimpl that only performs the first step. We will also illustrate its usage.
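To give a feel for the three steps combined, here is a hypothetical goal, in the style of the demos that follow, on which xsimpl extracts a quantifier and a pure fact from the LHS, cancels out the points-to predicates, instantiates the existential from the RHS, and leaves over only pure subgoals.

```coq
(* Sketch: all three steps of xsimpl triggered by a single goal. *)
Lemma xsimpl_demo_three_steps : forall H1 (p:loc),
  (\exists (n:int), \[n = 3] \* p ~~> n \* H1)
  ==> (\exists (m:int), \[m > 0] \* p ~~> m \* H1).
Proof using.
  intros. xsimpl. (* only pure arithmetic subgoals remain *)
Abort.
```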

xsimpl to Extract Pure Facts and Quantifiers in LHS

The first feature of xsimpl is its ability to extract the pure facts and the existential quantifiers from the left-hand side out into the Coq context.
In the example below, the pure fact n > 0 appears in the LHS. After calling xsimpl, this pure fact is turned into a hypothesis, which may be introduced with a name into the Coq context.
Lemma xsimpl_demo_lhs_hpure : forall H1 H2 H3 H4 (n:int),
  H1 \* H2 \* \[n > 0] \* H3 ==> H4.
Proof using.
  intros. xsimpl. intros Hn.
Abort.
In case the LHS includes a contradiction, such as the pure fact False, the goal gets solved immediately by xsimpl.
Lemma xsimpl_demo_lhs_hpure : forall H1 H2 H3 H4,
  H1 \* H2 \* \[False] \* H3 ==> H4.
Proof using.
  intros. xsimpl.
Qed.
The xsimpl tactic also extracts existential quantifiers from the LHS. It turns them into universally quantified variables outside of the entailment relation, as illustrated by the following example.
Lemma xsimpl_demo_lhs_hexists : forall H1 H2 H3 H4 (p:loc),
  H1 \* \exists (n:int), (p ~~> n \* H2) \* H3 ==> H4.
Proof using.
  intros. xsimpl. intros n.
Abort.
A call to xsimpl, or to its degraded version xpull, extracts at once all the pure facts and quantifiers from the LHS, as illustrated next.
Lemma xsimpl_demo_lhs_several : forall H1 H2 H3 H4 (p q:loc),
  H1 \* \exists (n:int), (p ~~> n \* \[n > 0] \* H2) \* \[p <> q] \* H3 ==> H4.
Proof using.
  intros.
  xsimpl. (* or xpull *)
  intros n Hn Hp.
Abort.

xsimpl to Cancel Out Heap Predicates from LHS and RHS

The second feature of xsimpl is its ability to cancel out similar heap predicates that occur on both sides of an entailment.
In the example below, H2 occurs on both sides, so it is canceled out by xsimpl.
Lemma xsimpl_demo_cancel_one : forall H1 H2 H3 H4 H5 H6 H7,
  H1 \* H2 \* H3 \* H4 ==> H5 \* H6 \* H2 \* H7.
Proof using.
  intros. xsimpl.
Abort.
xsimpl actually cancels out at once all the heap predicates that it can spot appearing on both sides. In the example below, H2, H3, and H4 are canceled out.
Lemma xsimpl_demo_cancel_many : forall H1 H2 H3 H4 H5,
  H1 \* H2 \* H3 \* H4 ==> H4 \* H3 \* H5 \* H2.
Proof using.
  intros. xsimpl.
Abort.
If all the pieces of heap predicate get canceled out, the remaining proof obligation is \[] ==> \[]. In this case, xsimpl automatically solves the goal by invoking the reflexivity property of entailment.
Lemma xsimpl_demo_cancel_all : forall H1 H2 H3 H4,
  H1 \* H2 \* H3 \* H4 ==> H4 \* H3 \* H1 \* H2.
Proof using.
  intros. xsimpl.
Qed.

xsimpl to Instantiate Pure Facts and Quantifiers in RHS

The third feature of xsimpl is its ability to extract pure facts from the RHS as separate subgoals, and to instantiate existential quantifiers from the RHS.
Let us first illustrate how it deals with pure facts. In the example below, the fact n > 0 gets spawned in a separate subgoal.
Lemma xsimpl_demo_rhs_hpure : forall H1 H2 H3 (n:int),
  H1 ==> H2 \* \[n > 0] \* H3.
Proof using.
  intros. xsimpl.
Abort.
When it encounters an existential quantifier in the RHS, the xsimpl tactic introduces a unification variable denoted by a question mark, that is, an "evar" in Coq terminology. In the example below, the xsimpl tactic turns \exists n, .. p ~~> n .. into .. p ~~> ?x ...
Lemma xsimpl_demo_rhs_hexists : forall H1 H2 H3 H4 (p:loc),
  H1 ==> H2 \* \exists (n:int), (p ~~> n \* H3) \* H4.
Proof using.
  intros. xsimpl.
Abort.
The "evar" often gets subsequently instantiated as a result of a cancellation step. In the example below, xsimpl instantiates the existentially quantified variable n as ?x, then cancels out p ~~> ?x from the RHS against p ~~> 3 from the LHS, thereby unifying ?x with 3.
Lemma xsimpl_demo_rhs_hexists_unify : forall H1 H2 H3 H4 (p:loc),
  H1 \* (p ~~> 3) ==> H2 \* \exists (n:int), (p ~~> n \* H3) \* H4.
Proof using.
  intros. xsimpl.
Abort.
The instantiation of the evar ?x can be observed if there is another occurrence of the same variable in the entailment. In the next example, which refines the previous one, observe how n > 0 becomes 3 > 0.
Lemma xsimpl_demo_rhs_hexists_unify_view : forall H1 H2 H4 (p:loc),
  H1 \* (p ~~> 3) ==> H2 \* \exists (n:int), (p ~~> n \* \[n > 0]) \* H4.
Proof using.
  intros. xsimpl.
Abort.
(Advanced.) In some cases, it is desirable to provide an explicit value for instantiating an existential quantifier that occurs in the RHS. The xsimpl tactic accepts arguments, which will be used to instantiate the existentials (on a first-match basis). The syntax is xsimpl v1 .. vn, or xsimpl (>> v1 .. vn) in the case n > 3.
Lemma xsimpl_demo_rhs_hints : forall H1 (p q:loc),
  H1 ==> \exists (n m:int), (p ~~> n \* q ~~> m).
Proof using.
  intros. xsimpl 3 4.
Abort.
(Advanced.) If two existential quantifiers quantify variables of the same type, it is possible to provide a value for only the second quantifier by passing as first argument to xsimpl the special value __. The following example shows how, on a RHS of the form \exists n m, ..., the tactic xsimpl __ 4 instantiates m with 4 while leaving n as an unresolved evar.
Lemma xsimpl_demo_rhs_hints_evar : forall H1 (p q:loc),
  H1 ==> \exists (n m:int), (p ~~> n \* q ~~> m).
Proof using.
  intros. xsimpl __ 4.
Abort.

xsimpl on Entailments Between Postconditions

The tactic xsimpl also applies on goals of the form Q1 ===> Q2.
For such goals, it unfolds the definition of ===> to reveal an entailment of the form ==>, then simplifies this entailment as described previously.
Lemma qimpl_example_1 : forall (Q1 Q2:val->hprop) (H2 H3:hprop),
  Q1 \*+ H2 ===> Q2 \*+ H2 \*+ H3.
Proof using. intros. xsimpl. intros r. Abort.

Examples of Entailment Proofs using xsimpl

Lemma himpl_example_1 : forall (p:loc),
  p ~~> 3 ==>
  \exists (n:int), p ~~> n.
Proof using. xsimpl. Qed.

Lemma himpl_example_2 : forall (p q:loc),
  p ~~> 3 \* q ~~> 3 ==>
  \exists (n:int), p ~~> n \* q ~~> n.
Proof using. xsimpl. Qed.

Lemma himpl_example_4 : forall (p:loc),
  (\exists (n:int), p ~~> n) ==>
  (\exists (m:int), p ~~> (m + 1)).
Proof using.
  intros. (* observe that xsimpl here does not work well. *)
  xpull. intros n. xsimpl (n-1).
  replace (n-1+1) with n. { auto. } { math. }
  (* variant for the last line:
  applys_eq himpl_refl. fequal. math. *)

Qed.

Lemma himpl_example_5 : forall (H:hprop),
  \[False] ==> H.
Proof using. xsimpl. Qed.

The xchange Tactic

The tactic xchange is to entailment what rewrite is to equality.
Assume an entailment goal of the form H1 \* H2 \* H3 ==> H4. Assume an entailment assumption M, say H2 ==> H2'. Then xchange M turns the goal into H1 \* H2' \* H3 ==> H4, effectively replacing H2 with H2'.
Lemma xchange_demo_base : forall H1 H2 H2' H3 H4,
  H2 ==> H2' ->
  H1 \* H2 \* H3 ==> H4.
Proof using.
  introv M. xchange M.
  (* Note that freshly produced items appear to the front *)
Abort.
The tactic xchange can also take equalities as arguments. The tactic xchange M exploits the left-to-right direction of an equality M, whereas xchange <- M exploits the right-to-left direction.
Lemma xchange_demo_eq : forall H1 H2 H3 H4 H5,
  H1 \* H3 = H5 ->
  H1 \* H2 \* H3 ==> H4.
Proof using.
  introv M. xchange M.
  xchange <- M.
Abort.
The tactic xchange M does accept a lemma or hypothesis M featuring universal quantifiers, as long as its conclusion is an equality or an entailment. In such a case, xchange M instantiates M before attempting to perform a replacement.
Lemma xchange_demo_inst : forall H1 (J J':int->hprop) H3 H4,
  (forall n, J n = J' (n+1)) ->
  H1 \* J 3 \* H3 ==> H4.
Proof using.
  introv M. xchange M.
  (* Note that freshly produced items appear to the front *)
Abort.
How does the xchange tactic work? Consider a goal of the form H ==> H' and assume xchange is invoked with a hypothesis of type H1 ==> H1' as argument. The tactic xchange should attempt to decompose H as the star of H1 and the rest of H, call it H2. If it succeeds, then the goal H ==> H' can be rewritten as H1 \* H2 ==> H'. To exploit the hypothesis H1 ==> H1', the tactic should replace the goal with the entailment H1' \* H2 ==> H'. The lemma shown below captures this piece of reasoning implemented by the tactic xchange.

Exercise: 2 stars, standard, especially useful (xchange_lemma)

Prove, without using the tactic xchange, the following lemma which captures the internal working of xchange.
Lemma xchange_lemma : forall H1 H1' H H' H2,
  H1 ==> H1' ->
  H ==> H1 \* H2 ->
  H1' \* H2 ==> H' ->
  H ==> H'.
Proof using. (* FILL IN HERE *) Admitted.
End XsimplTactic.

Optional Material

Proofs for the Separation Algebra

Module FundamentalProofs.
We next show the details of the proofs establishing the fundamental properties of the Separation Logic operators.
Note that all these results must be proved without the help of the tactic xsimpl, because the implementation of xsimpl itself depends on these fundamental properties.
We begin with the frame property, which is the simplest to prove.

Exercise: 1 star, standard, especially useful (himpl_frame_l)

Prove the frame property for entailment. Hint: unfold the definition of hstar.
Lemma himpl_frame_l : forall H2 H1 H1',
  H1 ==> H1' ->
  (H1 \* H2) ==> (H1' \* H2).
Proof using. (* FILL IN HERE *) Admitted.

Exercise: 1 star, standard, especially useful (himpl_frame_r)

Prove the lemma himpl_frame_r, symmetric to himpl_frame_l.
Lemma himpl_frame_r : forall H1 H2 H2',
  H2 ==> H2' ->
  (H1 \* H2) ==> (H1 \* H2').
Proof using. (* FILL IN HERE *) Admitted.
The second simplest result is the extrusion property for existentials. To begin with, we exploit the antisymmetry of entailment to turn the equality into a conjunction of two entailments. Then, it is simply a matter of unfolding the definitions of hexists, hstar and ==>.
By default, Coq does not display any of the parentheses written in the statement below, making the proof obligation somewhat confusing. This is a small price to pay in exchange for maximal flexibility in allowing the parsing of unparenthesized expressions such as H1 \* \exists x, H2 and \exists x, H1 \* H2.
As a result, for this proof and the subsequent ones from this section, you should consider activating the display of parentheses. In CoqIDE, use the "View" menu and check "Display Parentheses". Alternatively, use the command "Set Printing Parentheses", or even "Set Printing All".
Lemma hstar_hexists : forall A (J:A->hprop) H,
  (\exists x, J x) \* H = \exists x, (J x \* H).
Proof using.
  intros. applys himpl_antisym.
  { intros h (h1&h2&M1&M2&D&U). destruct M1 as (x&M1). exists* x h1 h2. }
  { intros h (x&M). destruct M as (h1&h2&M1&M2&D&U). exists h1 h2.
    splits~. exists* x. }
Qed.
It remains to establish the commutativity and associativity of the star operator, and the fact that the empty heap predicate is its neutral element. To establish these properties, we need to exploit a few basic facts about finite maps. We introduce them as we go along.
To prove the commutativity of the star operator, i.e. H1 \* H2 = H2 \* H1, it is sufficient to prove the entailment in one direction, e.g., H1 \* H2 ==> H2 \* H1, because the other direction is symmetrical. The symmetry argument is captured by the following lemma, which we will exploit in the proof of hstar_comm.
Lemma hprop_op_comm : forall (op:hprop->hprop->hprop),
  (forall H1 H2, op H1 H2 ==> op H2 H1) ->
  (forall H1 H2, op H1 H2 = op H2 H1).
Proof using. introv M. intros. applys himpl_antisym; applys M. Qed.
To prove commutativity of star, we need to exploit the fact that the union of two finite maps with disjoint domains commutes. This fact is captured by the following lemma.
    Check Fmap.union_comm_of_disjoint : forall h1 h2,
      Fmap.disjoint h1 h2 ->
      h1 \u h2 = h2 \u h1.
The commutativity result is then proved as follows. Observe the use of the lemma hprop_op_comm, which allows establishing the entailment in only one of the two directions.
Lemma hstar_comm : forall H1 H2,
   H1 \* H2 = H2 \* H1.
Proof using.
  applys hprop_op_comm. intros. intros h (h1&h2&M1&M2&D&U). exists h2 h1.
  subst. splits~. { rewrite Fmap.union_comm_of_disjoint; auto. }
Qed.

Exercise: 3 stars, standard, especially useful (hstar_hempty_l)

Prove that the empty heap predicate is a neutral element for the star. You will need to exploit the fact that the union with an empty map is the identity, as captured by lemma Fmap.union_empty_l.
  Check Fmap.union_empty_l : forall h,
    Fmap.empty \u h = h.
Lemma hstar_hempty_l : forall H,
  \[] \* H = H.
Proof using. (* FILL IN HERE *) Admitted.
The lemma showing that hempty is a neutral element on the right can be derived from the previous result (hempty is neutral on the left) and commutativity.
Lemma hstar_hempty_r : forall H,
  H \* \[] = H.
Proof using.
  intros. rewrite hstar_comm. rewrite hstar_hempty_l. auto.
Qed.
Associativity of star is the most tedious result to derive. We need to exploit the associativity of union on finite maps, as well as lemmas about the disjointness of a map with the result of the union of two maps.
  Check Fmap.union_assoc : forall h1 h2 h3,
    (h1 \u h2) \u h3 = h1 \u (h2 \u h3).

  Check Fmap.disjoint_union_eq_l : forall h1 h2 h3,
      Fmap.disjoint (h2 \u h3) h1
    = (Fmap.disjoint h1 h2 /\ Fmap.disjoint h1 h3).

  Check Fmap.disjoint_union_eq_r : forall h1 h2 h3,
     Fmap.disjoint h1 (h2 \u h3)
   = (Fmap.disjoint h1 h2 /\ Fmap.disjoint h1 h3).

Exercise: 1 star, standard, optional (hstar_assoc)

Complete the right-to-left part of the proof of associativity of the star operator.
Lemma hstar_assoc : forall H1 H2 H3,
  (H1 \* H2) \* H3 = H1 \* (H2 \* H3).
Proof using.
  intros. applys himpl_antisym.
  { intros h (h'&h3&M1&M2&D&U). destruct M1 as (h1&h2&M3&M4&D'&U').
    subst h'. rewrite Fmap.disjoint_union_eq_l in D.
    exists h1 (h2 \u h3). splits.
    { applys M3. }
    { exists* h2 h3. }
    { rewrite* @Fmap.disjoint_union_eq_r. }
    { rewrite* @Fmap.union_assoc in U. } }
(* FILL IN HERE *) Admitted.
End FundamentalProofs.

Proof of the Consequence Rule

Module ProveConsequenceRules.
Recall the statement of the rule of consequence.
Lemma triple_conseq' : forall t H Q H' Q',
  triple t H' Q' ->
  H ==> H' ->
  Q' ===> Q ->
  triple t H Q.
A direct proof of triple_conseq goes through the low-level interpretation of Separation Logic triples in terms of heaps.
Proof using.
  (* No need to follow through this low-level proof. *)
  introv M WH WQ. rewrite triple_iff_triple_lowlevel in *.
  intros h1 h2 D HH. forwards (v&h1'&D'&R&HQ): M D. applys WH HH.
  exists v h1'. splits~. applys WQ HQ.
Qed.
An alternative proof consists of first establishing the consequence rule for the hoare judgment, then derive its generalization to the triple judgment of Separation Logic.

Exercise: 3 stars, standard, especially useful (hoare_conseq)

Prove the consequence rule for Hoare triples.
Lemma hoare_conseq : forall t H Q H' Q',
  hoare t H' Q' ->
  H ==> H' ->
  Q' ===> Q ->
  hoare t H Q.
Proof using. (* FILL IN HERE *) Admitted.

Exercise: 2 stars, standard, especially useful (triple_conseq)

Prove the consequence rule by leveraging the lemma hoare_conseq. Hint: unfold the definition of triple, apply the lemma hoare_conseq with the appropriate arguments, then exploit himpl_frame_l to prove the entailment relations.
Lemma triple_conseq : forall t H Q H' Q',
  triple t H' Q' ->
  H ==> H' ->
  Q' ===> Q ->
  triple t H Q.
Proof using. (* FILL IN HERE *) Admitted.
End ProveConsequenceRules.

Proof of the Extraction Rules for Triples

Module ProveExtractionRules.
Recall the extraction rule for pure facts.
Parameter triple_hpure' : forall t (P:Prop) H Q,
  (P -> triple t H Q) ->
  triple t (\[P] \* H) Q.
To prove this lemma, we first establish the corresponding result on hoare, then derive its version for triple.
Lemma hoare_hpure : forall t (P:Prop) H Q,
  (P -> hoare t H Q) ->
  hoare t (\[P] \* H) Q.
Proof using.
  introv M. intros h (h1&h2&M1&M2&D&U). destruct M1 as (M1&HP).
  subst. rewrite Fmap.union_empty_l. applys M HP M2.
Qed.

Lemma triple_hpure : forall t (P:Prop) H Q,
  (P -> triple t H Q) ->
  triple t (\[P] \* H) Q.
Proof using.
  introv M. unfold triple. intros H'.
  rewrite hstar_assoc. applys hoare_hpure.
  intros HP. applys M HP.
Qed.
Similarly, the extraction rule for existentials for triple can be derived from that for hoare.

Exercise: 2 stars, standard, especially useful (triple_hexists)

Prove the extraction rule triple_hexists. Hint: start by stating and proving the corresponding lemma for hoare triples.
Lemma hoare_hexists : forall t (A:Type) (J:A->hprop) Q,
  (forall x, hoare t (J x) Q) ->
  hoare t (\exists x, J x) Q.
Proof using. introv M. intros h (x&Hh). applys M Hh. Qed.

Lemma triple_hexists : forall t (A:Type) (J:A->hprop) Q,
  (forall x, triple t (J x) Q) ->
  triple t (\exists x, J x) Q.
Proof using. (* FILL IN HERE *) Admitted.
Remark: the rules for extracting existentials out of entailments and out of preconditions can be stated in a slightly more concise way by exploiting the combinator hexists rather than its associated notation \exists x, H, which stands for hexists (fun x => H).
These formulations, shown below, tend to behave slightly better with respect to Coq unification, hence we use them in the CFML framework.
Parameter hstar_hexists' : forall A (J:A->hprop) H,
  (hexists J) \* H = hexists (J \*+ H).

Parameter triple_hexists' : forall t (A:Type) (J:A->hprop) Q,
  (forall x, triple t (J x) Q) ->
  triple t (hexists J) Q.
Remark: in chapter Hprop, we observed that \[P] can be encoded as \exists (p:P), \[]. When this encoding is used, the rule triple_hpure turns out to be a particular instance of the rule triple_hexists, as we prove next.
Parameter hpure_encoding : forall P,
  \[P] = (\exists (p:P), \[]).

Lemma triple_hpure_from_triple_hexists : forall t (P:Prop) H Q,
  (P -> triple t H Q) ->
  triple t (\[P] \* H) Q.
Proof using.
  introv M. rewrite hpure_encoding.
  rewrite hstar_hexists. (* disable notation printing to see the effect *)
  applys triple_hexists. (* forall (p:P), ... is the same as P -> ... *)
  rewrite hstar_hempty_l. applys M.
Qed.

End ProveExtractionRules.

Rules for Naming Heaps

Thereafter, we write = h for fun h' => h' = h, that is, the heap predicate that only accepts heaps exactly equal to h.

Exercise: 3 stars, standard, optional (hexists_named_eq)

Prove that a heap predicate H is equivalent to the heap predicate asserting that the heap is exactly equal to some heap h satisfying H h.
Hint: use hstar_hpure_l and hexists_intro, as well as the extraction rules himpl_hexists_l and himpl_hstar_hpure_l.
Lemma hexists_named_eq : forall H,
  H = (\exists h, \[H h] \* (= h)).
Proof using. (* FILL IN HERE *) Admitted.

Exercise: 1 star, standard, optional (hoare_named_heap)

Prove that the proposition hoare t H Q is equivalent to the following: for any heap h satisfying the precondition H, the Hoare triple whose precondition requires the input heap to be exactly equal to h, and whose postcondition is Q, holds.
Lemma hoare_named_heap : forall t H Q,
  (forall h, H h -> hoare t (= h) Q) ->
  hoare t H Q.
Proof using. (* FILL IN HERE *) Admitted.

Exercise: 3 stars, standard, optional (triple_named_heap)

Prove the counterpart of hoare_named_heap for Separation Logic triples.
It is possible to exploit the lemma hoare_named_heap, yet there exists a simpler, more direct proof that exploits the lemma hexists_named_eq, which is stated above.
Lemma triple_named_heap : forall t H Q,
  (forall h, H h -> triple t (= h) Q) ->
  triple t H Q.
Proof using. (* FILL IN HERE *) Admitted.

Alternative Structural Rule for Existentials

Module AlternativeExistentialRule.
Traditional papers on Separation Logic do not include triple_hexists, but instead a rule called triple_hexists2 that includes an existential quantifier both in the precondition and the postcondition.
As we show next, in the presence of the consequence rule, the two rules are equivalent.
The formulation of triple_hexists is not only more concise, it is also better suited for practical applications.
Lemma triple_hexists2 : forall A (Hof:A->hprop) (Qof:A->val->hprop) t,
  (forall x, triple t (Hof x) (Qof x)) ->
  triple t (\exists x, Hof x) (fun v => \exists x, Qof x v).
Proof using.
  introv M.
  applys triple_hexists. intros x.
  applys triple_conseq (M x).
  { applys himpl_refl. }
  { intros v. applys himpl_hexists_r x. applys himpl_refl. }
Qed.

Lemma triple_hexists_of_triple_hexists2 : forall t (A:Type) (Hof:A->hprop) Q,
  (forall x, triple t (Hof x) Q) ->
  triple t (\exists x, Hof x) Q.
Proof using.
  introv M.
  applys triple_conseq (\exists x, Hof x) (fun (v:val) => \exists (x:A), Q v).
  { applys triple_hexists2. intros x. applys M. }
  { applys himpl_refl. }
  { intros v. applys himpl_hexists_l. intros _. applys himpl_refl. }
Qed.

End AlternativeExistentialRule.

Historical Notes

Nearly every project that aims for practical program verification using Separation Logic features, in one way or another, some amount of tooling for automatically simplifying Separation Logic assertions.
The tactic used here, xsimpl, was developed for the CFML tool. Its specification may be found in Appendix K from the paper: http://www.chargueraud.org/research/2020/seq_seplogic/seq_seplogic.pdf though it makes sense to wait until chapter Wand for reading it.

Rules: Reasoning Rules

Set Implicit Arguments.
This file imports LibSepDirect.v instead of Hprop.v and Himpl.v. The file LibSepDirect.v contains definitions that are essentially similar to those from Hprop.v and Himpl.v, yet with one main difference: LibSepDirect makes the definition of Separation Logic operators opaque.
As a result, one cannot unfold the definition of hstar, hpure, etc. To carry out reasoning, one must use the introduction and elimination lemmas (e.g. hstar_intro, hstar_elim). These lemmas enforce abstraction: they ensure that the proofs do not depend on the particular choice of the definitions used for constructing Separation Logic.
From SLF Require Export LibSepReference.
From SLF Require Basic.
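For reference, the introduction lemmas enforcing this abstraction, such as hstar_intro mentioned above, take the following shape (a sketch of the statement from LibSepReference): a heap satisfying H1 \* H2 is built from two disjoint pieces, one satisfying H1 and one satisfying H2.

```coq
  Check hstar_intro : forall H1 H2 h1 h2,
    H1 h1 ->
    H2 h2 ->
    Fmap.disjoint h1 h2 ->
    (H1 \* H2) (h1 \u h2).
```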

First Pass

In the previous chapters, we have:
  • introduced the key heap predicate operators,
  • introduced the notion of Separation Logic triple,
  • introduced the entailment relation,
  • introduced the structural rules of Separation Logic.
We are now ready to present the other reasoning rules, which enable establishing properties of concrete programs.
These reasoning rules are proved correct with respect to the semantics of the programming language in which the programs are expressed. Thus, a necessary preliminary step is to present the syntax and the semantics of a (toy) programming language, for which we aim to provide Separation Logic reasoning rules.
The present chapter is thus organized as follows:
  • definition of the syntax of the language,
  • definition of the semantics of the language,
  • statements of the reasoning rules associated with each of the term constructions from the language,
  • specification of the primitive operations of the language, in particular those associated with memory operations,
  • review of the 4 structural rules introduced in prior chapters,
  • examples of practical verification proofs.
The bonus section (optional) also includes:
  • proofs of the reasoning rules associated with each term construct,
  • proofs of the specification of the primitive operations.

Semantics of Terms

Module SyntaxAndSemantics.

Syntax

The syntax described next captures the "abstract syntax tree" of a programming language. It follows a presentation that distinguishes between closed values and terms. This presentation is intended to simplify the definition and evaluation of the substitution function: because values are always closed (i.e., no free variables in them), the substitution function never needs to traverse through values.
The grammar for values includes unit, booleans, integers, locations, functions, recursive functions, and primitive operations. For example, val_int 3 denotes the integer value 3. The value val_fun x t denotes the function fun x => t, and the value val_fix f x t denotes the function fix f x => t, which is written let rec f x = t in f in OCaml syntax.
For conciseness, we include just a few primitive operations: ref, get, set and free for manipulating the mutable state, the operation add to illustrate a simple arithmetic operation, and the operation div to illustrate a partial operation.
Inductive val : Type :=
  | val_unit : val
  | val_bool : bool -> val
  | val_int : int -> val
  | val_loc : loc -> val
  | val_fun : var -> trm -> val
  | val_fix : var -> var -> trm -> val
  | val_ref : val
  | val_get : val
  | val_set : val
  | val_free : val
  | val_add : val
  | val_div : val

The grammar for terms includes values, variables, function definitions, recursive function definitions, function applications, sequences, let-bindings, and conditionals.

with trm : Type :=
  | trm_val : val -> trm
  | trm_var : var -> trm
  | trm_fun : var -> trm -> trm
  | trm_fix : var -> var -> trm -> trm
  | trm_app : trm -> trm -> trm
  | trm_seq : trm -> trm -> trm
  | trm_let : var -> trm -> trm -> trm
  | trm_if : trm -> trm -> trm -> trm.
Note that trm_fun and trm_fix denote functions that may feature free variables, unlike val_fun and val_fix which denote closed values. The intention is that the evaluation of a trm_fun in the empty context produces a val_fun value. Likewise, a trm_fix eventually evaluates to a val_fix.
Several value constructors are declared as coercions, to enable more concise statements. For example, val_loc is declared as a coercion, so that a location p of type loc can be viewed as the value val_loc p where an expression of type val is expected. Likewise, a boolean b may be viewed as the value val_bool b, and an integer n may be viewed as the value val_int n.
Coercion val_loc : loc >-> val.
Coercion val_bool : bool >-> val.
Coercion val_int : Z >-> val.
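For example, with these coercions in place, boolean and integer literals may be written directly where a value is expected. (A small sketch; the definitions below are purely illustrative.)

```coq
Definition demo_val_int : val := 3.      (* elaborates to val_int 3 *)
Definition demo_val_bool : val := true.  (* elaborates to val_bool true *)
```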

State

The language we consider is an imperative language, with primitive functions for manipulating the state. Thus, the statement of the evaluation rules involves a memory state.
Recall from Hprop that a state is represented as a finite map from locations to values. Finite maps are presented using the type fmap. Details of the construction of finite maps are beyond the scope of this course; they may be found in the file LibSepFmap.v.
Definition state : Type := fmap loc val.
For technical reasons related to the internal representation of finite maps, to enable reading in a state, we need to justify that the grammar of values is inhabited. This property is captured by the following command, whose details are not relevant for understanding the rest of the chapter.
Instance Inhab_val : Inhab val.
Proof using. apply (Inhab_of_val val_unit). Qed.

Substitution

The semantics of the evaluation of functions is described by means of a substitution function. The substitution function, written subst y w t, replaces all occurrences of a variable y with a value w inside a term t.
The substitution function is always the identity function on values, because our language only considers closed values. In other words, we define subst y w (trm_val v) = (trm_val v).
The substitution function, when reaching a variable, performs a comparison between two variables. To that end, it exploits the comparison function var_eq x y, which produces a boolean value indicating whether x and y denote the same variable.
"Optional contents": the remaining of this section describes further details about the substitution function that may be safely skipped over in first reading.
The substitution operation traverses all other language constructs in a structural manner. It takes care of avoiding "variable capture" when traversing binders: subst y w t does not recurse below the scope of binders whose name is equal to y. For example, the result of subst y w (trm_let x t1 t2) is defined as trm_let x (subst y w t1) (if var_eq x y then t2 else (subst y w t2)).
The auxiliary function if_y_eq, which appears in the definition of subst shown below, helps factorize the relevant checks that prevent variable capture.
Fixpoint subst (y:var) (w:val) (t:trm) : trm :=
  let aux t := subst y w t in
  let if_y_eq x t1 t2 := if var_eq x y then t1 else t2 in
  match t with
  | trm_val v => trm_val v
  | trm_var x => if_y_eq x (trm_val w) t
  | trm_fun x t1 => trm_fun x (if_y_eq x t1 (aux t1))
  | trm_fix f x t1 => trm_fix f x (if_y_eq f t1 (if_y_eq x t1 (aux t1)))
  | trm_app t1 t2 => trm_app (aux t1) (aux t2)
  | trm_seq t1 t2 => trm_seq (aux t1) (aux t2)
  | trm_let x t1 t2 => trm_let x (aux t1) (if_y_eq x t2 (aux t2))
  | trm_if t0 t1 t2 => trm_if (aux t0) (aux t1) (aux t2)
  end.
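As a hypothetical example of the capture-avoidance check, substituting 3 for "x" in let x = x in x rewrites the bound term but leaves the body untouched, because the let-binding shadows "x". (Sketch: recall that var is implemented as string, so string literals denote variables; we assume here that var_eq computes on literals.)

```coq
Example subst_demo :
  subst "x" 3 (trm_let "x" (trm_var "x") (trm_var "x"))
  = trm_let "x" (trm_val 3) (trm_var "x").
Proof using. reflexivity. Qed.
```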

Implicit Types and Coercions

To improve the readability of the evaluation rules stated further, we take advantage of both implicit types and coercions.
The implicit types are defined as shown below. For example, the first command indicates that variables whose name begins with the letter 'b' are, by default, variables of type bool.
Implicit Types b : bool.
Implicit Types v r : val.
Implicit Types t : trm.
Implicit Types s : state.
We next introduce two key coercions. First, we declare trm_val as a coercion, so that, instead of writing trm_val v, we may write simply v wherever a term is expected.
Coercion trm_val : val >-> trm.
Second, we declare trm_app as a "Funclass" coercion. This piece of magic enables us to write t1 t2 as a shorthand for trm_app t1 t2. The idea of associating trm_app as the "Funclass" coercion for the type trm is that if a term t1 of type trm is applied like a function to an argument, then t1 should be interpreted as trm_app t1.
Coercion trm_app : trm >-> Funclass.
Interestingly, the "Funclass" coercion for trm_app can be iterated. The expression t1 t2 t3 is parsed by Coq as (t1 t2) t3. The first application t1 t2 is interpreted as trm_app t1 t2. This expression, which itself has type trm, is applied to t3. Hence, it is interpreted as trm_app (trm_app t1 t2) t3, which indeed corresponds to a function applied to two arguments.
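Combining all the coercions, the application of a primitive operation to two integer literals can be written almost as in concrete syntax. (Sketch: val_add is first coerced to a term via trm_val, then applied through the "Funclass" coercion.)

```coq
Definition demo_add : trm := val_add 3 4.
(* elaborates to:
   trm_app (trm_app (trm_val val_add) (trm_val (val_int 3)))
           (trm_val (val_int 4)) *)
```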

Big-Step Semantics

The semantics is presented in big-step style. This presentation makes it slightly easier to establish reasoning rules than with small-step reduction rules, because both the big-step judgment and a triple judgment describe complete execution, relating a term with the value that it produces.
The big-step evaluation judgment, written eval s t s' v, asserts that, starting from state s, the evaluation of the term t terminates in a state s', producing an output value v.
For simplicity, we assume terms to be in "A-normal form": the arguments of applications and of conditionals are restricted to variables and values. Such a requirement does not limit expressiveness, yet it simplifies the statement of the evaluation rules.
For example, if a source program includes a conditional trm_if t0 t1 t2, then it is required that t0 be either a variable or a value. This is not a real restriction, because trm_if t0 t1 t2 can always be encoded as let x = t0 in if x then t1 else t2.
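The encoding described in the paragraph above can be phrased as a hypothetical helper function, where x is assumed to be a fresh variable, i.e., one that does not occur free in t1 or t2:

```coq
(* Sketch: turn an arbitrary conditional into A-normal form. *)
Definition encode_if (x:var) (t0 t1 t2:trm) : trm :=
  trm_let x t0 (trm_if (trm_var x) t1 t2).
```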
The big-step judgment is inductively defined as follows.
Inductive eval : state -> trm -> state -> val -> Prop :=

  
1. eval for values and function definitions.
A value evaluates to itself. A term function evaluates to a value function. Likewise for a recursive function.

  | eval_val : forall s v,
      eval s (trm_val v) s v
  | eval_fun : forall s x t1,
      eval s (trm_fun x t1) s (val_fun x t1)
  | eval_fix : forall s f x t1,
      eval s (trm_fix f x t1) s (val_fix f x t1)

  
2. eval for function applications.
The beta reduction rule asserts that (val_fun x t1) v2 evaluates to the same result as subst x v2 t1.
In the recursive case, (val_fix f x t1) v2 evaluates to subst x v2 (subst f v1 t1), where v1 denotes the recursive function itself, that is, val_fix f x t1.

  | eval_app_fun : ∀ s1 s2 v1 v2 x t1 v,
      v1 = val_fun x t1 →
      eval s1 (subst x v2 t1) s2 v →
      eval s1 (trm_app v1 v2) s2 v
  | eval_app_fix : ∀ s1 s2 v1 v2 f x t1 v,
      v1 = val_fix f x t1 →
      eval s1 (subst x v2 (subst f v1 t1)) s2 v →
      eval s1 (trm_app v1 v2) s2 v

  
3. eval for structural constructs.
A sequence trm_seq t1 t2 first evaluates t1, taking the state from s1 to s2, drops the result of t1, then evaluates t2, taking the state from s2 to s3.
The let-binding trm_let x t1 t2 is similar, except that the result of t1 gets substituted for the variable x inside t2.

  | eval_seq : ∀ s1 s2 s3 t1 t2 v1 v,
      eval s1 t1 s2 v1 →
      eval s2 t2 s3 v →
      eval s1 (trm_seq t1 t2) s3 v
  | eval_let : ∀ s1 s2 s3 x t1 t2 v1 r,
      eval s1 t1 s2 v1 →
      eval s2 (subst x v1 t2) s3 r →
      eval s1 (trm_let x t1 t2) s3 r

  
4. eval for conditionals.
A conditional in a source program is assumed to be of the form if t0 then t1 else t2, where t0 is either a variable or a value. If it is a variable, then by the time it reaches an evaluation position, the variable must have been substituted by a value. Thus, the evaluation rule only considers the form if v0 then t1 else t2. The value v0 must be a boolean value, otherwise evaluation gets stuck.
The term trm_if (val_bool true) t1 t2 behaves like t1, whereas the term trm_if (val_bool false) t1 t2 behaves like t2. This behavior is described by a single rule, leveraging Coq's "if" constructor to factor out the two cases.

  | eval_if : ∀ s1 s2 b v t1 t2,
      eval s1 (if b then t1 else t2) s2 v →
      eval s1 (trm_if (val_bool b) t1 t2) s2 v

  
5. eval for primitive stateless operations.
For reasons similar to those explained above, the behavior of applied primitive functions needs to be described only for the case of value arguments.
An arithmetic operation expects integer arguments. The addition of val_int n1 and val_int n2 produces val_int (n1 + n2).
The division operation, on the same arguments, produces the quotient n1 / n2, under the assumption that the divisor n2 is nonzero. In other words, if a program performs a division by zero, then it cannot satisfy the eval judgment.

  | eval_add : ∀ s n1 n2,
      eval s (val_add (val_int n1) (val_int n2)) s (val_int (n1 + n2))
  | eval_div : ∀ s n1 n2,
      n2 ≠ 0 →
      eval s (val_div (val_int n1) (val_int n2)) s (val_int (Z.quot n1 n2))

  
6. eval for primitive operations on memory.
There remains to describe operations that act on the mutable store.
val_ref v allocates a fresh cell with contents v. The operation returns the location, written p, of the new cell. This location must not be previously in the domain of the store s.
val_get (val_loc p) reads the value in the store s at location p. The location must be bound to a value in the store, else evaluation is stuck.
val_set (val_loc p) v updates the store at a location p assumed to be bound in the store s. The operation modifies the store and returns the unit value.
val_free (val_loc p) deallocates the cell at location p.

  | eval_ref : ∀ s v p,
      ¬ Fmap.indom s p →
      eval s (val_ref v) (Fmap.update s p v) (val_loc p)
  | eval_get : ∀ s p,
      Fmap.indom s p →
      eval s (val_get (val_loc p)) s (Fmap.read s p)
  | eval_set : ∀ s p v,
      Fmap.indom s p →
      eval s (val_set (val_loc p) v) (Fmap.update s p v) val_unit
  | eval_free : ∀ s p,
      Fmap.indom s p →
      eval s (val_free (val_loc p)) (Fmap.remove s p) val_unit.

End SyntaxAndSemantics.

Loading of Definitions from LibSepDirect

Throughout the rest of this file, we rely not on the definitions shown above, but on the definitions from LibSepDirect.v. The latter are slightly more general, yet completely equivalent to the ones presented above for the purpose of establishing the reasoning rules that we are interested in.
To reduce the clutter in the statement of lemmas, we associate default types to a number of common meta-variables.
Implicit Types x f : var.
Implicit Types b : bool.
Implicit Types p : loc.
Implicit Types n : int.
Implicit Types v w r : val.
Implicit Types t : trm.
Implicit Types h : heap.
Implicit Types s : state.
Implicit Types H : hprop.
Implicit Types Q : val → hprop.

Rules for Terms

We next present the reasoning rules for terms. Most of these Separation Logic rules have a statement essentially identical to the statement of the corresponding Hoare Logic rule. The main difference lies in their interpretation: whereas Hoare Logic pre- and post-conditions describe the full state, a Separation Logic rule describes only a fraction of the mutable state.

Reasoning Rule for Sequences

Let us begin with the reasoning rule for sequences. The Separation Logic reasoning rule for a sequence t1;t2 is essentially the same as that from Hoare logic. The rule is:
{H} t1 {fun v => H1}     {H1} t2 {Q}  

{H} (t1;t2) {Q}
The Coq statement corresponding to the above rule is:
Parameter triple_seq : ∀ t1 t2 H Q H1,
  triple t1 H (fun v ⇒ H1) →
  triple t2 H1 Q →
  triple (trm_seq t1 t2) H Q.
The variable v denotes the result of the evaluation of t1. For well-typed programs, this result would always be val_unit. Yet, because we here consider an untyped language, we do not bother adding the constraint v = val_unit. Instead, we simply treat the result of t1 as a value irrelevant to the remainder of the evaluation.

Reasoning Rule for Let-Bindings

Next, we present the reasoning rule for let-bindings. Here again, there is nothing specific to Separation Logic; the rule would be exactly the same in Hoare Logic.
The reasoning rule for a let binding let x = t1 in t2 could be stated, in informal writing, in the form:
{H} t1 {Q1}     (forall x, {Q1 x} t2 {Q})  

{H} (let x = t1 in t2) {Q}
Yet, such a presentation conflates the x that denotes a program variable in let x = t1 in t2 with the x that denotes a value when quantified as ∀ x.
The correct statement involves a substitution from the variable x to a value quantified as (v:val).
{H} t1 {Q1}     (forall v, {Q1 v} (subst x v t2) {Q})  

{H} (let x = t1 in t2) {Q}
The corresponding Coq statement is thus as follows.
Parameter triple_let : ∀ x t1 t2 Q1 H Q,
  triple t1 H Q1 →
  (∀ v1, triple (subst x v1 t2) (Q1 v1) Q) →
  triple (trm_let x t1 t2) H Q.

Reasoning Rule for Conditionals

The rule for a conditional is, again, exactly like in Hoare logic.
b = true -> {H} t1 {Q}     b = false -> {H} t2 {Q}  

{H} (if b then t1 else t2) {Q}
The corresponding Coq statement appears next.
Parameter triple_if_case : ∀ b t1 t2 H Q,
  (b = true → triple t1 H Q) →
  (b = false → triple t2 H Q) →
  triple (trm_if (val_bool b) t1 t2) H Q.
Note that the two premises may be factorized into a single one using Coq's conditional construct. Such an alternative statement is discussed further in this chapter.
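For reference, the factorized statement, which leverages Coq's conditional to merge the two premises, can be sketched as follows. This anticipates the rule discussed later in the chapter; treat it as a sketch rather than as the definitive statement.

    Parameter triple_if : ∀ (b:bool) t1 t2 H Q,
      triple (if b then t1 else t2) H Q →
      triple (trm_if (val_bool b) t1 t2) H Q.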

Reasoning Rule for Values

The rule for a value v can be written as a triple with an empty precondition and a postcondition asserting that the result value r is equal to v, in the empty heap. Formally:
    

{\[]} v {fun r => \[r = v]}
It is however more convenient in practice to work with a judgment whose conclusion is of the form {H} v {Q}, for an arbitrary H and Q. For this reason, we prefer the following rule for values.
      H ==> Q v
      ---------
      {H} v {Q}
It may not be completely obvious at first sight why this alternative rule is equivalent to the former. We prove the equivalence further in this chapter.
The Coq statement of the rule for values is thus as follows.
Parameter triple_val : ∀ v H Q,
  H ==> Q v →
  triple (trm_val v) H Q.
Let us prove that the "minimalistic" rule {\[]} v {fun r ⇒ \[r = v]} is equivalent to triple_val.

Exercise: 1 star, standard, especially useful (triple_val_minimal)

Prove that the alternative rule for values is derivable from triple_val. Hint: use the tactic xsimpl to conclude the proof.
Lemma triple_val_minimal : ∀ v,
  triple (trm_val v) \[] (fun r ⇒ \[r = v]).
Proof using. (* FILL IN HERE *) Admitted.

Exercise: 2 stars, standard, especially useful (triple_val')

More interestingly, prove that triple_val is derivable from triple_val_minimal. Hint: you will need to exploit the appropriate structural rule(s).
Lemma triple_val' : ∀ v H Q,
  H ==> Q v →
  triple (trm_val v) H Q.
Proof using. (* FILL IN HERE *) Admitted.

Exercise: 4 stars, standard, especially useful (triple_let_val)

Consider a term of the form let x = v1 in t2, that is, where the argument of the let-binding is already a value. State and prove a reasoning rule of the form:
      Lemma triple_let_val : ∀ x v1 t2 H Q,
        ... →
        triple (trm_let x v1 t2) H Q.
Hint: you'll need to guess the intermediate postcondition Q1 associated with the let-binding rule, and exploit the appropriate structural rules.
(* FILL IN HERE *)

Reasoning Rule for Functions

In addition to the reasoning rule for values, we need reasoning rules for functions and recursive functions that appear as terms in the source program (as opposed to appearing as values).
A function definition trm_fun x t1, expressed as a subterm in a program, evaluates to a value, more precisely to val_fun x t1. Again, we could consider a rule with an empty precondition:
    

{\[]} (trm_fun x t1) {fun r => \[r = val_fun x t1]}
However, we prefer a conclusion of the form {H} (trm_fun x t1) {Q}. We thus consider the following rule, very similar to triple_val.
Parameter triple_fun : ∀ x t1 H Q,
  H ==> Q (val_fun x t1) →
  triple (trm_fun x t1) H Q.
The rule for recursive functions is similar. It is presented further in the file.
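To give a flavor of that rule, it can be sketched as follows, by analogy with triple_fun. This is a sketch; the precise statement appears later in the file.

    Parameter triple_fix : ∀ f x t1 H Q,
      H ==> Q (val_fix f x t1) →
      triple (trm_fix f x t1) H Q.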
Last but not least, we need a reasoning rule for function applications. Consider an application trm_app v1 v2. Assume v1 to be a function, that is, to be of the form val_fun x t1. Then, according to the beta-reduction rule, the semantics of trm_app v1 v2 is the same as that of subst x v2 t1. The reasoning rule is thus:
v1 = val_fun x t1     {H} (subst x v2 t1) {Q}  

{H} (trm_app v1 v2) {Q}
The corresponding Coq statement is as shown below.
Parameter triple_app_fun : ∀ x v1 v2 t1 H Q,
  v1 = val_fun x t1 →
  triple (subst x v2 t1) H Q →
  triple (trm_app v1 v2) H Q.
The generalization to the application of recursive functions is straightforward. It is discussed further in this chapter.
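By analogy with triple_app_fun and the evaluation rule eval_app_fix, the rule for applying a recursive function can be sketched as follows. This is a sketch anticipating the statement given later in this chapter.

    Parameter triple_app_fix : ∀ v1 v2 f x t1 H Q,
      v1 = val_fix f x t1 →
      triple (subst x v2 (subst f v1 t1)) H Q →
      triple (trm_app v1 v2) H Q.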

Specification of Primitive Operations

Before we can tackle verification of actual programs, there remains to present the specifications for the primitive operations. Let us begin with basic arithmetic operations: addition and division.

Specification of Arithmetic Primitive Operations

Consider a term of the form val_add n1 n2, which is short for trm_app (trm_app (trm_val val_add) (val_int n1)) (val_int n2). Indeed, recall that val_int is declared as a coercion.
The addition operation may execute in an empty state. It does not modify the state, and returns the value val_int (n1+n2).
In the specification shown below, the precondition is written \[] and the postcondition binds a return value r of type val specified to be equal to val_int (n1+n2).
Parameter triple_add : ∀ n1 n2,
  triple (val_add n1 n2)
    \[]
    (fun r ⇒ \[r = val_int (n1 + n2)]).
The specification of the division operation val_div n1 n2 is similar, yet with the extra requirement that the argument n2 must be nonzero. This requirement n2 ≠ 0 is a pure fact, which can be asserted as part of the precondition, as follows.
Parameter triple_div : ∀ n1 n2,
  triple (val_div n1 n2)
    \[n2 ≠ 0]
    (fun r ⇒ \[r = val_int (Z.quot n1 n2)]).
Equivalently, the requirement n2 ≠ 0 may be asserted as a hypothesis to the front of the triple judgment, in the form of a standard Coq hypothesis, as shown below.
Parameter triple_div' : ∀ n1 n2,
  n2 ≠ 0 →
  triple (val_div n1 n2)
    \[]
    (fun r ⇒ \[r = val_int (Z.quot n1 n2)]).
This latter presentation, with pure facts such as n2 ≠ 0 placed to the front of the triple, turns out to be more practical to exploit in proofs. Hence, we always follow this style of presentation, and reserve the precondition for describing pieces of mutable state.

Specification of Primitive Operations Acting on Memory

There remains to describe the specification of operations on the heap.
Recall that val_get denotes the operation for reading a memory cell. A call of the form val_get v' executes safely if v' is of the form val_loc p for some location p, in a state that features a memory cell at location p, storing some contents v. Such a state is described as p ~~> v. The read operation returns a value r such that r = v, and the memory state of the operation remains unchanged. The specification of val_get is thus expressed as follows.
Parameter triple_get : ∀ v p,
  triple (val_get (val_loc p))
    (p ~~> v)
    (fun r ⇒ \[r = v] \* (p ~~> v)).
Remark: val_loc is registered as a coercion, so val_get (val_loc p) could be written simply as val_get p, where p has type loc. We here chose to write val_loc explicitly for clarity.
Recall that val_set denotes the operation for writing a memory cell. A call of the form val_set v' w executes safely if v' is of the form val_loc p for some location p, in a state p ~~> v. The write operation updates this state to p ~~> w, and returns the unit value, which can be ignored. Hence, val_set is specified as follows.
Parameter triple_set : ∀ w p v,
  triple (val_set (val_loc p) w)
    (p ~~> v)
    (fun _ ⇒ p ~~> w).
Recall that val_ref denotes the operation for allocating a cell with a given contents. A call to val_ref v does not depend on the contents of the existing state. It extends the state with a fresh singleton cell, at some location p, assigning it v as contents. The fresh cell is then described by the heap predicate p ~~> v. The evaluation of val_ref v produces the value val_loc p. Thus, if r denotes the result value, we have r = val_loc p for some p. In the corresponding specification shown below, observe how the location p is existentially quantified in the postcondition.
Parameter triple_ref : ∀ v,
  triple (val_ref v)
    \[]
    (fun (r:val) ⇒ \∃ (p:loc), \[r = val_loc p] \* p ~~> v).
Using the notation funloc p ⇒ H as a shorthand for fun (r:val) ⇒ \∃ (p:loc), \[r = val_loc p] \* H, the specification for val_ref becomes more concise.
Parameter triple_ref' : ∀ v,
  triple (val_ref v)
    \[]
    (funloc p ⇒ p ~~> v).
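For the record, the funloc shorthand can be introduced by a notation along the following lines. This is a sketch: the exact definition lives in the supporting library, and its precedence and format annotations may differ.

    Notation "'funloc' p '⇒' H" :=
      (fun (r:val) ⇒ \∃ (p:loc), \[r = val_loc p] \* H)
      (at level 200, p ident).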
Recall that val_free denotes the operation for deallocating a cell at a given address. A call of the form val_free p executes safely in a state p ~~> v. The operation leaves an empty state, and asserts that the return value, named r, is equal to unit.
Parameter triple_free : ∀ p v,
  triple (val_free (val_loc p))
    (p ~~> v)
    (fun _ ⇒ \[]).

Review of the Structural Rules

Let us review the essential structural rules, which were introduced in the previous chapters. These structural rules are involved in the practical verification proofs carried out further in this chapter.
The frame rule asserts that the precondition and the postcondition can be extended together by an arbitrary heap predicate. Recall that the definition of triple was set up precisely to validate this frame rule, so in a sense it holds "by construction".
Parameter triple_frame : ∀ t H Q H',
  triple t H Q →
  triple t (H \* H') (Q \*+ H').
The consequence rule allows one to strengthen the precondition and weaken the postcondition.
Parameter triple_conseq : ∀ t H' Q' H Q,
  triple t H' Q' →
  H ==> H' →
  Q' ===> Q →
  triple t H Q.
In practice, it is most convenient to exploit a rule that combines both frame and consequence into a single rule, as stated next. (Note that this "combined structural rule" was proved as an exercise in chapter Himpl.)
Parameter triple_conseq_frame : ∀ H2 H1 Q1 t H Q,
  triple t H1 Q1 →
  H ==> H1 \* H2 →
  Q1 \*+ H2 ===> Q →
  triple t H Q.
The two extraction rules make it possible to extract pure facts and existentially quantified variables from the precondition into the Coq context.
Parameter triple_hpure : ∀ t (P:Prop) H Q,
  (P → triple t H Q) →
  triple t (\[P] \* H) Q.

Parameter triple_hexists : ∀ t (A:Type) (J:A→hprop) Q,
  (∀ (x:A), triple t (J x) Q) →
  triple t (\∃ (x:A), J x) Q.

Exercise: 1 star, standard, optional (triple_hpure')

The extraction rule triple_hpure assumes a precondition of the form \[P] \* H. The variant rule triple_hpure', stated below, assumes instead a precondition with only the pure part, i.e., of the form \[P]. Prove that triple_hpure' is indeed a corollary of triple_hpure.
Lemma triple_hpure' : ∀ t (P:Prop) Q,
  (P → triple t \[] Q) →
  triple t \[P] Q.
Proof using. (* FILL IN HERE *) Admitted.

Verification Proof in Separation Logic

We have at hand all the necessary rules for carrying out actual verification proofs in Separation Logic. Let's do it!
Module ExamplePrograms.
Export ProgramSyntax.

Proof of incr

First, we consider the verification of the increment function, which is written in OCaml syntax as:
   let incr p =
      p := !p + 1
Recall that, for simplicity, we assume programs to be written in "A-normal form", that is, with all intermediate expressions named by a let-binding. Thereafter, we thus consider the following definition for incr.
   let incr p =
      let n = !p in
      let m = n+1 in
      p := m
Using the constructs from our toy programming language, the definition of incr is written as shown below.
Definition incr : val :=
  val_fun "p" (
    trm_let "n" (trm_app val_get (trm_var "p")) (
    trm_let "m" (trm_app (trm_app val_add
                             (trm_var "n")) (val_int 1)) (
    trm_app (trm_app val_set (trm_var "p")) (trm_var "m")))).
Alternatively, using notation and coercions, the same program can be written as shown below.
Definition incr' : val :=
  <{ fun 'p ⇒
       let 'n = ! 'p in
       let 'm = 'n + 1 in
       'p := 'm }>.
Let us check that the two definitions are indeed the same.
Lemma incr_eq_incr' :
  incr = incr'.
Proof using. reflexivity. Qed.
Recall from the first chapter the specification of the increment function. Its precondition requires a singleton state of the form p ~~> n. Its postcondition describes a state of the form p ~~> (n+1).
Lemma triple_incr : ∀ (p:loc) (n:int),
  triple (trm_app incr p)
    (p ~~> n)
    (fun _ ⇒ p ~~> (n+1)).
We next show a detailed proof for this specification. It exploits:
  • the structural reasoning rules,
  • the reasoning rules for terms,
  • the specification of the primitive functions,
  • the xsimpl tactic for simplifying entailments.
Proof using.
  (* initialize the proof *)
  intros. applys triple_app_fun. { reflexivity. } simpl.
  (* reason about let n = .. *)
  applys triple_let.
  (* reason about !p *)
  { apply triple_get. }
  (* name n' the result of !p *)
  intros n'. simpl.
  (* substitute away the equality n' = n *)
  apply triple_hpure. intros →.
  (* reason about let m = .. *)
  applys triple_let.
  (* apply the frame rule to put aside p ~~> n *)
  { applys triple_conseq_frame.
    (* reason about n+1 in the empty state *)
    { applys triple_add. }
    { xsimpl. }
    { xsimpl. } }
  (* name m' the result of n+1 *)
  intros m'. simpl.
  (* substitute away the equality m' = m *)
  apply triple_hpure. intros →.
  (* reason about p := m *)
  { applys triple_set. }
Qed.

Proof of succ_using_incr

Recall from Basic the function succ_using_incr.
Definition succ_using_incr : val :=
  <{ fun 'n ⇒
       let 'r = val_ref 'n in
       incr 'r;
       let 'x = ! 'r in
       val_free 'r;
       'x }>.
Recall the specification of succ_using_incr.
Lemma triple_succ_using_incr : ∀ (n:int),
  triple (trm_app succ_using_incr n)
    \[]
    (fun v ⇒ \[v = val_int (n+1)]).

Exercise: 4 stars, standard, especially useful (triple_succ_using_incr)

Verify the function triple_succ_using_incr. Hint: follow the pattern of triple_incr. Hint: use applys triple_seq for reasoning about a sequence. Hint: use applys triple_val for reasoning about the final return value, namely x.
Proof using. (* FILL IN HERE *) Admitted.

Proof of factorec

Import Basic.Facto.
Recall from Basic the function factorec.
    let rec factorec n =
      if n ≤ 1 then 1 else n * factorec (n-1)
Definition factorec : val :=
  <{ fix 'f 'n ⇒
       let 'b = ('n ≤ 1) in
       if 'b
         then 1
         else let 'x = 'n - 1 in
              let 'y = 'f 'x in
              'n * 'y }>.

Exercise: 4 stars, standard, especially useful (triple_factorec)

Verify the function factorec. Hint: exploit triple_app_fix for reasoning about the recursive function. Hint: triple_hpure', the corollary of triple_hpure, is helpful. Hint: exploit triple_le and triple_sub and triple_mul to reason about the behavior of the primitive operations involved. Hint: exploit applys triple_if. case_if as C. to reason about the conditional; alternatively, if using triple_if_case, you'll need to use the tactic rew_bool_eq in * to simplify, e.g., the expression isTrue (m ≤ 1) = true.
Lemma triple_factorec : ∀ n,
  n ≥ 0 →
  triple (factorec n)
    \[]
    (fun r ⇒ \[r = facto n]).
Proof using. (* FILL IN HERE *) Admitted.
End ExamplePrograms.

What's Next

The purpose of the next chapter is to introduce additional technology to streamline the proof process, notably by:
  • automating the application of the frame rule
  • eliminating the need to manipulate program variables and substitutions during the verification proof.
The rest of this chapter is concerned with alternative statements for the reasoning rules, and with the proofs of the reasoning rules.

More Details

Alternative Specification Style for Pure Preconditions

Module DivSpec.
Recall the specification for division.
Parameter triple_div : ∀ n1 n2,
  n2 ≠ 0 →
  triple (val_div n1 n2)
    \[]
    (fun r ⇒ \[r = val_int (Z.quot n1 n2)]).
Equivalently, we could place the requirement n2 0 in the precondition:
Parameter triple_div' : ∀ n1 n2,
  triple (val_div n1 n2)
    \[n2 ≠ 0]
    (fun r ⇒ \[r = val_int (Z.quot n1 n2)]).
Let us formally prove that the two presentations are equivalent.

Exercise: 1 star, standard, especially useful (triple_div_from_triple_div')

Prove triple_div by exploiting triple_div'. Hint: the key proof step is applys triple_conseq.
Lemma triple_div_from_triple_div' : ∀ n1 n2,
  n2 ≠ 0 →
  triple (val_div n1 n2)
    \[]
    (fun r ⇒ \[r = val_int (Z.quot n1 n2)]).
Proof using. (* FILL IN HERE *) Admitted.

Exercise: 2 stars, standard, especially useful (triple_div'_from_triple_div)

Prove triple_div' by exploiting triple_div. Hint: the first key proof step is applys triple_hpure. Yet some preliminary rewriting is required for this tactic to apply. Hint: the second key proof step is applys triple_conseq.
Lemma triple_div'_from_triple_div : ∀ n1 n2,
  triple (val_div n1 n2)
    \[n2 ≠ 0]
    (fun r ⇒ \[r = val_int (Z.quot n1 n2)]).
Proof using. (* FILL IN HERE *) Admitted.
As we said, placing pure preconditions outside of the triples makes it slightly more convenient to exploit specifications. For this reason, we adopt this style of presentation, and reserve preconditions for describing heap-allocated data structures.
End DivSpec.

The Combined Let-Frame Rule

Module LetFrame.
Recall the Separation Logic let rule.
Parameter triple_let : ∀ x t1 t2 Q1 H Q,
  triple t1 H Q1 →
  (∀ v1, triple (subst x v1 t2) (Q1 v1) Q) →
  triple (trm_let x t1 t2) H Q.
At first sight, it seems that, to reason about let x = t1 in t2 in a state described by precondition H, we need to first reason about t1 in that same state. Yet, t1 may well require only a subset of the state H to evaluate, and not all of H.
The "let-frame" rule combines the rule for let-bindings with the frame rule to make it more explicit that the precondition H may be decomposed in the form H1 \* H2, where H1 is the part needed by t1, and H2 denotes the rest of the state. The part of the state covered by H2 remains unmodified during the evaluation of t1, and appears as part of the precondition of t2.
The corresponding statement is as follows.
Lemma triple_let_frame : ∀ x t1 t2 Q1 H H1 H2 Q,
  triple t1 H1 Q1 →
  H ==> H1 \* H2 →
  (∀ v1, triple (subst x v1 t2) (Q1 v1 \* H2) Q) →
  triple (trm_let x t1 t2) H Q.

Exercise: 3 stars, standard, especially useful (triple_let_frame)

Prove the let-frame rule.
Proof using. (* FILL IN HERE *) Admitted.
End LetFrame.

Proofs for the Rules for Terms

Module Proofs.
The proofs for the Separation Logic reasoning rules all follow a similar pattern: first establish a corresponding rule for Hoare triples, then generalize it to a Separation Logic triple.
To establish a reasoning rule w.r.t. a Hoare triple, we reveal the definition expressed in terms of the big-step semantics.
      Definition hoare (t:trm) (H:hprop) (Q:val→hprop) : Prop :=
        ∀ s, H s →
        ∃ s' v, eval s t s' v ∧ Q v s'.
Concretely, we consider a given initial state s satisfying the precondition, and we have to provide witnesses for the output value v and output state s' such that the reduction holds and the postcondition holds.
Then, to lift the reasoning rule from Hoare logic to Separation Logic, we reveal the definition of a Separation Logic triple.
      Definition triple t H Q :=
        ∀ H', hoare t (H \* H') (Q \*+ H').
Recall that we already employed this two-step scheme in the previous chapter, e.g., to establish the consequence rule (rule_conseq).

Proof of triple_val

The big-step evaluation rule for values asserts that a value v evaluates to itself, without modification to the current state s.
Parameter eval_val : ∀ s v,
  eval s v s v.
The Hoare version of the reasoning rule for values is as follows.
Lemma hoare_val : ∀ v H Q,
  H ==> Q v →
  hoare (trm_val v) H Q.
Proof using.
1. We unfold the definition of hoare.
  introv M. intros s K0.
2. We provide the witnesses for the output value and heap. These witnesses are dictated by the statement of eval_val.
  ∃ s v. splits.
3. We invoke the big-step rule eval_val
  { applys eval_val. }
4. We establish the postcondition, exploiting the entailment hypothesis.
  { applys M. auto. }
Qed.
The Separation Logic version of the rule then follows.
Lemma triple_val : ∀ v H Q,
  H ==> Q v →
  triple (trm_val v) H Q.
Proof using.
1. We unfold the definition of triple to reveal a hoare judgment.
  introv M. intros H'.
2. We invoke the reasoning rule hoare_val that was just established.
  applys hoare_val.
3. We exploit the assumption and conclude using xchange.
  xchange M.
Qed.

Proof of triple_seq

The big-step evaluation rule for a sequence is given next.
Parameter eval_seq : ∀ s1 s2 s3 t1 t2 v1 v,
  eval s1 t1 s2 v1 →
  eval s2 t2 s3 v →
  eval s1 (trm_seq t1 t2) s3 v.
The Hoare triple version of the reasoning rule is proved as follows. This lemma, called hoare_seq, has the same statement as triple_seq, except with occurrences of triple replaced with hoare.
Lemma hoare_seq : ∀ t1 t2 H Q H1,
  hoare t1 H (fun v ⇒ H1) →
  hoare t2 H1 Q →
  hoare (trm_seq t1 t2) H Q.
Proof using.
1. We unfold the definition of hoare. Let K0 describe the initial state.
  introv M1 M2. intros s K0. (* optional: *) unfolds hoare.
2. We exploit the first hypothesis to obtain information about the evaluation of the first subterm t1. The state before t1 executes is described by K0. The state after t1 executes is described by K1.
  forwards (s1'&v1&R1&K1): (rm M1) K0.
3. We exploit the second hypothesis to obtain information about the evaluation of the second subterm t2. The state before t2 executes is described by K1. The state after t2 executes is described by K2.
  forwards (s2'&v2&R2&K2): (rm M2) K1.
4. We provide witness for the output value and heap. They correspond to those produced by the evaluation of t2.
  ∃ s2' v2. split.
5. We invoke the big-step rule.
  { applys eval_seq R1 R2. }
6. We establish the final postcondition, which is directly inherited from the reasoning on t2.
  { apply K2. }
Qed.
The Separation Logic reasoning rule is proved as follows.
Lemma triple_seq : ∀ t1 t2 H Q H1,
  triple t1 H (fun v ⇒ H1) →
  triple t2 H1 Q →
  triple (trm_seq t1 t2) H Q.
Proof using.
1. We unfold the definition of triple to reveal a hoare judgment.
  introv M1 M2. intros H'. (* optional: *) unfolds triple.
2. We invoke the reasoning rule hoare_seq that we just established.
  applys hoare_seq.
3. For the hypothesis on the first subterm t1, we can invoke directly our first hypothesis.
  { applys M1. }
  { applys M2. }
Qed.

Proof of triple_let

Recall the big-step evaluation rule for a let-binding.
Parameter eval_let : ∀ s1 s2 s3 x t1 t2 v1 v,
  eval s1 t1 s2 v1 →
  eval s2 (subst x v1 t2) s3 v →
  eval s1 (trm_let x t1 t2) s3 v.

Exercise: 3 stars, standard, especially useful (triple_let)

Following the same proof scheme as for triple_seq, establish the reasoning rule for triple_let, whose statement appears earlier in this file. Make sure to first state and prove the lemma hoare_let, which has the same statement as triple_let yet with occurrences of triple replaced with hoare.
(* FILL IN HERE *)

Proofs for the Arithmetic Primitive Operations

Addition

Recall the evaluation rule for addition.
Parameter eval_add : ∀ s n1 n2,
  eval s (val_add (val_int n1) (val_int n2)) s (val_int (n1 + n2)).
In the proof, we will need to use the following result, established in the first chapter.
Parameter hstar_hpure_l : ∀ P H h,
  (\[P] \* H) h = (P ∧ H h).
As usual, we first establish a Hoare triple.
Lemma hoare_add : ∀ H n1 n2,
  hoare (val_add n1 n2)
    H
    (fun r ⇒ \[r = val_int (n1 + n2)] \* H).
Proof using.
  intros. intros s K0. ∃ s (val_int (n1 + n2)). split.
  { applys eval_add. }
  { rewrite hstar_hpure_l. split.
    { auto. }
    { applys K0. } }
Qed.
Deriving triple_add is then straightforward.
Lemma triple_add : ∀ n1 n2,
  triple (val_add n1 n2)
    \[]
    (fun r ⇒ \[r = val_int (n1 + n2)]).
Proof using.
  intros. intros H'. applys hoare_conseq.
  { applys hoare_add. } { xsimpl. } { xsimpl. auto. }
Qed.

Division

Recall the evaluation rule for division.
Parameter eval_div' : ∀ s n1 n2,
  n2 ≠ 0 →
  eval s (val_div (val_int n1) (val_int n2)) s (val_int (Z.quot n1 n2)).

Exercise: 3 stars, standard, optional (triple_div)

Following the same proof scheme as for triple_add, establish the reasoning rule for triple_div. Make sure to first state and prove hoare_div, which is like triple_div except with hoare instead of triple.
(* FILL IN HERE *)

Proofs for Primitive Operations Operating on the State

The proofs for establishing the Separation Logic reasoning rules for ref, get and set follow a similar proof pattern, that is, they go through the proofs of rules for Hoare triples.
Unlike before, however, the Hoare triples are not directly established with respect to the big-step evaluation rules. Instead, we start by proving corollaries to the big-step rules that reformulate them in a way that already gives them a flavor of "Separation Logic". Concretely, we reformulate the evaluation rules, which are expressed in terms of reads and updates in finite maps, to be expressed instead entirely in terms of disjoint unions.
The introduction of these disjoint union operations then significantly eases the justification of the separating conjunctions that appear in the targeted Separation Logic triples.
In this section, the constructor hval_val appears. This constructor converts a "value" into a "heap value". For the purpose of this file, the two notions are identical. Yet, to allow for a generalization to a semantics with allocation by blocks, we need to assume that states are finite maps from locations to heap values. Heap values, of type hval, can be assumed to be defined by the following inductive data type.
    Inductive hval : Type :=
      | hval_val : val → hval.

Read in a Reference

The big-step rule for get p requires that p be in the domain of the current state s, and asserts that the output value is the result of reading in s at location p.
Parameter eval_get : ∀ v s p,
  Fmap.indom s p →
  Fmap.read s p = v →
  eval s (val_get (val_loc p)) s v.
We reformulate this rule by isolating from the current state s the singleton heap made of the cell at location p, and let s2 denote the rest of the heap. When the singleton heap is described as Fmap.single p v, then v is the result value returned by get p.
Lemma eval_get_sep : ∀ s s2 p v,
  s = Fmap.union (Fmap.single p v) s2 →
  eval s (val_get (val_loc p)) s v.
The proof of this lemma is of little interest. We show it only to demonstrate that it relies on only a few basic facts about finite maps.
Proof using.
  introv →. forwards Dv: Fmap.indom_single p v.
  applys eval_get.
  { applys* Fmap.indom_union_l. }
  { rewrite* Fmap.read_union_l. rewrite* Fmap.read_single. }
Qed.
Our goal is to establish the triple:
    triple (val_get p)
      (p ~~> v)
      (fun r ⇒ \[r = v] \* (p ~~> v)).
Establishing this lemma requires us to reason about propositions of the form (\[P] \* H) h and (p ~~> v) h. To that end, recall lemma hstar_hpure_l, which was already exploited in the proof of triple_add, and recall hsingle_inv from Hprop.
Parameter hsingle_inv : ∀ p v h,
  (p ~~> v) h →
  h = Fmap.single p v.
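For reference, here is a restatement of hstar_hpure_l (our transcription; the lemma itself is established in chapter Hprop):
    Parameter hstar_hpure_l : ∀ P H h,
      (\[P] \* H) h ↔ (P ∧ H h).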
We establish the specification of get first with respect to the hoare judgment.
Lemma hoare_get : ∀ H v p,
  hoare (val_get p)
    ((p ~~> v) \* H)
    (fun r ⇒ \[r = v] \* (p ~~> v) \* H).
Proof using.
  (* 1. We unfold the definition of hoare. *)
  intros. intros s K0.
  (* 2. We provide the witnesses for the reduction,
        as dictated by eval_get_sep. *)

   ∃ s v. split.
  { (* 3. To justify the reduction using eval_get_sep, we need to
          argue that the state s decomposes as a singleton heap
          Fmap.single p v and the rest of the state s2. This is
          obtained by eliminating the star in hypothesis K0. *)

    destruct K0 as (s1&s2&P1&P2&D&U).
    (* 4. Inverting (p ~~> v) s1 simplifies the goal. *)
    lets E1: hsingle_inv P1. subst s1.
    (* 5. At this point, the goal matches exactly eval_get_sep. *)
    applys eval_get_sep U. }
  { (* 6. To establish the postcondition, we check the pure fact
          \[v = v], and check that the state, which has not changed,
          satisfies the same heap predicate as in the precondition. *)

    rewrite hstar_hpure_l. auto. }
Qed.
Deriving the Separation Logic triple follows the usual pattern.
Lemma triple_get : ∀ v p,
  triple (val_get p)
    (p ~~> v)
    (fun r ⇒ \[r = v] \* (p ~~> v)).
Proof using.
  intros. intros H'. applys hoare_conseq.
  { applys hoare_get. }
  { xsimpl. }
  { xsimpl. auto. }
Qed.

Allocation of a Reference

Next, we consider the reasoning rule for the operation ref, which involves a slightly trickier proof than those for get and set.
The big-step evaluation rule for ref v extends the initial state s with an extra binding from p to v, for some fresh location p.
Parameter eval_ref : ∀ s v p,
  ¬ Fmap.indom s p →
  eval s (val_ref v) (Fmap.update s p v) (val_loc p).
Let us reformulate eval_ref to replace references to Fmap.indom and Fmap.update with references to Fmap.single and Fmap.disjoint. Concretely, ref v extends the state from s1 to s1 \u s2, where s2 denotes the singleton heap Fmap.single p v, and with the requirement that Fmap.disjoint s2 s1, to capture freshness.
Lemma eval_ref_sep : ∀ s1 s2 v p,
  s2 = Fmap.single p v →
  Fmap.disjoint s2 s1 →
  eval s1 (val_ref v) (Fmap.union s2 s1) (val_loc p).
Proof using.
It is not necessary to follow through this proof.
  introv → D. forwards Dv: Fmap.indom_single p v.
  rewrite <- Fmap.update_eq_union_single. applys* eval_ref.
  { intros N. applys* Fmap.disjoint_inv_not_indom_both D N. }
Qed.
In order to apply the rules eval_ref or eval_ref_sep, we need to be able to synthesize fresh locations. The following lemma (from LibSepFmap.v) captures the existence, for any state s, of a (non-null) location p not already bound in s.
Parameter exists_fresh : ∀ s,
    ∃ p, ¬ Fmap.indom s p ∧ p ≠ null.
We reformulate the lemma above in a way that better matches the premise of the lemma eval_ref_sep, which we need to apply for establishing the specification of ref.
This reformulation, shown below, asserts that, for any h, there exists a non-null location p such that the singleton heap Fmap.single p v is disjoint from h.
Lemma single_fresh : ∀ h v,
   ∃ p, Fmap.disjoint (Fmap.single p v) h.
Proof using.
It is not necessary to follow through this proof.
  intros. forwards (p&F&N): exists_fresh h.
   ∃ p. applys* Fmap.disjoint_single_of_not_indom.
Qed.
The proof of the Hoare triple for ref is as follows.
Lemma hoare_ref : ∀ H v,
  hoare (val_ref v)
    H
    (fun r ⇒ (\∃ p, \[r = val_loc p] \* p ~~> v) \* H).
Proof using.
  (* 1. We unfold the definition of hoare. *)
  intros. intros s1 K0.
  (* 2. We claim the disjointness relation
       Fmap.disjoint (Fmap.single p v) s1. *)

  forwards* (p&D): (single_fresh s1 v).
  (* 3. We provide the witnesses for the reduction,
        as dictated by eval_ref_sep. *)

   ∃ ((Fmap.single p v) \u s1) (val_loc p). split.
  { (* 4. We exploit eval_ref_sep, which has exactly the desired shape! *)
    applys eval_ref_sep D. auto. }
  { (* 5. We establish the postcondition
          (\∃ p, \[r = val_loc p] \* p ~~> v) \* H
          by providing p and the relevant pieces of heap. *)

    applys hstar_intro.
    { ∃ p. rewrite hstar_hpure_l.
      split. { auto. } { applys¬hsingle_intro. } }
    { applys K0. }
    { applys D. } }
Qed.
We then derive the Separation Logic triple as usual.
Lemma triple_ref : ∀ v,
  triple (val_ref v)
    \[]
    (funloc p ⇒ p ~~> v).
Proof using.
  intros. intros H'. applys hoare_conseq.
  { applys hoare_ref. }
  { xsimpl. }
  { xsimpl. auto. }
Qed.

End Proofs.

Optional Material

Reasoning Rules for Recursive Functions

The reasoning rule for functions immediately generalizes to recursive functions. A term describing a recursive function is written trm_fix f x t1, and the corresponding value is written val_fix f x t1.
Parameter triple_fix : ∀ f x t1 H Q,
  H ==> Q (val_fix f x t1) →
  triple (trm_fix f x t1) H Q.
The reasoning rule that corresponds to beta-reduction for a recursive function involves two substitutions: a first substitution for the recursive occurrences of the function, followed by a second substitution for the argument provided to the call.
Parameter triple_app_fix : ∀ v1 v2 f x t1 H Q,
  v1 = val_fix f x t1 →
  triple (subst x v2 (subst f v1 t1)) H Q →
  triple (trm_app v1 v2) H Q.
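As an illustration, consider a hypothetical recursive function F whose body immediately performs a recursive call. The two substitutions then compute as sketched below (an illustrative example of ours, not part of the original development):
    (* Let F denote val_fix f x (trm_app (trm_var f) (trm_var x)).
       For a call F v2, the premise of triple_app_fix concerns:
         subst f F (trm_app (trm_var f) (trm_var x))
                                         = trm_app F (trm_var x)
         subst x v2 (trm_app F (trm_var x))
                                         = trm_app F v2
       that is, occurrences of f in the body are first replaced
       with the recursive closure F itself, then x is replaced
       with the argument v2. *)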

Other Proofs of Reasoning Rules for Terms

Module ProofsContinued.

Proof of triple_fun and triple_fix

The proofs for triple_fun and triple_fix are essentially identical to that of triple_val, so we do not include them here.

Proof of triple_if

Recall the reasoning rule for conditionals. This rule is stated by factorizing the premises.
Lemma eval_if : ∀ s1 s2 b v t1 t2,
  eval s1 (if b then t1 else t2) s2 v →
  eval s1 (trm_if b t1 t2) s2 v.
Proof using.
  intros. case_if; applys eval_if; auto_false.
Qed.
The reasoning rule for conditionals w.r.t. Hoare triples is as follows.
Lemma hoare_if_case : ∀ b t1 t2 H Q,
  (b = true → hoare t1 H Q) →
  (b = false → hoare t2 H Q) →
  hoare (trm_if b t1 t2) H Q.
Proof using.
  introv M1 M2. intros s K0. destruct b.
  { forwards* (s1'&v1&R1&K1): (rm M1) K0.
    ∃ s1' v1. split*. { applys* eval_if. } }
  { forwards* (s1'&v1&R1&K1): (rm M2) K0.
    ∃ s1' v1. split*. { applys* eval_if. } }
Qed.
The corresponding Separation Logic reasoning rule is as follows.
Lemma triple_if_case : ∀ b t1 t2 H Q,
  (b = true → triple t1 H Q) →
  (b = false → triple t2 H Q) →
  triple (trm_if (val_bool b) t1 t2) H Q.
Proof using.
  unfold triple. introv M1 M2. intros H'.
  applys hoare_if_case; intros Eb.
  { applys* M1. }
  { applys* M2. }
Qed.
Observe that the above proofs contain a fair amount of duplication, due to the symmetry between the b = true and b = false branches.
If we state the reasoning rules using Coq's conditional, just as it appears in the evaluation rule eval_if, we can better factorize the proof script.
Lemma hoare_if : ∀ (b:bool) t1 t2 H Q,
  hoare (if b then t1 else t2) H Q →
  hoare (trm_if b t1 t2) H Q.
Proof using.
  introv M1. intros s K0.
  forwards (s'&v&R1&K1): (rm M1) K0.
   ∃ s' v. split. { applys eval_if R1. } { applys K1. }
Qed.

Lemma triple_if : ∀ b t1 t2 H Q,
  triple (if b then t1 else t2) H Q →
  triple (trm_if (val_bool b) t1 t2) H Q.
Proof using.
  unfold triple. introv M1. intros H'. applys hoare_if. applys M1.
Qed.

Proof of triple_app_fun

The reasoning rule for an application asserts that a precondition and postcondition hold for a beta-redex (val_fun x t1) v2 provided that they hold for the term subst x v2 t1.
This result follows directly from the big-step evaluation rule for applications.
Parameter eval_app_fun : ∀ s1 s2 v1 v2 x t1 v,
  v1 = val_fun x t1 →
  eval s1 (subst x v2 t1) s2 v →
  eval s1 (trm_app v1 v2) s2 v.

Exercise: 2 stars, standard, optional (hoare_app_fun)

Prove the lemma hoare_app_fun.
Lemma hoare_app_fun : ∀ v1 v2 x t1 H Q,
  v1 = val_fun x t1 →
  hoare (subst x v2 t1) H Q →
  hoare (trm_app v1 v2) H Q.
Proof using. (* FILL IN HERE *) Admitted.

Exercise: 2 stars, standard, optional (triple_app_fun)

Prove the lemma triple_app_fun.
Lemma triple_app_fun : ∀ x v1 v2 t1 H Q,
  v1 = val_fun x t1 →
  triple (subst x v2 t1) H Q →
  triple (trm_app v1 v2) H Q.
Proof using. (* FILL IN HERE *) Admitted.

Deallocation of a Reference

Optional contents: this section may be safely skipped.
Last, we consider the reasoning rule for the operation free. We leave this one as an exercise.
Recall the big-step evaluation rule for free p.
Parameter eval_free : ∀ s p,
  Fmap.indom s p →
  eval s (val_free (val_loc p)) (Fmap.remove s p) val_unit.
Let us reformulate eval_free to replace references to Fmap.indom and Fmap.remove with references to Fmap.single, Fmap.union, and Fmap.disjoint. The proof details are not essential and are thus omitted.
Parameter eval_free_sep : ∀ s1 s2 v p,
  s1 = Fmap.union (Fmap.single p v) s2 →
  Fmap.disjoint (Fmap.single p v) s2 →
  eval s1 (val_free p) s2 val_unit.

Exercise: 3 stars, standard, optional (hoare_free)

Prove the Hoare triple for the operation free. Hint: exploit the lemma eval_free_sep.
Lemma hoare_free : ∀ H p v,
  hoare (val_free (val_loc p))
    ((p ~~> v) \* H)
    (fun _ ⇒ H).
Proof using. (* FILL IN HERE *) Admitted.

Exercise: 1 star, standard, optional (triple_free)

Derive from the Hoare triple for the operation free the corresponding Separation Logic triple. Hint: adapt the proof of lemma triple_set.
Lemma triple_free : ∀ p v,
  triple (val_free (val_loc p))
    (p ~~> v)
    (fun _ ⇒ \[]).
Proof using. (* FILL IN HERE *) Admitted.

Write in a Reference

The big-step evaluation rule for set p v updates the initial state s by re-binding the location p to the value v. The location p must already belong to the domain of s.
Parameter eval_set : ∀ m p v,
   Fmap.indom m p →
   eval m (val_set (val_loc p) v) (Fmap.update m p v) val_unit.
As for get, we first reformulate this lemma, to replace references to Fmap.indom and Fmap.update with references to Fmap.union, Fmap.single, and Fmap.disjoint, to prepare for the introduction of separating conjunctions.
Lemma eval_set_sep : ∀ s1 s2 h2 p v1 v2,
  s1 = Fmap.union (Fmap.single p v1) h2 →
  s2 = Fmap.union (Fmap.single p v2) h2 →
  Fmap.disjoint (Fmap.single p v1) h2 →
  eval s1 (val_set (val_loc p) v2) s2 val_unit.
Proof using.
It is not necessary to follow through this proof.
  introv → → D. forwards Dv: Fmap.indom_single p v1.
  applys_eq eval_set.
  { rewrite* Fmap.update_union_l. fequals.
    rewrite* Fmap.update_single. }
  { applys* Fmap.indom_union_l. }
Qed.
The proof of the Hoare rule for set makes use of the following fact (from LibSepFmap.v) about Fmap.disjoint: when one of its arguments is a singleton map, the value stored in that singleton map is irrelevant.
    Check Fmap.disjoint_single_set : ∀ p v1 v2 h2,
      Fmap.disjoint (Fmap.single p v1) h2 →
      Fmap.disjoint (Fmap.single p v2) h2.

Exercise: 5 stars, standard, optional (hoare_set)

Prove the lemma hoare_set. Hints:
  • exploit the evaluation rule eval_set_sep presented above,
  • exploit the lemma Fmap.disjoint_single_set presented above,
  • to obtain an elegant proof, prefer invoking the lemmas hsingle_intro, hsingle_inv, hstar_intro, and hstar_inv, rather than unfolding the definitions of hstar and hsingle.
Lemma hoare_set : ∀ H v p v',
  hoare (val_set (val_loc p) v)
    ((p ~~> v') \* H)
    (fun _ ⇒ (p ~~> v) \* H).
Proof using. (* FILL IN HERE *) Admitted.
We then derive the Separation Logic triple as usual.
Lemma triple_set : ∀ w p v,
  triple (val_set (val_loc p) w)
    (p ~~> v)
    (fun _ ⇒ p ~~> w).
Proof using.
  intros. intros H'. applys hoare_conseq.
  { applys hoare_set. }
  { xsimpl. }
  { xsimpl. }
Qed.

End ProofsContinued.

Proofs Revisited using the triple_of_hoare Lemma

Module ProofsFactorization.
The proof that, e.g., triple_add is a consequence of hoare_add follows the same pattern as many other similar proofs, each time invoking the lemma hoare_conseq. Thus, we could attempt to factor out this proof pattern. The following lemma corresponds to such an attempt.

Exercise: 2 stars, standard, optional (triple_of_hoare)

Prove the lemma triple_of_hoare stated below.
Lemma triple_of_hoare : ∀ t H Q,
  (∀ H', ∃ Q', hoare t (H \* H') Q'
                     ∧ Q' ===> Q \*+ H') →
  triple t H Q.
Proof using. (* FILL IN HERE *) Admitted.

Exercise: 2 stars, standard, optional (triple_add')

Prove that triple_add is a consequence of hoare_add by exploiting triple_of_hoare.
Lemma triple_add' : ∀ n1 n2,
  triple (val_add n1 n2)
    \[]
    (fun r ⇒ \[r = val_int (n1 + n2)]).
Proof using. (* FILL IN HERE *) Admitted.
End ProofsFactorization.

Triple for Terms with Same Semantics

Module ProofsSameSemantics.
A general principle is that if t1 has the same semantics as t2 (w.r.t. the big-step evaluation judgment eval), then t1 and t2 satisfy the same triples.
Let us formalize this principle.
Two (closed) terms are semantically equivalent, written trm_equiv t1 t2, if the two terms, when evaluated in the same state, produce the same output.
Definition trm_equiv (t1 t2:trm) : Prop :=
   ∀ s s' v, eval s t1 s' v ↔ eval s t2 s' v.
Two terms that are equivalent satisfy the same Separation Logic triples (and the same Hoare triples).
Indeed, the definition of a Separation Logic triple directly depends on the notion of Hoare triple, and the latter directly depends on the semantics captured by the predicate eval.
Let us formalize the result in three steps.
eval_like t1 t2 asserts that t2 evaluates like t1. In particular, this relation holds whenever t2 reduces in small-step to t1.
Definition eval_like (t1 t2:trm) : Prop :=
   ∀ s s' v, eval s t1 s' v → eval s t2 s' v.
For example eval_like t (trm_let x t x) holds, reflecting the fact that let x = t in x reduces in small-step to t.
Lemma eval_like_eta_reduction : ∀ (t:trm) (x:var),
  eval_like t (trm_let x t x).
Proof using.
  introv R. applys eval_let R.
  simpl. rewrite var_eq_spec. case_if. applys eval_val.
Qed.
It turns out that the symmetric relation eval_like (trm_let x t x) t also holds: the term t does not exhibit more behaviors than those of let x = t in x.
Lemma eval_like_eta_expansion : ∀ (t:trm) (x:var),
  eval_like (trm_let x t x) t.
Proof using.
  introv R. inverts R as. introv R1 R2.
  simpl in R2. rewrite var_eq_spec in R2. case_if.
  inverts R2. auto.
Qed.
We deduce that a term t denotes a program equivalent to the program let x = t in x.
Lemma trm_equiv_eta : ∀ (t:trm) (x:var),
  trm_equiv t (trm_let x t x).
Proof using.
  intros. intros s s' v. iff M.
  { applys eval_like_eta_reduction M. }
  { applys eval_like_eta_expansion M. }
Qed.
If eval_like t1 t2, then any triple that holds for t1 also holds for t2.
Lemma hoare_eval_like : ∀ t1 t2 H Q,
  eval_like t1 t2 →
  hoare t1 H Q →
  hoare t2 H Q.
Proof using.
  introv E M1 K0. forwards (s'&v&R1&K1): M1 K0.
   ∃ s' v. split. { applys E R1. } { applys K1. }
Qed.

Lemma triple_eval_like : ∀ t1 t2 H Q,
  eval_like t1 t2 →
  triple t1 H Q →
  triple t2 H Q.
Proof using.
  introv E M1. intros H'. applys hoare_eval_like E. applys M1.
Qed.
It follows that if two terms are equivalent, then they admit the same triples.
Lemma triple_trm_equiv : ∀ t1 t2 H Q,
  trm_equiv t1 t2 →
  triple t1 H Q ↔ triple t2 H Q.
Proof using.
  introv E. unfolds trm_equiv. iff M.
  { applys triple_eval_like M. introv R. applys* E. }
  { applys triple_eval_like M. introv R. applys* E. }
Qed.
The reasoning rule triple_eval_like has a number of practical applications.
One, shown below, is to revisit the proof of triple_app_fun in a much more succinct way, by arguing that trm_app (val_fun x t1) v2 and subst x v2 t1 are equivalent terms, hence they admit the same behavior.
Lemma triple_app_fun : ∀ x v1 v2 t1 H Q,
  v1 = val_fun x t1 →
  triple (subst x v2 t1) H Q →
  triple (trm_app v1 v2) H Q.
Proof using.
  introv E M1. applys triple_eval_like M1.
  introv R. applys eval_app_fun E R.
Qed.
Another application is the following rule, which makes it possible to change the parenthesization of a sequence.

Exercise: 3 stars, standard, optional (triple_trm_seq_assoc)

Prove that the term t1; (t2; t3) satisfies the same triples as the term (t1;t2); t3.
Lemma triple_trm_seq_assoc : ∀ t1 t2 t3 H Q,
  triple (trm_seq (trm_seq t1 t2) t3) H Q →
  triple (trm_seq t1 (trm_seq t2 t3)) H Q.
Proof using. (* FILL IN HERE *) Admitted.
Such a change in the parenthesization of a sequence can be helpful to apply the frame rule around t1;t2, for example.
Another useful application of the lemma triple_eval_like appears in chapter Affine, for proving the equivalence of two versions of the garbage collection rule.
End ProofsSameSemantics.

Alternative Specification Style for Result Values.

Module MatchStyle.
Recall the specification for the function ref.
Parameter triple_ref : ∀ v,
  triple (val_ref v)
    \[]
    (fun r ⇒ \∃ p, \[r = val_loc p] \* p ~~> v).
Its postcondition could be equivalently stated using pattern matching instead of an existential quantifier \∃.
Parameter triple_ref' : ∀ v,
  triple (val_ref v)
    \[]
    (fun r ⇒ match r with
             | val_loc p ⇒ (p ~~> v)
             | _ ⇒ \[False]
             end).
However, the pattern-matching presentation is less readable and would be fairly cumbersome to work with in practice. Thus, we systematically prefer using existentials.
End MatchStyle.

Historical Notes

[Gordon 1989] presents the first mechanization of Hoare logic in a proof assistant, using the HOL tool. Gordon's pioneering work was followed by numerous formalizations of Hoare logic, targeting various programming languages.
The original presentation of Separation Logic (1999-2001) consists of a set of rules written down on paper. These rules were not formally described in a proof assistant. Nevertheless, mechanized presentations of Separation Logic emerged a few years later.
[Yu, Hamid, and Shao 2003] present the CAP framework for the verification in Coq of assembly-level code. This framework exploits Separation Logic style specifications, with predicates for lists and list segments involving the separating conjunction operator.
In parallel, [Weber 2004], advised by Nipkow, developed the first mechanization of the rules of Separation Logic for a while language, using the Isabelle/HOL tool. His presentation is quite close to the original paper presentation.
Numerous mechanized presentations of Separation Logic have since been developed, targeting various languages (assembly, C, core-Java, ML, etc.) and using various tools (Isabelle/HOL, Coq, PVS, HOL4, HOL). For a detailed list, as of 2020, we refer to Section 10.3 of the paper: http://www.chargueraud.org/research/2020/seq_seplogic/seq_seplogic.pdf.

WPsem: Semantics of Weakest Preconditions

Set Implicit Arguments.
From SLF Require Export Rules.

Implicit Types f : var.
Implicit Types b : bool.
Implicit Types v : val.
Implicit Types h : heap.
Implicit Types P : Prop.
Implicit Types H : hprop.
Implicit Types Q : val → hprop.

First Pass

In previous chapters, we have introduced the notion of Separation Logic triple, written triple t H Q.
In this chapter, we introduce the notion of "weakest precondition" for Separation Logic triples, written wp t Q.
The intention is for wp t Q to be a heap predicate (of type hprop) such that H ==> wp t Q if and only if triple t H Q holds.
The benefits of introducing weakest preconditions are twofold:
  • the use of wp greatly reduces the number of structural rules required, and thus reduces accordingly the number of tactics required for carrying out proofs in practice;
  • the predicate wp will serve as a guideline for setting up, in the next chapter, a "characteristic formula generator", which is the key ingredient at the heart of the implementation of the CFML tool.
This chapter presents:
  • the notion of weakest precondition, as captured by wp,
  • the reformulation of structural rules in wp-style,
  • the reformulation of reasoning rules in wp-style,
  • (optional) alternative, equivalent definitions for wp, and alternative proofs for deriving wp-style reasoning rules.

Notion of Weakest Precondition

We next introduce a function wp, called "weakest precondition". Given a term t and a postcondition Q, the expression wp t Q denotes a heap predicate such that, for any heap predicate H, the entailment H ==> wp t Q is equivalent to triple t H Q.
The notion of wp usually sounds fairly mysterious at first sight. It will make more sense when we describe the properties of wp.
Parameter wp : trm → (val → hprop) → hprop.

Parameter wp_equiv : ∀ t H Q,
  (H ==> wp t Q) ↔ (triple t H Q).
The predicate wp t Q is called the "weakest precondition" for two reasons: (1) it is a precondition, and (2) it is the weakest one, as we explain next.
First, wp t Q is always a "valid precondition" for a triple associated with the term t and the postcondition Q.
Lemma wp_pre : ∀ t Q,
  triple t (wp t Q) Q.
Proof using. intros. rewrite <- wp_equiv. applys himpl_refl. Qed.
Second, wp t Q is the "weakest" of all valid preconditions for the term t and the postcondition Q, in the sense that any other valid precondition H, i.e. satisfying triple t H Q, is such that H entails wp t Q.
Lemma wp_weakest : ∀ t H Q,
  triple t H Q →
  H ==> wp t Q.
Proof using. introv M. rewrite wp_equiv. applys M. Qed.
In other words, wp t Q is the "smallest" H satisfying triple t H Q with respect to the order on heap predicates induced by the entailment relation ==>.
There are several equivalent ways to define wp, as we show in the optional contents of this chapter. It turns out that the equivalence (H ==> wp t Q) ↔ (triple t H Q) fully characterizes the predicate wp, and that it is all we need to carry out formal reasoning.
For this reason, we postpone to further on in this chapter the description of alternative, direct definitions for wp.

Structural Rules in Weakest-Precondition Style

We next present reformulations of the frame rule and of the rule of consequence in "weakest-precondition style".

The Frame Rule

The frame rule for wp asserts that (wp t Q) \* H entails wp t (Q \*+ H). This statement can be read as follows: if you own both a piece of state satisfying H and a piece of state in which the execution of t produces (a result and an output value satisfying) Q, then you own a piece of state in which the execution of t produces Q \*+ H, that is, produces both Q and H.
Lemma wp_frame : ∀ t H Q,
  (wp t Q) \* H ==> wp t (Q \*+ H).
The lemma is proved by exploiting the frame rule for triples and the equivalence that characterizes wp.
Proof using.
  intros. rewrite wp_equiv.
  applys triple_frame. rewrite* <- wp_equiv.
Qed.
The connection with the frame rule might not be totally obvious. Recall the frame rule for triples.
    triple t H1 Q →
    triple t (H1 \* H) (Q \*+ H)
Let us replace the form triple t H Q with the form H ==> wp t Q. We obtain the following statement.
Lemma wp_frame_trans : ∀ t H1 Q H,
  H1 ==> wp t Q →
  (H1 \* H) ==> wp t (Q \*+ H).
If we exploit transitivity of entailment to eliminate H1, then we obtain exactly wp_frame, as illustrated by the proof script below.
Proof using. introv M. xchange M. applys* wp_frame. Qed.

The Rule of Consequence

The rule of consequence for wp materializes as a covariance property: it asserts that wp t Q is covariant in Q. In other words, if one weakens Q, then one weakens wp t Q. The corresponding formal statement appears next.
Lemma wp_conseq : ∀ t Q1 Q2,
  Q1 ===> Q2 →
  wp t Q1 ==> wp t Q2.
Proof using.
  introv M. rewrite wp_equiv. applys* triple_conseq (wp t Q1) M. applys wp_pre.
Qed.
The connection with the rule of consequence is, again, not totally obvious. Recall the rule of consequence for triples.
    triple t H1 Q1 →
    H2 ==> H1 →
    Q1 ===> Q2 →
    triple t H2 Q2
Let us replace the form triple t H Q with the form H ==> wp t Q. We obtain the following statement:
Lemma wp_conseq_trans : ∀ t H1 H2 Q1 Q2,
  H1 ==> wp t Q1 →
  H2 ==> H1 →
  Q1 ===> Q2 →
  H2 ==> wp t Q2.
If we exploit transitivity of entailment to eliminate H1 and H2, then we obtain exactly wp_conseq, as illustrated below.
Proof using.
  introv M WH WQ. xchange WH. xchange M. applys wp_conseq WQ.
Qed.

The Extraction Rules

The extraction rules triple_hpure and triple_hexists have no specific counterpart with the wp presentation. Indeed, in a weakest-precondition style presentation, the extraction rules for triples correspond exactly to the extraction rules for entailment.
To see why, consider for example the rule triple_hpure.
Parameter triple_hpure : ∀ t (P:Prop) H Q,
  (P → triple t H Q) →
  triple t (\[P] \* H) Q.
Replacing the form triple t H Q with H ==> wp t Q yields the following statement.
Lemma triple_hpure_with_wp : ∀ t H Q (P:Prop),
  (P → (H ==> wp t Q)) →
  (\[P] \* H) ==> wp t Q.
The above implication is just a special case of the extraction lemma for pure facts on the left of an entailment, named himpl_hstar_hpure_l, and whose statement is as follows.
    (P → (H ==> H')) →
    (\[P] \* H) ==> H'.
Instantiating H' with wp t Q proves triple_hpure_with_wp.
Proof using. introv M. applys himpl_hstar_hpure_l M. Qed.
A similar reasoning applies to the extraction rule for existentials.

Reasoning Rules for Terms, in Weakest-Precondition Style

Rule for Values

Recall the rule triple_val which gives a reasoning rule for establishing a triple for a value v.
Parameter triple_val : ∀ v H Q,
  H ==> Q v →
  triple (trm_val v) H Q.
If we rewrite this rule in wp style, we obtain the rule below.
    H ==> Q v →
    H ==> wp (trm_val v) Q.
By exploiting transitivity of entailment, we can eliminate H. We obtain the following statement, which reads as follows: if you own a state satisfying Q v, then you own a state from which the evaluation of the value v produces Q.
Lemma wp_val : ∀ v Q,
  Q v ==> wp (trm_val v) Q.
Proof using.
  intros. rewrite wp_equiv. applys* triple_val.
Qed.
We can verify that, when migrating to the wp presentation, we have not lost any expressiveness. To that end, we prove that triple_val is derivable from wp_val.
Lemma triple_val_derived_from_wp_val : ∀ v H Q,
  H ==> Q v →
  triple (trm_val v) H Q.
Proof using. introv M. rewrite <- wp_equiv. xchange M. applys wp_val. Qed.

Rule for Sequence

Recall the reasoning rule for a sequence trm_seq t1 t2.
Parameter triple_seq : ∀ t1 t2 H Q H1,
  triple t1 H (fun v ⇒ H1) →
  triple t2 H1 Q →
  triple (trm_seq t1 t2) H Q.
Replacing triple t H Q with H ==> wp t Q throughout the rule gives the statement below.
      H ==> (wp t1) (fun v ⇒ H1) →
      H1 ==> (wp t2) Q →
      H ==> wp (trm_seq t1 t2) Q.
This entailment holds for any H and H1. Let us specialize it to H1 := (wp t2) Q and H := (wp t1) (fun v ⇒ (wp t2) Q).
This leads us to the following statement, which reads as follows: if you own a state from which the evaluation of t1 produces a state from which the evaluation of t2 produces the postcondition Q, then you own a state from which the evaluation of the sequence t1;t2 produces Q.
Lemma wp_seq : ∀ t1 t2 Q,
  wp t1 (fun v ⇒ wp t2 Q) ==> wp (trm_seq t1 t2) Q.
Proof using.
  intros. rewrite wp_equiv. applys triple_seq.
  { rewrite* <- wp_equiv. }
  { rewrite* <- wp_equiv. }
Qed.

Exercise: 2 stars, standard, optional (triple_seq_from_wp_seq)

Check that wp_seq is just as expressive as triple_seq, by proving that triple_seq is derivable from wp_seq and from the structural rules for wp and/or the structural rules for triple.
Lemma triple_seq_from_wp_seq : ∀ t1 t2 H Q H1,
  triple t1 H (fun v ⇒ H1) →
  triple t2 H1 Q →
  triple (trm_seq t1 t2) H Q.
Proof using. (* FILL IN HERE *) Admitted.

More Details

Other Reasoning Rules for Terms

Rule for Functions

Recall the reasoning rule for a term trm_fun x t1, which evaluates to the value val_fun x t1.
Parameter triple_fun : ∀ x t1 H Q,
  H ==> Q (val_fun x t1) →
  triple (trm_fun x t1) H Q.
The rule for functions follows exactly the same pattern as that for values.
Lemma wp_fun : ∀ x t Q,
  Q (val_fun x t) ==> wp (trm_fun x t) Q.
Proof using. intros. rewrite wp_equiv. applys* triple_fun. Qed.
A similar rule holds for the evaluation of a recursive function.
Lemma wp_fix : ∀ f x t Q,
  Q (val_fix f x t) ==> wp (trm_fix f x t) Q.
Proof using. intros. rewrite wp_equiv. applys* triple_fix. Qed.

Rule for Conditionals

Recall the reasoning rule triple_if for a conditional trm_if b t1 t2.
Parameter triple_if : ∀ b t1 t2 H Q,
  triple (if b then t1 else t2) H Q →
  triple (trm_if (val_bool b) t1 t2) H Q.
Replacing triple using wp entailments yields:
    H ==> wp (if b then t1 else t2) Q →
    H ==> wp (trm_if (val_bool b) t1 t2) Q.
which simplifies by transitivity to:
    wp (if b then t1 else t2) Q ==> wp (trm_if (val_bool b) t1 t2) Q.
This statement corresponds to the wp-style reasoning rule for conditionals. The proof appears next.
Lemma wp_if : ∀ b t1 t2 Q,
  wp (if b then t1 else t2) Q ==> wp (trm_if (val_bool b) t1 t2) Q.
Proof using.
  intros. rewrite wp_equiv. applys triple_if. rewrite* <- wp_equiv.
Qed.

Rule for Let-Bindings

Recall the reasoning rule for a term trm_let x t1 t2.
Parameter triple_let : ∀ x t1 t2 Q1 H Q,
  triple t1 H Q1 →
  (∀ v1, triple (subst x v1 t2) (Q1 v1) Q) →
  triple (trm_let x t1 t2) H Q.
The rule for trm_let x t1 t2 is very similar to that for trm_seq, the only difference being the substitution of x by v in t2, where v denotes the result of t1.
Lemma wp_let : ∀ x t1 t2 Q,
  wp t1 (fun v1 ⇒ wp (subst x v1 t2) Q) ==> wp (trm_let x t1 t2) Q.
Proof using.
  intros. rewrite wp_equiv. applys triple_let.
  { rewrite* <- wp_equiv. }
  { intros v. rewrite* <- wp_equiv. }
Qed.

Rule For Function Applications

Recall the reasoning rule for an application (val_fun x t1) v2.
Parameter triple_app_fun : ∀ x v1 v2 t1 H Q,
  v1 = val_fun x t1 →
  triple (subst x v2 t1) H Q →
  triple (trm_app v1 v2) H Q.
The corresponding wp rule is stated and proved next.
Lemma wp_app_fun : ∀ x v1 v2 t1 Q,
  v1 = val_fun x t1 →
  wp (subst x v2 t1) Q ==> wp (trm_app v1 v2) Q.
Proof using.
  introv EQ1. rewrite wp_equiv. applys* triple_app_fun.
  rewrite* <- wp_equiv.
Qed.
A similar rule holds for the application of a recursive function.

Optional Material

A Concrete Definition for Weakest Precondition

Module WpHighLevel.
The lemma wp_equiv captures the characteristic property of wp, that is, (H ==> wp t Q) ↔ (triple t H Q).
However, it does not give evidence that there exists a predicate wp satisfying this equivalence. We next present one possible definition.
The idea is to define wp t Q as the predicate \∃ H, H \* \[triple t H Q], which, reading literally, is satisfied by "any" heap predicate H which is a valid precondition for a triple for the term t and the postcondition Q.
Definition wp (t:trm) (Q:val → hprop) : hprop :=
  \∃ (H:hprop), H \* \[triple t H Q].
First, let us prove that wp t Q is itself a valid precondition, in the sense that triple t (wp t Q) Q always holds (as asserted by the lemma wp_pre).
To establish this fact, we have to prove: triple t (\∃ H, H \* \[triple t H Q]) Q.
Applying the extraction rule for existentials gives: ∀ H, triple t (H \* \[triple t H Q]) Q.
Applying the extraction rule for pure facts gives: ∀ H, (triple t H Q) → (triple t H Q), which is true.
Second, let us demonstrate that the heap predicate wp t Q is entailed by any precondition H that satisfies triple t H Q, as asserted by the lemma wp_weakest.
Assume triple t H Q. Let us prove H ==> wp t Q, that is H ==> \ H, H \* \[triple t H Q]. Instantiating the H on the right-hand side as the H from the left-hand side suffices to satisfy the entailment.
Recall that the properties wp_pre and wp_weakest were derivable from the characteristic equivalence triple t H Q ↔ (H ==> wp t Q). Thus, to formalize the proofs of wp_pre and wp_weakest, all we have to do is to establish that equivalence.

Exercise: 2 stars, standard, especially useful (wp_equiv)

Prove that the definition wp_high satisfies the characteristic equivalence for weakest preconditions.
Lemma wp_equiv : ∀ t H Q,
  (H ==> wp t Q) ↔ (triple t H Q).
Proof using. (* FILL IN HERE *) Admitted.
End WpHighLevel.

Equivalence Between all Definitions Of wp

We next prove that the equivalence (triple t H Q) ↔ (H ==> wp t Q) defines a unique predicate wp. In other words, all possible definitions of wp are equivalent to each other. Thus, it really does not matter which concrete definition of wp we consider: they are all equivalent.
Concretely, assume two predicates wp1 and wp2 to both satisfy the characteristic equivalence. We prove that they are equal.
Lemma wp_unique : ∀ wp1 wp2,
  (∀ t H Q, (triple t H Q) ↔ (H ==> wp1 t Q)) →
  (∀ t H Q, (triple t H Q) ↔ (H ==> wp2 t Q)) →
  wp1 = wp2.
Proof using.
  introv M1 M2. applys fun_ext_2. intros t Q. applys himpl_antisym.
  { rewrite <- M2. rewrite M1. auto. }
  { rewrite <- M1. rewrite M2. auto. }
Qed.
Recall that both wp_pre and wp_weakest are derivable from wp_equiv. Let us also show that, reciprocally, wp_equiv is derivable from the conjunction of wp_pre and wp_weakest.
In other words, the property of "being the weakest precondition" also uniquely characterizes the definition of wp.
Lemma wp_from_weakest_pre : ∀ wp',
  (∀ t Q, triple t (wp' t Q) Q) → (* wp_pre *)
  (∀ t H Q, triple t H Q → H ==> wp' t Q) → (* wp_weakest *)
  (∀ t H Q, H ==> wp' t Q ↔ triple t H Q). (* wp_equiv *)
Proof using.
  introv M1 M2. iff M.
  { applys triple_conseq M1 M. auto. }
  { applys M2. auto. }
Qed.

An Alternative Definition for Weakest Precondition

Module WpLowLevel.
The concrete definition for wp given above is expressed in terms of Separation Logic combinators. In contrast to this "high level" definition, there exists a more "low level" definition, expressed directly as a function over heaps.
In that alternative definition, the heap predicate wp t Q is defined as a predicate that holds of a heap h if and only if the execution of t starting in exactly the heap h produces the post-condition Q.
Technically, wp t Q can be defined as: fun (h:heap) ⇒ triple t (fun h' ⇒ h' = h) Q. In other words, the precondition requires the input heap to be exactly h.
Definition wp (t:trm) (Q:val→hprop) : hprop :=
  fun (h:heap) ⇒ triple t (fun h' ⇒ (h' = h)) Q.

Exercise: 4 stars, standard, optional (wp_equiv_wp_low)

Prove that this alternative definition of wp also satisfies the characteristic equivalence (H ==> wp t Q) ↔ (triple t H Q). Hint: exploit the lemma triple_named_heap, which was established as an exercise in the appendix of the chapter Himpl.
Lemma wp_equiv_wp_low : ∀ t H Q,
  (H ==> wp t Q) ↔ (triple t H Q).
Proof using. (* FILL IN HERE *) Admitted.
End WpLowLevel.

Extraction Rule for Existentials

Recall the extraction rule for existentials.
Parameter triple_hexists : ∀ t (A:Type) (J:A→hprop) Q,
  (∀ x, triple t (J x) Q) →
  triple t (\∃ x, J x) Q.
Replacing triple t H Q with H ==> wp t Q yields the lemma stated below.

Exercise: 1 star, standard, optional (triple_hexists_in_wp)

Prove the extraction rule for existentials in wp style.
Lemma triple_hexists_in_wp : ∀ t Q A (J:A→hprop),
  (∀ x, (J x ==> wp t Q)) →
  (\∃ x, J x) ==> wp t Q.

Proof using. (* FILL IN HERE *) Admitted.
In other words, in the wp presentation, we do not need a specific extraction rule for existentials, because the extraction rule for entailment already does the job.

Combined Structural Rule

Recall the combined consequence-frame rule for triple.
Parameter triple_conseq_frame : ∀ H2 H1 Q1 t H Q,
  triple t H1 Q1 →
  H ==> H1 \* H2 →
  Q1 \*+ H2 ===> Q →
  triple t H Q.
Let us reformulate this rule using wp, replacing the form triple t H Q with the form H ==> wp t Q.

Exercise: 2 stars, standard, especially useful (wp_conseq_frame_trans)

Prove the combined structural rule in wp style. Hint: exploit wp_conseq_trans and wp_frame.
Lemma wp_conseq_frame_trans : ∀ t H H1 H2 Q1 Q,
  H1 ==> wp t Q1 →
  H ==> H1 \* H2 →
  Q1 \*+ H2 ===> Q →
  H ==> wp t Q.
Proof using. (* FILL IN HERE *) Admitted.
The combined structural rule for wp can actually be stated more concisely, as follows. The rule reads: if you own a piece of state from which the execution of t produces (a result and a state satisfying) Q1, and you own H, and if you can trade the combination of Q1 and H against Q2, then you own a piece of state from which the execution of t produces Q2.

Exercise: 2 stars, standard, especially useful (wp_conseq_frame)

Prove the concise version of the combined structural rule in wp style. Many proofs are possible.
Lemma wp_conseq_frame : ∀ t H Q1 Q2,
  Q1 \*+ H ===> Q2 →
  (wp t Q1) \* H ==> (wp t Q2).
Proof using. (* FILL IN HERE *) Admitted.

Alternative Statement of the Rule for Conditionals

Module WpIfAlt.
We have established the following rule for reasoning about conditionals using wp.
Parameter wp_if : ∀ b t1 t2 Q,
  wp (if b then t1 else t2) Q ==> wp (trm_if b t1 t2) Q.
Equivalently, the rule may be stated with the conditional around the calls to wp t1 Q and wp t2 Q.

Exercise: 1 star, standard, optional (wp_if')

Prove the alternative statement of rule wp_if, either from wp_if or directly from triple_if.
Lemma wp_if' : ∀ b t1 t2 Q,
  (if b then (wp t1 Q) else (wp t2 Q)) ==> wp (trm_if b t1 t2) Q.
Proof using. (* FILL IN HERE *) Admitted.
End WpIfAlt.

Definition of wp Directly from hoare

Let's take a step back and look at our construction of Separation Logic so far.
1. We defined Hoare triples (hoare) with respect to the big-step judgment (eval).
2. We defined Separation Logic triples (triple) in terms of Hoare triples (hoare), through the definition: ∀ H', hoare t (H \* H') (Q \*+ H').
3. We then defined Separation Logic weakest-preconditions (wp) in terms of Separation Logic triples (triple).
Through the construction, we established reasoning rules, first for Hoare triples (hoare), then for Separation Logic triples (triple), and finally for weakest-preconditions (wp).
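As a reminder, the three layers can be sketched side by side as follows. The definition of hoare is paraphrased from an earlier chapter and may differ in minor details from the actual files; the other two definitions are as stated in this chapter.
    Definition hoare (t:trm) (H:hprop) (Q:val→hprop) : Prop :=
      ∀ s, H s → ∃ s' v, eval s t s' v ∧ Q v s'.

    Definition triple (t:trm) (H:hprop) (Q:val→hprop) : Prop :=
      ∀ (H':hprop), hoare t (H \* H') (Q \*+ H').

    Definition wp (t:trm) (Q:val→hprop) : hprop :=
      \∃ H, H \* \[triple t H Q].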
One question that naturally arises is whether there is a more direct route to deriving reasoning rules for weakest preconditions. In other words, can we obtain the same end result through simpler proofs?
The notion of Hoare triple is a key abstraction that enables conducting further proofs without manipulating heaps (of type heap) explicitly. Experiments suggest that it is beneficial to introduce the Hoare logic layer. In other words, it is counterproductive to try to prove Separation Logic reasoning rules, whether for triple or for wp, directly with respect to the evaluation judgment eval.
Thus, the only question that remains is whether it would be of interest to derive the reasoning rules for weakest preconditions (wp) directly from the reasoning rules for Hoare triples (hoare), that is, by bypassing the statements and proofs of the reasoning rules for Separation Logic triples (triple).
In what follows, we show that if one cares only for wp-style rules, then the route to deriving them straight from hoare-style rules may indeed be somewhat shorter.
Module WpFromHoare.
Recall the definition of triple in terms of hoare.
    Definition triple (t:trm) (H:hprop) (Q:val→hprop) : Prop :=
       ∀ (H':hprop), hoare t (H \* H') (Q \*+ H').
In what follows, we conduct the proofs by assuming a concrete definition for wp, namely wp_high, which lends itself better to automated proofs.
Definition wp (t:trm) := fun (Q:val→hprop) ⇒
  \∃ H, H \* \[triple t H Q].
First, we check the equivalence between triple t H Q and H ==> wp t Q. This proof is the same as wp_equiv from the module WpHighLevel given earlier in this chapter.
Lemma wp_equiv : ∀ t H Q,
  (H ==> wp t Q) ↔ (triple t H Q).
Proof using.
  unfold wp. iff M.
  { applys* triple_conseq Q M.
    applys triple_hexists. intros H'.
    rewrite hstar_comm. applys* triple_hpure. }
  { xsimpl* H. }
Qed.
Second, we prove the consequence-frame rule associated with wp. It is the only structural rule that is needed for working with weakest preconditions.
Lemma wp_conseq_frame : ∀ t H Q1 Q2,
  Q1 \*+ H ===> Q2 →
  (wp t Q1) \* H ==> (wp t Q2).
The proof leverages the consequence rule for hoare triples, and the frame property comes from the H' quantification baked in the definition of triple.
Proof using.
  introv M. unfold wp. xpull. intros H' N. xsimpl (H' \* H).
  unfolds triple. intros H''. specializes N (H \* H'').
  applys hoare_conseq N. { xsimpl. } { xchange M. }
Qed.
Third and last, we establish reasoning rules for terms in wp-style directly from the corresponding rules for hoare triples.
The proof details are beyond the scope of this course. The point here is to show that the proofs are fairly concise.
Lemma wp_val : ∀ v Q,
  Q v ==> wp (trm_val v) Q.
Proof using.
  intros. unfold wp. xsimpl. intros H'. applys hoare_val. xsimpl.
Qed.

Lemma wp_fun : ∀ x t Q,
  Q (val_fun x t) ==> wp (trm_fun x t) Q.
Proof using.
  intros. unfold wp. xsimpl. intros H'. applys hoare_fun. xsimpl.
Qed.

Lemma wp_fix : ∀ f x t Q,
  Q (val_fix f x t) ==> wp (trm_fix f x t) Q.
Proof using.
  intros. unfold wp. xsimpl. intros H'. applys hoare_fix. xsimpl.
Qed.

Lemma wp_if : ∀ b t1 t2 Q,
  wp (if b then t1 else t2) Q ==> wp (trm_if b t1 t2) Q.
Proof using.
  intros. unfold wp. xsimpl. intros H M H'.
  applys hoare_if. applys M.
Qed.

Lemma wp_app_fun : ∀ x v1 v2 t1 Q,
  v1 = val_fun x t1 →
  wp (subst x v2 t1) Q ==> wp (trm_app v1 v2) Q.
Proof using.
  introv EQ1. unfold wp. xsimpl. intros H' M. intros H''. applys* hoare_app_fun.
Qed.

Lemma wp_app_fix : ∀ f x v1 v2 t1 Q,
  v1 = val_fix f x t1 →
  wp (subst x v2 (subst f v1 t1)) Q ==> wp (trm_app v1 v2) Q.
Proof using.
  introv EQ1. unfold wp. xsimpl. intros H' M. intros H''. applys* hoare_app_fix.
Qed.

Exercise: 4 stars, standard, especially useful (wp_let)

Prove the wp-style rule for let-bindings.
Lemma wp_let : ∀ x t1 t2 Q,
  wp t1 (fun v ⇒ wp (subst x v t2) Q) ==> wp (trm_let x t1 t2) Q.
Proof using. (* FILL IN HERE *) Admitted.
Note: wp_seq admits essentially the same proof as wp_let, simply replacing hoare_let with hoare_seq, and removing the tactic intros v.
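For reference, the statement of wp_seq would read as follows; the result value of t1 does not appear in the postcondition of the first wp, since a sequence discards it.
Lemma wp_seq : ∀ t1 t2 Q,
  wp t1 (fun v ⇒ wp t2 Q) ==> wp (trm_seq t1 t2) Q.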
It is technically possible to bypass even the definition of triple and specify all functions directly using the predicate wp. However, using triple leads to better readability of specifications, thus it seems preferable to continue using that style for specifying functions. (See discussion in chapter Wand, appendix on "Texan triples".)
End WpFromHoare.

Historical Notes

The idea of weakest precondition was introduced by [Dijkstra 1975] in his seminal paper "Guarded Commands, Nondeterminacy and Formal Derivation of Programs".
Weakest preconditions provide a reformulation of Floyd-Hoare logic. Numerous practical verification tools leverage weakest preconditions, e.g., ESC/Java, Why3, Boogie, Spec#, etc. In the context of Separation Logic in a proof assistant, the Iris framework (https://iris-project.org/), developed since 2015, exploits weakest preconditions to state reasoning rules. See the Postface.

WPgen: Weakest Precondition Generator

Set Implicit Arguments.
From SLF Require Import WPsem.

Implicit Types f : var.
Implicit Types b : bool.
Implicit Types v : val.
Implicit Types h : heap.
Implicit Types P : Prop.
Implicit Types H : hprop.
Implicit Types Q : val→hprop.

First Pass

In the previous chapter, we introduced a predicate called wp to describe the weakest precondition of a term t with respect to a given postcondition Q. The weakest precondition wp is defined by the equivalence: H ==> wp t Q if and only if triple t H Q.
With respect to this "characterization of the semantics of wp", we could prove "wp-style" reasoning rules. For example, the lemma wp_seq asserts: