AllChapters: Contents of All Chapters on a Single Page
Welcome
Separation Logic
The frame rule:

      { H } t { Q }
  -------------------------
  { H \* H' } t { Q \* H' }
- Automated proofs: the user provides only the code, and the tool locates sources of potential bugs. A good automated tool provides feedback that, most of the time, is relevant.
- Semi-automated proofs: the user provides not just the code, but also specifications and invariants. The tool then leverages automated solvers (e.g., SMT solvers) to discharge proof obligations.
- Interactive proofs: the user provides not just the code and its specifications, but also a detailed proof script justifying the correctness of the code. These proofs may be worked on interactively using a proof assistant such as Coq.
Separation Logic in a proof assistant
- higher-order logic provides virtually unlimited expressiveness that enables formulating arbitrarily complex specifications and invariants;
- a proof assistant provides a unified framework to prove both the implementation details of the code and the underlying mathematical results involved, e.g., results from graph theory;
- proof scripts may be easily maintained to reflect changes to the source code;
- the fact that Separation Logic is formalized in the proof assistant provides high confidence in the correctness of the tool.
- A formalization of the syntax and semantics of the source language. This is called a deep embedding of the programming language.
- A definition of Separation Logic predicates as predicates from higher-order logic. This is called a shallow embedding of the program logic. (A toy sketch contrasting the two embedding styles appears right after this list.)
- A definition of Separation Logic triples as a predicate, the statements of the reasoning rules as lemmas, and the proof of these reasoning rules with respect to the semantics.
- An infrastructure that consists of lemmas, tactics and notation, allowing for the verification of concrete programs to be carried out through relatively concise proof scripts.
- Applications of this infrastructure to the verification of concrete programs.
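To make the distinction concrete, here is a toy, self-contained sketch; the names (toy_trm, toy_heap, toy_hprop) are purely illustrative and are not the ones used in the course files. The syntax of programs is represented as an inductive data type (a deep embedding), whereas heap predicates are ordinary higher-order-logic predicates over a model of memory states (a shallow embedding).

Inductive toy_trm : Type :=                        (* deep embedding: program syntax as data *)
  | toy_val : nat → toy_trm
  | toy_seq : toy_trm → toy_trm → toy_trm.

Definition toy_heap : Type := nat → option nat.    (* a toy model of memory states *)
Definition toy_hprop : Type := toy_heap → Prop.    (* shallow embedding: a heap predicate is
                                                      just a Coq predicate over heaps *)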
Several Possible Depths of Reading
- The short curriculum includes only the first 5 chapters (ranging from chapter Basic to chapter Rules).
- The medium curriculum includes 3 additional chapters (ranging from chapter WPsem to chapter Wand).
- The full curriculum includes 5 more chapters (ranging from chapter Partial to chapter Rich).
- The First Pass section presents the most important ideas only. The course is designed in such a way that it is possible to read only the First Pass section of every chapter. The reader may be interested in going through all these sections to get the big picture, before revisiting each chapter in more detail.
- The More Details section presents additional material explaining in more depth the meaning and the consequences of the key results. This section also contains descriptions of the most important proofs. By default, readers would eventually read all this material.
- The Optional Material section typically contains the remaining proofs, as well as discussions of alternative definitions. The Optional Material sections are all independent from each other. They would typically be of interest to readers who want to understand every detail, readers who are seeking a deep understanding of a particular notion, and readers who are looking for answers to specific questions.
List of Chapters
- (1) Basic: introduction to the core features of Separation Logic,
illustrated using short programs manipulating references.
- (1) Repr: introduction to representation predicates in Separation
Logic, in particular for describing mutable lists and trees.
- (2) Hprop: definition of the core operators of Separation Logic,
of Hoare triples, and of Separation Logic triples.
- (2) Himpl: definition of the entailment relation, statement and
proofs of its fundamental properties, and description of the
simplification tactic for entailment.
- (3) Rules: statement and proofs of the reasoning rules of
Separation Logic, and example proofs of programs using these rules.
- (4) WPsem: definition of the semantic notion of weakest
precondition, and statement of rules in weakest-precondition style.
- (4) WPgen: presentation of a function that effectively computes
the weakest precondition of a term, independently of its
specification.
- (5) Wand: introduction of the magic wand operator and of the
ramified frame rule, and extension of the weakest-precondition
generator for handling local function definitions.
- (5) Affine: description of a generalization of Separation Logic
with affine heap predicates, which are useful, in particular, for
handling garbage-collected programming languages.
- (6) Struct: specification of array and record operations, and
encoding of these operations using pointer arithmetic.
- (6) Rich: description of the treatment of additional language
constructs, including loops, assertions, and n-ary functions.
- (7) Nondet: definition of triples for non-deterministic
programming languages.
- (7) Partial: definition of triples for partial correctness only, i.e., for not requiring termination proofs.
Other Distributed Files
- LibSepReference: a long file that defines the program
verification tool that is used in the first two chapters, and
whose implementation is discussed throughout the other chapters.
Each chapter from the course imports this module, as opposed to
importing the chapters that precede it.
- LibSepVar: a formalization of program variables, together with
a bunch of notations for parsing variables.
- LibSepFmap: a formalization of finite maps, which are used for
representing the memory state.
- LibSepSimpl: a functor that implements a powerful tactic for
automatically simplifying entailments in Separation Logic.
- LibSepMinimal: a minimalistic formalization of a soundness
proof for Separation Logic, corresponding to the definitions and proofs
presented in the ICFP'20 paper mentioned earlier.
- All other Lib* files are imports from the TLC library, which is described next.
Feedback Welcome
Exercises
System Requirements
Note for CoqIDE Users
coqide -async-proofs off -async-proofs-command-error-resilience off -Q . Basic.v &
Recommended Citation Format
@book {Charguéraud:SF6,
author = {Arthur Charguéraud},
title = "Separation Logic Foundations",
series = "Software Foundations",
volume = "6",
year = "2021",
publisher = "Electronic textbook",
note = {Version 1.0, \URL{http://softwarefoundations.cis.upenn.edu} },
}
Thanks
Basic: Basic Proofs in Separation Logic
Set Implicit Arguments.
From SLF Require Import LibSepReference.
Import ProgramSyntax DemoPrograms.
Implicit Types n m : int.
Implicit Types p q : loc.
A First Taste
- "Heap predicates", which are used to describe memory states in Separation Logic.
- "Specification triples", of the form triple t H Q, which relate a term t, a precondition H, and a postcondition Q.
- "Entailment proof obligations", of the form H ==> H' or Q ===> Q', which assert that a pre- or post-condition is weaker than another one.
- "Verification proof obligations", of the form PRE H CODE F POST Q, which internally leverage a form of weakest-precondition.
- Custom proof tactics, called "x-tactics", which are specialized tactics for carrying out the verification proofs.
- p ~~> n, which describes a memory cell at location p with contents n,
- \[], which describes an empty state,
- \[P], which also describes an empty state, and moreover asserts that the proposition P is true,
- H1 \* H2, which describes a state made of two disjoint parts: H1 and H2,
- \∃ x, H, which is used to quantify variables in postconditions. (A small example combining these operators appears just after this list.)
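For instance, these operators combine as one would expect. The definition below is a small sketch with a hypothetical name: it describes a state made of exactly two distinct cells, at locations p and q, and additionally asserts that the integer m is positive.

Definition example_heap_predicate (p q:loc) (n m:int) : hprop :=
  p ~~> n \* \[m > 0] \* q ~~> m.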
- xwp or xtriple to begin a proof,
- xapp to reason about an application,
- xval to reason about a return value,
- xif to reason about a conditional,
- xsimpl to simplify or prove entailments (H ==> H' or Q ===> Q').
- math, which is a variant of lia for proving mathematical goals,
- induction_wf, which sets up proofs by well-founded induction,
- gen, which is a shorthand for generalize dependent, a tactic also useful to set up induction principles.
The Increment Function
let incr p =
  let n = ! p in
  let m = n + 1 in
  p := m
Definition incr : val :=
<{ fun 'p ⇒
let 'n = ! 'p in
let 'm = 'n + 1 in
'p := 'm }>.
The quotes that appear in the source code are used to disambiguate
between the keywords and variables associated with the source code,
and those from the corresponding Coq keywords and variables.
The fun keyword should be read like the fun keyword from OCaml.
The specification of incr p, shown below, is expressed using a
"Separation Logic triple". A triple is formally expressed by a proposition
of the form triple t H Q. By convention, we write the precondition H
and the postcondition Q on separate lines.
Lemma triple_incr : ∀ (p:loc) (n:int),
triple (incr p)
(p ~~> n)
(fun _ ⇒ (p ~~> (n+1))).
Here p denotes the address in memory of the reference cell provided
as argument to the increment function. In technical vocabulary, p
is the "location" of a reference cell. All locations have type loc,
thus the argument p of incr has type loc.
In Separation Logic, the "heap predicate" p ~~> n describes a memory
state in which the contents of the location p is the value n.
In the present example, n denotes an integer value.
The behavior of the operation incr p consists of updating the memory
state by incrementing the contents of the cell at location p, so that
its new contents are n+1. Thus, the memory state posterior to the
increment operation can be described by the heap predicate p ~~> (n+1).
The result value returned by incr p is the unit value, which does not
carry any useful information. In the specification of incr, the
postcondition is of the form fun _ ⇒ ... to indicate that there is
no need to bind a name for the result value.
The general pattern of a specification thus includes:
- Quantification of the arguments of the functions---here, the variable p.
- Quantification of the "ghost variables" used to describe the input state---here, the variable n.
- The application of the predicate triple to the function application incr p, which is the term being specified by the triple.
- The precondition describing the input state---here, the predicate p ~~> n.
- The postcondition describing both the output value and the output state. The general pattern is fun r ⇒ H', where r names the result and H' describes the final state. Here, the final state is described by p ~~> (n+1).
Note that we have to write p ~~> (n+1) using parentheses around n+1,
because p ~~> n+1 would get parsed as (p ~~> n) + 1.
Our next step is to prove the specification lemma triple_incr which
specifies the behavior of the function incr. We conduct the
verification proof using x-tactics.
Proof.
xwp begins the verification proof. The proof obligation is
displayed using the custom notation PRE H CODE F POST Q.
The CODE part does not look very nice, but one should
be able to somehow recognize the body of incr. Indeed,
if we ignore the details and perform the alpha-renaming
from v to n and v0 to m, the CODE section reads like:
Let' n := (App val_get p) in
Let' m := (App val_add n 1) in
App val_set p m
which is somewhat similar to the original source code.
xwp.
The remainder of the proof performs some form of symbolic
execution. One should not attempt to read the full proof
obligation at each step, but instead only look at the current
state, described by the PRE part (here, p ~~> n), and at
the first line only of the CODE part, where one can read
the code of the next operation to reason about.
Each function call is handled using the tactic xapp.
We reason about the operation !p that reads into p;
this read operation returns the value n.
xapp.
We reason about the addition operation n+1.
xapp.
We reason about the update operation p := n+1,
thereby updating the state to p ~~> (n+1).
xapp.
At this stage, the proof obligation takes the form _ ==> _,
which requires checking that the final state matches what
is claimed in the postcondition. We discharge it using
the tactic xsimpl.
xsimpl.
Qed.
The command below associates the specification lemma triple_incr
with the function incr in a hint database, so that if we subsequently
verify a program that features a call to incr, the xapp tactic
is able to automatically invoke the lemma triple_incr.
Hint Resolve triple_incr : triple.
The proof framework can be used without any knowledge about the
implementation of the notation PRE H CODE F POST Q nor about the
implementation of the x-tactics. Readers with prior experience in
program verification may nevertheless be interested to know that
PRE H CODE F POST Q is defined as the entailment H ==> F Q,
where F is a form of weakest-precondition that describes the
behavior of the code.
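To make this reading concrete, here is a minimal sketch, using a hypothetical name (the actual notation is set up in LibSepReference): a goal of the form PRE H CODE F POST Q unfolds to the entailment H ==> F Q, for a formula F of type (val→hprop)→hprop.

Definition pre_code_post_sketch (H:hprop) (F:(val→hprop)→hprop) (Q:val→hprop) : Prop :=
  H ==> F Q.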
A Function with a Return Value
Definition example_let : val :=
<{ fun 'n ⇒
let 'a = 'n + 1 in
let 'b = 'n - 1 in
'a + 'b }>.
We specify this function using the triple notation, in the form
triple (example_let n) H (fun r ⇒ H'), where r, of type val,
denotes the output value.
To denote the fact that the input state is empty, we write \[]
in the precondition.
To denote the fact that the output state is empty, we could use \[].
Yet, if we write just fun r ⇒ \[] as postcondition, we would have
said nothing about the output value r produced by a call example_let.
Instead, we would like to specify that the result r is equal to 2*n.
To that end, we write the postcondition fun r ⇒ \[r = 2*n], which
actually stands for fun (r:val) ⇒ \[r = val_int (2*n)], where the
coercion val_int translates the integer value 2*n into the
corresponding value of type val from the programming language.
Lemma triple_example_let : ∀ (n:int),
triple (example_let n)
\[]
(fun r ⇒ \[r = 2*n]).
The verification proof script is very similar to the previous one.
The x-tactic xapp performs symbolic execution of the code.
Ultimately, we need to check that the expression computed,
(n + 1) + (n - 1), is equal to the specified result, that is, 2*n.
We exploit the TLC tactics math to prove this mathematical result.
Proof.
xwp. xapp. xapp. xapp. xsimpl. math.
Qed.
The Function quadruple
Definition quadruple : val :=
<{ fun 'n ⇒
let 'm = 'n + 'n in
'm + 'm }>.
Exercise: 1 star, standard, especially useful (triple_quadruple)
Specify and verify the function quadruple to express that it returns 4*n, following the template of triple_example_let.
(* FILL IN HERE *)
☐
The Function inplace_double
Definition inplace_double : val :=
<{ fun 'p ⇒
let 'n = !'p in
let 'm = 'n + 'n in
'p := 'm }>.
Exercise: 1 star, standard, especially useful (triple_inplace_double)
Specify and verify the function inplace_double, following the template of triple_incr.
(* FILL IN HERE *)
☐
Increment of Two References
Definition incr_two : val :=
<{ fun 'p 'q ⇒
incr 'p;
incr 'q }>.
The specification of this function takes the form
triple (incr_two p q) H (fun _ ⇒ H'),
where the underscore stands for the result value, which has type unit.
The precondition describes two reference cells: p ~~> n
and q ~~> m. To assert that the two cells are distinct from
each other, we separate their description with the operator \*.
This operator is called "separating conjunction" in Separation Logic,
and is also known as the "star" operator. Thus, the precondition
is (p ~~> n) \* (q ~~> m), or simply p ~~> n \* q ~~> m.
The postcondition describes the final state as
p ~~> (n+1) \* q ~~> (m+1), where the contents of both
cells is increased by one unit compared with the precondition.
The specification triple for incr_two is thus as follows.
Lemma triple_incr_two : ∀ (p q:loc) (n m:int),
triple (incr_two p q)
(p ~~> n \* q ~~> m)
(fun _ ⇒ p ~~> (n+1) \* q ~~> (m+1)).
The verification proof follows the usual pattern. Note that,
from here on, we use the command Proof using. instead of
just Proof., to enable asynchronous proof checking, a feature
that allows for faster navigation in scripts when using CoqIDE.
Proof using.
xwp. xapp. xapp. xsimpl.
Qed.
We register the specification triple_incr_two in the
database, to enable reasoning about calls to incr_two.
Hint Resolve triple_incr_two : triple.
Aliased Arguments
Definition aliased_call : val :=
<{ fun 'p ⇒
incr_two 'p 'p }>.
A call to aliased_call p should increase the contents of p by 2.
This property can be specified as follows.
Lemma triple_aliased_call : ∀ (p:loc) (n:int),
triple (aliased_call p)
(p ~~> n)
(fun _ ⇒ p ~~> (n+2)).
If we attempt the proof, we get stuck. Observe how xapp reports its
failure to make progress.
Proof using.
xwp. xapp.
Abort.
In the above proof, we get stuck with a proof obligation of the form:
\[] ==> (p ~~> ?m) \* _, which requires showing that
from an empty state one can extract a reference p ~~> ?m
for some integer ?m.
What happened is that when matching the current state p ~~> n
against p ~~> ?n \* p ~~> ?m (which corresponds to the precondition
of triple_incr_two with q = p), the internal simplification tactic
was able to cancel out p ~~> n in both expressions, but then got
stuck with matching the empty state against p ~~> ?m.
The issue here is that the specification triple_incr_two is
specialized for the case of non-aliased references.
It is possible to state and prove an alternative specification for
the function incr_two, to cover the case of aliased arguments.
Its precondition mentions only one reference, p ~~> n, and its
postcondition asserts that its contents gets increased by two units.
This alternative specification can be stated and proved as follows.
Lemma triple_incr_two_aliased : ∀ (p:loc) (n:int),
triple (incr_two p p)
(p ~~> n)
(fun _ ⇒ p ~~> (n+2)).
Proof using.
xwp. xapp. xapp. xsimpl. math.
Qed.
By exploiting the alternative specification, we are able to verify
the specification of aliased_call p, which invokes incr_two p p.
In order to indicate to the xapp tactic that it should invoke the
lemma triple_incr_two_aliased and not triple_incr_two, we provide that
lemma as argument to xapp, by writing xapp triple_incr_two_aliased.
Lemma triple_aliased_call : ∀ (p:loc) (n:int),
triple (aliased_call p)
(p ~~> n)
(fun _ ⇒ p ~~> (n+2)).
Proof using.
xwp. xapp triple_incr_two_aliased. xsimpl.
Qed.
A Function that Takes Two References and Increments One
Definition incr_first : val :=
<{ fun 'p 'q ⇒
incr 'p }>.
We can specify this function by describing its input state
as p ~~> n \* q ~~> m, and describing its output state
as p ~~> (n+1) \* q ~~> m. Formally:
Lemma triple_incr_first : ∀ (p q:loc) (n m:int),
triple (incr_first p q)
(p ~~> n \* q ~~> m)
(fun _ ⇒ p ~~> (n+1) \* q ~~> m).
Proof using.
xwp. xapp. xsimpl.
Qed.
Observe, however, that the second reference plays absolutely
no role in the execution of the function. In fact, we might
equally well have described in the specification only the
existence of the reference that the code actually manipulates.
Lemma triple_incr_first' : ∀ (p q:loc) (n:int),
triple (incr_first p q)
(p ~~> n)
(fun _ ⇒ p ~~> (n+1)).
Proof using.
xwp. xapp. xsimpl.
Qed.
Interestingly, the specification triple_incr_first, which
mentions the two references, is derivable from the specification
triple_incr_first', which mentions only the first reference.
The proof of this fact uses the tactic xtriple, which turns a
specification triple of the form triple t H Q into the form PRE
H CODE t POST Q, thereby enabling this proof obligation to be
processed by xapp.
Here, we invoke the tactic xapp triple_incr_first', to exploit
the specification triple_incr_first'.
Lemma triple_incr_first_derived : ∀ (p q:loc) (n m:int),
triple (incr_first p q)
(p ~~> n \* q ~~> m)
(fun _ ⇒ p ~~> (n+1) \* q ~~> m).
Proof using.
xtriple. xapp triple_incr_first'. xsimpl.
Qed.
More generally, in Separation Logic, if a specification triple
holds, then this triple remains valid when we add the same heap
predicate to both the precondition and the postcondition. This is
the "frame" principle, a key modularity feature that we'll come
back to later on in the course.
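For reference, here is a sketch of how this frame rule, previewed at the start of this volume, can be stated in the present setting. The statement below is only an illustration, axiomatized under a hypothetical name; the actual rule is proved, not assumed, later in the course, and its precise name and formulation may differ.

Parameter triple_frame_sketch : ∀ (t:trm) (H H':hprop) (Q:val→hprop),
  triple t H Q →
  triple t (H \* H') (fun r ⇒ Q r \* H').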
Definition transfer : val :=
<{ fun 'p 'q ⇒
let 'n = !'p in
let 'm = !'q in
let 's = 'n + 'm in
'p := 's;
'q := 0 }>.
Exercise: 1 star, standard, especially useful (triple_transfer)
State and prove a lemma called triple_transfer specifying the behavior of transfer p q in the case where p and q denote two distinct references.
(* FILL IN HERE *)
☐
Exercise: 1 star, standard, especially useful (triple_transfer_aliased)
State and prove a lemma called triple_transfer_aliased specifying the behavior of transfer when it is applied twice to the same argument. It should take the form triple (transfer p p) H Q.
(* FILL IN HERE *)
☐
Specification of Allocation
(* TODO: explain the notation triple <{ vs triple (). *)
Parameter triple_ref : ∀ (v:val),
triple <{ ref v }>
\[]
(fun r ⇒ \∃ p, \[r = val_loc p] \* p ~~> v).
The pattern fun r ⇒ \∃ p, \[r = val_loc p] \* H occurs
whenever a function returns a pointer. Thus, this pattern appears
pervasively. To improve concision, we introduce a specific
notation for this pattern, shortening it to funloc p ⇒ H.
Notation "'funloc' p '=>' H" :=
(fun r ⇒ \∃ p, \[r = val_loc p] \* H)
(at level 200, p ident, format "'funloc' p '=>' H").
Using this notation, the specification triple_ref can be reformulated
more concisely, as follows.
Parameter triple_ref' : ∀ (v:val),
triple <{ ref v }>
\[]
(funloc p ⇒ p ~~> v).
Remark: the CFML tool features a technique that generalizes the
notation funloc to all return types, by leveraging type-classes.
Unfortunately, the use of type-classes involves a number of
technicalities that we wish to avoid in this course. For that
reason, we employ only the funloc notation, and use existential
quantifiers explicitly for other types.
Allocation of a Reference with Greater Contents
Definition ref_greater : val :=
<{ fun 'p ⇒
let 'n = !'p in
let 'm = 'n + 1 in
ref 'm }>.
The precondition of ref_greater needs to assert the existence of a cell
p ~~> n. The postcondition of ref_greater should assert the existence
of two cells, p ~~> n and q ~~> (n+1), where q denotes the
location returned by the function. The postcondition is thus written
funloc q ⇒ p ~~> n \* q ~~> (n+1), which is a shorthand for
fun (r:val) ⇒ \∃ q, \[r = val_loc q] \* p ~~> n \* q ~~> (n+1).
The complete specification of ref_greater is:
Lemma triple_ref_greater : ∀ (p:loc) (n:int),
triple (ref_greater p)
(p ~~> n)
(funloc q ⇒ p ~~> n \* q ~~> (n+1)).
Proof using.
xwp. xapp. xapp. xapp. intros q. xsimpl. auto.
Qed.
☐
Exercise: 2 stars, standard, especially useful (triple_ref_greater_abstract)
State another specification for the function ref_greater, called triple_ref_greater_abstract, with a postcondition that does not reveal the contents of the fresh reference q, but instead only asserts that it is greater than the contents of p. To that end, introduce in the postcondition an existentially quantified variable called m, with m > n.
(* FILL IN HERE *)
☐
Deallocation in Separation Logic
Definition succ_using_incr_attempt :=
<{ fun 'n ⇒
let 'p = ref 'n in
incr 'p;
! 'p }>.
The operation succ_using_incr_attempt n admits an empty
precondition, and a postcondition asserting that the final
result is n+1. Yet, if we try to prove this specification,
we get stuck.
Lemma triple_succ_using_incr_attempt : ∀ (n:int),
triple (succ_using_incr_attempt n)
\[]
(fun r ⇒ \[r = n+1]).
Proof using.
xwp. xapp. intros p. xapp. xapp. xsimpl. { auto. }
Abort.
In the above proof script, we get stuck with the entailment
p ~~> (n+1) ==> \[], which indicates that the current state contains
a reference, whereas the postcondition describes an empty state.
We could attempt to patch the specification to account for the left-over
reference. This yields a provable specification.
Lemma triple_succ_using_incr_attempt' : ∀ (n:int),
triple (succ_using_incr_attempt n)
\[]
(fun r ⇒ \[r = n+1] \* \∃ p, (p ~~> (n+1))).
Proof using.
xwp. xapp. intros p. xapp. xapp. xsimpl. { auto. }
Qed.
However, while the above specification is provable, it is not
especially useful, since the piece of postcondition
\∃ p, p ~~> (n+1) is of absolutely no use to the caller
of the function. Worse, the caller will have its own state polluted with
\∃ p, p ~~> (n+1) and will have no way to get rid of it apart
from incorporating it into its own postcondition.
The right solution is to alter the code to free the reference once
it is no longer needed, as shown below. We assume the source language
includes a deallocation operation written free p. (This operation
does not exist in OCaml, but let us nevertheless continue using OCaml
syntax for writing programs.)
Definition succ_using_incr :=
<{ fun 'n ⇒
let 'p = ref 'n in
incr 'p;
let 'x = ! 'p in
free 'p;
'x }>.
This program may now be proved correct with respect to the intended
specification. Observe in particular the last call to xapp below,
which corresponds to the free operation.
The final result is the value of the variable x. To reason about it,
we exploit the tactic xval, as illustrated below.
Lemma triple_succ_using_incr : ∀ n,
triple (succ_using_incr n)
\[]
(fun r ⇒ \[r = n+1]).
Proof using.
xwp. xapp. intros p. xapp. xapp. xapp. xval. xsimpl. auto.
Qed.
Remark: if we verify programs written in a language equipped with
a garbage collector (like, e.g., OCaml), we need to tweak the
Separation Logic to account for the fact that some heap predicates
can be freely discarded from postconditions. This variant of
Separation Logic will be described in the chapter Affine.
Combined Reading and Freeing of a Reference
Definition get_and_free : val :=
<{ fun 'p ⇒
let 'v = ! 'p in
free 'p;
'v }>.
Exercise: 2 stars, standard, especially useful (triple_get_and_free)
Prove the correctness of the function get_and_free.
Lemma triple_get_and_free : ∀ p v,
triple (get_and_free p)
(p ~~> v)
(fun r ⇒ \[r = v]).
Proof using. (* FILL IN HERE *) Admitted.
☐
Hint Resolve triple_get_and_free : triple.
Axiomatization of the Mathematical Factorial Function
Module Import Facto.
Parameter facto : int → int.
Parameter facto_init : ∀ n,
0 ≤ n ≤ 1 →
facto n = 1.
Parameter facto_step : ∀ n,
n > 1 →
facto n = n * (facto (n-1)).
End Facto.
Note that we have purposely not specified the value of facto on
negative arguments.
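As a small sanity check, these two axioms suffice to compute facto on concrete nonnegative arguments, e.g., facto 3 = 3 * facto 2 = 3 * (2 * facto 1) = 3 * 2 * 1 = 6. The proof sketch below is only an illustration; it relies on nothing but facto_step, facto_init, and the math tactic.

Lemma facto_3_eq_6 : facto 3 = 6.
Proof using.
  rewrite (facto_step 3); [ | math ].
  rewrite (facto_step (3-1)); [ | math ].
  rewrite (facto_init (3-1-1)); [ | math ].
  math.
Qed.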
A Partial Recursive Function, Without State
let rec factorec n =
  if n ≤ 1 then 1 else n * factorec (n-1)
Definition factorec : val :=
<{ fix 'f 'n ⇒
let 'b = 'n ≤ 1 in
if 'b
then 1
else let 'x = 'n - 1 in
let 'y = 'f 'x in
'n * 'y }>.
A call factorec n can be specified as follows:
- the initial state is empty,
- the final state is empty,
- the result value r is such that r = facto n, when n ≥ 0.
In case the argument is negative (i.e., n < 0), we have two choices:
- either we explicitly specify that the result is 1 in this case,
- or we rule out this possibility by requiring n ≥ 0.
Let us follow the second approach, in order to illustrate the
specification of partial functions.
There are two possibilities for expressing the constraint n ≥ 0:
- either we use as precondition \[n ≥ 0],
- or we place an assumption (n ≥ 0) → _ to the front of the triple, and use an empty precondition, that is, \[].
The two presentations are totally equivalent. By convention, we follow
the second presentation, which tends to improve both the readability of
specifications and the conciseness of proof scripts.
The specification of factorec is thus stated as follows.
Lemma triple_factorec : ∀ n,
n ≥ 0 →
triple (factorec n)
\[]
(fun r ⇒ \[r = facto n]).
Let's walk through the proof script in detail, to see in particular
how to set up the induction, how to reason about the recursive call,
and how to deal with the precondition n ≥ 0.
Proof using. unfold factorec.
We set up a proof by induction on n to obtain an induction
hypothesis for the recursive calls. Recursive calls are made
each time on smaller values, and the last recursive call is
made on n = 1. The well-founded relation downto 1 captures
this recursion pattern. The tactic induction_wf is provided
by the TLC library to assist in setting up well-founded inductions.
It is exploited as follows.
intros n. induction_wf IH: (downto 1) n.
Observe the induction hypothesis IH. By unfolding downto
as done in the next step, this hypothesis asserts that the
specification that we are trying to prove already holds for
arguments that are smaller than the current argument n,
and that are greater than or equal to 1.
unfold downto in IH.
We may then begin the interactive verification proof.
intros Hn. xwp.
We reason about the evaluation of the boolean condition n ≤ 1.
xapp.
The result of the evaluation of n ≤ 1 in the source program
is described by the boolean value isTrue (n ≤ 1), which appears
in the CODE section after Ifval. The operation isTrue is
provided by the TLC library as a conversion function from Prop
to bool. The use of such a conversion function (which leverages
classical logic) greatly simplifies the process of automatically
performing substitutions after calls to xapp. We next perform the case analysis on the test n ≤ 1.
xif.
Doing so gives two cases.
In the "then" branch, we can assume n ≤ 1.
{ intros C.
Here, the return value is 1.
xval. xsimpl.
We check that 1 = facto n when n ≤ 1.
rewrite facto_init; math. }
In the "else" branch, we can assume n > 1.
{ intros C.
We reason about the evaluation of n-1.
xapp.
We reason about the recursive call, implicitly exploiting
the induction hypothesis IH with n-1.
xapp.
We justify that the recursive call is indeed made on a smaller
argument than the current one, that is, n.
{ math. }
We justify that the recursive call is made to a nonnegative argument,
as required by the specification.
{ math. }
We reason about the multiplication n * facto(n-1).
xapp.
We check that n * facto (n-1) matches facto n.
xsimpl. rewrite (@facto_step n); math. }
Qed.
A Recursive Function with State
let rec repeat_incr p m =
  if m > 0 then (
    incr p;
    repeat_incr p (m - 1)
  )
Definition repeat_incr : val :=
<{ fix 'f 'p 'm ⇒
let 'b = 'm > 0 in
if 'b then
incr 'p;
let 'x = 'm - 1 in
'f 'p 'x
end }>.
The specification for repeat_incr p requires that the initial
state contains a reference p with some integer contents n,
that is, p ~~> n. Its postcondition asserts that the resulting
state is p ~~> (n+m), which is the result after incrementing
m times the reference p. Observe that this postcondition is
only valid under the assumption that m ≥ 0.
Lemma triple_repeat_incr : ∀ (m n:int) (p:loc),
m ≥ 0 →
triple (repeat_incr p m)
(p ~~> n)
(fun _ ⇒ p ~~> (n + m)).
Exercise: 2 stars, standard, especially useful (triple_repeat_incr)
Prove the specification of the function repeat_incr. Hint: the structure of the proof resembles that of triple_factorec'.
Proof using. unfold repeat_incr. (* FILL IN HERE *) Admitted.
☐
Lemma triple_repeat_incr' : ∀ (p:loc) (n m:int),
m ≥ 0 →
triple (repeat_incr p m)
(p ~~> n)
(fun _ ⇒ p ~~> (n + m)).
Proof using.
(* First, introduces all variables and hypotheses. *)
intros p n m Hm.
(* Next, generalize those that are not constant during the recursion. *)
gen n Hm.
(* Then, set up the induction. *)
induction_wf IH: (downto 0) m. unfold downto in IH.
(* Finally, re-introduce the generalized hypotheses. *)
intros.
Abort.
Trying to Prove Incorrect Specifications
Lemma triple_repeat_incr_incorrect : ∀ (p:loc) (n m:int),
triple (repeat_incr p m)
(p ~~> n)
(fun _ ⇒ p ~~> (n + m)).
Proof using.
intros. revert n. induction_wf IH: (downto 0) m. unfold downto in IH.
intros. xwp. xapp. xif; intros C.
{ (* In the 'then' branch: m > 0 *)
xapp. xapp. xapp. { math. } xsimpl. math. }
{ (* In the 'else' branch: m ≤ 0 *)
xval.
At this point, we are requested to justify that the current state
p ~~> n matches the postcondition p ~~> (n + m), which
amounts to proving n = n + m.
xsimpl.
Abort.
When the specification features the assumption m ≥ 0,
we can prove this equality because the fact that we are
in the else branch means that m ≤ 0, thus m = 0.
However, without the assumption m ≥ 0, the value of
m could very well be negative.
Note that there exists a valid specification for repeat_incr
that does not constrain m but instead specifies that the state
always evolves from p ~~> n to p ~~> (n + max 0 m).
The corresponding proof script exploits two properties of the
max function.
Lemma max_l : ∀ n m,
n ≥ m →
max n m = n.
Proof using. introv M. unfold max. case_if; math. Qed.
Lemma max_r : ∀ n m,
n ≤ m →
max n m = m.
Proof using. introv M. unfold max. case_if; math. Qed.
Here is the most general specification for the function
repeat_incr.
Lemma triple_repeat_incr' : ∀ (p:loc) (n m:int),
triple (repeat_incr p m)
(p ~~> n)
(fun _ ⇒ p ~~> (n + max 0 m)).
Proof using.
intros. gen n. induction_wf IH: (downto 0) m.
xwp. xapp. xif; intros C.
{ xapp. xapp. xapp. { math. }
xsimpl. repeat rewrite max_r; math. }
{ xval. xsimpl. rewrite max_l; math. }
Qed.
A Recursive Function Involving two References
let rec step_transfer p q =
  if !q > 0 then (
    incr p;
    decr q;
    step_transfer p q
  )
Definition step_transfer :=
<{ fix 'f 'p 'q ⇒
let 'm = !'q in
let 'b = 'm > 0 in
if 'b then
incr 'p;
decr 'q;
'f 'p 'q
end }>.
The specification of step_transfer is essentially the same as
that of the function transfer presented previously, the
only difference being that we here assume the contents of q to be
nonnegative.
Lemma triple_step_transfer : ∀ p q n m,
m ≥ 0 →
triple (step_transfer p q)
(p ~~> n \* q ~~> m)
(fun _ ⇒ p ~~> (n + m) \* q ~~> 0).
Exercise: 2 stars, standard, especially useful (triple_step_transfer)
Verify the function step_transfer. Hint: to set up the induction, follow the pattern shown in the proof of triple_repeat_incr'.
Proof using. (* FILL IN HERE *) Admitted.
☐
Historical Notes
Repr: Representation Predicates
Set Implicit Arguments.
From SLF Require Import LibSepReference.
Import ProgramSyntax DemoPrograms.
From SLF Require Import Basic.
Open Scope liblist_scope.
Implicit Types n m : int.
Implicit Types p q s c : loc.
First Pass
- xfun to reason about a function definition.
- xchange for exploiting transitivity of ==>.
Formalization of the List Representation Predicate
Definition head : field := 0%nat.
Definition tail : field := 1%nat.
The heap predicate p ~~~>`{ head := x; tail := q} describes a
record allocated at location p, with a head field storing x
and a tail field storing q.
A mutable list consists of a chain of cells, terminated by null.
The heap predicate MList L p describes a list whose head cell is
at location p, and whose elements are described by the list L.
This predicate is defined recursively on the structure of L:
- if L is empty, then p is the null pointer,
- if L is of the form x::L', then p is not null, the head field of p contains x, and the tail field of p contains a pointer q such that MList L' q describes the tail of the list.
Fixpoint MList (L:list val) (p:loc) : hprop :=
match L with
| nil ⇒ \[p = null]
| x::L' ⇒ \∃ q, (p ~~~> `{ head := x; tail := q}) \* (MList L' q)
end.
Alternative Characterizations of MList
Lemma MList_nil : ∀ p,
(MList nil p) = \[p = null].
Proof using. auto. Qed.
Lemma MList_cons : ∀ p x L',
MList (x::L') p =
\∃ q, (p ~~~> `{ head := x; tail := q}) \* (MList L' q).
Proof using. auto. Qed.
Another characterization of MList L p is useful for proofs. Whereas
the original definition is by case analysis on whether L is empty,
the alternative one is by case analysis on whether p is null:
- if p is null, then L is empty,
- otherwise, L decomposes as x::L', the head field of p contains x, and the tail field of p contains a pointer q such that MList L' q describes the tail of the list.
The lemma below is stated using an entailment. The reciprocal entailment
is also true, but we do not need it so we do not bother proving it here.
Lemma MList_if : ∀ (p:loc) (L:list val),
(MList L p)
==> (If p = null
then \[L = nil]
else \∃ x q L', \[L = x::L']
\* (p ~~~> `{ head := x; tail := q}) \* (MList L' q)).
Proof using.
(* Let's prove this result by case analysis on L. *)
intros. destruct L as [|x L'].
{ (* Case L = nil. By definition of MList, we have p = null. *)
xchange MList_nil. intros M.
(* We have to justify L = nil, which is trivial.
The TLC case_if tactic performs a case analysis on the argument
of the first visible if conditional. *)
case_if. xsimpl. auto. }
{ (* Case L = x::L'. *)
(* One possibility is to perform a rewrite operation using MList_cons
on its first occurrence, then use the CFML tactic xpull to extract
the existential quantifiers from the precondition:
rewrite MList_cons. xpull. intros q.
A more efficient approach is to use the dedicated CFML tactic xchange,
which is specialized for performing updates in the current state. *)
xchange MList_cons. intros q.
(* Because a record is allocated at location p, the pointer p
cannot be null. The lemma hrecord_not_null allows us to exploit
this property, extracting the hypothesis p ≠ null. We use again
the tactic case_if to simplify the case analysis. *)
xchange hrecord_not_null. intros N. case_if.
(* To conclude, it suffices to correctly instantiate the existential
quantifiers. The tactic xsimpl is able to guess the appropriate
instantiations. *)
xsimpl. auto. }
Qed.
Global Opaque MList.
In-place Concatenation of Two Mutable Lists
let rec append p1 p2 =
  if p1.tail == null
    then p1.tail <- p2
    else append p1.tail p2
Definition append : val :=
<{ fix 'f 'p1 'p2 ⇒
let 'q1 = 'p1`.tail in
let 'b = ('q1 = null) in
if 'b
then 'p1`.tail := 'p2
else 'f 'q1 'p2 }>.
The append function is specified and verified as follows. The
proof pattern is representative of that of many list-manipulating
functions, so it is essential that the reader follow through every
step of the proof.
Lemma Triple_append : ∀ (L1 L2:list val) (p1 p2:loc),
p1 ≠ null →
triple (append p1 p2)
(MList L1 p1 \* MList L2 p2)
(fun _ ⇒ MList (L1++L2) p1).
Proof using.
The induction principle provides an hypothesis for the tail of L1,
i.e., for the list L1', such that the relation list_sub L1' L1
holds.
introv K. gen p1. induction_wf IH: list_sub L1. introv N. xwp.
To begin the proof, we reveal the head cell of p1.
We let q1 denote the tail of p1, and L1' the tail of L1.
xchange (MList_if p1). case_if. xpull. intros x q1 L1' →.
We then reason about the case analysis.
xapp. xapp. xif; intros Cq1.
If q1 is null, then L1' is empty.
{ xchange (MList_if q1). case_if. xpull. intros →.
In this case, we set the pointer, then we fold back the head cell.
xapp. xchange <- MList_cons. }
Otherwise, if q1 is not null, we reason about the recursive call using
the induction hypothesis, then we fold back the head cell.
{ xapp. xchange <- MList_cons. }
Qed.
Implicit Types x : val.
This section introduces two smart constructors for linked lists,
called mnil and mcons.
The operation mnil() is intended to create an empty list.
Its implementation simply returns the value null. Its
specification asserts that the return value is a pointer p
such that MList nil p holds.
Definition mnil : val :=
<{ fun 'u ⇒
null }>.
Lemma triple_mnil :
triple (mnil ())
\[]
(funloc p ⇒ MList nil p).
Proof using. xwp. xval. xchanges* <- (MList_nil null). Qed.
Hint Resolve triple_mnil : triple.
Observe that the specification triple_mnil does not mention
the null pointer anywhere. This specification may thus be
used to specify the behavior of operations on mutable lists
without having to reveal low-level implementation details that
involve the null pointer.
The operation mcons x q is intended to allocate a fresh list cell,
with x in the head field and q in the tail field. The implementation
of this operation allocates and initializes a fresh record made of
two fields. The allocation operation leverages the New construct, which
is notation for an operation called val_new_hrecord_2, which we here view
as a primitive operation. (The chapter Struct describes an encoding
of this function in terms of the allocation and write operations.)
Definition mcons : val :=
<{ fun 'x 'q ⇒
`{ head := 'x; tail := 'q } }>.
The operation mcons admits two specifications: one that describes only
the fresh record allocated, and one that combines it with a list
representation of the form MList L q to produce as postcondition
an extended list of the form MList (x::L) p.
The first specification is as follows.
Lemma triple_mcons : ∀ x q,
triple (mcons x q)
\[]
(funloc p ⇒ p ~~~> `{ head := x ; tail := q }).
Proof using. xwp. xapp triple_new_hrecord_2; auto. xsimpl*. Qed.
The second specification is derived from the first.
Lemma triple_mcons' : ∀ L x q,
triple (mcons x q)
(MList L q)
(funloc p ⇒ MList (x::L) p).
Proof using.
intros. xtriple. xapp triple_mcons.
intros p. xchange <- MList_cons. xsimpl*.
Qed.
Hint Resolve triple_mcons' : triple.
Copy Function for Lists
let rec mcopy p =
  if p == null
    then mnil ()
    else mcons (p.head) (mcopy p.tail)
Definition mcopy : val :=
<{ fix 'f 'p ⇒
let 'b = ('p = null) in
if 'b
then mnil ()
else
let 'x = 'p`.head in
let 'q = 'p`.tail in
let 'q2 = ('f 'q) in
mcons 'x 'q2 }>.
The precondition of mcopy requires a linked list MList L p.
Its postcondition asserts that the function returns a pointer p'
and a list MList L p', in addition to the original list MList L p.
The two lists are totally disjoint and independent, as captured by
the separating conjunction symbol (the star).
Lemma triple_mcopy : ∀ L p,
triple (mcopy p)
(MList L p)
(funloc p' ⇒ (MList L p) \* (MList L p')).
The proof script is very similar in structure to the previous ones.
While playing the script, try to spot the places where:
- mnil produces an empty list of the form MList nil p',
- the recursive call produces a list of the form MList L' q',
- mcons produces a list of the form MList (x::L') p'.
Proof using.
intros. gen p. induction_wf IH: list_sub L.
xwp. xapp. xchange MList_if. xif; intros C; case_if; xpull.
{ intros →. xapp. xsimpl*. subst. xchange* <- MList_nil. }
{ intros x q L' →. xapp. xapp. xapp. intros q'.
xapp. intros p'. xchange <- MList_cons. xsimpl*. }
Qed.
Length Function for Lists
let rec mlength p =
  if p == null
    then 0
    else 1 + mlength p.tail
Definition mlength : val :=
<{ fix 'f 'p ⇒
let 'b = ('p = null) in
if 'b
then 0
else (let 'q = 'p`.tail in
let 'n = 'f 'q in
'n + 1) }>.
Exercise: 3 stars, standard, especially useful (triple_mlength)
Prove the correctness of the function mlength.
Lemma triple_mlength : ∀ L p,
triple (mlength p)
(MList L p)
(fun r ⇒ \[r = val_int (length L)] \* (MList L p)).
Proof using. (* FILL IN HERE *) Admitted.
☐
Alternative Length Function for Lists
let rec listacc c p =
  if p == null
    then ()
    else (incr c; listacc c p.tail)

let mlength' p =
  let c = ref 0 in
  listacc c p;
  !c
Definition acclength : val :=
<{ fix 'f 'c 'p ⇒
let 'b = ('p ≠ null) in
if 'b then
incr 'c;
let 'q = 'p`.tail in
'f 'c 'q
end }>.
Definition mlength' : val :=
<{ fun 'p ⇒
let 'c = ref 0 in
acclength 'c 'p;
get_and_free 'c }>.
Exercise: 3 stars, standard, especially useful (triple_mlength')
Prove the correctness of the function mlength'. Hint: start by stating a lemma triple_acclength expressing the specification of the recursive function acclength. Then, register that specification for subsequent use by xapp using the command Hint Resolve triple_acclength : triple. Finally, complete the proof of the specification triple_mlength'.
(* FILL IN HERE *)
Lemma triple_mlength' : ∀ L p,
triple (mlength' p)
(MList L p)
(fun r ⇒ \[r = val_int (length L)] \* (MList L p)).
Proof using. (* FILL IN HERE *) Admitted.
☐
In-Place Reversal Function for Lists
let rec mrev_aux p1 p2 =
  if p2 == null
    then p1
    else (let p3 = p2.tail in
          p2.tail <- p1;
          mrev_aux p2 p3)

let mrev p =
  mrev_aux null p
Definition mrev_aux : val :=
<{ fix 'f 'p1 'p2 ⇒
let 'b = ('p2 = null) in
if 'b
then 'p1
else (
let 'p3 = 'p2`.tail in
'p2`.tail := 'p1;
'f 'p2 'p3) }>.
Definition mrev : val :=
<{ fun 'p ⇒
mrev_aux null 'p }>.
Exercise: 5 stars, standard, optional (triple_mrev)
Prove the correctness of the functions mrev_aux and mrev.
(* FILL IN HERE *)
Lemma triple_mrev : ∀ L p,
triple (mrev p)
(MList L p)
(funloc q ⇒ MList (rev L) q).
Proof using. (* FILL IN HERE *) Admitted.
☐
Module SizedStack.
In this section, we consider the implementation of a mutable stack with
constant-time access to its size, i.e., to the number of items it stores.
The stack structure consists of a 2-field record, storing a pointer to a
mutable linked list, and an integer storing the length of that list.
The stack implementation exposes a function to create an empty stack, a
function for reading the size, and push and pop methods for operating
at the top of the stack.
type 'a stack = { data : 'a list; size : int }
let create () =
{ data = nil; size = 0 }
let sizeof s =
s.size
let push s x =
s.data <- mcons x s.data;
s.size <- s.size + 1
let pop s =
let p = s.data in
let x = p.head in
let q = p.tail in
delete p;
s.data <- q;
s.size <- s.size - 1;
x
The representation predicate for the stack takes the form Stack L s, where
s denotes the location of the record describing the stack, and where L
denotes the list of items stored in the stack. The underlying mutable list
is described as MList L p, where p is the location stored in the
first field of the record. The definition of Stack is as follows.
Definition data : field := 0%nat.
Definition size : field := 1%nat.
Definition Stack (L:list val) (s:loc) : hprop :=
\∃ p, s ~~~>`{ data := p; size := length L } \* (MList L p).
Observe that the predicate Stack does not expose the address of the
mutable list; this address is existentially quantified in the definition.
The predicate Stack also does not expose the size of the stack, as this
value can be deduced as length L.
Definition create : val :=
<{ fun 'u ⇒
`{ data := null; size := 0 } }>.
Lemma triple_create :
triple (create ())
\[]
(funloc s ⇒ Stack nil s).
Proof using.
xwp. xapp triple_new_hrecord_2; auto. intros s.
unfold Stack. xsimpl*. xchange* <- (MList_nil null).
Qed.
The sizeof operation returns the contents of the size field of a
stack.
Definition sizeof : val :=
<{ fun 'p ⇒
'p`.size }>.
Lemma triple_sizeof : ∀ L s,
triple (sizeof s)
(Stack L s)
(fun r ⇒ \[r = length L] \* Stack L s).
Proof using.
xwp. unfold Stack. xpull. intros p. xapp. xsimpl*.
Qed.
The push operation extends the head of the list, and increments the size
field.
Definition push : val :=
<{ fun 's 'x ⇒
let 'p = 's`.data in
let 'p2 = mcons 'x 'p in
's`.data := 'p2;
let 'n = 's`.size in
let 'n2 = 'n + 1 in
's`.size := 'n2 }>.
Exercise: 3 stars, standard, especially useful (triple_push)
Prove the following specification for the push operation.
Lemma triple_push : ∀ L s x,
triple (push s x)
(Stack L s)
(fun u ⇒ Stack (x::L) s).
Proof using. (* FILL IN HERE *) Admitted.
☐
Definition pop : val :=
<{ fun 's ⇒
let 'p = 's`.data in
let 'x = 'p`.head in
let 'p2 = 'p`.tail in
delete 'p;
's`.data := 'p2;
let 'n = 's`.size in
let 'n2 = 'n - 1 in
's`.size := 'n2;
'x }>.
Exercise: 4 stars, standard, especially useful (triple_pop)
Prove the following specification for the pop operation.
Lemma triple_pop : ∀ L s,
L ≠ nil →
triple (pop s)
(Stack L s)
(fun x ⇒ \∃ L', \[L = x::L'] \* Stack L' s).
Proof using. (* FILL IN HERE *) Admitted.
☐
End SizedStack.
Formalization of the Tree Representation Predicate
Inductive tree : Type :=
| Leaf : tree
| Node : int → tree → tree → tree.
Implicit Types T : tree.
A mutable tree node consists of a three-cell record, featuring a field
storing an integer named "item", and two pointers named "left" and "right",
denoting the pointers to the left and right subtrees, respectively.
Definition item : field := 0%nat.
Definition left : field := 1%nat.
Definition right : field := 2%nat.
The heap predicate p ~~~>`{ item := n; left := p1; right := p2 }
describes a record allocated at location p, describing a node storing
the integer n, with its two subtrees at locations p1 and p2.
An empty tree is represented using the null pointer.
The representation predicate MTree T p, of type hprop, asserts that
the mutable tree structure with root at location p describes the logical
tree T. The predicate is defined recursively on the structure of T:
- if T is a Leaf, then p is the null pointer,
- if T is a node Node n T1 T2, then p is not null, and at location p one finds a record with field contents n, p1 and p2, with MTree T1 p1 and MTree T2 p2 describing the two subtrees.
Fixpoint MTree (T:tree) (p:loc) : hprop :=
match T with
| Leaf ⇒ \[p = null]
| Node n T1 T2 ⇒ \∃ p1 p2,
(p ~~~>`{ item := n; left := p1; right := p2 })
\* (MTree T1 p1)
\* (MTree T2 p2)
end.
Alternative Characterization of MTree
Lemma MTree_Leaf : ∀ p,
(MTree Leaf p) = \[p = null].
Proof using. auto. Qed.
Lemma MTree_Node : ∀ p n T1 T2,
(MTree (Node n T1 T2) p) =
\∃ p1 p2,
(p ~~~>`{ item := n; left := p1; right := p2 })
\* (MTree T1 p1) \* (MTree T2 p2).
Proof using. auto. Qed.
The third lemma reformulates MTree T p using a case analysis on whether
p is the null pointer. This formulation matches the case analysis
typically performed in the code of functions operating on trees.
Lemma MTree_if : ∀ (p:loc) (T:tree),
(MTree T p)
==> (If p = null
then \[T = Leaf]
else \∃ n p1 p2 T1 T2, \[T = Node n T1 T2]
\* (p ~~~>`{ item := n; left := p1; right := p2 })
\* (MTree T1 p1) \* (MTree T2 p2)).
Proof using.
intros. destruct T as [|n T1 T2].
{ xchange MTree_Leaf. intros M. case_if. xsimpl*. }
{ xchange MTree_Node. intros p1 p2.
xchange hrecord_not_null. intros N. case_if. xsimpl*. }
Qed.
Opaque MTree.
Additional Tooling for MTree
Inductive tree_sub : binary (tree) :=
| tree_sub_1 : ∀ n T1 T2,
tree_sub T1 (Node n T1 T2)
| tree_sub_2 : ∀ n T1 T2,
tree_sub T2 (Node n T1 T2).
Lemma tree_sub_wf : wf tree_sub.
Proof using.
intros T. induction T; constructor; intros t' H; inversions¬H.
Qed.
Hint Resolve tree_sub_wf : wf.
For allocating fresh tree nodes as a 3-field record, we introduce the
operation mnode n p1 p2, defined and specified as follows.
Definition mnode : val :=
val_new_hrecord_3 item left right.
The first specification of mnode describes the allocation of a record.
Lemma triple_mnode : ∀ n p1 p2,
triple (mnode n p1 p2)
\[]
(funloc p ⇒ p ~~~> `{ item := n ; left := p1 ; right := p2 }).
Proof using. intros. applys* triple_new_hrecord_3. Qed.
The second specification, derived from the first, asserts that, if provided
the description of two subtrees T1 and T2 at locations p1 and p2,
the operation mnode n p1 p2 builds, at a fresh location p, a tree
described by MTree (Node n T1 T2) p.
Exercise: 2 stars, standard, optional (triple_mnode')
Prove the specification triple_mnode' for node allocation.
Lemma triple_mnode' : ∀ T1 T2 n p1 p2,
triple (mnode n p1 p2)
(MTree T1 p1 \* MTree T2 p2)
(funloc p ⇒ MTree (Node n T1 T2) p).
Proof using. (* FILL IN HERE *) Admitted.
Hint Resolve triple_mnode' : triple.
☐
Tree Copy
let rec tree_copy p =
  if p = null
    then null
    else mnode p.item (tree_copy p.left) (tree_copy p.right)
Definition tree_copy :=
<{ fix 'f 'p ⇒
let 'b = ('p = null) in
if 'b then null else (
let 'n = 'p`.item in
let 'p1 = 'p`.left in
let 'p2 = 'p`.right in
let 'q1 = 'f 'p1 in
let 'q2 = 'f 'p2 in
mnode 'n 'q1 'q2
) }>.
Lemma triple_tree_copy : ∀ p T,
triple (tree_copy p)
(MTree T p)
(funloc q ⇒ (MTree T p) \* (MTree T q)).
Proof using. (* FILL IN HERE *) Admitted.
☐
Computing the Sum of the Items in a Tree
let rec treeacc c p =
  if p ≠ null then (
    c := !c + p.item;
    treeacc c p.left;
    treeacc c p.right)
let mtreesum p =
let c = ref 0 in
treeacc c p;
!c
Definition treeacc : val :=
<{ fix 'f 'c 'p ⇒
let 'b = ('p ≠ null) in
if 'b then
let 'm = ! 'c in
let 'x = 'p`.item in
let 'm2 = 'm + 'x in
'c := 'm2;
let 'p1 = 'p`.left in
'f 'c 'p1;
let 'p2 = 'p`.right in
'f 'c 'p2
end }>.
Definition mtreesum : val :=
<{ fun 'p ⇒
let 'c = ref 0 in
treeacc 'c 'p;
get_and_free 'c }>.
The specification of mtreesum is expressed in terms of the logical
operation treesum which computes the sum of the node items stored
in a logical tree. This operation is defined in Coq as a fixed point.
Fixpoint treesum (T:tree) : int :=
match T with
| Leaf ⇒ 0
| Node n T1 T2 ⇒ n + treesum T1 + treesum T2
end.
Exercise: 4 stars, standard, optional (triple_mtreesum)
Prove the correctness of the function mtreesum. Hint: start by stating a lemma triple_treeacc, then register it using the command Hint Resolve triple_treeacc : triple. Finally, complete the proof of the specification triple_mtreesum.
(* FILL IN HERE *)
Lemma triple_mtreesum : ∀ T p,
triple (mtreesum p)
(MTree T p)
(fun r ⇒ \[r = treesum T] \* (MTree T p)).
Proof using. (* FILL IN HERE *) Admitted.
☐
Verification of a Counter Function with Local State
Implicit Type f : val.
A counter function is a function f that, each time it is invoked,
returns the next integer, starting from 1.
The function create_counter produces a fresh counter function.
Concretely, if create_counter() produces the counter f,
then the first call to f() returns 1, the second call to
f() returns 2, the third call to f() returns 3, and so on.
The counter function can be implemented using a reference cell,
named p, that stores the integer last returned by the counter.
Initially, the contents of the reference p is zero. Each time
the counter is queried, the contents of p is increased by one
unit, and the new value of the contents is returned to the caller.
let create_counter () =
let p = ref 0 in
(fun () → (incr p; !p))
Definition create_counter : val :=
<{ fun 'u ⇒
let 'p = ref 0 in
(fun_ 'u ⇒ (incr 'p; !'p)) }>.
The remainder of this section presents two sets of specifications,
each covering both the specification of a counter function and
the specification of a function that creates a fresh counter function.
The first specification is the most direct: it exposes the existence
of a reference p. Doing so, however, reveals implementation details
about the counter function.
The second specification is more abstract: it successfully hides
the description of the internal representation of the counter,
and states specifications only in terms of a representation predicate,
called IsCounter f n, that exposes only the identity of the
counter function and its current value, but not the existence of
the reference p.
Let's begin with the simple, direct specification. The proposition
CounterSpec f p asserts that the counter function f has its
internal state stored in a reference cell at location p, in the
sense that invoking f in a state p ~~> m updates the state to
p ~~> (m+1) and produces the output value m+1.
Definition CounterSpec (f:val) (p:loc) : Prop :=
∀ m, triple (f ())
(p ~~> m)
(fun r ⇒ \[r = m+1] \* p ~~> (m+1)).
The function create_counter creates a fresh counter. Its precondition
is empty. Its postcondition describes the existence of a reference
cell p ~~> 0, for an existentially quantified location p, and such
that the proposition CounterSpec f p holds. Observe the use of the
tactic xfun for reasoning about a function definition.
Lemma triple_create_counter :
triple (create_counter ())
\[]
(fun f ⇒ \∃ p, (p ~~> 0) \* \[CounterSpec f p]).
Proof using.
xwp. xapp. intros p. xfun. intros f Hf. xsimpl.
{ intros m. applys Hf. xapp. xapp. xsimpl. auto. }
Qed.
Given a counter function described by CounterSpec f p, a call to the
function f on a unit argument, in an input state of the form p ~~> n
for some n, produces the state p ~~> (n+1) and the output value n+1.
Observe how this specification applies to any function f satisfying
CounterSpec f p.
Lemma triple_apply_counter : ∀ f p n,
CounterSpec f p →
triple (f ())
(p ~~> n)
(fun r ⇒ \[r = n+1] \* (p ~~> (n+1))).
Proof using. introv Hf. applys Hf. Qed.
In the proof above, we were able to directly apply the assumption Hf.
In general, however, reasoning about a call to an abstract function f
requires first invoking the tactic xtriple.
Lemma triple_apply_counter_alternative_proof : ∀ f p n,
CounterSpec f p →
triple (f ())
(p ~~> n)
(fun r ⇒ \[r = n+1] \* (p ~~> (n+1))).
Proof using.
introv Hf. xtriple. lets H: Hf n. xapp. xsimpl. auto.
Qed.
Let's now make the specifications more abstract, by hiding away the
description of the reference cells from the client of the specifications.
To that end, we introduce the heap predicate IsCounter f n, which
relates a function f, its current value n, and the piece of memory
state involved in the implementation of this function. This piece of
memory is of the form p ~~> n for some location p, such that
CounterSpec f p holds. Here, p gets quantified existentially.
Definition IsCounter (f:val) (n:int) : hprop :=
\∃ p, p ~~> n \* \[CounterSpec f p].
Using IsCounter, we can reformulate the specification of
create_counter in a more abstract and more concise manner,
with a postcondition simply asserting that the fresh counter
function f initialized at zero is described by the heap
predicate IsCounter f 0.
Lemma triple_create_counter_abstract :
triple (create_counter ())
\[]
(fun f ⇒ IsCounter f 0).
Proof using. unfold IsCounter. applys triple_create_counter. Qed.
Likewise, we can reformulate the specification of a call to a counter
function f. Such a call, under precondition IsCounter f n for some
n, admits the postcondition IsCounter f (n+1), and returns n+1.
Exercise: 3 stars, standard, especially useful (triple_apply_counter_abstract)
Prove the abstract specification for a counter function. Hint: use xtriple at some point.
Lemma triple_apply_counter_abstract : ∀ f n,
triple (f ())
(IsCounter f n)
(fun r ⇒ \[r = n+1] \* (IsCounter f (n+1))).
Proof using. (* FILL IN HERE *) Admitted.
☐
Opaque IsCounter.
Specification of a Higher-Order Repeat Operator
let rec repeat f n =
  if n > 0 then (f(); repeat f (n-1))
Definition repeat : val :=
<{ fix 'g 'f 'n ⇒
let 'b = ('n > 0) in
if 'b then
'f ();
let 'n2 = 'n - 1 in
'g 'f 'n2
end }>.
For simplicity, let us assume for now n ≥ 0. The specification of
repeat f n is expressed in terms of an invariant, named I,
describing the state in-between every two calls to f. Initially, we
assume the state to satisfy I 0. We assume that, for every index i
in the range from 0 (inclusive) to n (exclusive), a call f() takes
the state from one satisfying I i to one satisfying I (i+1). The
specification below asserts that, under these assumptions, after the n
calls to f(), the final state satisfies I n.
The specification takes the form:
n ≥ 0 →
Hypothesis_on_f →
triple (repeat f n)
(I 0)
(fun u ⇒ I n).
where the hypothesis of f captures the following specification:
0 ≤ i < n →
triple (f ())
(I i)
(fun u ⇒ I (i+1))
The complete specification of repeat f n appears below.
Lemma triple_repeat : ∀ (I:int→hprop) (f:val) (n:int),
n ≥ 0 →
(∀ i, 0 ≤ i < n →
triple (f ())
(I i)
(fun u ⇒ I (i+1))) →
triple (repeat f n)
(I 0)
(fun u ⇒ I n).
To establish that specification, we carry out a proof by induction over the
judgment:
triple (repeat f m)
(I (n-m))
(fun u ⇒ I n)).
which asserts that, when there remain m calls to perform, the current
state is described by I (n-m), that is, the current index i is equal
to n-m.
Proof using.
introv Hn Hf.
cuts G: (∀ m, 0 ≤ m ≤ n →
triple (repeat f m)
(I (n-m))
(fun u ⇒ I n)).
{ applys_eq G. { fequals. math. } { math. } }
intros m. induction_wf IH: (downto 0) m. intros Hm.
xwp. xapp. xif; intros C.
{ xapp. { math. } xapp. xapp. { math. } { math. }
math_rewrite ((n - m) + 1 = n - (m - 1)). xsimpl. }
{ xval. math_rewrite (n - m = n). xsimpl. }
Qed.
Specification of an Iterator on Mutable Lists
let rec miter f p =
  if p ≠ null then (f p.head; miter f p.tail)
Definition miter : val :=
<{ fix 'g 'f 'p ⇒
let 'b = ('p ≠ null) in
if 'b then
let 'x = 'p`.head in
'f 'x;
let 'q = 'p`.tail in
'g 'f 'q
end }>.
The specification of miter follows the same structure as that of the
function repeat from the previous section, with two main differences.
The first difference is that the invariant is expressed not in terms of
an index i ranging from 0 to n, but in terms of a prefix of the
list L being traversed, ranging from nil to the full list L.
The second difference is that the operation miter f p requires in its
precondition, in addition to I nil, the description of the mutable list
MList L p. This predicate is returned in the postcondition, unchanged,
reflecting the fact that the iteration does not alter the list.
Exercise: 5 stars, standard, especially useful (triple_miter)
Prove the correctness of triple_miter.
Lemma triple_miter : ∀ (I:list val→hprop) L (f:val) p,
(∀ x L1 L2, L = L1++x::L2 →
triple (f x)
(I L1)
(fun u ⇒ I (L1++(x::nil)))) →
triple (miter f p)
(MList L p \* I nil)
(fun u ⇒ MList L p \* I L).
Proof using. (* FILL IN HERE *) Admitted.
☐
Computing the Length of a Mutable List using an Iterator
let mlength_using_miter p =
  let c = ref 0 in
  miter (fun x → incr c) p;
  !c
Exercise: 4 stars, standard, especially useful (triple_mlength_using_miter)
Prove the correctness of mlength_using_miter. Hint: use xfun. intros f Hf for reasoning about the function definition, then use xapp for reasoning about a call to f by inlining its code.
Definition mlength_using_miter : val :=
<{ fun 'p ⇒
let 'c = ref 0 in
let 'f = (fun_ 'x ⇒ incr 'c) in
miter 'f 'p;
get_and_free 'c }>.
Lemma triple_mlength_using_miter : ∀ p L,
triple (mlength_using_miter p)
(MList L p)
(fun r ⇒ \[r = length L] \* MList L p).
Proof using. (* FILL IN HERE *) Admitted.
☐
A Continuation-Passing-Style, In-Place Concatenation Function
let rec cps_append_aux p1 p2 k =
  if p1 == null
    then k p2
    else cps_append_aux p1.tail p2 (fun r → (p1.tail <- r); k p1)
let cps_append p1 p2 =
  cps_append_aux p1 p2 (fun r → r)
Definition cps_append_aux : val :=
<{ fix 'f 'p1 'p2 'k ⇒
let 'b = ('p1 = null) in
if 'b
then 'k 'p2
else
let 'q1 = 'p1`.tail in
let 'k2 = (fun_ 'r ⇒ ('p1`.tail := 'r; 'k 'p1)) in
'f 'q1 'p2 'k2 }>.
Definition cps_append : val :=
<{ fun 'p1 'p2 ⇒
let 'f = (fun_ 'r ⇒ 'r) in
cps_append_aux 'p1 'p2 'f }>.
Exercise: 5 stars, standard, optional (triple_cps_append)
Specify and verify cps_append_aux, then verify cps_append.
(* FILL IN HERE *)
The main function cps_append simply invokes cps_append_aux
with a trivial continuation.
Lemma triple_cps_append : ∀ (L1 L2:list val) (p1 p2:loc),
triple (cps_append p1 p2)
(MList L1 p1 \* MList L2 p2)
(funloc p3 ⇒ MList (L1++L2) p3).
Proof using. (* FILL IN HERE *) Admitted.
☐
Historical Notes
Hprop: Heap Predicates
Set Implicit Arguments.
From SLF Require Export LibSepReference.
Import ProgramSyntax.
Tweak to simplify the use of definitions and lemmas from LibSepFmap.v.
Arguments Fmap.single {A} {B}.
Arguments Fmap.union {A} {B}.
Arguments Fmap.disjoint {A} {B}.
Arguments Fmap.union_comm_of_disjoint {A} {B}.
Arguments Fmap.union_empty_l {A} {B}.
Arguments Fmap.union_empty_r {A} {B}.
Arguments Fmap.union_assoc {A} {B}.
Arguments Fmap.disjoint_empty_l {A} {B}.
Arguments Fmap.disjoint_empty_r {A} {B}.
Arguments Fmap.disjoint_union_eq_l {A} {B}.
Arguments Fmap.disjoint_union_eq_r {A} {B}.
First Pass
- A location has type loc.
- A value has type val.
- A state has type state.
- H denotes a heap predicate of type hprop; it describes a piece of state,
- Q denotes a postcondition, of type val→hprop; it describes both a result value and a piece of state. Observe that val→hprop is equivalent to val→state→Prop.
- \[] denotes the empty heap predicate,
- \[P] denotes a pure fact,
- p ~~> v denotes a singleton heap,
- H1 \* H2 denotes the separating conjunction,
- Q1 \*+ H2 denotes the separating conjunction, between a postcondition and a heap predicate,
- \∃ x, H denotes an existential.
- a Hoare triple, written hoare t H Q, features a precondition H and a postcondition Q that describe the whole memory state in which the execution of the term t takes place.
- a Separation Logic triple, written triple t H Q, features a pre- and a postcondition that describes only the piece of the memory state in which the execution of the term t takes place.
- applys is an enhanced version of eapply.
- applys_eq is a variant of applys that enables matching the arguments of the predicate that appears in the goal "up to equality" rather than "up to conversion".
- specializes is an enhanced version of specialize.
- lets and forwards are forward-chaining tactics that enable instantiating a lemma.
Syntax and Semantics
- a type of terms, written trm,
- a type of values, written val,
- a type of states, written state (i.e., a finite map from loc to val),
- a big-step evaluation judgment, written eval s1 t s2 v, asserting that,
starting from state s1, the evaluation of the term t terminates in
the state s2 and produces the value v.
Check eval : state → trm → state → val → Prop.
Definition example_val : val :=
val_fun "x" (trm_if (trm_var "x") (trm_val (val_int 0)) (trm_val (val_int 1))).
val_fun "x" (trm_if (trm_var "x") (trm_val (val_int 0)) (trm_val (val_int 1))).
Thanks to a set of coercions and notation, this term can be written in a
somewhat more readable way, as follows.
Definition example_val' : trm :=
<{ fun "x" ⇒
if "x" then 0 else 1 }>.
<{ fun "x" ⇒
if "x" then 0 else 1 }>.
Description of the State
Definition state : Type := fmap loc val.
By convention, we use the type state to describe a full state of memory,
and introduce the type heap to describe just a piece of state.
Definition heap : Type := state.
In particular, the library LibSepFmap.v, whose module name is abbreviated
as Fmap, exports the following definitions.
The types of these definitions are as follows.
Check Fmap.empty : heap.
Check Fmap.single : loc → val → heap.
Check Fmap.union : heap → heap → heap.
Check Fmap.disjoint : heap → heap → Prop.
Note that the union operation is commutative only when its two arguments
have disjoint domains. Throughout the Separation Logic course, we will
only consider disjoint unions, for which commutativity holds.
- Fmap.empty denotes the empty state,
- Fmap.single p v denotes a singleton state, that is, a unique cell at location p with contents v,
- Fmap.union h1 h2 denotes the union of the two states h1 and h2.
- Fmap.disjoint h1 h2 asserts that h1 and h2 have disjoint domains.
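As a side note on the commutativity remark above, the disjointness-conditioned
commutativity property is available through the lemma
Fmap.union_comm_of_disjoint, whose implicit arguments were tweaked earlier.
The following check is a sketch, assuming the statement of that lemma as
exported by LibSepFmap.v.
Check Fmap.union_comm_of_disjoint : ∀ (h1 h2:heap),
  Fmap.disjoint h1 h2 →
  Fmap.union h1 h2 = Fmap.union h2 h1.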
Heap Predicates
Definition hprop := heap → Prop.
Thereafter, let H range over heap predicates.
Implicit Type H : hprop.
An essential aspect of Separation Logic is that all heap predicates
defined and used in practice are built using a few basic combinators.
In other words, except for the definition of the combinators themselves,
we will never define a custom heap predicate directly as a function
of the state.
We next describe the most important combinators of Separation Logic.
The hempty predicate, usually written \[], characterizes an empty
state.
Definition hempty : hprop :=
fun (h:heap) ⇒ (h = Fmap.empty).
Notation "\[]" := (hempty) (at level 0).
The pure fact predicate, written \[P], characterizes an empty state
and moreover asserts that the proposition P is true.
Definition hpure (P:Prop) : hprop :=
fun (h:heap) ⇒ (h = Fmap.empty) ∧ P.
Notation "\[ P ]" := (hpure P) (at level 0, format "\[ P ]").
The singleton heap predicate, written p ~~> v, characterizes a
state with a single allocated cell, at location p, storing the
value v.
Definition hsingle (p:loc) (v:val) : hprop :=
fun (h:heap) ⇒ (h = Fmap.single p v).
Notation "p '~~>' v" := (hsingle p v) (at level 32).
The "separating conjunction", written H1 \* H2, characterizes a
state that can be decomposed in two disjoint parts, one that
satisfies H1, and another that satisfies H2.
In the definition below, the two parts are named h1 and h2.
Definition hstar (H1 H2 : hprop) : hprop :=
fun (h:heap) ⇒ ∃ h1 h2, H1 h1
∧ H2 h2
∧ Fmap.disjoint h1 h2
∧ h = Fmap.union h1 h2.
Notation "H1 '\*' H2" := (hstar H1 H2) (at level 41, right associativity).
The existential quantifier for heap predicates, written \∃ x, H
characterizes a heap that satisfies H for some x.
The variable x has type A, for some arbitrary type A.
The notation \∃ x, H stands for hexists (fun x ⇒ H).
The generalized notation \∃ x1 ... xn, H is also available.
The definition of hexists is a bit technical. It is not essential
to master it at this point. Additional explanations are provided
near the end of this chapter.
Definition hexists (A:Type) (J:A→hprop) : hprop :=
fun (h:heap) ⇒ ∃ x, J x h.
Notation "'\exists' x1 .. xn , H" :=
(hexists (fun x1 ⇒ .. (hexists (fun xn ⇒ H)) ..))
(at level 39, x1 binder, H at level 50, right associativity,
format "'[' '\exists' '/ ' x1 .. xn , '/ ' H ']'").
All the definitions above are eventually turned Opaque, after
the appropriate introduction and elimination lemmas are
established for them. Thus, at some point it is no longer
possible to invoke, say, unfold hstar. Opacity ensures that
all the proofs that are constructed do not depend on the
details of how these definitions of heap predicates are set up.
Extensionality for Heap Predicates
Parameter hstar_assoc : ∀ H1 H2 H3,
(H1 \* H2) \* H3 = H1 \* (H2 \* H3).
How can we prove a goal of the form H1 = H2 when H1 and H2
have type hprop, that is, heap→Prop?
Intuitively, H1 and H2 are equal if and only if they
characterize exactly the same set of heaps, that is, if
∀ (h:heap), H1 h ↔ H2 h.
This reasoning principle, a specific form of extensionality
property, is not available by default in Coq. But we can safely
assume it if we extend the logic of Coq with a standard axiom
called "predicate extensionality".
Axiom predicate_extensionality : ∀ (A:Type) (P Q:A→Prop),
(∀ x, P x ↔ Q x) →
P = Q.
By specializing P and Q above to the type hprop, we obtain
exactly the desired extensionality principle.
Lemma hprop_eq : ∀ (H1 H2:hprop),
(∀ (h:heap), H1 h ↔ H2 h) →
H1 = H2.
Proof using. applys predicate_extensionality. Qed.
Type and Syntax for Postconditions
Implicit Type Q : val → hprop.
One common operation is to augment a postcondition with a piece of state.
This operation is described by the operator Q \*+ H, which is just a
convenient notation for fun x ⇒ (Q x \* H).
Notation "Q \*+ H" := (fun x ⇒ hstar (Q x) H) (at level 40).
Intuitively, in order to prove that two postconditions Q1
and Q2 are equal, it suffices to show that the heap predicates
Q1 v and Q2 v (both of type hprop) are equal for any value v.
Again, the extensionality property that we need is not built into Coq.
We need the axiom called "functional extensionality", stated next.
Axiom functional_extensionality : ∀ A B (f g:A→B),
(∀ x, f x = g x) →
f = g.
The desired equality property for postconditions follows directly
from that axiom.
Lemma qprop_eq : ∀ (Q1 Q2:val→hprop),
(∀ (v:val), Q1 v = Q2 v) →
Q1 = Q2.
Proof using. applys functional_extensionality. Qed.
Separation Logic Triples and the Frame Rule
Definition hoare (t:trm) (H:hprop) (Q:val→hprop) : Prop :=
∀ (s:state), H s →
∃ (s':state) (v:val), eval s t s' v ∧ Q v s'.
Note that Q has type val→hprop, thus Q v has type hprop.
Recall that hprop = heap→Prop. Thus Q v s' has type Prop.
The frame rule asserts that if one can derive a specification of
the form triple t H Q for a term t, then one should be able
to automatically derive triple t (H \* H') (Q \*+ H') for any H'.
Intuitively, if t executes safely in a heap H, it should behave
similarly in any extension of H with a disjoint part H'. Moreover,
its evaluation should leave this piece of state H' unmodified
throughout the execution of t.
The following definition of a Separation Logic triple builds upon
that of a Hoare triple by "baking in" the frame rule.
Definition triple (t:trm) (H:hprop) (Q:val→hprop) : Prop :=
∀ (H':hprop), hoare t (H \* H') (Q \*+ H').
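As a small sanity check (a sketch, not part of the original development),
instantiating the quantified frame H' with the empty heap predicate relates
a Separation Logic triple back to a Hoare triple.
Lemma triple_to_hoare : ∀ t H Q,
  triple t H Q →
  hoare t (H \* \[]) (Q \*+ \[]).
Proof using. introv M. applys M. Qed.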
This definition inherently satisfies the frame rule, as we show
below. The proof only exploits the associativity of the star
operator.
Lemma triple_frame : ∀ t H Q H',
triple t H Q →
triple t (H \* H') (Q \*+ H').
Proof using.
introv M. unfold triple in *. rename H' into H1. intros H2.
specializes M (H1 \* H2).
(* M matches the goal up to rewriting for associativity. *)
applys_eq M.
{ rewrite hstar_assoc. auto. }
{ applys functional_extensionality. intros v. rewrite hstar_assoc. auto. }
Qed.
Example of a Triple: the Increment Function
Parameter incr : val.
An application of this function, written incr p, is technically
a term of the form trm_app (trm_val incr) (trm_val (val_loc p)),
where trm_val injects values in the grammar of terms, and
val_loc injects locations in the grammar of values.
The abbreviation incr p parses correctly because trm_app,
trm_val, and val_loc are registered as coercions. Let us
check this claim with Coq.
Lemma incr_applied : ∀ (p:loc) (n:int),
trm_app (trm_val incr) (trm_val (val_loc p))
= incr p.
Proof using. reflexivity. Qed.
The operation incr p is specified using a triple as shown
below.
Parameter triple_incr : ∀ (p:loc) (n:int),
triple (trm_app incr p)
(p ~~> n)
(fun r ⇒ p ~~> (n+1)).
Example Applications of the Frame Rule
Lemma triple_incr_2 : ∀ (p q:loc) (n m:int),
triple (incr p)
((p ~~> n) \* (q ~~> m))
(fun _ ⇒ (p ~~> (n+1)) \* (q ~~> m)).
The above specification lemma is derivable from the specification
lemma triple_incr by applying the frame rule to augment
both the precondition and the postcondition with q ~~> m.
Proof using.
intros. lets M: triple_incr p n.
lets N: triple_frame (q ~~> m) M. apply N.
Qed.
Here, we have framed on q ~~> m, but we could similarly
frame on any heap predicate H, as captured by the following
specification lemma.
Parameter triple_incr_3 : ∀ (p:loc) (n:int) (H:hprop),
triple (incr p)
((p ~~> n) \* H)
(fun _ ⇒ (p ~~> (n+1)) \* H).
Remark: in practice, we always prefer writing
"small-footprint specifications", such as triple_incr,
that describe the minimal piece of state necessary for
the function to execute. Indeed, other specifications that
describe a larger piece of state can be derived by application
of the frame rule.
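As an illustration of this remark, the statement of triple_incr_3 can be
derived from the small-footprint specification triple_incr by framing on an
arbitrary H, following exactly the same pattern as the proof of triple_incr_2
above. The lemma below is a sketch, stated under a fresh name to avoid
clashing with the Parameter triple_incr_3.
Lemma triple_incr_3' : ∀ (p:loc) (n:int) (H:hprop),
  triple (incr p)
    ((p ~~> n) \* H)
    (fun _ ⇒ (p ~~> (n+1)) \* H).
Proof using.
  intros. lets M: triple_incr p n.
  lets N: triple_frame H M. apply N.
Qed.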
Power of the Frame Rule with Respect to Allocation
Parameter triple_ref : ∀ (v:val),
triple (val_ref v)
\[]
(funloc p ⇒ p ~~> v).
Applying the frame rule to the above specification, and to
another memory cell, say l' ~~> v', we obtain:
Parameter triple_ref_with_frame : ∀ (l':loc) (v':val) (v:val),
triple (val_ref v)
(l' ~~> v')
(funloc p ⇒ p ~~> v \* l' ~~> v').
This derived specification captures the fact that the newly
allocated cell at address p is distinct from the previously
allocated cell at address l'.
More generally, through the frame rule, we can derive that
any piece of freshly allocated data is distinct from any
piece of previously existing data.
This independence principle is extremely powerful. It is an
inherent strength of Separation Logic.
Notation for Heap Union
Notation "h1 \u h2" := (Fmap.union h1 h2) (at level 37, right associativity).
Introduction and Inversion Lemmas for Basic Operators
Implicit Types P : Prop.
Implicit Types v : val.
The introduction lemmas show how to prove goals of the form H h,
for various forms of the heap predicate H.
Lemma hempty_intro :
\[] Fmap.empty.
Proof using. hnf. auto. Qed.
Lemma hpure_intro : ∀ P,
P →
\[P] Fmap.empty.
Proof using. introv M. hnf. auto. Qed.
Lemma hsingle_intro : ∀ p v,
(p ~~> v) (Fmap.single p v).
Proof using. intros. hnf. auto. Qed.
Lemma hstar_intro : ∀ H1 H2 h1 h2,
H1 h1 →
H2 h2 →
Fmap.disjoint h1 h2 →
(H1 \* H2) (h1 \u h2).
Proof using. intros. ∃* h1 h2. Qed.
Lemma hexists_intro : ∀ A (x:A) (J:A→hprop) h,
J x h →
(\∃ x, J x) h.
Proof using. introv M. hnf. eauto. Qed.
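As a small usage sketch (not part of the original development), the
introduction lemmas combine to show that an explicit heap, built as the union
of a singleton heap and the empty heap, satisfies the corresponding star.
Lemma demo_single_star_hempty : ∀ p v,
  ((p ~~> v) \* \[]) (Fmap.single p v \u Fmap.empty).
Proof using.
  intros. applys hstar_intro.
  { applys hsingle_intro. }
  { applys hempty_intro. }
  { applys Fmap.disjoint_empty_r. }
Qed.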
The inversion lemmas show how to extract information from hypotheses
of the form H h, for various forms of the heap predicate H.
Lemma hempty_inv : ∀ h,
\[] h →
h = Fmap.empty.
Proof using. introv M. hnf in M. auto. Qed.
Lemma hpure_inv : ∀ P h,
\[P] h →
P ∧ h = Fmap.empty.
Proof using. introv M. hnf in M. autos*. Qed.
Lemma hsingle_inv: ∀ p v h,
(p ~~> v) h →
h = Fmap.single p v.
Proof using. introv M. hnf in M. auto. Qed.
Lemma hstar_inv : ∀ H1 H2 h,
(H1 \* H2) h →
∃ h1 h2, H1 h1 ∧ H2 h2 ∧ Fmap.disjoint h1 h2 ∧ h = h1 \u h2.
Proof using. introv M. hnf in M. eauto. Qed.
Lemma hexists_inv : ∀ A (J:A→hprop) h,
(\∃ x, J x) h →
∃ x, J x h.
Proof using. introv M. hnf in M. eauto. Qed.
Exercise: 4 stars, standard, especially useful (hstar_hpure_l)
Prove the lemma hstar_hpure_l, stated further below: a heap h satisfies \[P] \* H if and only if P holds and h satisfies H. The following two lemmas on finite maps may be useful.
Check Fmap.union_empty_l : ∀ h,
Fmap.empty \u h = h.
Check Fmap.disjoint_empty_l : ∀ h,
Fmap.disjoint Fmap.empty h.
Lemma hstar_hpure_l : ∀ P H h,
(\[P] \* H) h = (P ∧ H h).
Proof using. (* FILL IN HERE *) Admitted.
☐
Alternative, Equivalent Definitions for Separation Logic Triples
Definition triple_lowlevel (t:trm) (H:hprop) (Q:val→hprop) : Prop :=
∀ h1 h2,
Fmap.disjoint h1 h2 →
H h1 →
∃ h1' v,
Fmap.disjoint h1' h2
∧ eval (h1 \u h2) t (h1' \u h2) v
∧ Q v h1'.
Let us establish the equivalence between this alternative
definition of triple and the original one.
Exercise: 4 stars, standard, especially useful (triple_iff_triple_lowlevel)
Prove the equivalence between triple and triple_lowlevel. Warning: this is probably a very challenging exercise.
Lemma triple_iff_triple_lowlevel : ∀ t H Q,
triple t H Q ↔ triple_lowlevel t H Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
Alternative Definitions for Heap Predicates
Lemma hempty_eq_hpure_true :
\[] = \[True].
Proof using.
unfold hempty, hpure. apply hprop_eq. intros h. iff Hh.
{ auto. }
{ jauto. }
Qed.
Thus, hempty could be defined in terms of hpure, as hpure True,
written \[True].
Definition hempty' : hprop :=
\[True].
The pure fact predicate \[P] is equivalent to the existential
quantification over a proof of P in the empty heap, that is,
to the heap predicate \∃ (p:P), \[].
Lemma hpure_eq_hexists_proof : ∀ P,
\[P] = (\∃ (p:P), \[]).
Proof using.
unfold hempty, hpure, hexists. intros P.
apply hprop_eq. intros h. iff Hh.
{ destruct Hh as (E&p). ∃ p. auto. }
{ destruct Hh as (p&E). auto. }
Qed.
Thus, hpure could be defined in terms of hexists and hempty, as
hexists (fun (p:P) ⇒ hempty), also written \∃ (p:P), \[].
Definition hpure' (P:Prop) : hprop :=
\∃ (p:P), \[].
It is useful to minimize the number of combinators, both for elegance
and to reduce the proof effort.
Since we cannot do without hexists, we have a choice between
considering either hpure or hempty as primitive, and the other
one as derived. The predicate hempty is simpler and appears
more fundamental.
Hence, in the subsequent chapters (and in the CFML tool),
we define hpure in terms of hexists and hempty,
like in the definition of hpure' shown above.
In other words, we assume the definition:
Definition hpure (P:Prop) : hprop :=
\∃ (p:P), \[].
Additional Explanations for the Definition of \∃
Parameter (p:loc).
Check (\∃ (n:int), p ~~> (val_int n)) : hprop.
Check ex : ∀ A : Type, (A → Prop) → Prop.
Check hexists : ∀ A : Type, (A → hprop) → hprop.
Notation "'exists' x .. y , p" := (ex (fun x ⇒ .. (ex (fun y ⇒ p)) ..))
(at level 200, x binder, right associativity,
format "'[' 'exists' '/ ' x .. y , '/ ' p ']'").
Module Extensionality.
To establish extensionality of entailment, we have used
the predicate extensionality axiom. In fact, this axiom
can be derived by combining the axiom of "functional extensionality"
with another one called "propositional extensionality".
The axiom of "propositional extensionality" asserts that
two propositions that are logically equivalent (in the sense
that they imply each other) can be considered equal.
Axiom propositional_extensionality : ∀ (P Q:Prop),
(P ↔ Q) →
P = Q.
The axiom of "functional extensionality" asserts that
two functions are equal if they provide equal result
for every argument.
Axiom functional_extensionality : ∀ A B (f g:A→B),
(∀ x, f x = g x) →
f = g.
Exercise: 1 star, standard, especially useful (predicate_extensionality_derived)
Using the two axioms propositional_extensionality and functional_extensionality, show how to derive predicate_extensionality.
Lemma predicate_extensionality_derived : ∀ A (P Q:A→Prop),
(∀ x, P x ↔ Q x) →
P = Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
End Extensionality.
Historical Notes
Himpl: Heap Entailment
Set Implicit Arguments.
From SLF Require LibSepReference.
From SLF Require Export Hprop.
Implicit Types
Implicit Types P : Prop.
Implicit Types H : hprop.
Implicit Types Q : val→hprop.
First Pass
Lemma triple_conseq : ∀ t H Q H' Q',
triple t H' Q' →
H ==> H' →
Q' ===> Q →
triple t H Q.
- the formal definition of the entailment relations,
- the fundamental properties of the Separation Logic operators: these properties are expressed either as entailments, or as equalities, which denote symmetric entailments,
- the 4 structural rules of Separation Logic: the rule of consequence, the frame rule (which can be combined with the rule of consequence), and the extraction rules for pure facts and for quantifiers,
- the tactics xsimpl and xchange that are critically useful for manipulating entailments in practice,
- (optional) details on how to prove the fundamental properties and the structural rules.
Definition of Entailment
Definition himpl (H1 H2:hprop) : Prop :=
∀ (h:heap), H1 h → H2 h.
Notation "H1 ==> H2" := (himpl H1 H2) (at level 55).
H1 ==> H2 captures the fact that H1 is a stronger precondition
than H2, in the sense that it is more restrictive.
As we show next, the entailment relation is reflexive, transitive,
and antisymmetric. It thus forms an order relation on heap predicates.
Lemma himpl_refl : ∀ H,
H ==> H.
Proof using. intros h. hnf. auto. Qed.
Lemma himpl_trans : ∀ H2 H1 H3,
(H1 ==> H2) →
(H2 ==> H3) →
(H1 ==> H3).
Proof using. introv M1 M2. intros h H1h. eauto. Qed.
Exercise: 1 star, standard, especially useful (himpl_antisym)
Prove the antisymmetry of entailment result shown below using extensionality for heap predicates, as captured by lemma predicate_extensionality (or lemma hprop_eq) introduced in the previous chapter (Hprop).
Lemma himpl_antisym : ∀ H1 H2,
(H1 ==> H2) →
(H2 ==> H1) →
H1 = H2.
Proof using. (* FILL IN HERE *) Admitted.
☐
Entailment for Postconditions
Definition qimpl (Q1 Q2:val→hprop) : Prop :=
∀ (v:val), Q1 v ==> Q2 v.
Notation "Q1 ===> Q2" := (qimpl Q1 Q2) (at level 55).
Remark: equivalently, Q1 ===> Q2 holds if for any value v and
any heap h, the proposition Q1 v h implies Q2 v h.
Entailment on postconditions also forms an order relation:
it is reflexive, transitive, and antisymmetric.
Lemma qimpl_refl : ∀ Q,
Q ===> Q.
Proof using. intros Q v. applys himpl_refl. Qed.
Lemma qimpl_trans : ∀ Q2 Q1 Q3,
(Q1 ===> Q2) →
(Q2 ===> Q3) →
(Q1 ===> Q3).
Proof using. introv M1 M2. intros v. applys himpl_trans; eauto. Qed.
Lemma qimpl_antisym : ∀ Q1 Q2,
(Q1 ===> Q2) →
(Q2 ===> Q1) →
(Q1 = Q2).
Proof using.
introv M1 M2. apply functional_extensionality.
intros v. applys himpl_antisym; eauto.
Qed.
Fundamental Properties of Separation Logic Operators
(1) The star operator is associative.
Parameter hstar_assoc : ∀ H1 H2 H3,
(H1 \* H2) \* H3 = H1 \* (H2 \* H3).
(2) The star operator is commutative.
Parameter hstar_comm : ∀ H1 H2,
H1 \* H2 = H2 \* H1.
(3) The empty heap predicate is a neutral for the star.
Because star is commutative, it is equivalent to state that
hempty is a left or a right neutral for hstar.
We chose, arbitrarily, to state the left-neutral property.
Parameter hstar_hempty_l : ∀ H,
\[] \* H = H.
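For instance, the right-neutral property follows from commutativity together
with the left-neutral property. The derivation below is a sketch, stated
under a primed name; the actual development also provides such a lemma.
Lemma hstar_hempty_r' : ∀ H,
  H \* \[] = H.
Proof using. intros. rewrite hstar_comm. apply hstar_hempty_l. Qed.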
(4) Existentials can be "extruded" out of stars, that is:
(\∃ x, J x) \* H is equivalent to \∃ x, (J x \* H).
Parameter hstar_hexists : ∀ A (J:A→hprop) H,
(\∃ x, J x) \* H = \∃ x, (J x \* H).
(5) The star operator is "monotone" with respect to entailment, meaning
that if H1 ==> H1' then (H1 \* H2) ==> (H1' \* H2).
Viewed the other way around, if we have to prove the entailment relation
(H1 \* H2) ==> (H1' \* H2), we can "cancel out" H2 on both sides.
In this view, the monotonicity property is a sort of "frame rule for
the entailment relation".
Here again, due to commutativity of star, it only suffices to state
the left version of the monotonicity property.
Parameter himpl_frame_l : ∀ H2 H1 H1',
H1 ==> H1' →
(H1 \* H2) ==> (H1' \* H2).
Exercise: 1 star, standard, especially useful (hstar_comm_assoc)
The commutativity and associativity results combine into one result that is sometimes convenient to exploit in proofs.
Lemma hstar_comm_assoc : ∀ H1 H2 H3,
H1 \* H2 \* H3 = H2 \* H1 \* H3.
Proof using. (* FILL IN HERE *) Admitted.
☐
Exercise: 1 star, standard, especially useful (himpl_frame_r)
The monotonicity property of the star operator w.r.t. entailment can also be stated in a symmetric fashion, as shown next. Prove this result. Hint: exploit the transitivity of entailment (himpl_trans) and the asymmetric monotonicity result (himpl_frame_l).
Lemma himpl_frame_r : ∀ H1 H2 H2',
H2 ==> H2' →
(H1 \* H2) ==> (H1 \* H2').
Proof using. (* FILL IN HERE *) Admitted.
☐
Exercise: 1 star, standard, especially useful (himpl_frame_lr)
The monotonicity property of the star operator w.r.t. entailment can also be exploited to weaken both sides of a star at once, as shown next. Prove this result. Hint: exploit the transitivity of entailment (himpl_trans) together with himpl_frame_l and himpl_frame_r.
Lemma himpl_frame_lr : ∀ H1 H1' H2 H2',
H1 ==> H1' →
H2 ==> H2' →
(H1 \* H2) ==> (H1' \* H2').
Proof using. (* FILL IN HERE *) Admitted.
☐
Introduction and Elimination Rules w.r.t. Entailments
Exercise: 2 stars, standard, especially useful (himpl_hstar_hpure_r).
Prove the rule himpl_hstar_hpure_r. Hint: recall from Hprop the lemma hstar_hpure_l, which asserts the equality (\[P] \* H) h = (P ∧ H h).
Lemma himpl_hstar_hpure_r : ∀ P H H',
P →
(H ==> H') →
H ==> (\[P] \* H').
Proof using. (* FILL IN HERE *) Admitted.
☐
Exercise: 2 stars, standard, especially useful (himpl_hstar_hpure_l).
Prove the rule himpl_hstar_hpure_l.
Lemma himpl_hstar_hpure_l : ∀ (P:Prop) (H H':hprop),
(P → H ==> H') →
(\[P] \* H) ==> H'.
Proof using. (* FILL IN HERE *) Admitted.
☐
Exercise: 2 stars, standard, especially useful (himpl_hexists_r).
Prove the rule himpl_hexists_r.
Lemma himpl_hexists_r : ∀ A (x:A) H J,
(H ==> J x) →
H ==> (\∃ x, J x).
Proof using. (* FILL IN HERE *) Admitted.
☐
Exercise: 2 stars, standard, especially useful (himpl_hexists_l).
Prove the rule himpl_hexists_l.
Lemma himpl_hexists_l : ∀ (A:Type) (H:hprop) (J:A→hprop),
(∀ x, J x ==> H) →
(\∃ x, J x) ==> H.
Proof using. (* FILL IN HERE *) Admitted.
☐
Extracting Information from Heap Predicates
Lemma hstar_hsingle_same_loc : ∀ (p:loc) (v1 v2:val),
(p ~~> v1) \* (p ~~> v2) ==> \[False].
The proof of this result exploits a result on finite maps.
Essentially, the domain of a singleton map that binds
a location p to some value is the singleton set \{p}, thus
such a map cannot be disjoint from another singleton
map that binds the same location p.
Check disjoint_single_single_same_inv : ∀ (p:loc) (v1 v2:val),
Fmap.disjoint (Fmap.single p v1) (Fmap.single p v2) →
False.
Using this lemma, we can prove hstar_hsingle_same_loc
by unfolding the definition of hstar to reveal the
contradiction on the disjointness assumption.
Proof using.
intros. unfold hsingle. intros h (h1&h2&E1&E2&D&E). false.
subst. applys Fmap.disjoint_single_single_same_inv D.
Qed.
More generally, a heap predicate of the form H \* H is
suspicious in Separation Logic. In (the simplest variant of) Separation
Logic, such a predicate can only be satisfied if H covers no memory
cell at all, that is, if H is a pure predicate of the form \[P]
for some proposition P.
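For instance, as a small sketch exploiting hstar_hsingle_same_loc stated
above, a duplicated singleton cell predicate is already contradictory even
when both copies mention the same value.
Lemma hstar_hsingle_same_loc_same_val : ∀ (p:loc) (v:val),
  (p ~~> v) \* (p ~~> v) ==> \[False].
Proof using. intros. applys hstar_hsingle_same_loc. Qed.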
Consequence, Frame, and their Combination
Parameter triple_conseq : ∀ t H Q H' Q',
triple t H' Q' →
H ==> H' →
Q' ===> Q →
triple t H Q.
Recall the frame rule introduced in the previous chapter.
Parameter triple_frame : ∀ t H Q H',
triple t H Q →
triple t (H \* H') (Q \*+ H').
Observe that, stated as such, it is very unlikely for the frame rule
to be applicable in practice, because the precondition must be exactly
of the form H \* H' and the postcondition exactly of the form
Q \*+ H', for some H'. For example, the frame rule would not apply
to a proof obligation of the form triple t (H' \* H) (Q \*+ H'),
simply because H' \* H does not match H \* H'.
This limitation of the frame rule can be addressed by combining the frame
rule with the rule of consequence into a single rule, called the
"consequence-frame rule". This rule, shown below, enables deriving
a triple from another triple, without any restriction on the exact
shape of the precondition and postcondition of the two triples involved.
Lemma triple_conseq_frame : ∀ H2 H1 Q1 t H Q,
triple t H1 Q1 →
H ==> H1 \* H2 →
Q1 \*+ H2 ===> Q →
triple t H Q.
Exercise: 1 star, standard, especially useful (triple_conseq_frame)
Prove the combined consequence-frame rule.
Proof using. (* FILL IN HERE *) Admitted.
☐
The Extraction Rules for Triples
Parameter triple_hpure : ∀ t (P:Prop) H Q,
(P → triple t H Q) →
triple t (\[P] \* H) Q.
A judgment of the form triple t (\∃ x, J x) Q is equivalent
to ∀ x, triple t (J x) Q. This structural rule is called
triple_hexists and formalized as shown below. It captures the
extraction of an existential quantifier out of the precondition
of a triple, in a similar way as himpl_hexists_l for entailments.
Parameter triple_hexists : ∀ t (A:Type) (J:A→hprop) Q,
(∀ x, triple t (J x) Q) →
triple t (\∃ x, J x) Q.
Module XsimplTactic.
Import LibSepReference.
Notation "'hprop''" := (Hprop.hprop).
Module CaseStudy.
Implicit Types p q : loc.
Implicit Types n m : int.
End CaseStudy.
Module EntailmentProofs.
Implicit Types p : loc.
Implicit Types n : int.
Proving an entailment by hand is generally a tedious task. This is why
most Separation Logic based frameworks include an automated tactic for
simplifying entailments. In this course, the relevant tactic is named
xsimpl. Further in this chapter, we describe by means of examples the
behavior of this tactic. But in order to best appreciate what the tactic
provides and best understand how it works, it is very useful to complete
a few proofs by hand.
Exercise: 3 stars, standard, optional (himpl_example_1)
Prove the example entailment below. Hint: exploit hstar_comm, hstar_assoc, or hstar_comm_assoc which combines the two, and exploit himpl_frame_l or himpl_frame_r to cancel out matching pieces.
Lemma himpl_example_1 : ∀ p1 p2 p3 p4,
p1 ~~> 6 \* p2 ~~> 7 \* p3 ~~> 8 \* p4 ~~> 9
==> p4 ~~> 9 \* p3 ~~> 8 \* p2 ~~> 7 \* p1 ~~> 6.
Proof using. (* FILL IN HERE *) Admitted.
☐
Exercise: 3 stars, standard, optional (himpl_example_2)
Prove the example entailment below. Hint: use himpl_hstar_hpure_r to extract pure facts, once they appear at the head of the left-hand side of the entailment.
Lemma himpl_example_2 : ∀ p1 p2 p3 n,
p1 ~~> 6 \* \[n > 0] \* p2 ~~> 7 \* \[n < 0]
==> p3 ~~> 8.
Proof using. (* FILL IN HERE *) Admitted.
☐
Exercise: 3 stars, standard, optional (himpl_example_3)
Prove the example entailment below. Hint: use lemmas among himpl_hexists_r, himpl_hexists_l, himpl_hstar_hpure_r and himpl_hstar_hpure_l to deal with pure facts and quantifiers.
Lemma himpl_example_3 : ∀ p,
\∃ n, p ~~> n \* \[n > 0]
==> \∃ n, \[n > 1] \* p ~~> (n-1).
Proof using. (* FILL IN HERE *) Admitted.
☐
End EntailmentProofs.
The xsimpl Tactic
xsimpl to Extract Pure Facts and Quantifiers in LHS
Lemma xsimpl_demo_lhs_hpure : ∀ H1 H2 H3 H4 (n:int),
H1 \* H2 \* \[n > 0] \* H3 ==> H4.
Proof using.
intros. xsimpl. intros Hn.
Abort.
In case the LHS includes a contradiction, such as the pure fact
False, the goal gets solved immediately by xsimpl.
Lemma xsimpl_demo_lhs_hpure : ∀ H1 H2 H3 H4,
H1 \* H2 \* \[False] \* H3 ==> H4.
Proof using.
intros. xsimpl.
Qed.
The xsimpl tactic also extracts existential quantifiers from the LHS.
It turns them into universally quantified variables outside of the
entailment relation, as illustrated through the following example.
Lemma xsimpl_demo_lhs_hexists : ∀ H1 H2 H3 H4 (p:loc),
H1 \* \∃ (n:int), (p ~~> n \* H2) \* H3 ==> H4.
Proof using.
intros. xsimpl. intros n.
Abort.
A call to xsimpl, or to its degraded version xpull, extracts at once
all the pure facts and quantifiers from the LHS, as illustrated next.
Lemma xsimpl_demo_lhs_several : ∀ H1 H2 H3 H4 (p q:loc),
H1 \* \∃ (n:int), (p ~~> n \* \[n > 0] \* H2) \* \[p ≠ q] \* H3 ==> H4.
Proof using.
intros.
xsimpl. (* or xpull *)
intros n Hn Hp.
Abort.
xsimpl to Cancel Out Heap Predicates from LHS and RHS
Lemma xsimpl_demo_cancel_one : ∀ H1 H2 H3 H4 H5 H6 H7,
H1 \* H2 \* H3 \* H4 ==> H5 \* H6 \* H2 \* H7.
Proof using.
intros. xsimpl.
Abort.
xsimpl actually cancels out at once all the heap predicates that it
can spot appearing on both sides. In the example below, H2, H3,
and H4 are canceled out.
Lemma xsimpl_demo_cancel_many : ∀ H1 H2 H3 H4 H5,
H1 \* H2 \* H3 \* H4 ==> H4 \* H3 \* H5 \* H2.
Proof using.
intros. xsimpl.
Abort.
If all the pieces of heap predicate get canceled out, the remaining
proof obligation is \[] ==> \[]. In this case, xsimpl automatically
solves the goal by invoking the reflexivity property of entailment.
Lemma xsimpl_demo_cancel_all : ∀ H1 H2 H3 H4,
H1 \* H2 \* H3 \* H4 ==> H4 \* H3 \* H1 \* H2.
Proof using.
intros. xsimpl.
Qed.
xsimpl to Instantiate Pure Facts and Quantifiers in RHS
Lemma xsimpl_demo_rhs_hpure : ∀ H1 H2 H3 (n:int),
H1 ==> H2 \* \[n > 0] \* H3.
Proof using.
intros. xsimpl.
Abort.
When it encounters an existential quantifier in the RHS, the xsimpl
tactic introduces a unification variable denoted by a question mark,
that is, an "evar", in Coq terminology. In the example below, the xsimpl
tactic turns \∃ n, .. p ~~> n .. into .. p ~~> ?x ...
Lemma xsimpl_demo_rhs_hexists : ∀ H1 H2 H3 H4 (p:loc),
H1 ==> H2 \* \∃ (n:int), (p ~~> n \* H3) \* H4.
Proof using.
intros. xsimpl.
Abort.
The "evar" often gets subsequently instantiated as a result of
a cancellation step. For example, in the entailment below, xsimpl
instantiates the existentially quantified variable n as ?x,
then cancels out p ~~> ?x from the RHS against p ~~> 3 on
the LHS, thereby unifying ?x with 3.
Lemma xsimpl_demo_rhs_hexists_unify : ∀ H1 H2 H3 H4 (p:loc),
H1 \* (p ~~> 3) ==> H2 \* \∃ (n:int), (p ~~> n \* H3) \* H4.
Proof using.
intros. xsimpl.
Abort.
The instantiation of the evar ?x can be observed if there is
another occurrence of the same variable in the entailment.
In the next example, which refines the previous one, observe how
n > 0 becomes 3 > 0.
Lemma xsimpl_demo_rhs_hexists_unify_view : ∀ H1 H2 H4 (p:loc),
H1 \* (p ~~> 3) ==> H2 \* \∃ (n:int), (p ~~> n \* \[n > 0]) \* H4.
Proof using.
intros. xsimpl.
Abort.
(Advanced.) In some cases, it is desirable to provide an explicit value
for instantiating an existential quantifier that occurs in the RHS.
The xsimpl tactic accepts arguments, which will be used to instantiate
the existentials (on a first-match basis). The syntax is xsimpl v1 .. vn,
or xsimpl (>> v1 .. vn) in the case n > 3.
Lemma xsimpl_demo_rhs_hints : ∀ H1 (p q:loc),
H1 ==> \∃ (n m:int), (p ~~> n \* q ~~> m).
Proof using.
intros. xsimpl 3 4.
Abort.
(Advanced.) If two existential quantifiers quantify variables of the same
type, it is possible to provide a value for only the second quantifier
by passing as first argument to xsimpl the special value __.
The following example shows how, on a RHS of the form \∃ n m, ...,
the tactic xsimpl __ 4 instantiates m with 4 while leaving n
as an unresolved evar.
Lemma xsimpl_demo_rhs_hints_evar : ∀ H1 (p q:loc),
H1 ==> \∃ (n m:int), (p ~~> n \* q ~~> m).
Proof using.
intros. xsimpl __ 4.
Abort.
xsimpl on Entailments Between Postconditions
Lemma qimpl_example_1 : ∀ (Q1 Q2:val→hprop) (H2 H3:hprop),
Q1 \*+ H2 ===> Q2 \*+ H2 \*+ H3.
Proof using. intros. xsimpl. intros r. Abort.
Lemma himpl_example_1 : ∀ (p:loc),
p ~~> 3 ==>
\∃ (n:int), p ~~> n.
Proof using. xsimpl. Qed.
Lemma himpl_example_2 : ∀ (p q:loc),
p ~~> 3 \* q ~~> 3 ==>
\∃ (n:int), p ~~> n \* q ~~> n.
Proof using. xsimpl. Qed.
Lemma himpl_example_4 : ∀ (p:loc),
\∃ (n:int), p ~~> n ==>
\∃ (m:int), p ~~> (m + 1).
Proof using.
intros. (* observe that xsimpl here does not work well. *)
xpull. intros n. xsimpl (n-1).
replace (n-1+1) with n. { auto. } { math. }
(* variant for the last line:
applys_eq himpl_refl. fequal. math. *)
Qed.
Lemma himpl_example_5 : ∀ (H:hprop),
\[False] ==> H.
Proof using. xsimpl. Qed.
The xchange Tactic
Lemma xchange_demo_base : ∀ H1 H2 H2' H3 H4,
H2 ==> H2' →
H1 \* H2 \* H3 ==> H4.
Proof using.
introv M. xchange M.
(* Note that freshly produced items appear to the front *)
Abort.
The tactic xchange can also take equalities as arguments.
The tactic xchange M exploits the left-to-right direction of an
equality M, whereas xchange <- M exploits the right-to-left
direction.
Lemma xchange_demo_eq : ∀ H1 H2 H3 H4 H5,
H1 \* H3 = H5 →
H1 \* H2 \* H3 ==> H4.
Proof using.
introv M. xchange M.
xchange <- M.
Abort.
The tactic xchange M does accept a lemma or hypothesis M
featuring universal quantifiers, as long as its conclusion
is an equality or an entailment. In such cases, xchange M
instantiates M before attempting to perform a replacement.
Lemma xchange_demo_inst : ∀ H1 (J J':int→hprop) H3 H4,
(∀ n, J n = J' (n+1)) →
H1 \* J 3 \* H3 ==> H4.
Proof using.
introv M. xchange M.
(* Note that freshly produced items appear to the front *)
Abort.
How does the xchange tactic work? Consider a goal of the form H ==> H'
and assume xchange is invoked with an hypothesis of type H1 ==> H1'
as argument. The tactic xchange should attempt to decompose H as the
star of H1 and the rest of H, call it H2. If it succeeds, then the
goal H ==> H' can be rewritten as H1 \* H2 ==> H'. To exploit the
hypothesis H1 ==> H1', the tactic should replace the goal with the
entailment H1' \* H2 ==> H'. The lemma shown below captures this piece
of reasoning implemented by the tactic xchange.
Exercise: 2 stars, standard, especially useful (xchange_lemma)
Prove, without using the tactic xchange, the following lemma which captures the internal working of xchange.
Lemma xchange_lemma : ∀ H1 H1' H H' H2,
H1 ==> H1' →
H ==> H1 \* H2 →
H1' \* H2 ==> H' →
H ==> H'.
Proof using. (* FILL IN HERE *) Admitted.
☐
End XsimplTactic.
Module FundamentalProofs.
We next show the details of the proofs establishing the fundamental
properties of the Separation Logic operators.
Note that all these results must be proved without help of the tactic
xsimpl, because the implementation of the tactic xsimpl itself
depends on these fundamental properties.
We begin with the frame property, which is the simplest to prove.
Exercise: 1 star, standard, especially useful (himpl_frame_l)
Prove the frame property for entailment. Hint: unfold the definition of hstar.
Lemma himpl_frame_l : ∀ H2 H1 H1',
H1 ==> H1' →
(H1 \* H2) ==> (H1' \* H2).
Proof using. (* FILL IN HERE *) Admitted.
☐
Exercise: 1 star, standard, especially useful (himpl_frame_r)
Prove the lemma himpl_frame_r, symmetric to himpl_frame_l.
Lemma himpl_frame_r : ∀ H1 H2 H2',
H2 ==> H2' →
(H1 \* H2) ==> (H1 \* H2').
Proof using. (* FILL IN HERE *) Admitted.
☐
Lemma hstar_hexists : ∀ A (J:A→hprop) H,
(\∃ x, J x) \* H = \∃ x, (J x \* H).
Proof using.
intros. applys himpl_antisym.
{ intros h (h1&h2&M1&M2&D&U). destruct M1 as (x&M1). ∃* x h1 h2. }
{ intros h (x&M). destruct M as (h1&h2&M1&M2&D&U). ∃ h1 h2.
splits¬. ∃* x. }
Qed.
There remains to establish the commutativity and associativity of the star
operator, and the fact that the empty heap predicate is its neutral element.
To establish these properties, we need to exploit a few basic facts about
finite maps. We introduce them as we go along.
To prove the commutativity of the star operator, i.e. H1 \* H2 = H2 \* H1,
it is sufficient to prove the entailment in one direction, e.g.,
H1 \* H2 ==> H2 \* H1. Indeed, the other direction is symmetrical.
The symmetry argument is captured by the following lemma, which we will
exploit in the proof of hstar_comm.
Lemma hprop_op_comm : ∀ (op:hprop→hprop→hprop),
(∀ H1 H2, op H1 H2 ==> op H2 H1) →
(∀ H1 H2, op H1 H2 = op H2 H1).
Proof using. introv M. intros. applys himpl_antisym; applys M. Qed.
To prove commutativity of star, we need to exploit the fact that the union
of two finite maps with disjoint domains commutes. This fact is captured
by the following lemma.
Check Fmap.union_comm_of_disjoint : ∀ h1 h2,
Fmap.disjoint h1 h2 →
h1 \u h2 = h2 \u h1.
The commutativity result is then proved as follows. Observe the use of
the lemma hprop_op_comm, which allows establishing the entailment in
only one of the two directions.
Lemma hstar_comm : ∀ H1 H2,
H1 \* H2 = H2 \* H1.
Proof using.
applys hprop_op_comm. intros. intros h (h1&h2&M1&M2&D&U). ∃ h2 h1.
subst. splits¬. { rewrite Fmap.union_comm_of_disjoint; auto. }
Qed.
Exercise: 3 stars, standard, especially useful (hstar_hempty_l)
Prove that the empty heap predicate is a neutral element for the star. You will need to exploit the fact that the union with an empty map is the identity, as captured by lemma Fmap.union_empty_l.
Check Fmap.union_empty_l : ∀ h,
Fmap.empty \u h = h.
Lemma hstar_hempty_l : ∀ H,
\[] \* H = H.
Proof using. (* FILL IN HERE *) Admitted.
☐
Lemma hstar_hempty_r : ∀ H,
H \* \[] = H.
Proof using.
intros. rewrite hstar_comm. rewrite hstar_hempty_l. auto.
Qed.
Associativity of star is the most tedious result to derive.
We need to exploit the associativity of union on finite maps,
as well as lemmas about the disjointness of a map with the
result of the union of two maps.
Check Fmap.union_assoc : ∀ h1 h2 h3,
(h1 \u h2) \u h3 = h1 \u (h2 \u h3).
Check Fmap.disjoint_union_eq_l : ∀ h1 h2 h3,
Fmap.disjoint (h2 \u h3) h1
= (Fmap.disjoint h1 h2 ∧ Fmap.disjoint h1 h3).
Check Fmap.disjoint_union_eq_r : ∀ h1 h2 h3,
Fmap.disjoint h1 (h2 \u h3)
= (Fmap.disjoint h1 h2 ∧ Fmap.disjoint h1 h3).
Exercise: 1 star, standard, optional (hstar_assoc)
Complete the right-to-left part of the proof of associativity of the star operator.
Lemma hstar_assoc : ∀ H1 H2 H3,
(H1 \* H2) \* H3 = H1 \* (H2 \* H3).
Proof using.
intros. applys himpl_antisym.
{ intros h (h'&h3&M1&M2&D&U). destruct M1 as (h1&h2&M3&M4&D'&U').
subst h'. rewrite Fmap.disjoint_union_eq_l in D.
∃ h1 (h2 \u h3). splits.
{ applys M3. }
{ ∃* h2 h3. }
{ rewrite* @Fmap.disjoint_union_eq_r. }
{ rewrite* @Fmap.union_assoc in U. } }
(* FILL IN HERE *) Admitted.
☐
End FundamentalProofs.
Module ProveConsequenceRules.
Recall the statement of the rule of consequence.
Lemma triple_conseq' : ∀ t H Q H' Q',
triple t H' Q' →
H ==> H' →
Q' ===> Q →
triple t H Q.
A direct proof of triple_conseq goes through the low-level
interpretation of Separation Logic triples in terms of heaps.
Proof using.
(* No need to follow through this low-level proof. *)
introv M WH WQ. rewrite triple_iff_triple_lowlevel in *.
intros h1 h2 D HH. forwards (v&h1'&D'&R&HQ): M D. applys WH HH.
∃ v h1'. splits¬. applys WQ HQ.
Qed.
An alternative proof consists of first establishing the consequence
rule for the hoare judgment, then deriving its generalization to
the triple judgment of Separation Logic.
Exercise: 3 stars, standard, especially useful (hoare_conseq)
Prove the consequence rule for Hoare triples.
Lemma hoare_conseq : ∀ t H Q H' Q',
hoare t H' Q' →
H ==> H' →
Q' ===> Q →
hoare t H Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
Exercise: 2 stars, standard, especially useful (triple_conseq)
Prove the consequence rule by leveraging the lemma hoare_conseq. Hint: unfold the definition of triple, apply the lemma hoare_conseq with the appropriate arguments, then exploit himpl_frame_l to prove the entailment relations.
Lemma triple_conseq : ∀ t H Q H' Q',
triple t H' Q' →
H ==> H' →
Q' ===> Q →
triple t H Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
End ProveConsequenceRules.
Module ProveExtractionRules.
Recall the extraction rule for pure facts.
Parameter triple_hpure' : ∀ t (P:Prop) H Q,
(P → triple t H Q) →
triple t (\[P] \* H) Q.
To prove this lemma, we first establish the corresponding
result on hoare, then derive its version for triple.
Lemma hoare_hpure : ∀ t (P:Prop) H Q,
(P → hoare t H Q) →
hoare t (\[P] \* H) Q.
Proof using.
introv M. intros h (h1&h2&M1&M2&D&U). destruct M1 as (M1&HP).
subst. rewrite Fmap.union_empty_l. applys M HP M2.
Qed.
Lemma triple_hpure : ∀ t (P:Prop) H Q,
(P → triple t H Q) →
triple t (\[P] \* H) Q.
Proof using.
introv M. unfold triple. intros H'.
rewrite hstar_assoc. applys hoare_hpure.
intros HP. applys M HP.
Qed.
Similarly, the extraction rule for existentials for
triple can be derived from that for hoare.
Exercise: 2 stars, standard, especially useful (triple_hexists)
Prove the extraction rule triple_hexists. Hint: start by stating and proving the corresponding lemma for hoare triples.
Lemma hoare_hexists : ∀ t (A:Type) (J:A→hprop) Q,
(∀ x, hoare t (J x) Q) →
hoare t (\∃ x, J x) Q.
Proof using. introv M. intros h (x&Hh). applys M Hh. Qed.
Lemma triple_hexists : ∀ t (A:Type) (J:A→hprop) Q,
(∀ x, triple t (J x) Q) →
triple t (\∃ x, J x) Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
Parameter hstar_hexists' : ∀ A (J:A→hprop) H,
(hexists J) \* H = hexists (J \*+ H).
Parameter triple_hexists' : ∀ t (A:Type) (J:A→hprop) Q,
(∀ x, triple t (J x) Q) →
triple t (hexists J) Q.
Remark: in chapter Hprop, we observed that \[P] can be
encoded as \∃ (p:P), \[]. When this encoding is used,
the rule triple_hpure turns out to be a particular instance
of the rule triple_hexists, as we prove next.
Parameter hpure_encoding : ∀ P,
\[P] = (\∃ (p:P), \[]).
Lemma triple_hpure_from_triple_hexists : ∀ t (P:Prop) H Q,
(P → triple t H Q) →
triple t (\[P] \* H) Q.
Proof using.
introv M. rewrite hpure_encoding.
rewrite hstar_hexists. (* disable notation printing to see the effect *)
applys triple_hexists. (* ∀ (p:P), ... is the same as P → ... *)
rewrite hstar_hempty_l. applys M.
Qed.
End ProveExtractionRules.
Rules for Naming Heaps
Exercise: 3 stars, standard, optional (hexists_named_eq)
Prove that a heap predicate H is equivalent to the heap predicate which asserts that the heap is, for a heap h such that H h, exactly equal to h.
Lemma hexists_named_eq : ∀ H,
H = (\∃ h, \[H h] \* (= h)).
Proof using. (* FILL IN HERE *) Admitted.
☐
Exercise: 1 star, standard, optional (hoare_named_heap)
Prove that the proposition hoare t H Q is equivalent to: for any heap h satisfying the precondition H, the Hoare triple whose precondition requires the input heap to be exactly equal to h, and whose postcondition is Q holds.
Lemma hoare_named_heap : ∀ t H Q,
(∀ h, H h → hoare t (= h) Q) →
hoare t H Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
Exercise: 3 stars, standard, optional (triple_named_heap)
Prove the counterpart of hoare_named_heap for Separation Logic triples.
Lemma triple_named_heap : ∀ t H Q,
(∀ h, H h → triple t (= h) Q) →
triple t H Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
Module AlternativeExistentialRule.
Traditional papers on Separation Logic do not include triple_hexists,
but instead a rule called triple_hexists2 that includes an existential
quantifier both in the precondition and the postcondition.
As we show next, in the presence of the consequence rule,
the two rules are equivalent.
The formulation of triple_hexists is not only more concise, it is
also better suited for practical applications.
Lemma triple_hexists2 : ∀ A (Hof:A→hprop) (Qof:A→val→hprop) t,
(∀ x, triple t (Hof x) (Qof x)) →
triple t (\∃ x, Hof x) (fun v ⇒ \∃ x, Qof x v).
Proof using.
introv M.
applys triple_hexists. intros x.
applys triple_conseq (M x).
{ applys himpl_refl. }
{ intros v. applys himpl_hexists_r x. applys himpl_refl. }
Qed.
Lemma triple_hexists_of_triple_hexists2 : ∀ t (A:Type) (Hof:A→hprop) Q,
(∀ x, triple t (Hof x) Q) →
triple t (\∃ x, Hof x) Q.
Proof using.
introv M.
applys triple_conseq (\∃ x, Hof x) (fun (v:val) ⇒ \∃ (x:A), Q v).
{ applys triple_hexists2. intros x. applys M. }
{ applys himpl_refl. }
{ intros v. applys himpl_hexists_l. intros _. applys himpl_refl. }
Qed.
End AlternativeExistentialRule.
Historical Notes
Rules: Reasoning Rules
Set Implicit Arguments.
This file imports LibSepDirect.v instead of Hprop.v and Himpl.v.
The file LibSepDirect.v contains definitions that are essentially similar
to those from Hprop.v and Himpl.v, yet with one main difference:
LibSepDirect makes the definition of Separation Logic operators opaque.
As a result, one cannot unfold the definition of hstar, hpure, etc.
To carry out reasoning, one must use the introduction and elimination
lemmas (e.g. hstar_intro, hstar_elim). These lemmas enforce
abstraction: they ensure that the proofs do not depend on the particular
choice of the definitions used for constructing Separation Logic.
From SLF Require Export LibSepReference.
From SLF Require Basic.
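For orientation, here is an approximate rendering of the statement of the introduction lemma mentioned above; this is only a sketch, and the exact statements of hstar_intro and of its elimination counterpart should be looked up in the library itself.
Check hstar_intro : ∀ H1 H2 h1 h2,
H1 h1 →
H2 h2 →
Fmap.disjoint h1 h2 →
(H1 \* H2) (h1 \u h2).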
First Pass
- introduced the key heap predicate operators,
- introduced the notion of Separation Logic triple,
- introduced the entailment relation,
- introduced the structural rules of Separation Logic.
- definition of the syntax of the language,
- definition of the semantics of the language,
- statements of the reasoning rules associated with each of the term constructions from the language,
- specification of the primitive operations of the language, in particular those associated with memory operations,
- review of the 4 structural rules introduced in prior chapters,
- examples of practical verification proofs.
- proofs of the reasoning rules associated with each term construct,
- proofs of the specification of the primitive operations.
Module SyntaxAndSemantics.
Syntax
Inductive val : Type :=
| val_unit : val
| val_bool : bool → val
| val_int : int → val
| val_loc : loc → val
| val_fun : var → trm → val
| val_fix : var → var → trm → val
| val_ref : val
| val_get : val
| val_set : val
| val_free : val
| val_add : val
| val_div : val
The grammar for terms includes values, variables, function definitions,
recursive function definitions, function applications, sequences,
let-bindings, and conditionals.
with trm : Type :=
| trm_val : val → trm
| trm_var : var → trm
| trm_fun : var → trm → trm
| trm_fix : var → var → trm → trm
| trm_app : trm → trm → trm
| trm_seq : trm → trm → trm
| trm_let : var → trm → trm → trm
| trm_if : trm → trm → trm → trm.
Note that trm_fun and trm_fix denote functions that may feature free
variables, unlike val_fun and val_fix which denote closed values.
The intention is that the evaluation of a trm_fun in the empty context
produces a val_fun value. Likewise, a trm_fix eventually evaluates to
a val_fix.
Several value constructors are declared as coercions, to enable more
concise statements. For example, val_loc is declared as a coercion,
so that a location p of type loc can be viewed as the value
val_loc p where an expression of type val is expected. Likewise,
a boolean b may be viewed as the value val_bool b, and an integer
n may be viewed as the value val_int n.
Coercion val_loc : loc >-> val.
Coercion val_bool : bool >-> val.
Coercion val_int : Z >-> val.
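As a small sanity check of these coercions (a sketch added for illustration, not part of the original development), the following lemma states that a boolean written where a value is expected elaborates to an application of val_bool; it holds by reflexivity because both sides denote the same term after elaboration.
Lemma coercion_demo :
(true : val) = val_bool true.
Proof using. reflexivity. Qed.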
State
Definition state : Type := fmap loc val.
For technical reasons related to the internal representation of finite
maps, to enable reading in a state, we need to justify that the grammar
of values is inhabited. This property is captured by the following
command, whose details are not relevant for understanding the rest of
the chapter.
Instance Inhab_val : Inhab val.
Proof using. apply (Inhab_of_val val_unit). Qed.
Substitution
Fixpoint subst (y:var) (w:val) (t:trm) : trm :=
let aux t := subst y w t in
let if_y_eq x t1 t2 := if var_eq x y then t1 else t2 in
match t with
| trm_val v ⇒ trm_val v
| trm_var x ⇒ if_y_eq x (trm_val w) t
| trm_fun x t1 ⇒ trm_fun x (if_y_eq x t1 (aux t1))
| trm_fix f x t1 ⇒ trm_fix f x (if_y_eq f t1 (if_y_eq x t1 (aux t1)))
| trm_app t1 t2 ⇒ trm_app (aux t1) (aux t2)
| trm_seq t1 t2 ⇒ trm_seq (aux t1) (aux t2)
| trm_let x t1 t2 ⇒ trm_let x (aux t1) (if_y_eq x t2 (aux t2))
| trm_if t0 t1 t2 ⇒ trm_if (aux t0) (aux t1) (aux t2)
end.
Implicit Types and Coercions
Implicit Types b : bool.
Implicit Types v r : val.
Implicit Types t : trm.
Implicit Types s : state.
We next introduce two key coercions. First, we declare
trm_val as a coercion, so that, instead of writing trm_val v,
we may write simply v wherever a term is expected.
Coercion trm_val : val >-> trm.
Second, we declare trm_app as a "Funclass" coercion. This piece
of magic enables us to write t1 t2 as a shorthand for trm_app t1 t2.
The idea of associating trm_app as the "Funclass" coercion for the
type trm is that if a term t1 of type trm is applied like a
function to an argument, then t1 should be interpreted as trm_app t1.
Coercion trm_app : trm >-> Funclass.
Interestingly, the "Funclass" coercion for trm_app can be iterated.
The expression t1 t2 t3 is parsed by Coq as (t1 t2) t3. The first
application t1 t2 is interpreted as trm_app t1 t2. This expression,
which itself has type trm, is applied to t3. Hence, it is interpreted
as trm_app (trm_app t1 t2) t3, which indeed corresponds to a function
applied to two arguments.
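The following small lemma (a sketch added for illustration) checks this parsing claim: the application of three terms elaborates to two nested trm_app nodes, so the equation holds by reflexivity.
Lemma funclass_coercion_demo : ∀ (t1 t2 t3:trm),
t1 t2 t3 = trm_app (trm_app t1 t2) t3.
Proof using. intros. reflexivity. Qed.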
Big-Step Semantics
Inductive eval : state → trm → state → val → Prop :=
1. eval for values and function definitions.
A value evaluates to itself.
A term function evaluates to a value function.
Likewise for a recursive function.
| eval_val : ∀ s v,
eval s (trm_val v) s v
| eval_fun : ∀ s x t1,
eval s (trm_fun x t1) s (val_fun x t1)
| eval_fix : ∀ s f x t1,
eval s (trm_fix f x t1) s (val_fix f x t1)
2. eval for function applications.
The beta reduction rule asserts that (val_fun x t1) v2
evaluates to the same result as subst x v2 t1.
In the recursive case, (val_fix f x t1) v2 evaluates to
subst x v2 (subst f v1 t1), where v1 denotes the recursive
function itself, that is, val_fix f x t1.
| eval_app_fun : ∀ s1 s2 v1 v2 x t1 v,
v1 = val_fun x t1 →
eval s1 (subst x v2 t1) s2 v →
eval s1 (trm_app v1 v2) s2 v
| eval_app_fix : ∀ s1 s2 v1 v2 f x t1 v,
v1 = val_fix f x t1 →
eval s1 (subst x v2 (subst f v1 t1)) s2 v →
eval s1 (trm_app v1 v2) s2 v
3. eval for structural constructs.
A sequence trm_seq t1 t2 first evaluates t1, taking the
state from s1 to s2, drops the result of t1, then evaluates
t2, taking the state from s2 to s3.
The let-binding trm_let x t1 t2 is similar, except that the
variable x gets substituted for the result of t1 inside t2.
| eval_seq : ∀ s1 s2 s3 t1 t2 v1 v,
eval s1 t1 s2 v1 →
eval s2 t2 s3 v →
eval s1 (trm_seq t1 t2) s3 v
| eval_let : ∀ s1 s2 s3 x t1 t2 v1 r,
eval s1 t1 s2 v1 →
eval s2 (subst x v1 t2) s3 r →
eval s1 (trm_let x t1 t2) s3 r
4. eval for conditionals.
A conditional in a source program is assumed to be of the form
if t0 then t1 else t2, where t0 is either a variable or a
value. If it is a variable, then by the time it reaches an
evaluation position, the variable must have been substituted
by a value. Thus, the evaluation rule only considers the form
if v0 then t1 else t2. The value v0 must be a boolean value,
otherwise evaluation gets stuck.
The term trm_if (val_bool true) t1 t2 behaves like t1, whereas
the term trm_if (val_bool false) t1 t2 behaves like t2.
This behavior is described by a single rule, leveraging Coq's "if"
constructor to factor out the two cases.
| eval_if : ∀ s1 s2 b v t1 t2,
eval s1 (if b then t1 else t2) s2 v →
eval s1 (trm_if (val_bool b) t1 t2) s2 v
5. eval for primitive stateless operations.
For similar reasons as explained above, the behavior of applied
primitive functions only needs to be described for the case of value
arguments.
An arithmetic operation expects integer arguments. The addition
of val_int n1 and val_int n2 produces val_int (n1 + n2).
The division operation, on the same arguments, produces the
quotient n1 / n2, under the assumption that the divisor n2
is non-zero. In other words, if a program performs a division
by zero, then it cannot satisfy the eval judgment.
| eval_add : ∀ s n1 n2,
eval s (val_add (val_int n1) (val_int n2)) s (val_int (n1 + n2))
| eval_div : ∀ s n1 n2,
n2 ≠ 0 →
eval s (val_div (val_int n1) (val_int n2)) s (val_int (Z.quot n1 n2))
6. eval for primitive operations on memory.
There remains to describe operations that act on the mutable store.
val_ref v allocates a fresh cell with contents v. The operation
returns the location, written p, of the new cell. This location
must not be previously in the domain of the store s.
val_get (val_loc p) reads the value in the store s at location p.
The location must be bound to a value in the store, else evaluation
is stuck.
val_set (val_loc p) v updates the store at a location p assumed to
be bound in the store s. The operation modifies the store and returns
the unit value.
val_free (val_loc p) deallocates the cell at location p.
| eval_ref : ∀ s v p,
¬Fmap.indom s p →
eval s (val_ref v) (Fmap.update s p v) (val_loc p)
| eval_get : ∀ s p,
Fmap.indom s p →
eval s (val_get (val_loc p)) s (Fmap.read s p)
| eval_set : ∀ s p v,
Fmap.indom s p →
eval s (val_set (val_loc p) v) (Fmap.update s p v) val_unit
| eval_free : ∀ s p,
Fmap.indom s p →
eval s (val_free (val_loc p)) (Fmap.remove s p) val_unit.
End SyntaxAndSemantics.
Loading of Definitions from LibSepDirect
Implicit Types x f : var.
Implicit Types b : bool.
Implicit Types p : loc.
Implicit Types n : int.
Implicit Types v w r : val.
Implicit Types t : trm.
Implicit Types h : heap.
Implicit Types s : state.
Implicit Types H : hprop.
Implicit Types Q : val→hprop.
Rules for Terms
Reasoning Rule for Sequences
{H} t1 {fun v => H1} {H1} t2 {Q} | |
{H} (t1;t2) {Q} |
Parameter triple_seq : ∀ t1 t2 H Q H1,
triple t1 H (fun v ⇒ H1) →
triple t2 H1 Q →
triple (trm_seq t1 t2) H Q.
The variable v denotes the result of the evaluation of t1. For
well-typed programs, this result would always be val_unit. Yet,
because we here consider an untyped language, we do not bother
adding the constraint v = val_unit. Instead, we simply treat the
result of t1 as a value irrelevant to the remainder of the
evaluation.
Reasoning Rule for Let-Bindings
{H} t1 {Q1} (forall x, {Q1 x} t2 {Q}) | |
{H} (let x = t1 in t2) {Q} |
{H} t1 {Q1} (forall v, {Q1 v} (subst x v t2) {Q}) | |
{H} (let x = t1 in t2) {Q} |
Parameter triple_let : ∀ x t1 t2 Q1 H Q,
triple t1 H Q1 →
(∀ v1, triple (subst x v1 t2) (Q1 v1) Q) →
triple (trm_let x t1 t2) H Q.
Reasoning Rule for Conditionals
b = true -> {H} t1 {Q} b = false -> {H} t2 {Q} | |
{H} (if b then t1 else t2) {Q} |
Parameter triple_if_case : ∀ b t1 t2 H Q,
(b = true → triple t1 H Q) →
(b = false → triple t2 H Q) →
triple (trm_if (val_bool b) t1 t2) H Q.
Note that the two premises may be factorized into a single one
using Coq's conditional construct. Such an alternative
statement is discussed further in this chapter.
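As a sketch (the chapter's official factorized statement may use a different name), such a combined rule can be derived from triple_if_case by case analysis on the boolean, as follows.
Lemma triple_if' : ∀ (b:bool) t1 t2 H Q,
triple (if b then t1 else t2) H Q →
triple (trm_if (val_bool b) t1 t2) H Q.
Proof using.
introv M. applys triple_if_case.
{ intros →. exact M. }
{ intros →. exact M. }
Qed.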
Reasoning Rule for Values
{\[]} v {fun r => \[r = v]} |
H ==> Q v
---------
{H} v {Q}
Parameter triple_val : ∀ v H Q,
H ==> Q v →
triple (trm_val v) H Q.
Let us prove that the "minimalistic" rule {\[]} v {fun r ⇒ \[r = v]}
is equivalent to triple_val.
Exercise: 1 star, standard, especially useful (triple_val_minimal)
Prove that the alternative rule for values is derivable from triple_val. Hint: use the tactic xsimpl to conclude the proof.
Lemma triple_val_minimal : ∀ v,
triple (trm_val v) \[] (fun r ⇒ \[r = v]).
Proof using. (* FILL IN HERE *) Admitted.
☐
Exercise: 2 stars, standard, especially useful (triple_val')
More interestingly, prove that triple_val is derivable from triple_val_minimal. Hint: you will need to exploit the appropriate structural rule(s).
Lemma triple_val' : ∀ v H Q,
H ==> Q v →
triple (trm_val v) H Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
Exercise: 4 stars, standard, especially useful (triple_let_val)
Consider a term of the form let x = v1 in t2, that is, where the argument of the let-binding is already a value. State and prove a reasoning rule of the form:
Lemma triple_let_val : ∀ x v1 t2 H Q,
... →
triple (trm_let x v1 t2) H Q.
(* FILL IN HERE *)
☐
Reasoning Rule for Functions
{\[]} (trm_fun x t1) {fun r => \[r = val_fun x t1]} |
Parameter triple_fun : ∀ x t1 H Q,
H ==> Q (val_fun x t1) →
triple (trm_fun x t1) H Q.
The rule for recursive functions is similar. It is presented
further in the file.
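As an indication of what to expect, a sketch of the statement for recursive function definitions appears below; it mirrors triple_fun, and the official version given later in the file should be taken as authoritative.
Parameter triple_fix : ∀ f x t1 H Q,
H ==> Q (val_fix f x t1) →
triple (trm_fix f x t1) H Q.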
Last but not least, we need a reasoning rule to reason about a
function application. Consider an application trm_app v1 v2.
Assume v1 to be a function, that is, to be of the form
val_fun x t1. Then, according to the beta-reduction rule, the
semantics of trm_app v1 v2 is the same as that of subst x v2 t1.
This reasoning rule is shown below, first as an inference rule,
then as the corresponding Coq statement.
v1 = val_fun x t1 {H} (subst x v2 t1) {Q} | |
{H} (trm_app v1 v2) {Q} |
Parameter triple_app_fun : ∀ x v1 v2 t1 H Q,
v1 = val_fun x t1 →
triple (subst x v2 t1) H Q →
triple (trm_app v1 v2) H Q.
The generalization to the application of recursive functions is
straightforward. It is discussed further in this chapter.
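For orientation, a sketch of the corresponding rule for recursive functions is shown below; again, the official statement presented later in the chapter takes precedence.
Parameter triple_app_fix : ∀ v1 v2 f x t1 H Q,
v1 = val_fix f x t1 →
triple (subst x v2 (subst f v1 t1)) H Q →
triple (trm_app v1 v2) H Q.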
Specification of Primitive Operations
Specification of Arithmetic Primitive Operations
Parameter triple_add : ∀ n1 n2,
triple (val_add n1 n2)
\[]
(fun r ⇒ \[r = val_int (n1 + n2)]).
The specification of the division operation val_div n1 n2 is similar,
yet with the extra requirement that the argument n2 must be nonzero.
This requirement n2 ≠ 0 is a pure fact, which can be asserted as
part of the precondition, as follows.
Parameter triple_div : ∀ n1 n2,
triple (val_div n1 n2)
\[n2 ≠ 0]
(fun r ⇒ \[r = val_int (Z.quot n1 n2)]).
Equivalently, the requirement n2 ≠ 0 may be asserted as an
hypothesis to the front of the triple judgment, in the form of
a standard Coq hypothesis, as shown below.
Parameter triple_div' : ∀ n1 n2,
n2 ≠ 0 →
triple (val_div n1 n2)
\[]
(fun r ⇒ \[r = val_int (Z.quot n1 n2)]).
This latter presentation with pure facts such as n2 ≠ 0 placed
to the front of the triple turns out to be more practical to exploit
in proofs. Hence, we always follow this style of presentation, and
reserve the precondition for describing pieces of mutable state.
Specification of Primitive Operations Acting on Memory
Parameter triple_get : ∀ v p,
triple (val_get (val_loc p))
(p ~~> v)
(fun r ⇒ \[r = v] \* (p ~~> v)).
Remark: val_loc is registered as a coercion, so val_get (val_loc p)
could be written simply as val_get p, where p has type loc.
We here chose to write val_loc explicitly for clarity.
Recall that val_set denotes the operation for writing a memory cell.
A call of the form val_set v' w executes safely if v' is of the
form val_loc p for some location p, in a state p ~~> v.
The write operation updates this state to p ~~> w, and returns
the unit value, which can be ignored. Hence, val_set is specified
as follows.
Parameter triple_set : ∀ w p v,
triple (val_set (val_loc p) w)
(p ~~> v)
(fun _ ⇒ p ~~> w).
Recall that val_ref denotes the operation for allocating a cell
with a given contents. A call to val_ref v does not depend on
the contents of the existing state. It extends the state with a fresh
singleton cell, at some location p, assigning it v as contents.
The fresh cell is then described by the heap predicate p ~~> v.
The evaluation of val_ref v produces the value val_loc p. Thus,
if r denotes the result value, we have r = val_loc p for some p.
In the corresponding specification shown below, observe how the
location p is existentially quantified in the postcondition.
Parameter triple_ref : ∀ v,
triple (val_ref v)
\[]
(fun (r:val) ⇒ \∃ (p:loc), \[r = val_loc p] \* p ~~> v).
Using the notation funloc p ⇒ H as a shorthand for
fun (r:val) ⇒ \∃ (p:loc), \[r = val_loc p] \* H,
the specification for val_ref becomes more concise.
Parameter triple_ref' : ∀ v,
triple (val_ref v)
\[]
(funloc p ⇒ p ~~> v).
Recall that val_free denotes the operation for deallocating a cell
at a given address. A call of the form val_free p executes safely
in a state p ~~> v. The operation leaves an empty state, and
asserts that the return value, named r, is equal to unit.
Parameter triple_free : ∀ p v,
triple (val_free (val_loc p))
(p ~~> v)
(fun _ ⇒ \[]).
Review of the Structural Rules
Parameter triple_frame : ∀ t H Q H',
triple t H Q →
triple t (H \* H') (Q \*+ H').
The consequence rule allows strengthening the precondition and
weakening the postcondition.
Parameter triple_conseq : ∀ t H' Q' H Q,
triple t H' Q' →
H ==> H' →
Q' ===> Q →
triple t H Q.
In practice, it is most convenient to exploit a rule that combines
both frame and consequence into a single rule, as stated next.
(Note that this "combined structural rule" was proved as an exercise
in chapter Himpl.)
Parameter triple_conseq_frame : ∀ H2 H1 Q1 t H Q,
triple t H1 Q1 →
H ==> H1 \* H2 →
Q1 \*+ H2 ===> Q →
triple t H Q.
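As a sketch of why the combined rule holds, it can be derived from triple_frame and triple_conseq using only basic tactics; the official proof is the one developed as an exercise in chapter Himpl.
Lemma triple_conseq_frame' : ∀ H2 H1 Q1 t H Q,
triple t H1 Q1 →
H ==> H1 \* H2 →
Q1 \*+ H2 ===> Q →
triple t H Q.
Proof using.
introv M WH WQ. eapply triple_conseq.
{ eapply triple_frame. apply M. }
{ apply WH. }
{ apply WQ. }
Qed.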
The two extraction rules make it possible to extract pure facts and existentially
quantified variables from the precondition into the Coq context.
Parameter triple_hpure : ∀ t (P:Prop) H Q,
(P → triple t H Q) →
triple t (\[P] \* H) Q.
Parameter triple_hexists : ∀ t (A:Type) (J:A→hprop) Q,
(∀ (x:A), triple t (J x) Q) →
triple t (\∃ (x:A), J x) Q.
Exercise: 1 star, standard, optional (triple_hpure')
The extraction rule triple_hpure assumes a precondition of the form \[P] \* H. The variant rule triple_hpure', stated below, assumes instead a precondition with only the pure part, i.e., of the form \[P]. Prove that triple_hpure' is indeed a corollary of triple_hpure.
Lemma triple_hpure' : ∀ t (P:Prop) Q,
(P → triple t \[] Q) →
triple t \[P] Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
Verification Proof in Separation Logic
Module ExamplePrograms.
Export ProgramSyntax.
Proof of incr
p := !p + 1
let n = !p in
let m = n+1 in
p := m
Definition incr : val :=
val_fun "p" (
trm_let "n" (trm_app val_get (trm_var "p")) (
trm_let "m" (trm_app (trm_app val_add
(trm_var "n")) (val_int 1)) (
trm_app (trm_app val_set (trm_var "p")) (trm_var "m")))).
val_fun "p" (
trm_let "n" (trm_app val_get (trm_var "p")) (
trm_let "m" (trm_app (trm_app val_add
(trm_var "n")) (val_int 1)) (
trm_app (trm_app val_set (trm_var "p")) (trm_var "m")))).
Alternatively, using notation and coercions, the same program can be
written as shown below.
Definition incr' : val :=
<{ fun 'p ⇒
let 'n = ! 'p in
let 'm = 'n + 1 in
'p := 'm }>.
Let us check that the two definitions are indeed the same.
Lemma incr_eq_incr' :
incr = incr'.
Proof using. reflexivity. Qed.
Recall from the first chapter the specification of the increment function.
Its precondition requires a singleton state of the form p ~~> n.
Its postcondition describes a state of the form p ~~> (n+1).
Lemma triple_incr : ∀ (p:loc) (n:int),
triple (trm_app incr p)
(p ~~> n)
(fun _ ⇒ p ~~> (n+1)).
We next show a detailed proof for this specification. It exploits:
- the structural reasoning rules,
- the reasoning rules for terms,
- the specification of the primitive functions,
- the xsimpl tactic for simplifying entailments.
Proof using.
(* initialize the proof *)
intros. applys triple_app_fun. { reflexivity. } simpl.
(* reason about let n = .. *)
applys triple_let.
(* reason about !p *)
{ apply triple_get. }
(* name n' the result of !p *)
intros n'. simpl.
(* substitute away the equality n' = n *)
apply triple_hpure. intros →.
(* reason about let m = .. *)
applys triple_let.
(* apply the frame rule to put aside p ~~> n *)
{ applys triple_conseq_frame.
(* reason about n+1 in the empty state *)
{ applys triple_add. }
{ xsimpl. }
{ xsimpl. } }
(* name m' the result of n+1 *)
intros m'. simpl.
(* substitute away the equality m' = m *)
apply triple_hpure. intros →.
(* reason about p := m *)
{ applys triple_set. }
Qed.
Definition succ_using_incr : val :=
<{ fun 'n ⇒
let 'r = val_ref 'n in
incr 'r;
let 'x = ! 'r in
val_free 'r;
'x }>.
Recall the specification of succ_using_incr.
Lemma triple_succ_using_incr : ∀ (n:int),
triple (trm_app succ_using_incr n)
\[]
(fun v ⇒ \[v = val_int (n+1)]).
Exercise: 4 stars, standard, especially useful (triple_succ_using_incr)
Verify the function triple_succ_using_incr. Hint: follow the pattern of triple_incr. Hint: use applys triple_seq for reasoning about a sequence. Hint: use applys triple_val for reasoning about the final return value, namely x.
Proof using. (* FILL IN HERE *) Admitted.
☐
Import Basic.Facto.
Recall from Basic the function factorec.
let rec factorec n =
if n ≤ 1 then 1 else n * factorec (n-1)
Definition factorec : val :=
<{ fix 'f 'n ⇒
let 'b = ('n ≤ 1) in
if 'b
then 1
else let 'x = 'n - 1 in
let 'y = 'f 'x in
'n * 'y }>.
Exercise: 4 stars, standard, especially useful (triple_factorec)
Verify the function factorec. Hint: exploit triple_app_fix for reasoning about the recursive function. Hint: triple_hpure', the corollary of triple_hpure, is helpful. Hint: exploit triple_le and triple_sub and triple_mul to reason about the behavior of the primitive operations involved. Hint: exploit applys triple_if. case_if as C. to reason about the conditional; alternatively, if using triple_if_case, you'll need to use the tactic rew_bool_eq in * to simplify, e.g., the expression isTrue (m ≤ 1) = true.
Lemma triple_factorec : ∀ n,
n ≥ 0 →
triple (factorec n)
\[]
(fun r ⇒ \[r = facto n]).
Proof using. (* FILL IN HERE *) Admitted.
☐
End ExamplePrograms.
What's Next
- automating the application of the frame rule
- eliminating the need to manipulate program variables and substitutions during the verification proof.
Module DivSpec.
Recall the specification for division.
Parameter triple_div : ∀ n1 n2,
n2 ≠ 0 →
triple (val_div n1 n2)
\[]
(fun r ⇒ \[r = val_int (Z.quot n1 n2)]).
Equivalently, we could place the requirement n2 ≠ 0 in the
precondition:
Parameter triple_div' : ∀ n1 n2,
triple (val_div n1 n2)
\[n2 ≠ 0]
(fun r ⇒ \[r = val_int (Z.quot n1 n2)]).
Let us formally prove that the two presentations are equivalent.
Exercise: 1 star, standard, especially useful (triple_div_from_triple_div')
Prove triple_div by exploiting triple_div'. Hint: the key proof step is applys triple_conseq
Lemma triple_div_from_triple_div' : ∀ n1 n2,
n2 ≠ 0 →
triple (val_div n1 n2)
\[]
(fun r ⇒ \[r = val_int (Z.quot n1 n2)]).
Proof using. (* FILL IN HERE *) Admitted.
☐
Exercise: 2 stars, standard, especially useful (triple_div'_from_triple_div)
Prove triple_div' by exploiting triple_div. Hint: the first key proof step is applys triple_hpure. Yet some preliminary rewriting is required for this tactic to apply. Hint: the second key proof step is applys triple_conseq.
Lemma triple_div'_from_triple_div : ∀ n1 n2,
triple (val_div n1 n2)
\[n2 ≠ 0]
(fun r ⇒ \[r = val_int (Z.quot n1 n2)]).
Proof using. (* FILL IN HERE *) Admitted.
☐
End DivSpec.
Module LetFrame.
Recall the Separation Logic let rule.
Parameter triple_let : ∀ x t1 t2 Q1 H Q,
triple t1 H Q1 →
(∀ v1, triple (subst x v1 t2) (Q1 v1) Q) →
triple (trm_let x t1 t2) H Q.
At first sight, it seems that, to reason about let x = t1 in t2
in a state described by precondition H, we need to first reason
about t1 in that same state. Yet, t1 may well require only a
subset of the state H to evaluate, and not all of H.
The "let-frame" rule combines the rule for let-bindings with the
frame rule to make it more explicit that the precondition H
may be decomposed in the form H1 \* H2, where H1 is the part
needed by t1, and H2 denotes the rest of the state. The part
of the state covered by H2 remains unmodified during the evaluation
of t1, and appears as part of the precondition of t2.
The corresponding statement is as follows.
Lemma triple_let_frame : ∀ x t1 t2 Q1 H H1 H2 Q,
triple t1 H1 Q1 →
H ==> H1 \* H2 →
(∀ v1, triple (subst x v1 t2) (Q1 v1 \* H2) Q) →
triple (trm_let x t1 t2) H Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
End LetFrame.
Module Proofs.
The proofs for the Separation Logic reasoning rules all follow
a similar pattern: first establish a corresponding rule for
Hoare triples, then generalize it to a Separation Logic triple.
To establish a reasoning rule w.r.t. a Hoare triple, we reveal
the definition expressed in terms of the big-step semantics.
Definition hoare (t:trm) (H:hprop) (Q:val→hprop) : Prop :=
∀ s, H s →
∃ s' v, eval s t s' v ∧ Q v s'.
Concretely, we consider a given initial state s satisfying the
precondition, and we have to provide witnesses for the output
value v and output state s' such that the reduction holds and
the postcondition holds.
Then, to lift the reasoning rule from Hoare logic to Separation
Logic, we reveal the definition of a Separation Logic triple.
Definition triple t H Q :=
∀ H', hoare t (H \* H') (Q \*+ H').
Recall that we already employed this two-step scheme in the
previous chapter, e.g., to establish the consequence rule
(rule_conseq).
Proof of triple_val
Parameter eval_val : ∀ s v,
eval s v s v.
The Hoare version of the reasoning rule for values is as follows.
Lemma hoare_val : ∀ v H Q,
H ==> Q v →
hoare (trm_val v) H Q.
Proof using.
1. We unfold the definition of hoare.
introv M. intros s K0.
2. We provide the witnesses for the output value and heap.
These witnesses are dictated by the statement of eval_val.
∃ s v. splits.
3. We invoke the big-step rule eval_val
{ applys eval_val. }
4. We establish the postcondition, exploiting the entailment hypothesis.
{ applys M. auto. }
Qed.
The Separation Logic version of the rule then follows.
Lemma triple_val : ∀ v H Q,
H ==> Q v →
triple (trm_val v) H Q.
Proof using.
1. We unfold the definition of triple to reveal a hoare judgment.
introv M. intros H'.
2. We invoke the reasoning rule hoare_val that was just established.
applys hoare_val.
3. We exploit the assumption and conclude using xchange.
xchange M.
Qed.
Parameter eval_seq : ∀ s1 s2 s3 t1 t2 v1 v,
eval s1 t1 s2 v1 →
eval s2 t2 s3 v →
eval s1 (trm_seq t1 t2) s3 v.
The Hoare triple version of the reasoning rule is proved as follows.
This lemma, called hoare_seq, has the same statement as triple_seq,
except with occurrences of triple replaced with hoare.
Lemma hoare_seq : ∀ t1 t2 H Q H1,
hoare t1 H (fun v ⇒ H1) →
hoare t2 H1 Q →
hoare (trm_seq t1 t2) H Q.
Proof using.
1. We unfold the definition of hoare.
Let K0 describe the initial state.
introv M1 M2. intros s K0. (* optional: *) unfolds hoare.
2. We exploit the first hypothesis to obtain information about
the evaluation of the first subterm t1.
The state before t1 executes is described by K0.
The state after t1 executes is described by K1.
forwards (s1'&v1&R1&K1): (rm M1) K0.
3. We exploit the second hypothesis to obtain information about
the evaluation of the second subterm t2.
The state before t2 executes is described by K1.
The state after t2 executes is described by K2.
forwards (s2'&v2&R2&K2): (rm M2) K1.
4. We provide witness for the output value and heap.
They correspond to those produced by the evaluation of t2.
∃ s2' v2. split.
5. We invoke the big-step rule.
{ applys eval_seq R1 R2. }
6. We establish the final postcondition, which is directly
inherited from the reasoning on t2.
{ apply K2. }
Qed.
The Separation Logic reasoning rule is proved as follows.
Lemma triple_seq : ∀ t1 t2 H Q H1,
triple t1 H (fun v ⇒ H1) →
triple t2 H1 Q →
triple (trm_seq t1 t2) H Q.
Proof using.
1. We unfold the definition of triple to reveal a hoare judgment.
introv M1 M2. intros H'. (* optional: *) unfolds triple.
2. We invoke the reasoning rule hoare_seq that we just established.
applys hoare_seq.
3. For the hypothesis on the first subterm t1,
we can invoke directly our first hypothesis.
{ applys M1. }
{ applys M2. }
Qed.
Parameter eval_let : ∀ s1 s2 s3 x t1 t2 v1 v,
eval s1 t1 s2 v1 →
eval s2 (subst x v1 t2) s3 v →
eval s1 (trm_let x t1 t2) s3 v.
Exercise: 3 stars, standard, especially useful (triple_let)
Following the same proof scheme as for triple_seq, establish the reasoning rule for triple_let, whose statement appears earlier in this file. Make sure to first state and prove the lemma hoare_let, which has the same statement as triple_let yet with occurrences of triple replaced with hoare.
(* FILL IN HERE *)
☐
Parameter eval_add : ∀ s n1 n2,
eval s (val_add (val_int n1) (val_int n2)) s (val_int (n1 + n2)).
In the proof, we will need to use the following result,
established in the first chapter.
Parameter hstar_hpure_l : ∀ P H h,
(\[P] \* H) h = (P ∧ H h).
As usual, we first establish a Hoare triple.
Lemma hoare_add : ∀ H n1 n2,
hoare (val_add n1 n2)
H
(fun r ⇒ \[r = val_int (n1 + n2)] \* H).
Proof using.
intros. intros s K0. ∃ s (val_int (n1 + n2)). split.
{ applys eval_add. }
{ rewrite hstar_hpure_l. split.
{ auto. }
{ applys K0. } }
Qed.
Deriving triple_add is then straightforward.
Lemma triple_add : ∀ n1 n2,
triple (val_add n1 n2)
\[]
(fun r ⇒ \[r = val_int (n1 + n2)]).
Proof using.
intros. intros H'. applys hoare_conseq.
{ applys hoare_add. } { xsimpl. } { xsimpl. auto. }
Qed.
Parameter eval_div' : ∀ s n1 n2,
n2 ≠ 0 →
eval s (val_div (val_int n1) (val_int n2)) s (val_int (Z.quot n1 n2)).
Exercise: 3 stars, standard, optional (triple_div)
Following the same proof scheme as for triple_add, establish the reasoning rule for triple_div. Make sure to first state and prove hoare_div, which is like triple_div except with hoare instead of triple.
(* FILL IN HERE *)
☐
Proofs for Primitive Operations Operating on the State
Inductive hval : Type :=
| hval_val : val → hval.
Read in a Reference
Parameter eval_get : ∀ v s p,
Fmap.indom s p →
Fmap.read s p = v →
eval s (val_get (val_loc p)) s v.
We reformulate this rule by isolating from the current state s
the singleton heap made of the cell at location p, and let s2
denote the rest of the heap. When the singleton heap is described
as Fmap.single p v, then v is the result value returned by
get p.
Lemma eval_get_sep : ∀ s s2 p v,
s = Fmap.union (Fmap.single p v) s2 →
eval s (val_get (val_loc p)) s v.
The proof of this lemma is of little interest. We show it only to
demonstrate that it relies on only a few basic facts related to finite
maps.
Proof using.
introv →. forwards Dv: Fmap.indom_single p v.
applys eval_get.
{ applys* Fmap.indom_union_l. }
{ rewrite* Fmap.read_union_l. rewrite* Fmap.read_single. }
Qed.
Our goal is to establish the triple:
triple (val_get p)
(p ~~> v)
(fun r ⇒ \[r = v] \* (p ~~> v)).
Establishing this lemma requires us to reason about propositions
of the form (\[P] \* H) h and (p ~~> v) h. To that end,
recall lemma hstar_hpure_l, which was already exploited in the
proof of triple_add, and recall hsingle_inv from Hprop.
Parameter hsingle_inv: ∀ p v h,
(p ~~> v) h →
h = Fmap.single p v.
We establish the specification of get first w.r.t.
the hoare judgment.
Lemma hoare_get : ∀ H v p,
hoare (val_get p)
((p ~~> v) \* H)
(fun r ⇒ \[r = v] \* (p ~~> v) \* H).
Proof using.
(* 1. We unfold the definition of hoare. *)
intros. intros s K0.
(* 2. We provide the witnesses for the reduction,
as dictated by eval_get_sep. *)
∃ s v. split.
{ (* 3. To justify the reduction using eval_get_sep, we need to
argue that the state s decomposes as a singleton heap
Fmap.single p v and the rest of the state s2. This is
obtained by eliminating the star in hypothesis K0. *)
destruct K0 as (s1&s2&P1&P2&D&U).
(* 4. Inverting (p ~~> v) h1 simplifies the goal. *)
lets E1: hsingle_inv P1. subst s1.
(* 5. At this point, the goal matches exactly eval_get_sep. *)
applys eval_get_sep U. }
{ (* 6. To establish the postcondition, we check the pure fact
\v = v, and check that the state, which has not changed,
satisfies the same heap predicate as in the precondition. *)
rewrite hstar_hpure_l. auto. }
Qed.
Deriving the Separation Logic triple follows the usual pattern.
Lemma triple_get : ∀ v p,
triple (val_get p)
(p ~~> v)
(fun r ⇒ \[r = v] \* (p ~~> v)).
Proof using.
intros. intros H'. applys hoare_conseq.
{ applys hoare_get. }
{ xsimpl. }
{ xsimpl. auto. }
Qed.
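Note that triple_get is a small-footprint specification: its precondition mentions only the cell at location p. For illustration, here is a sketch (not part of the original development, with a name chosen for the example) of how it combines with the frame rule triple_frame established earlier in this chapter, to read a cell in a heap that also contains an unrelated cell q ~~> w. The postcondition is deliberately written in the exact shape produced by triple_frame, so that no rearrangement by the rule of consequence is needed.
Lemma triple_get_frame_demo : ∀ (p q:loc) (v w:val),
triple (val_get p)
((p ~~> v) \* (q ~~> w))
((fun r ⇒ \[r = v] \* (p ~~> v)) \*+ (q ~~> w)).
Proof using.
intros. applys triple_frame. applys triple_get.
Qed.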
Allocation of a Reference
Parameter eval_ref : ∀ s v p,
¬Fmap.indom s p →
eval s (val_ref v) (Fmap.update s p v) (val_loc p).
Let us reformulate eval_ref to replace references to Fmap.indom
and Fmap.update with references to Fmap.single and Fmap.disjoint.
Concretely, ref v extends the state from s1 to s1 \u s2,
where s2 denotes the singleton heap Fmap.single p v, and with
the requirement that Fmap.disjoint s2 s1, to capture freshness.
Lemma eval_ref_sep : ∀ s1 s2 v p,
s2 = Fmap.single p v →
Fmap.disjoint s2 s1 →
eval s1 (val_ref v) (Fmap.union s2 s1) (val_loc p).
Proof using.
It is not necessary to follow through the details of this proof.
introv → D. forwards Dv: Fmap.indom_single p v.
rewrite <- Fmap.update_eq_union_single. applys* eval_ref.
{ intros N. applys* Fmap.disjoint_inv_not_indom_both D N. }
Qed.
In order to apply the rules eval_ref or eval_ref_sep, we need
to be able to synthesize fresh locations. The following lemma
(from LibSepFmap.v) captures the existence, for any state s, of
a (non-null) location p not already bound in s.
Parameter exists_fresh : ∀ s,
∃ p, ¬Fmap.indom s p ∧ p ≠ null.
We reformulate the lemma above in a way that better matches
the premise of the lemma eval_ref_sep, which we need to apply
for establishing the specification of ref.
This reformulation, shown below, asserts that, for any h,
there exists a non-null location p such that the singleton
heap Fmap.single p v is disjoint from h.
Lemma single_fresh : ∀ h v,
∃ p, Fmap.disjoint (Fmap.single p v) h.
Proof using.
It is not necessary to follow through the details of this proof.
intros. forwards (p&F&N): exists_fresh h.
∃ p. applys* Fmap.disjoint_single_of_not_indom.
Qed.
The proof of the Hoare triple for ref is as follows.
Lemma hoare_ref : ∀ H v,
hoare (val_ref v)
H
(fun r ⇒ (\∃ p, \[r = val_loc p] \* p ~~> v) \* H).
Proof using.
(* 1. We unfold the definition of hoare. *)
intros. intros s1 K0.
(* 2. We claim the disjointness relation
Fmap.disjoint (Fmap.single p v) s1. *)
forwards* (p&D): (single_fresh s1 v).
(* 3. We provide the witnesses for the reduction,
as dictated by eval_ref_sep. *)
∃ ((Fmap.single p v) \u s1) (val_loc p). split.
{ (* 4. We exploit eval_ref_sep, which has exactly the desired shape! *)
applys eval_ref_sep D. auto. }
{ (* 5. We establish the postcondition
(\∃ p, \[r = val_loc p] \* p ~~> v) \* H
by providing p and the relevant pieces of heap. *)
applys hstar_intro.
{ ∃ p. rewrite hstar_hpure_l.
split. { auto. } { applys¬hsingle_intro. } }
{ applys K0. }
{ applys D. } }
Qed.
We then derive the Separation Logic triple as usual.
Lemma triple_ref : ∀ v,
triple (val_ref v)
\[]
(funloc p ⇒ p ~~> v).
Proof using.
intros. intros H'. applys hoare_conseq.
{ applys hoare_ref. }
{ xsimpl. }
{ xsimpl. auto. }
Qed.
End Proofs.
Reasoning Rules for Recursive Functions
Parameter triple_fix : ∀ f x t1 H Q,
H ==> Q (val_fix f x t1) →
triple (trm_fix f x t1) H Q.
The reasoning rule that corresponds to beta-reduction for
a recursive function involves two substitutions: a first
substitution for recursive occurrences of the function,
followed by a second substitution for the argument
provided to the call.
Parameter triple_app_fix : ∀ v1 v2 f x t1 H Q,
v1 = val_fix f x t1 →
triple (subst x v2 (subst f v1 t1)) H Q →
triple (trm_app v1 v2) H Q.
Module ProofsContinued.
Proof of triple_fun and triple_fix
Proof of triple_if
Lemma eval_if : ∀ s1 s2 b v t1 t2,
eval s1 (if b then t1 else t2) s2 v →
eval s1 (trm_if b t1 t2) s2 v.
Proof using.
intros. case_if; applys eval_if; auto_false.
Qed.
The reasoning rule for conditional w.r.t. Hoare triples is as follows.
Lemma hoare_if_case : ∀ b t1 t2 H Q,
(b = true → hoare t1 H Q) →
(b = false → hoare t2 H Q) →
hoare (trm_if b t1 t2) H Q.
Proof using.
introv M1 M2. intros s K0. destruct b.
{ forwards* (s1'&v1&R1&K1): (rm M1) K0.
∃ s1' v1. split*. { applys* eval_if. } }
{ forwards* (s1'&v1&R1&K1): (rm M2) K0.
∃ s1' v1. split*. { applys* eval_if. } }
Qed.
The corresponding Separation Logic reasoning rule is as follows.
Lemma triple_if_case : ∀ b t1 t2 H Q,
(b = true → triple t1 H Q) →
(b = false → triple t2 H Q) →
triple (trm_if (val_bool b) t1 t2) H Q.
Proof using.
unfold triple. introv M1 M2. intros H'.
applys hoare_if_case; intros Eb.
{ applys* M1. }
{ applys* M2. }
Qed.
Observe that the above proofs contain a fair amount of duplication,
due to the symmetry between the b = true and b = false branches.
If we state the reasoning rules using Coq's conditional just like
it appears in the evaluation rule eval_if, we can better factorize
the proof script.
Lemma hoare_if : ∀ (b:bool) t1 t2 H Q,
hoare (if b then t1 else t2) H Q →
hoare (trm_if b t1 t2) H Q.
Proof using.
introv M1. intros s K0.
forwards (s'&v&R1&K1): (rm M1) K0.
∃ s' v. split. { applys eval_if R1. } { applys K1. }
Qed.
Lemma triple_if : ∀ b t1 t2 H Q,
triple (if b then t1 else t2) H Q →
triple (trm_if (val_bool b) t1 t2) H Q.
Proof using.
unfold triple. introv M1. intros H'. applys hoare_if. applys M1.
Qed.
Proof of triple_app_fun
Parameter eval_app_fun : ∀ s1 s2 v1 v2 x t1 v,
v1 = val_fun x t1 →
eval s1 (subst x v2 t1) s2 v →
eval s1 (trm_app v1 v2) s2 v.
Lemma hoare_app_fun : ∀ v1 v2 x t1 H Q,
v1 = val_fun x t1 →
hoare (subst x v2 t1) H Q →
hoare (trm_app v1 v2) H Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
Lemma triple_app_fun : ∀ x v1 v2 t1 H Q,
v1 = val_fun x t1 →
triple (subst x v2 t1) H Q →
triple (trm_app v1 v2) H Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
Deallocation of a Reference
Parameter eval_free : ∀ s p,
Fmap.indom s p →
eval s (val_free (val_loc p)) (Fmap.remove s p) val_unit.
Let us reformulate eval_free to replace references to Fmap.indom
and Fmap.remove with references to Fmap.single, Fmap.union,
and Fmap.disjoint. The details are not essential and thus omitted.
Parameter eval_free_sep : ∀ s1 s2 v p,
s1 = Fmap.union (Fmap.single p v) s2 →
Fmap.disjoint (Fmap.single p v) s2 →
eval s1 (val_free p) s2 val_unit.
Exercise: 3 stars, standard, optional (hoare_free)
Prove the Hoare triple for the operation free. Hint: exploit the lemma eval_free_sep.
Lemma hoare_free : ∀ H p v,
hoare (val_free (val_loc p))
((p ~~> v) \* H)
(fun _ ⇒ H).
Proof using. (* FILL IN HERE *) Admitted.
☐
Exercise: 1 star, standard, optional (triple_free)
Derive from the Hoare triple for the operation free the corresponding Separation Logic triple. Hint: adapt the proof of lemma triple_set.
Lemma triple_free : ∀ p v,
triple (val_free (val_loc p))
(p ~~> v)
(fun _ ⇒ \[]).
Proof using. (* FILL IN HERE *) Admitted.
☐
Write in a Reference
Parameter eval_set : ∀ m p v,
Fmap.indom m p →
eval m (val_set (val_loc p) v) (Fmap.update m p v) val_unit.
As for get, we first reformulate this lemma, to replace
references to Fmap.indom and Fmap.update with references
to Fmap.union, Fmap.single, and Fmap.disjoint, to
prepare for the introduction of separating conjunctions.
Lemma eval_set_sep : ∀ s1 s2 h2 p v1 v2,
s1 = Fmap.union (Fmap.single p v1) h2 →
s2 = Fmap.union (Fmap.single p v2) h2 →
Fmap.disjoint (Fmap.single p v1) h2 →
eval s1 (val_set (val_loc p) v2) s2 val_unit.
Proof using.
It is not necessary to follow through the details of this proof.
introv → → D. forwards Dv: Fmap.indom_single p v1.
applys_eq eval_set.
{ rewrite* Fmap.update_union_l. fequals.
rewrite* Fmap.update_single. }
{ applys* Fmap.indom_union_l. }
Qed.
The proof of the Hoare rule for set makes use of the following
fact (from LibSepFmap.v) about Fmap.disjoint: when one of its arguments
is a singleton map, the value stored in that singleton map is irrelevant.
Check Fmap.disjoint_single_set : ∀ p v1 v2 h2,
Fmap.disjoint (Fmap.single p v1) h2 →
Fmap.disjoint (Fmap.single p v2) h2.
Exercise: 5 stars, standard, optional (hoare_set)
Prove the lemma hoare_set. Hints:
- exploit the evaluation rule eval_set_sep presented above,
- exploit the lemma Fmap.disjoint_single_set presented above,
- to obtain an elegant proof, prefer invoking the lemmas hsingle_intro, hsingle_inv, hstar_intro, and hstar_inv, rather than unfolding the definitions of hstar and hsingle.
Lemma hoare_set : ∀ H v p v',
hoare (val_set (val_loc p) v)
((p ~~> v') \* H)
(fun _ ⇒ (p ~~> v) \* H).
Proof using. (* FILL IN HERE *) Admitted.
☐
Lemma triple_set : ∀ w p v,
triple (val_set (val_loc p) w)
(p ~~> v)
(fun _ ⇒ p ~~> w).
Proof using.
intros. intros H'. applys hoare_conseq.
{ applys hoare_set. }
{ xsimpl. }
{ xsimpl. }
Qed.
End ProofsContinued.
Module ProofsFactorization.
The proof that, e.g., triple_add is a consequence of
hoare_add follows the same pattern as many other similar
proofs, each time invoking the lemma hoare_conseq.
Thus, we could attempt to factorize this proof pattern.
The following lemma corresponds to such an attempt.
Exercise: 2 stars, standard, optional (triple_of_hoare)
Prove the lemma triple_of_hoare stated below.
Lemma triple_of_hoare : ∀ t H Q,
(∀ H', ∃ Q', hoare t (H \* H') Q'
∧ Q' ===> Q \*+ H') →
triple t H Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
Exercise: 2 stars, standard, optional (triple_add')
Prove that triple_add is a consequence of hoare_add by exploiting triple_of_hoare.
Lemma triple_add' : ∀ n1 n2,
triple (val_add n1 n2)
\[]
(fun r ⇒ \[r = val_int (n1 + n2)]).
Proof using. (* FILL IN HERE *) Admitted.
☐
End ProofsFactorization.
Module ProofsSameSemantics.
A general principle is that if t1 has the same semantics
as t2 (w.r.t. the big-step evaluation judgment eval),
then t1 and t2 satisfy the same triples.
Let us formalize this principle.
Two (closed) terms are semantically equivalent, written
trm_equiv t1 t2, if the two terms, when evaluated in the same
state, produce the same output.
Definition trm_equiv (t1 t2:trm) : Prop :=
∀ s s' v, eval s t1 s' v ↔ eval s t2 s' v.
Two terms that are equivalent satisfy the same Separation Logic
triples (and the same Hoare triples).
Indeed, the definition of a Separation Logic triple directly depends
on the notion of Hoare triple, and the latter directly depends
on the semantics captured by the predicate eval.
Let us formalize the result in three steps.
eval_like t1 t2 asserts that t2 evaluates like t1.
In particular, this relation holds whenever t2 reduces
in small-step to t1.
Definition eval_like (t1 t2:trm) : Prop :=
∀ s s' v, eval s t1 s' v → eval s t2 s' v.
For example eval_like t (trm_let x t x) holds, reflecting the
fact that let x = t in x reduces in small-step to t.
Lemma eval_like_eta_reduction : ∀ (t:trm) (x:var),
eval_like t (trm_let x t x).
Proof using.
introv R. applys eval_let R.
simpl. rewrite var_eq_spec. case_if. applys eval_val.
Qed.
It turns out that the symmetric relation eval_like (trm_let x t x) t
also holds: the term t does not exhibit more behaviors than those
of let x = t in x.
Lemma eval_like_eta_expansion : ∀ (t:trm) (x:var),
eval_like (trm_let x t x) t.
Proof using.
introv R. inverts R as. introv R1 R2.
simpl in R2. rewrite var_eq_spec in R2. case_if.
inverts R2. auto.
Qed.
We deduce that a term t denotes a program equivalent to
the program let x = t in x.
Lemma trm_equiv_eta : ∀ (t:trm) (x:var),
trm_equiv t (trm_let x t x).
Proof using.
intros. intros s s' v. iff M.
{ applys eval_like_eta_reduction M. }
{ applys eval_like_eta_expansion M. }
Qed.
If eval_like t1 t2, then any triple that holds for t1
also holds for t2.
Lemma hoare_eval_like : ∀ t1 t2 H Q,
eval_like t1 t2 →
hoare t1 H Q →
hoare t2 H Q.
Proof using.
introv E M1 K0. forwards (s'&v&R1&K1): M1 K0.
∃ s' v. split. { applys E R1. } { applys K1. }
Qed.
Lemma triple_eval_like : ∀ t1 t2 H Q,
eval_like t1 t2 →
triple t1 H Q →
triple t2 H Q.
Proof using.
introv E M1. intros H'. applys hoare_eval_like E. applys M1.
Qed.
It follows that if two terms are equivalent, then they admit
the same triples.
Lemma triple_trm_equiv : ∀ t1 t2 H Q,
trm_equiv t1 t2 →
triple t1 H Q ↔ triple t2 H Q.
Proof using.
introv E. unfolds trm_equiv. iff M.
{ applys triple_eval_like M. introv R. applys* E. }
{ applys triple_eval_like M. introv R. applys* E. }
Qed.
The reasoning rule triple_eval_like has a number of practical applications.
One, shown below, is to revisit the proof of triple_app_fun in a
much more succinct way, by arguing that trm_app (val_fun x t1) v2 and
subst x v2 t1 are equivalent terms, hence they admit the same behavior.
Lemma triple_app_fun : ∀ x v1 v2 t1 H Q,
v1 = val_fun x t1 →
triple (subst x v2 t1) H Q →
triple (trm_app v1 v2) H Q.
Proof using.
introv E M1. applys triple_eval_like M1.
introv R. applys eval_app_fun E R.
Qed.
Another application is the following rule, which makes it possible to
change the parenthesization of a sequence.
Exercise: 3 stars, standard, optional (triple_trm_seq_assoc)
Prove that the term t1; (t2; t3) satisfies the same triples as the term (t1;t2); t3.
Lemma triple_trm_seq_assoc : ∀ t1 t2 t3 H Q,
triple (trm_seq (trm_seq t1 t2) t3) H Q →
triple (trm_seq t1 (trm_seq t2 t3)) H Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
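As a final illustration (a sketch, not part of the original text, with a name chosen for the example), the eta-expansion equivalence established above can be transported to triples: any triple established for a term t also holds for the term let x = t in x.
Lemma triple_eta_expansion : ∀ (t:trm) (x:var) (H:hprop) (Q:val→hprop),
triple t H Q →
triple (trm_let x t x) H Q.
Proof using.
introv M. applys triple_eval_like M. applys eval_like_eta_reduction.
Qed.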
End ProofsSameSemantics.
Module MatchStyle.
Recall the specification for the function ref.
Parameter triple_ref : ∀ v,
triple (val_ref v)
\[]
(fun r ⇒ \∃ p, \[r = val_loc p] \* p ~~> v).
Its postcondition could be equivalently stated by using, instead
of an existential quantifier \∃, a pattern match.
Parameter triple_ref' : ∀ v,
triple (val_ref v)
\[]
(fun r ⇒ match r with
| val_loc p ⇒ (p ~~> v)
| _ ⇒ \[False]
end).
However, the pattern-matching presentation is less readable and
would be fairly cumbersome to work with in practice. Thus, we
systematically prefer using existentials.
End MatchStyle.
Historical Notes
WPsem: Semantics of Weakest Preconditions
Set Implicit Arguments.
From SLF Require Export Rules.
Implicit Types f : var.
Implicit Types b : bool.
Implicit Types v : val.
Implicit Types h : heap.
Implicit Types P : Prop.
Implicit Types H : hprop.
Implicit Types Q : val→hprop.
First Pass
- the use of wp greatly reduces the number of structural rules required, and thus reduces accordingly the number of tactics required for carrying out proofs in practice;
- the predicate wp will serve as a guide for setting up, in the next chapter, a "characteristic formula generator", which is the key ingredient at the heart of the implementation of the CFML tool.
- the notion of weakest precondition, as captured by wp,
- the reformulation of structural rules in wp-style,
- the reformulation of reasoning rules in wp-style,
- (optional) alternative, equivalent definitions for wp, and alternative proofs for deriving wp-style reasoning rules.
Notion of Weakest Precondition
Parameter wp : trm → (val→hprop) → hprop.
Parameter wp_equiv : ∀ t H Q,
(H ==> wp t Q) ↔ (triple t H Q).
The predicate wp t Q is called the "weakest precondition" for two
reasons: (1) it is a precondition, and (2) it is the weakest one,
as we explain next.
First, wp t Q is always a "valid precondition" for a
triple associated with the term t and the postcondition Q.
Lemma wp_pre : ∀ t Q,
triple t (wp t Q) Q.
Proof using. intros. rewrite <- wp_equiv. applys himpl_refl. Qed.
Second, wp t Q is the "weakest" of all valid preconditions
for the term t and the postcondition Q, in the sense that
any other valid precondition H, i.e. satisfying triple t H Q,
is such that H entails wp t Q.
Lemma wp_weakest : ∀ t H Q,
triple t H Q →
H ==> wp t Q.
Proof using. introv M. rewrite wp_equiv. applys M. Qed.
In other words, wp t Q is the "smallest" H satisfying
triple t H Q with respect to the order on heap predicates
induced by the entailment relation ==>.
There are several equivalent ways to define wp, as we show
in the optional contents of this chapter. It turns out that
the equivalence (H ==> wp t Q) ↔ (triple t H Q) fully
characterizes the predicate wp, and that it is all we need
to carry out formal reasoning.
For this reason, we postpone to further on in this chapter the
description of alternative, direct definitions for wp.
Structural Rules in Weakest-Precondition Style
The Frame Rule
Lemma wp_frame : ∀ t H Q,
(wp t Q) \* H ==> wp t (Q \*+ H).
The lemma is proved by exploiting the frame rule for triples
and the equivalence that characterizes wp.
Proof using.
intros. rewrite wp_equiv.
applys triple_frame. rewrite* <- wp_equiv.
Qed.
The connection with the frame rule might not be totally obvious.
Recall the frame rule for triples.
triple t H1 Q →
triple t (H1 \* H) (Q \*+ H)
Let us replace the form triple t H Q with the form
H ==> wp t Q. We obtain the following statement.
Lemma wp_frame_trans : ∀ t H1 Q H,
H1 ==> wp t Q →
(H1 \* H) ==> wp t (Q \*+ H).
If we exploit transitivity of entailment to eliminate H1, then we
obtain exactly wp_frame, as illustrated by the proof script below.
Proof using. introv M. xchange M. applys* wp_frame. Qed.
The Rule of Consequence
Lemma wp_conseq : ∀ t Q1 Q2,
Q1 ===> Q2 →
wp t Q1 ==> wp t Q2.
Proof using.
introv M. rewrite wp_equiv. applys* triple_conseq (wp t Q1) M. applys wp_pre.
Qed.
The connection with the rule of consequence is, again, not
totally obvious. Recall the rule of consequence for triples.
triple t H1 Q1 →
H2 ==> H1 →
Q1 ===> Q2 →
triple t H2 Q2
Let us replace the form triple t H Q with the form H ==> wp t Q.
We obtain the following statement:
Lemma wp_conseq_trans : ∀ t H1 H2 Q1 Q2,
H1 ==> wp t Q1 →
H2 ==> H1 →
Q1 ===> Q2 →
H2 ==> wp t Q2.
If we exploit transitivity of entailment to eliminate H1 and H2,
then we obtain exactly wp_conseq, as illustrated below.
Proof using.
introv M WH WQ. xchange WH. xchange M. applys wp_conseq WQ.
Qed.
The Extraction Rules
Parameter triple_hpure : ∀ t (P:Prop) H Q,
(P → triple t H Q) →
triple t (\[P] \* H) Q.
Replacing the form triple t H Q with H ==> wp t Q yields
the following statement.
Lemma triple_hpure_with_wp : ∀ t H Q (P:Prop),
(P → (H ==> wp t Q)) →
(\[P] \* H) ==> wp t Q.
The above implication is just a special case of the extraction
lemma for pure facts on the left of an entailment, named
himpl_hstar_hpure_l, whose statement is as follows.
(P → (H ==> H')) →
(\[P] \* H) ==> H'.
Instantiating H' with wp t Q proves triple_hpure_with_wp.
Proof using. introv M. applys himpl_hstar_hpure_l M. Qed.
A similar reasoning applies to the extraction rule for existentials.
Rule for Values
Parameter triple_val : ∀ v H Q,
H ==> Q v →
triple (trm_val v) H Q.
If we rewrite this rule in wp style, we obtain the rule below.
H ==> Q v →
H ==> wp (trm_val v) Q.
By exploiting transitivity of entailment, we can eliminate H.
We obtain the following statement, which reads as follows:
if you own a state satisfying Q v, then you own a state
from which the evaluation of the value v produces Q.
Lemma wp_val : ∀ v Q,
Q v ==> wp (trm_val v) Q.
Proof using.
intros. rewrite wp_equiv. applys* triple_val.
Qed.
We can verify that, when migrating to the wp presentation, we have
not lost any expressiveness. To that end, we prove that triple_val is
derivable from wp_val.
Lemma triple_val_derived_from_wp_val : ∀ v H Q,
H ==> Q v →
triple (trm_val v) H Q.
Proof using. introv M. rewrite <- wp_equiv. xchange M. applys wp_val. Qed.
Parameter triple_seq : ∀ t1 t2 H Q H1,
triple t1 H (fun v ⇒ H1) →
triple t2 H1 Q →
triple (trm_seq t1 t2) H Q.
Replacing triple t H Q with H ==> wp t Q throughout the rule
gives the statement below.
H ==> (wp t1) (fun v ⇒ H1) →
H1 ==> (wp t2) Q →
H ==> wp (trm_seq t1 t2) Q.
This entailment holds for any H and H1. Let us specialize it to
H1 := (wp t2) Q and H := (wp t1) (fun v ⇒ (wp t2) Q).
This leads us to the following statement, which reads as follows:
if you own a state from which the evaluation of t1 produces a
state from which the evaluation of t2 produces the postcondition
Q, then you own a state from which the evaluation of the sequence
t1;t2 produces Q.
Lemma wp_seq : ∀ t1 t2 Q,
wp t1 (fun v ⇒ wp t2 Q) ==> wp (trm_seq t1 t2) Q.
Proof using.
intros. rewrite wp_equiv. applys triple_seq.
{ rewrite* <- wp_equiv. }
{ rewrite* <- wp_equiv. }
Qed.
Exercise: 2 stars, standard, optional (triple_seq_from_wp_seq)
Check that wp_seq is just as expressive as triple_seq, by proving that triple_seq is derivable from wp_seq and from the structural rules for wp and/or the structural rules for triple.
Lemma triple_seq_from_wp_seq : ∀ t1 t2 H Q H1,
triple t1 H (fun v ⇒ H1) →
triple t2 H1 Q →
triple (trm_seq t1 t2) H Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
Rule for Functions
Parameter triple_fun : ∀ x t1 H Q,
H ==> Q (val_fun x t1) →
triple (trm_fun x t1) H Q.
The rule for functions follows exactly the same pattern as for values.
Lemma wp_fun : ∀ x t Q,
Q (val_fun x t) ==> wp (trm_fun x t) Q.
Proof using. intros. rewrite wp_equiv. applys* triple_fun. Qed.
A similar rule holds for the evaluation of a recursive function.
Lemma wp_fix : ∀ f x t Q,
Q (val_fix f x t) ==> wp (trm_fix f x t) Q.
Proof using. intros. rewrite wp_equiv. applys* triple_fix. Qed.
Parameter triple_if : ∀ b t1 t2 H Q,
triple (if b then t1 else t2) H Q →
triple (trm_if (val_bool b) t1 t2) H Q.
Replacing triple using wp entailments yields:
H ==> wp (if b then t1 else t2) Q →
H ==> wp (trm_if (val_bool b) t1 t2) Q.
which simplifies by transitivity to:
wp (if b then t1 else t2) Q ==> wp (trm_if (val_bool b) t1 t2) Q.
This statement corresponds to the wp-style reasoning rule
for conditionals. The proof appears next.
Lemma wp_if : ∀ b t1 t2 Q,
wp (if b then t1 else t2) Q ==> wp (trm_if (val_bool b) t1 t2) Q.
Proof using.
intros. rewrite wp_equiv. applys triple_if. rewrite* <- wp_equiv.
Qed.
Parameter triple_let : ∀ x t1 t2 Q1 H Q,
triple t1 H Q1 →
(∀ v1, triple (subst x v1 t2) (Q1 v1) Q) →
triple (trm_let x t1 t2) H Q.
The rule for trm_let x t1 t2 is very similar to that for
trm_seq, the only difference being the substitution of
x by v in t2, where v denotes the result of t1.
Lemma wp_let : ∀ x t1 t2 Q,
wp t1 (fun v1 ⇒ wp (subst x v1 t2) Q) ==> wp (trm_let x t1 t2) Q.
Proof using.
intros. rewrite wp_equiv. applys triple_let.
{ rewrite* <- wp_equiv. }
{ intros v. rewrite* <- wp_equiv. }
Qed.
Parameter triple_app_fun : ∀ x v1 v2 t1 H Q,
v1 = val_fun x t1 →
triple (subst x v2 t1) H Q →
triple (trm_app v1 v2) H Q.
The corresponding wp rule is stated and proved next.
Lemma wp_app_fun : ∀ x v1 v2 t1 Q,
v1 = val_fun x t1 →
wp (subst x v2 t1) Q ==> wp (trm_app v1 v2) Q.
Proof using.
introv EQ1. rewrite wp_equiv. applys* triple_app_fun.
rewrite* <- wp_equiv.
Qed.
A similar rule holds for the application of a recursive function.
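One possible statement and proof, mirroring wp_app_fun and relying on the rule triple_app_fix from the previous chapter, is sketched below (this sketch is not part of the original text at this point).
Lemma wp_app_fix : ∀ f x v1 v2 t1 Q,
v1 = val_fix f x t1 →
wp (subst x v2 (subst f v1 t1)) Q ==> wp (trm_app v1 v2) Q.
Proof using.
introv EQ1. rewrite wp_equiv. applys* triple_app_fix.
rewrite* <- wp_equiv.
Qed.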
Module WpHighLevel.
The lemma wp_equiv captures the characteristic property of wp,
that is, (H ==> wp t Q) ↔ (triple t H Q).
However, it does not give evidence that there exists a predicate wp
satisfying this equivalence. We next present one possible definition.
The idea is to define wp t Q as the predicate
\∃ H, H \* \[triple t H Q], which, read literally, is
satisfied by "any" heap predicate H which is a valid precondition
for a triple for the term t and the postcondition Q.
Definition wp (t:trm) (Q:val→hprop) : hprop :=
\∃ (H:hprop), H \* \[triple t H Q].
First, let us prove that wp t Q is itself a valid precondition,
in the sense that triple t (wp t Q) Q always holds (as asserted
by the lemma wp_pre).
To establish this fact, we have to prove:
triple t (\∃ H, H \* \[triple t H Q]) Q.
Applying the extraction rule for existentials gives:
∀ H, triple t (H \* \[triple t H Q]) Q.
Applying the extraction rule for pure facts gives:
∀ H, (triple t H Q) → (triple t H Q), which is true.
Second, let us demonstrate that the heap predicate wp t Q
is entailed by any precondition H that satisfies triple t H Q,
as asserted by the lemma wp_weakest.
Assume triple t H Q. Let us prove H ==> wp t Q, that is
H ==> \∃ H, H \* \[triple t H Q]. Instantiating the
H on the right-hand side as the H from the left-hand side
suffices to satisfy the entailment.
Recall that the properties wp_pre and wp_weakest were derivable
from the characteristic equivalence triple t H Q ↔ H ==> wp t Q.
Thus, to formalize the proofs of wp_pre and wp_weakest, all we
have to do is to establish that equivalence.
Exercise: 2 stars, standard, especially useful (wp_equiv)
Prove that the definition wp_high satisfies the characteristic equivalence for weakest preconditions.
Lemma wp_equiv : ∀ t H Q,
(H ==> wp t Q) ↔ (triple t H Q).
Proof using. (* FILL IN HERE *) Admitted.
☐
End WpHighLevel.
Equivalence Between all Definitions Of wp
Lemma wp_unique : ∀ wp1 wp2,
(∀ t H Q, (triple t H Q) ↔ (H ==> wp1 t Q)) →
(∀ t H Q, (triple t H Q) ↔ (H ==> wp2 t Q)) →
wp1 = wp2.
Proof using.
introv M1 M2. applys fun_ext_2. intros t Q. applys himpl_antisym.
{ rewrite <- M2. rewrite M1. auto. }
{ rewrite <- M1. rewrite M2. auto. }
Qed.
Recall that both wp_pre and wp_weakest are derivable from
wp_equiv. Let us also prove that, reciprocally, wp_equiv is derivable
from the conjunction of wp_pre and wp_weakest.
In other words, the property of "being the weakest precondition"
also uniquely characterizes the definition of wp.
Lemma wp_from_weakest_pre : ∀ wp',
(∀ t Q, triple t (wp' t Q) Q) → (* wp_pre *)
(∀ t H Q, triple t H Q → H ==> wp' t Q) → (* wp_weakest *)
(∀ t H Q, H ==> wp' t Q ↔ triple t H Q). (* wp_equiv *)
Proof using.
introv M1 M2. iff M.
{ applys triple_conseq M1 M. auto. }
{ applys M2. auto. }
Qed.
Module WpLowLevel.
The concrete definition for wp given above is expressed
in terms of Separation Logic combinators. In contrast to
this "high level" definition, there exists a more "low level"
definition, expressed directly as a function over heaps.
In that alternative definition, the heap predicate wp t Q
is defined as a predicate that holds of a heap h
if and only if the execution of t starting in exactly
the heap h produces the post-condition Q.
Technically, wp t Q can be defined as:
fun (h:heap) ⇒ triple t (fun h' ⇒ h' = h) Q.
In other words, the precondition requires the input heap
to be exactly h.
Definition wp (t:trm) (Q:val→hprop) : hprop :=
fun (h:heap) ⇒ triple t (fun h' ⇒ (h' = h)) Q.
Exercise: 4 stars, standard, optional (wp_equiv_wp_low)
Prove that this alternative definition of wp also satisfies the characteristic equivalence H ==> wp t Q ↔ triple t H Q. Hint: exploit the lemma triple_named_heap, which was established as an exercise in the appendix of the chapter Himpl.
Lemma wp_equiv_wp_low : ∀ t H Q,
(H ==> wp t Q) ↔ (triple t H Q).
Proof using. (* FILL IN HERE *) Admitted.
☐
End WpLowLevel.
Parameter triple_hexists : ∀ t (A:Type) (J:A→hprop) Q,
(∀ x, triple t (J x) Q) →
triple t (\∃ x, J x) Q.
Replacing triple t H Q with H ==> wp t Q yields the
lemma stated below.
Exercise: 1 star, standard, optional (triple_hexists_in_wp)
Prove the extraction rule for existentials in wp style.
Lemma triple_hexists_in_wp : ∀ t Q A (J:A→hprop),
(∀ x, (J x ==> wp t Q)) →
(\∃ x, J x) ==> wp t Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
Parameter triple_conseq_frame : ∀ H2 H1 Q1 t H Q,
triple t H1 Q1 →
H ==> H1 \* H2 →
Q1 \*+ H2 ===> Q →
triple t H Q.
Let us reformulate this rule using wp, replacing the
form triple t H Q with the form H ==> wp t Q.
Exercise: 2 stars, standard, especially useful (wp_conseq_frame_trans)
Prove the combined structural rule in wp style. Hint: exploit wp_conseq_trans and wp_frame.
Lemma wp_conseq_frame_trans : ∀ t H H1 H2 Q1 Q,
H1 ==> wp t Q1 →
H ==> H1 \* H2 →
Q1 \*+ H2 ===> Q →
H ==> wp t Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
Exercise: 2 stars, standard, especially useful (wp_conseq_frame)
Prove the concise version of the combined structural rule in wp style. Many proofs are possible.
Lemma wp_conseq_frame : ∀ t H Q1 Q2,
Q1 \*+ H ===> Q2 →
(wp t Q1) \* H ==> (wp t Q2).
Proof using. (* FILL IN HERE *) Admitted.
☐
Module WpIfAlt.
We have established the following rule for reasoning about
conditionals using wp.
Parameter wp_if : ∀ b t1 t2 Q,
wp (if b then t1 else t2) Q ==> wp (trm_if b t1 t2) Q.
Equivalently, the rule may be stated with the conditional around
the calls to wp t1 Q and wp t2 Q.
Exercise: 1 star, standard, optional (wp_if')
Prove the alternative statement of rule wp_if, either from wp_if or directly from triple_if.
Lemma wp_if' : ∀ b t1 t2 Q,
(if b then (wp t1 Q) else (wp t2 Q)) ==> wp (trm_if b t1 t2) Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
End WpIfAlt.
Definition of wp Directly from hoare
Module WpFromHoare.
Recall the definition of triple in terms of hoare.
Definition triple (t:trm) (H:hprop) (Q:val→hprop) : Prop :=
∀ (H':hprop), hoare t (H \* H') (Q \*+ H').
In what follows, we conduct the proofs by assuming a concrete definition
for wp, namely wp_high, which lends itself better to automated
proofs.
Definition wp (t:trm) := fun (Q:val→hprop) ⇒
\∃ H, H \* \[triple t H Q].
First, we check the equivalence between triple t H Q and H ==> wp t Q.
This proof is the same as wp_equiv from the module WpHighLevel
given earlier in this chapter.
Lemma wp_equiv : ∀ t H Q,
(H ==> wp t Q) ↔ (triple t H Q).
Proof using.
unfold wp. iff M.
{ applys* triple_conseq Q M.
applys triple_hexists. intros H'.
rewrite hstar_comm. applys* triple_hpure. }
{ xsimpl* H. }
Qed.
Second, we prove the consequence-frame rule associated with wp.
It is the only structural rule that is needed for working with
weakest preconditions.
Lemma wp_conseq_frame : ∀ t H Q1 Q2,
Q1 \*+ H ===> Q2 →
(wp t Q1) \* H ==> (wp t Q2).
The proof leverages the consequence rule for hoare triples,
and the frame property comes from the ∀ H' quantification
baked into the definition of triple.
Proof using.
introv M. unfold wp. xpull. intros H' N. xsimpl (H' \* H).
unfolds triple. intros H''. specializes N (H \* H'').
applys hoare_conseq N. { xsimpl. } { xchange M. }
Qed.
Third and last, we establish reasoning rules for terms in wp-style
directly from the corresponding rules for hoare triples.
The proof details are beyond the scope of this course.
The point here is to show that the proofs are fairly concise.
Lemma wp_val : ∀ v Q,
Q v ==> wp (trm_val v) Q.
Proof using.
intros. unfold wp. xsimpl. intros H'. applys hoare_val. xsimpl.
Qed.
Lemma wp_fun : ∀ x t Q,
Q (val_fun x t) ==> wp (trm_fun x t) Q.
Proof using.
intros. unfold wp. xsimpl. intros H'. applys hoare_fun. xsimpl.
Qed.
Lemma wp_fix : ∀ f x t Q,
Q (val_fix f x t) ==> wp (trm_fix f x t) Q.
Proof using.
intros. unfold wp. xsimpl. intros H'. applys hoare_fix. xsimpl.
Qed.
Lemma wp_if : ∀ b t1 t2 Q,
wp (if b then t1 else t2) Q ==> wp (trm_if b t1 t2) Q.
Proof using.
intros. unfold wp. xsimpl. intros H M H'.
applys hoare_if. applys M.
Qed.
Lemma wp_app_fun : ∀ x v1 v2 t1 Q,
v1 = val_fun x t1 →
wp (subst x v2 t1) Q ==> wp (trm_app v1 v2) Q.
Proof using.
introv EQ1. unfold wp. xsimpl. intros H' M. intros H''. applys* hoare_app_fun.
Qed.
Lemma wp_app_fix : ∀ f x v1 v2 t1 Q,
v1 = val_fix f x t1 →
wp (subst x v2 (subst f v1 t1)) Q ==> wp (trm_app v1 v2) Q.
Proof using.
introv EQ1. unfold wp. xsimpl. intros H' M. intros H''. applys* hoare_app_fix.
Qed.
Lemma wp_let : ∀ x t1 t2 Q,
wp t1 (fun v ⇒ wp (subst x v t2) Q) ==> wp (trm_let x t1 t2) Q.
Proof using. (* FILL IN HERE *) Admitted.
☐
End WpFromHoare.
Historical Notes
WPgen: Weakest Precondition Generator
Set Implicit Arguments.
From SLF Require Import WPsem.
Implicit Types f : var.
Implicit Types b : bool.
Implicit Types v : val.
Implicit Types h : heap.
Implicit Types P : Prop.
Implicit Types H : hprop.
Implicit Types Q : val→hprop.