This could potentially happen in other subtle contexts, so it is probably best to think of the general problem of the interaction of lambda expressions and heaps in the context of verification. In this particular example, I suppose that one could blame extensionality or a lack of strict positivity, but seeing the problem as such might be too restrictive for the expressiveness of the language, and might solve only one piece of the puzzle.
https://github.com/dafny-lang/dafny/issues/2178 is related, though it uses a variable instead of a constant. @jtristan, do you think these two are distinct issues?
It is worth thinking about carefully, but I think that indeed this issue, #2178, and #1379 might be symptoms of the same fundamental issue. If so, #2178 and #1379 are convoluted and make it look like the problem might have to do with some advanced features like traits or reads/requires, when it might in fact be more fundamental, as shown in this example.
OK. I propose to track these jointly here.
My opinion FWIW (admittedly biased by a desire to see my pet issue #2178 get attention):
The example in this issue is almost the same as the first example in #1379, with `app` inlined into `Main`. The only difference is that we add 1 to get a contradiction instead of just a stack overflow. The remark about traits in #1379 is non-essential. So I would just blame this on strict positivity and call it a duplicate of #1379. #2178 is different: it doesn't violate strict positivity and does rely on a mutable field holding a lambda that reads memory.
If we enforce strict positivity on class fields, we fix #1379 but not #2178. If we disallow storing lambdas that read memory on the heap, we fix #2178 (hopefully!) but not #1379. There may well be a more "general problem of interaction of lambda expressions and heaps in the context of verification", but to establish that, I'd want to see either an example that isn't fixed by the combination of the two already proposed changes or an idea for a single change that would fix both #1379 and #2178. As it stands, I don't see any evidence of commonality between #1379 and #2178 to justify combining them for issue tracking purposes.
I'll take some of that back. I take the point that we don't have a soundness proof of Dafny with lambdas and the heap, and at some point, we'll probably get more benefit out of working on such a proof and seeing what new restrictions we need to impose for it to go through than addressing individual unsound examples with plausible individual fixes as we find them. But will work on a soundness proof happen anytime soon? In the meantime, is it appropriate to go ahead and impose restrictions that fix some known unsound examples, even if they might not be part of the comprehensive solution we'd choose to complete a soundness proof and might even force users to do work adapting to breaking changes that could have been avoided if we went right to the comprehensive solution? Are we making the perfect the enemy of the good? And what's the best way to represent this whole situation in the issue tracker? I don't know. One option would be to keep all 3 issues open: #1379 and #2178 to track their specific examples and proposed fixes and this issue to track broader study of soundness of lambdas and the heap. That would convey no implication one way or the other about which approach might actually be pursued first.
One more thought I want to add: I'm hopeful that if we require strict positivity for const class fields (which makes sense because they're basically like datatype fields) and we require that mutable fields don't directly or indirectly contain lambdas that read memory, then we can get away without requiring strict positivity for mutable fields.
Waving my hands a bit, each of the unsound examples we've seen seems to involve a cycle of fields of some kind. So we have two cases:
- If the cycle consists entirely of const fields, it is ruled out by the strict positivity requirement on const fields.
- If the cycle includes a mutable field, then any lambda that traverses it must have that field in its `reads` set, and then the lambda itself can't be stored anywhere reachable from mutable memory, so there's no way to traverse the entire cycle and then reach the lambda again in order to cause an infinite recursion.

If a user needs a non-positive field, they can declare it mutable (even if they have no intent to actually mutate it), provided they can comply with the restriction on lambdas that read memory. @jtristan Does this address your concern about "expressiveness of the language"?
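For concreteness, here is a minimal sketch of what the two proposed rules would mean in code (a hypothetical class `C`; the comments describe the *proposed* behavior, not what current Dafny does):

```dafny
class C {
  // const g: C -> bool  // would be rejected: C occurs non-positively in a const field
  var f: C -> bool       // would stay legal: the field is mutable, and a `->` lambda
                         // cannot read the heap, so a lambda stored here cannot reach `f`
  constructor(f: C -> bool) {
    this.f := f;
  }
}
```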
Let me add a comment here, since you raise the valid point that we don't have a soundness proof for lambdas and for mutable fields.
Dafny has this principle that it normally detects potential cycles in methods and functions, and requires a decreases clause in that case to ensure termination.
In each of https://github.com/dafny-lang/dafny/issues/1379 and https://github.com/dafny-lang/dafny/issues/2178, the "lambda" (or function) is not checked for termination because Dafny (wrongly) assumes that the function it's calling is surely on a different termination cycle (`x.f(x)` for this issue and https://github.com/dafny-lang/dafny/issues/1379, `!r.target()` for https://github.com/dafny-lang/dafny/issues/2178).
If we were to write the problem always using a (potentially partially applied) function rather than a lambda in the code -- lambdas are just partially applied functions where the first inputs are implicitly given -- then the error seems more obvious:
```dafny
function method rec(x: Y): nat {
  1 + x.f(x) // Hey, Dafny does not consider this call as a potential cycle and does not issue termination warnings!
}
method Main() {
  var knot := new Y(rec);
  // ...
}
```
and for https://github.com/dafny-lang/dafny/issues/2178:
```dafny
function tf(r: Ref<() ~> bool>): () ~> bool
  reads r, r.target.reads()
{
  () reads r, r.target.reads() =>
    if r.target.requires() then
      !r.target() // Hey, possible non-termination!
    else false
}
```
So we have to assume that, for every call to a lambda stored in a field, the lambda could be the function itself, or a function in the group of mutually recursive functions of which this function is part, or one that points to it. It does not need to have the same signature as the function itself. But it requires a termination check.
Let's imagine that every lambda has a "decreases" clause that becomes a "decreases" ghost function field on it (one that returns the termination measure, a value in a set that has no infinite descending sequences).
Then, all the code above will fail because, since Dafny cannot recognize the lambda itself, it will add the termination check automatically:
```dafny
function method rec(x: Y): nat {
  1 + x.f(x) // Here there will be a check that says 0 <= x.f.decreases(x) < rec.decreases(x)
}
```
and that will obviously fail, the model indicating that if `x.f == rec`, then it does not terminate. But, on the other hand, it would become possible to add a precondition:
```dafny
function method rec(x: Y): nat
  requires x.f.decreases(x) < rec.decreases(x)
{
  1 + x.f(x) // Ok here.
}
```
Now, that function is no longer a `->` but a `~>`. Let's investigate how this would transform https://github.com/dafny-lang/dafny/issues/2178. By default, no decreases clause implies `decreases false`, and obviously `false < true`:
```dafny
method Knot() ensures false {
  var r := new Ref<() ~> bool>(() => false);
  var tf := () reads r, r.target.reads() => (
    if r.target.requires() then !r.target() // Error, cannot prove false <= r.target.decreases() < self.decreases() == false
    else false
  );
  r.target := tf;
  assert tf() == !tf();
  assert false;
}
```
Bingo, we have an error. The only way to solve this error would be to add a precondition to the closure that refers to the implicit `decreases()` clause (since this closure does not have any arguments). So, at the time we build the lambda, we need to ensure that the decreases expression actually decreases:
```dafny
method Knot() ensures false {
  var r := new Ref<() ~> bool>(() decreases 0 => false);
  var tf := () reads r, r.target.reads() decreases 1 requires r.target.decreases() < decreases() => (
    if r.target.requires() then !r.target() // ok
    else false
  );
  r.target := tf;
  assert tf() == !tf();
  assert false;
}
```
Now something interesting happens. The well-formedness check will ensure it's no longer possible to call `tf()` in the assert, because a precondition might not hold. And the precondition is that `r.target.decreases() < closure.decreases()`. However, because of the rewrite, we can now prove that `closure.decreases() == r.target.decreases()`. So it's normal that it does not hold: there is no value that will make it possible for us to call `tf()` and have it terminate.
Here are the potential problems of my approach:
- comparing decreases clauses might need a dedicated operator, something like `<~<` or `<_<`.
In this comment, I'm going to focus on #2178 because I'm convinced that #1379 should just be fixed by requiring strict positivity for const class fields.
My concern about Mikaël's proposal: How do you prevent the following without causing too much of a hassle in legitimate code?
```dafny
method Knot() ensures false {
  var r := new Ref<() ~> bool>(() decreases 0 => false);
  var tf := () reads r, r.target.reads()
    decreases r.target.decreases() + 1 // !!!
    // Now the precondition holds by construction; we could even omit the `requires` clause.
    requires r.target.decreases() < decreases() => (
      if r.target.requires() then !r.target()
      else false
    );
  r.target := tf;
  assert tf() == !tf();
  assert false;
}
```
Compare to `Map` in the standard library, where we'd very much like to be able to write something like the following to avoid forcing every caller to pass an extra argument just for the termination check:
```dafny
/* applies a transformation function on the sequence */
function method {:opaque} Map<T,R>(f: (T ~> R), s: seq<T>): (result: seq<R>)
  requires forall i {:trigger s[i]} :: 0 <= i < |s| ==> f.requires(s[i])
  decreases max(set i | 0 <= i < |s| :: f.decreases(i) + 1), |s| // <==
  ensures |result| == |s|
  ensures forall i {:trigger result[i]} :: 0 <= i < |s| ==> result[i] == f(s[i])
  reads set i, o {:trigger o in f.reads(s[i])} | 0 <= i < |s| && o in f.reads(s[i]) :: o
{
  if |s| == 0 then []
  else [f(s[0])] + Map(f, s[1..])
}
```
Some ideas that might work:
1. Prohibit `decreases` clauses from reading memory. I don't see an obvious way to change the example from #2178 to bypass this, but I haven't thought about it very hard. Are there legitimate use cases where this restriction would cause a hassle? I can imagine there might be some use for a helper function that is like `Map` but where the lambdas are read from mutable memory.
(Aside: I tested the `master` of Dafny as of this writing (195c21e6fe0e31c22581c37394929f2f7815faa4) and it seems to allow `decreases` clauses to read arbitrary memory even without a `reads` clause. This may not pose any soundness problem by itself, but it certainly seems inappropriate and is probably an oversight. Should I file a separate issue?)
2. Disallow references to `f.decreases` or `f.requires` outside a `requires` clause. We have to restrict `f.requires` as well as `f.decreases`, otherwise the restriction on `f.decreases` can be trivially bypassed like this:
```dafny
function requireDecreases(f: () ~> bool, n: nat): ()
  requires n == f.decreases() {
  ()
}
function getRequires(f: () ~> bool): nat {
  var n :| requireDecreases.requires(f, n);
  n
}
```
And then set `f.requires.decreases == f.requires` so that `f.requires` can't be called without checking `f.decreases` first, so that users can't cause unsound recursion purely within a `requires` clause. The restriction may be reasonable: IIUC, the reason Dafny exposes these auxiliary functions at all is so that `requires` clauses can be written for wrapper functions like `Map`. However, if there are wrapper functions that are complex enough that their `requires` clauses use helper functions that access `f.requires`, those would have to be restructured somehow.
However, a broader concern is that the proposal would require extra boilerplate to deal with `decreases` on pretty much all Dafny code that uses lambdas. I suspect users would find this very annoying. Other systems such as Coq achieve soundness (AFAIK) without this. The only feature Dafny adds that fundamentally changes the picture is mutable memory, so it would be great if we could have a restriction that just targets mutable memory and doesn't affect `Map` or other code patterns that are valid in systems like Coq. Assuming we adopt my proposal from #2178, can we extend the soundness proof of one of the existing systems to handle mutable memory? I know very little about this stuff, but I'd hope that we could at least take advantage of the idea that within a larger Dafny computation, no memory reads can occur during a call to a lambda that was read from memory, thus that call must terminate by the same argument as in the existing systems that don't have memory.
`f.decreases()` would be a lexicographic tuple and not just a single expression, so you cannot just add 1 to it. Moreover, Dafny has this interesting feature that, if you add an element to a lexicographic tuple, the lexicographic tuple decreases. So, if you were to give a meaning to `decreases r.target.decreases(), 1`, this value would be smaller than `r.target.decreases()`, so it would simply fail at construction time.
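For illustration, this is the same ordering rule that already applies to named mutually recursive functions (a small sketch with hypothetical `Outer`/`Inner` names):

```dafny
function Outer(n: nat): nat
  decreases n
{
  // ok: the callee's tuple (n, 1) extends the caller's (n), so it is considered smaller
  Inner(n)
}
function Inner(n: nat): nat
  decreases n, 1
{
  // ok: the callee's tuple is smaller on the first component (n - 1 < n)
  if n == 0 then 0 else Outer(n - 1)
}
```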
Knowing this, how would you change your example to create a problem?
Before I can give a definite answer, I might need you to fill in the rest of the details of your proposal, i.e.:
- What if the return type of the decreases clause is not compatible? Can we determine this statically? Can we even imagine a return type for the decreases clause at all? Should we restrict it to only one value?
But a first attempt at an answer: For every lexicographic tuple, there exists a greater tuple, right? (Or do we allow ⊤ by itself as a lexicographic tuple?) If so, it seems fragile to rely on there not being any way within the language to compute a greater tuple. For instance, I think a "let-such-that" would probably work if there were a way to make it deterministic.
In any case, my other point stands that regardless of the soundness of your proposal, I think it will be a pain for users of lambdas.
> What if the return type of the decreases clause is not compatible? Can we determine this statically? Can we even imagine a return type for the decreases clause at all? Should we restrict it to only one value?
I just tried the following experiment:
```dafny
function f(): int
  decreases true
{
  g()
}
function g(): int
  decreases 1
{
  f()
}
```
Dafny does not complain about type checking; it only says that it cannot prove that the expressions decrease. And that makes sense and that's OK. So this resolves the concern: the user needs to prove that the decreases clause decreases, which implicitly requires the verifier to find the axioms comparing two compatible types anyway.
> Can we even imagine a return type for the decreases clause at all? Should we restrict it to only one value?
We can probably circumvent this problem by having a special uninterpreted ghost predicate `decreasing(lex1, lex2)` that just expresses that the left lexicographic tuple is less than the right one. During resolution, `lex1` and `lex2` would be required to be of the form `F.decreases(x, y, z)`, where `F` has a function type. That way, this reasoning happens only in Boogie and we can use the decreases functions as usual.
> But a first attempt at an answer: For every lexicographic tuple, there exists a greater tuple, right? (Or do we allow ⊤ by itself as a lexicographic tuple?)
There is a single greatest lexicographic tuple, and it's the empty lexicographic tuple.
> In any case, my other point stands that regardless of the soundness of your proposal, I think it will be a pain for users of lambdas.

> However, a broader concern is that the proposal would require extra boilerplate to deal with decreases on pretty much all Dafny code that uses lambdas. I suspect users would find this very annoying.
As you stated yourself, if a lambda is not reading anything in memory, it won't need any decreases clause, because it cannot loop back into itself. You cannot create the example at the top of this page using a datatype either, because of positivity. But there are at least two other cases in which you will not need a decreases check at all: even if there are recursive calls to other lambdas or functions, we can prove we don't need a decreases clause:
- When we can determine that the fields we read statically are all scalar (and not other references or functions).
- When we don't directly invoke fields that are higher-order AND inner function calls don't read anything AND inner function parameters are not higher-order.
Otherwise, there might be latent soundness issues that users have to be aware of, so I'd argue we should make this breaking change if Dafny cannot guarantee termination.
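To illustrate the first case above, here is a minimal hypothetical sketch (my own `Counter` example, not from the thread) of a closure that reads only a scalar field and therefore poses no termination risk:

```dafny
class Counter {
  var count: int
}
method UseScalarClosure(c: Counter) {
  // The closure reads only a scalar field, so nothing it reads can hold
  // (or lead back to) the closure itself: no recursion is possible.
  var get := () reads c => c.count;
  var x := get();
}
```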
(I'm finally following up on this thread. I've been busy with real work!)
Hi Mikaël,
Re the first point, about soundness of the proposal: I think I can update my previous example to comply with the rules for lexicographic tuples. I'm assuming that `f.decreases(...)` returns some "lexicographic tuple" data type that I can compare with `==` and that supports the usual reasoning patterns (e.g., the constructor is injective); I'm not assuming that lexicographic tuples support any other operations. Even this is enough to check whether a lexicographic tuple consists of a single `nat` (using `exists` and a helper function to construct a lexicographic tuple from a single `nat`) and, if so, extract the `nat` (using let-such-that). From there, I can repeat essentially the same process that my previous example was informally trying to follow. It's easy enough to derive a contradiction directly from the `decreases` clause that I no longer bother actually calling the function, so it can return `()` instead of a boolean.
```dafny
function decreaseNat(n: nat): () decreases n { () }

method Knot() ensures false {
  var r := new Ref<() ~> ()>(() decreases 0 => ());
  var tf := () reads r, r.target.reads()
    decreases
      (
        if (exists n :: r.target.decreases() == decreaseNat.decreases(n))
        // The let-such-that is deterministic since we assume lexicographic tuple
        // construction is injective.
        then (var n :| r.target.decreases() == decreaseNat.decreases(n); n) + 1
        else 0
      )
    => ();
  // The `decreases` clause consists of a single `nat` no matter which branch of the `if` is taken.
  assert (exists n :: tf.decreases() == decreaseNat.decreases(n));
  r.target := tf;
  // Because r.target == tf.
  assert (exists n :: r.target.decreases() == decreaseNat.decreases(n));
  // That was exactly the `if` condition, so we know the `then` branch is taken.
  // Define `n` the same way as in the `then` branch:
  var n :| r.target.decreases() == decreaseNat.decreases(n);
  // Now the `decreases` clause simplifies to `n + 1`, so we should be able to deduce:
  assert tf.decreases() == decreaseNat.decreases(n + 1);
  // But r.target == tf, so:
  assert decreaseNat.decreases(n) == decreaseNat.decreases(n + 1);
  // By injectivity:
  assert n == n + 1;
  assert false;
}
```
Have I made a mistake? (Unfortunately, I can't easily test code that uses a language extension that hasn't yet been implemented.) If not, at what point would you block the above example?
Re the second point, about the amount of hassle the proposal might cause for legitimate code:
> But there are at least two other cases in which you will not need a decreases check at all: even if there are recursive calls to other lambdas or functions, we can prove we don't need a decreases clause:
> - When we can determine that the fields we read statically are all scalar (and not other references or functions).
> - When we don't directly invoke fields that are higher-order AND inner function calls don't read anything AND inner function parameters are not higher-order.
OK, I didn't realize from your original post that you intended to exempt some cases from the new restriction; that helps. And I agree we should close any soundness gaps. However, we don't know at this point that your proposal is the only possible way to do that; we still have my alternative proposal from #2178. We should compare them on the basis of (1) how confident we are that they actually solve the soundness problem and (2) how much of a hassle they would present to legitimate code. I'm not sure about (1); I believe John is studying it. For (2), I'm not familiar with the full range of real-world Dafny code that might be affected by any new restriction, but one example I think is important is `Map` in the standard library. My proposal does not require any changes to `Map`. How would it work under your proposal? Are there examples of legitimate code that your proposal handles more easily than mine?
Great and clever thinking. This is an elaborated version of your `tf.decreases() == r.target.decreases() + 1` where `r.target` is assigned to `tf`. You proved there is no way around it.
What comes to my mind now is that `decreases` would be like `requires`: it's a total function on the input domain, and as such, its "computation" (even if it's ghost) needs to terminate. So, back to my example, it should even trigger an error like this:
```dafny
method Knot() ensures false {
  var r := new Ref<() ~> bool>(() decreases 0 => false);
  var tf := () reads r, r.target.reads() decreases 1 requires r.target.decreases() < decreases() => (
                                      // ^ decreases expression might not decrease
    if r.target.requires() then !r.target() // ok
    else false
  );
  r.target := tf;
  assert tf() == !tf();
  assert false;
}
```
The only way to invoke `r.target.decreases()` would be to first prove that its decreases function decreases the current decreases expression, which means that decreases clauses, as total functions, would themselves have decreases expressions.
```dafny
method Knot() ensures false {
  var r := new Ref<() ~> bool>(() decreases 0 => false);
  var tf := () reads r, r.target.reads() decreases 1
    requires r.target.decreases.decreases() < decreases()
    requires r.target.decreases() < decreases() => (
    if r.target.requires() then !r.target() // ok
    else false
  );
  r.target := tf;
  assert tf() == !tf();
  assert false;
}
```
But then, we cannot guarantee that `r.target.decreases.decreases()` is not a recursive call itself unless it decreases one more expression, and I don't see how the iteration could end.
Now I'm convinced the easiest and the least disruptive way to fix this unsoundness for now (before we even consider a decreases clause on closures), after reading all of your proposals, is to disable storing lambdas that can read memory as a field of some class.
That way, termination can be checked at named function level as usual, which can be provable as usual.
The trait approach you suggest is still good (but we still have the `{:termination false}` unsoundness for traits).
> Now I'm convinced the easiest and the least disruptive way to fix this unsoundness for now (before we even consider a decreases clause on closures), after reading all of your proposals, is to disable storing lambdas that can read memory as a field of some class.
Sorry if the following is naive, as I haven't followed the details :/
I would love to understand precisely why lambdas behave differently from trait functions with regards to termination. Intuitively, lambdas and traits should be interchangeable, since lambdas can (well, should be able to) be desugared into instances of a class that stores the state captured by the lambda. I'm guessing the heart of the issue is that termination is proved against the spec of the trait for trait functions, whereas for lambdas we try to prove that the lambda can always be called, in any context?
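For what it's worth, here is a minimal sketch of that desugaring (hypothetical `Fn`/`AddK` names, my own illustration): the captured state becomes a field, and the closure's body becomes a trait function.

```dafny
trait Fn {
  function Call(x: int): int
    reads this
}
class AddK extends Fn {
  const k: int // the state captured by the closure `x => x + k`
  constructor(k: int) {
    this.k := k;
  }
  function Call(x: int): int
    reads this
  {
    x + k
  }
}
```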
Maybe it would be interesting to take a few real-world examples of code that uses `~>` and try to rewrite them with traits? This way we'd get a better sense of how useful this feature is. I personally don't have any code of that sort, unfortunately.
> I'm guessing the heart of the issue is that termination is proved against the spec of the trait for trait functions, whereas for lambdas we try to prove that the lambda can always be called, in any context?
That's basically my understanding from what I've seen so far, though I imagine there are others here who could speak with much more authority on this.
> Maybe it would be interesting to take a few real-world examples of code that uses `~>` and try to rewrite them with traits? This way we'd get a better sense of how useful this feature is. I personally don't have any code of that sort, unfortunately.
Can you clarify what Dafny feature you're trying to evaluate the feasibility of removing? The ability to use `~>` at all? I don't have a real-world example of `~>` to prove that it's useful at all, but if you'd like to see the amount of boilerplate it took me to replace a contrived example of `~>` with a trait, head over to https://github.com/dafny-lang/dafny/issues/2178#issuecomment-1146234416. In addition, if people want to use functions like `Map` with callbacks that read memory, we'd have to look into whether and how `Map` could be changed to work with traits. I imagine `Map` would need to take an additional ghost parameter to control its own `decreases` clause so the caller can ensure the `decreases` clause of `Map` is less than that of the caller but greater than that of the callback. And then, just as under Mikaël's proposal, we may need to add a first-class "lexicographic tuple" data type for this parameter because we don't know the element types of the callers and callbacks of `Map`.
Minor point:
> (but we still have the `{:termination false}` unsoundness for traits)

If the user opts to bypass termination checking, all bets are off for soundness; this isn't our problem. If we want, we can make `/noCheating` disallow `{:termination false}` if it doesn't already.
> I'm guessing the heart of the issue is that termination is proved against the spec of the trait for trait functions, whereas for lambdas we try to prove that the lambda can always be called, in any context?
A further remark: Lambdas can be called from any context, but I imagine that the biggest use case by far is to call them during execution of the same function that created them, via a helper such as `Map`, without extra termination-checking hassles.

An interesting sub-case is when a function `f` creates a lambda that calls `f` recursively: Dafny checks the decreases clause of the recursive call against the decreases clause of the caller as if there were no lambda in between. This commonly comes up during recursive traversals of data structures where one of the constructors for a node contains a sequence of nodes and `f` wants to map `f` across the elements of the sequence. Typically, the lambda needs an explicit precondition for Dafny to be able to verify that the recursive call to `f` is safe. I thought this idiom was widely known, but I couldn't find an existing web page describing it in a quick web search, so I'll go ahead and write up a contrived example here. I also have real-world examples in Amazon-internal code written by my team that I'd be happy to share if you contact me by a channel internal to Amazon.
```dafny
datatype IntTree = Internal(children: seq<IntTree>) | Leaf(x: int)

function mapSeq<A, B>(f: A --> B, s: seq<A>): seq<B>
  requires forall x | x in s :: f.requires(x) {
  seq(|s|, i requires 0 <= i < |s| => f(s[i]))
}

function sumOfIntSeq(s: seq<int>): int {
  if |s| == 0 then 0 else s[0] + sumOfIntSeq(s[1..])
}

function sumOfIntTree(t: IntTree): int
  decreases t {
  match t {
    case Leaf(x) => x
    case Internal(children) => sumOfIntSeq(mapSeq(
      t2 requires t2 < t => sumOfIntTree(t2), // <==
      children))
  }
}
```
The upshot for our work here: if we keep this feature for lambdas that don't read memory (which I sure hope we do), that may reduce the cost of also keeping it for lambdas that do read memory, subject to any other restrictions we impose for soundness.
For the record and future tests, a variant using arrays and datatypes:
```dafny
datatype Pack<T> = Pack(ghost c: T)

method m()
  ensures false
{
  var r := new (Pack<() ~> bool>)[1];
  r[0] := Pack(() => false);
  var tf := Pack(() reads r, r[0].c.reads => (
    if r[0].c.requires() then !r[0].c() else false
  ));
  r[0] := tf;
  assert r[0].c() == !r[0].c();
  assert false;
}
```
Wow, what a great example. I got inspired. Let me throw out a way to solve this unsoundness with the least disruption in the short term:
Prevent invocations of expressions of type `A ~> B` inside a closure that itself has a type `C ~> D`, with the error: "impossible to check termination because this closure could be pointing to the same closure."
Here is why I think it should work. Historically, we solved the termination problem between named functions by creating a call graph and checking termination between successive calls, even if these calls are made from closures declared inside of functions.
But when we look at the call `r[0].c()`, `c` is not the name of a function or method of some class. It's actually an indirection to a closure; the call is really `(r[0].c)()`. We cannot build a call graph that goes through `c`, because `c` itself is just an indirection to a function that is "dynamically dispatched". If `r[0]` were a `class X { function method c() { ... } }`, then there would be no problem: we would be able to compute the call graph when encountering `r[0].c()`, because `c` would be a static name, not an indirection.
Hence, if we encounter an expression that looks like `(B)(...)` in a closure that reads memory, and `B` is an expression whose type is a function type that also reads memory, we have no safeguard ensuring that that function is not the closure itself, as in all the examples above; the call would thus surely be skipped by the call graph. And with no "decreases" clause on closures (to keep things simple), it means we just cannot afford a nested recursive call to a closure that can read memory.
For example, we could do this check by inserting the following assertion:
```dafny
if r[0].c.requires() then
  assert (r[0].c).reads() == {}; // inserted automatically
  !r[0].c() else false
```
Now, why would such a restriction work in any case? If a call to a closure `(B)(...)` is such that `B` does not read memory, then there is no possibility that this function calls the enclosing closure, so there can't be a cycle there, and termination works as usual. QED.
Notes:
- If `c.requires()` is also reading memory, then this call would be prevented as well.
- We could lower the strong requirement if, in `(B)(...)`, the expression `B` computing a closure that reads memory does not itself read memory. This simply means that `B` was completely determined before the closure existed, and thus cannot call the closure itself. It would be a good way to enable clean refactoring.

I'm also curious about one thing: how common is it to store a ghost value of type `~>` in memory? My intuition is that it is not common at all, but I would be curious to hear from people who have seen more Dafny code than me.
> I'm also curious about one thing: how common is it to store a ghost value of type `~>` in memory? My intuition is that it is not common at all, but I would be curious to hear from people who have seen more Dafny code than me.
Good catch. I tried to remove the ghostness of this example, but then I needed to put the `requires` in the requires clause of the closure itself, and it's no longer possible to call it.
```dafny
datatype Pack<T> = Pack(c: T)

method m()
{
  var r := new (Pack<() ~> bool>)[1];
  r[0] := Pack(() => false);
  var tf := Pack(() reads r, r[0].c.reads requires r[0].c.requires() => (
    !r[0].c()
  ));
  r[0] := tf;
  print r[0].c(); // A precondition might not hold.
}
```
Nevertheless, I wouldn't feel safe assuming that such a lambda would always be ghost (and only checking in ghost code), because I can't prove it never happens in non-ghost code as of today.
I have an example of this unsoundness in non-ghost code, which in principle triggers an infinite loop:
```dafny
datatype Pack<T> = Pack(c: T) // not ghost anymore!

type ToBool = x: () ~> bool | x.requires()

method m()
  ensures false
{
  var r := new (Pack<ToBool>)[1];
  r[0] := Pack(() => false);
  var tf := Pack(() reads r, r[0].c.reads => (
    !r[0].c() // no need to verify that .requires() holds.
  ));
  r[0] := tf;
  var result := r[0].c(); // never finishes
  assert r[0].c() == !r[0].c();
  assert false;
}
```
I haven't tested this code yet, as I just came up with it after a trip spent thinking about it.
I don't think this example is going to work. In Dafny 3.9.1, the `x.requires()` in the subset type condition gives me the error "insufficient reads clause to invoke function". This is because subset type conditions aren't allowed to read the heap (otherwise a value of a subset type T could cease to be of type T after a heap write, leading to unsoundness), `x.requires.reads() == x.reads()`, and `x.reads()` being nonempty is the whole point of the example. If the subset type could express that `x.requires()` is true regardless of the heap state, then the example would probably work, but I don't think Dafny supports that kind of quantification over heap states in the context of an expression.
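To spell out the parenthetical, here is a hedged, hypothetical sketch (`Cell` and `PosCell` are made-up names) of why heap-reading subset type constraints would be unsound:

```dafny
class Cell { var x: int }
// type PosCell = c: Cell | c.x > 0  // rejected: the constraint reads the heap
method Illustrate(c: Cell)
  modifies c
{
  c.x := 1;
  // Here `c` would satisfy PosCell's constraint...
  c.x := -1;
  // ...but no longer here, even though no assignment retyped `c`.
}
```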
Yep I just checked. That's correct. I don't see a downside of putting this check in compiled code as well. It will make it easier to identify the issue before someone tries to call this closure.
Besides, if in a more complex context someone was looking at the error "Function precondition might not hold" in the closure, they might be tempted to think it's one more incompleteness limitation of Dafny and add an `assume {:axiom} r[0].c.requires();` like below. After all, there are only reads clauses, no requires clause, in all of the lambdas written, right?
```dafny
datatype Pack<T> = Pack(c: T)

method m()
  ensures false
{
  var r := new (Pack<(() ~> bool)>)[1];
  r[0] := Pack(() => false);
  var tf := Pack(() reads r, r[0].c.reads => (
    assume r[0].c.requires(); // Oh Dafny, com'on, I think you can do this.
    !r[0].c()
  ));
  r[0] := tf;
  var result := r[0].c(); // never finishes
  assert r[0].c() == !r[0].c();
  assert false;
}
```
As expected, Dafny doesn't complain as of today: it proves false and never terminates. I know, as soon as someone writes `assume`, they assume the risk. But we can do better at warning about the real risk, especially because we have to do this in ghost code. That's my best argument for why we will want to report the error about recursion through a lambda that can read memory, as described earlier, not only in ghost code but also in compiled code.
Asking in this context since @RustanLeino connected this issue and #3152: is there a reason we SHOULDN'T at least add the missing check for strict positivity for classes? I can see that it wouldn't be sufficient to address this larger class of unsoundness, but I assert that the missing check is simply a bug given the current design of the language.
Mikaël, re your latest proposed solution for the heap variant of the issue: I don't think it's going to work as well as you imagine. I still think my proposal (to prohibit storing `~>` lambdas on the heap) is better, because it's likely to be less disruptive to legitimate code and similar in implementation complexity and feasibility of proving soundness, as I'll explain below.

The first issue with your proposal that needs clarification: a call from one `~>` lambda to another could occur via one or more intermediate functions. How would you track this? At what point would you report an error? Here's an illustrative example to work through:
```dafny
class Ref<T> {
  ghost var target: T
  constructor(target_init: T)
    ensures target == target_init {
    target := target_init;
  }
}

trait AbstractLambdaCaller {
  function call(l: () ~> bool): bool reads *
}

method ProtoKnot(c: AbstractLambdaCaller) returns (ghost l: () ~> bool)
  ensures l.requires() && l() == c.call(l)
{
  var r := new Ref<() ~> bool>(() => false);
  l := () reads * => c.call(r.target);
  r.target := l;
}

class NegatingLambdaCaller extends AbstractLambdaCaller {
  function call(l: () ~> bool): bool reads * {
    if l.requires() then !l() else false
  }
}

method Knot() ensures false {
  var c: NegatingLambdaCaller := new NegatingLambdaCaller;
  var l := ProtoKnot(c);
  assert l.requires() && l() == c.call(l);
  assert l() == !l();
}
```
Let's suppose you have some reasonable solution to the above issue. Now consider the following code to apply a function `f` to every element of a sequence of sequences, using the `Map` function for sequences in the standard library:
```dafny
function TwoLevelMap<T,R>(f: T ~> R, s: seq<seq<T>>): seq<seq<R>>
  requires forall i, j | 0 <= i < |s| && 0 <= j < |s[i]| :: f.requires(s[i][j])
  reads set i, j, o | 0 <= i < |s| && 0 <= j < |s[i]| && o in f.reads(s[i][j]) :: o
{
  var f2 := (si)
    requires forall j | 0 <= j < |si| :: f.requires(si[j])
    reads set j, o | 0 <= j < |si| && o in f.reads(si[j]) :: o
    => Map(f, si);
  Map(f2, s)
}
```
I believe the basic version of your proposal would reject this code because `f2` calls `Map(f, si)`, which calls `f`, and `f2` and `f` are both `~>` lambdas. My proposal would allow the code because neither `f` nor `f2` is stored on the heap. You later say:
> We could lower the strong requirement if, in `(B)(...)`, the expression B computing a closure that reads memory does not itself read memory. This simply means that B was completely determined before the closure existed, and thus cannot call the closure itself.
Would this rule help in the `TwoLevelMap` example? If so, can you explain the details? In the body of `Map`, the expression `B` just consists of the function parameter `f`, so no heap read occurs during the evaluation of `B`. But the value of `f` could depend on heap reads previously performed by the caller. For example, the following minor variant of my original example from https://github.com/dafny-lang/dafny/issues/2178#issue-1251887659 still needs to be rejected:
```dafny
class Ref<T> {
  ghost var target: T
  constructor(target_init: T)
    ensures target == target_init {
    target := target_init;
  }
}

method Knot() ensures false {
  var r := new Ref<(()) ~> bool>(u => false);
  var tf := u reads r, r.target.reads(()) => (
    if Map(r.target.requires, [()])[0] then !Map(r.target, [()])[0] else false
  );
  r.target := tf;
  assert tf(()) == !Map(r.target, [()])[0];
  assert tf(()) == !tf(());
  assert false;
}
```
So I guess if we use this rule, the analysis would have to be interprocedural?
So: Unless the special rule applies, it seems that my proposal has an expressiveness advantage in the `TwoLevelMap` example. Do you have a reasonable example that your proposal allows and mine doesn't? If not, it would seem that my proposal has an expressiveness advantage overall. Granted, it may be that using `~>` lambdas at all is pretty rare, in which case differences in expressiveness with `~>` lambdas may be of low importance to the Dafny user base as a whole. But if the proposals are similar on our other design criteria (which I currently believe is the case), I'd think even a small expressiveness difference should tip the balance in favor of my proposal.
Regarding those other design criteria:
- Implementation complexity: I imagine the two proposals are likely to be similar once the details are worked out. Yours needs to detect a call from one `~>` lambda to another through intermediate functions, while mine needs to detect storage of a `~>` lambda on the heap through intermediate data structures.
- Soundness: Since I have essentially no experience with the techniques used to prove soundness of a system like Dafny directly (they look quite difficult), the approach I would try is to translate Dafny code to another well-studied system such as Coq, thereby proving soundness of Dafny relative to Coq. (I'm using Coq as an example because I'm familiar with it; a different system may be better.) One of the first things we have to do in any such translation is explicitly model the heap as a Coq datatype `H` that holds all the mutable fields of all Dafny objects that exist at a given time. (Dynamically adding objects to `H` is tricky, but I have some ideas about how to do it that I can write up if it becomes relevant.) A Dafny lambda of type `A ~> B` becomes a Coq lambda of type `H -> A -> B`. Then my proposed restriction that no `~>` lambda is stored on the heap just corresponds to the strict positivity condition for `H`. I imagine there are many other obstacles that would have to be overcome to complete the translation, but I don't imagine that my proposal is likely to leave us worse off in confronting those obstacles than the kinds of alternatives we've been discussing. Of course, if we wanted to prove the soundness of Dafny directly, I can't speak to the likely difficulty of that.
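As a hedged sketch of that correspondence (my own illustration, in Lean rather than Coq): a heap type whose fields may hold heap-reading lambdas is exactly a non-strictly-positive inductive type, which such systems reject.

```lean
-- Accepted: the stored lambda does not take the heap type as an argument,
-- i.e., it models a lambda that cannot read the heap.
inductive HOk : Type where
  | mk : (Nat → Bool) → HOk

-- Rejected by the strict positivity checker (kept commented so the file
-- compiles): H occurs non-positively in its own constructor.
-- inductive H : Type where
--   | mk : (H → Bool) → H
```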
Mikaël, do you have any other concerns about my proposal? I get the sense that you're dissatisfied with it, given that you keep suggesting alternatives.
I'm always in favor of soundness over my potential dissatisfaction, so I'm truly grateful you are putting so much effort into defending your proposal as well as finding flaws in mine; that's the best way we can find a solution we both agree on.
> Do you have a reasonable example that your proposal allows and mine doesn't?
The only reason I keep looking for alternatives to your solution is that I have seen, in industrial users' code, the storage of closures of type `A ~> B` as ghost const fields of traits that don't have any implementation (they model some externs), so your proposal would break their code with no obvious way to recover from it. Here is an example of code that would be accepted with my proposal but rejected with yours:
```dafny
trait {:extern} {:compile false} Mapper<B, B2(!new)> {
  ghost const valueConverter: B ~> B2
  ghost const underlyingMap: map<int, B>
  ghost const modelMap: map<int, B2>
  predicate Valid() {
    forall k :: k in underlyingMap ==> modelMap[k] == valueConverter(underlyingMap[k])
  }
}
```
Even though in this case it seems like you could put the value converter as an explicit function, what if it is provided by the constructor in derived classes? If code is not building other closures based on `valueConverter`, then my proposal accepts their code, whereas yours would reject it.
> A call from one `~>` lambda to another could occur via one or more intermediate functions. How would you track this? At what point would you report an error?
In your example, the faulty closure `() reads * => c.call(r.target)` sits at the edge between a LHS that is a named function that reads memory and a closure that reads memory. Since we don't know the implementation, I would argue `c.call` is the same as a field that contains a closure that reads memory (and might be initialized later, indirectly, to point to the same closure), so it would be prohibited (because it's not possible to determine the call graph statically), and that solves your first problem.
Quoting myself:
> We could lower the strong requirement if, in `(B)(...)`, the expression B computing a closure that reads memory does not itself read memory. This simply means that B was completely determined before the closure existed, and thus cannot call the closure itself. It would be a good way to enable clean refactoring.
I was thinking of situations like this:

```dafny
var f1: A ~> B := ...;
var f2: A ~> B := ..... f1(a);
```

I thought that, because f1 was statically built before f2 in the same context, there would be no way f1 could actually call f2 recursively. However, if f1 reads memory as well, it could be that f2 is assigned to that memory later, so that f1 can actually call f2 indirectly, as your example shows. So my note is not acceptable.
> I believe the basic version of your proposal would reject this code because f2 calls Map(f, si), which calls f, and f2 and f are both ~> lambdas. My proposal would allow the code because neither f nor f2 is stored on the heap. You later say:
The basic version of my proposal would not reject the `TwoLevelMap` code, because the function `Map` is named and defined, and `f` and `f2` are not directly called.
However, my proposal would actually have to (as a new necessary condition) reject the definition of `Map`, because it makes a call to `f` and has no way to determine whether, because `f` reads memory, it could actually invoke the `Map` function itself with `f` again on the same arguments. And this is possible if and only if `f` is actually stored in memory, because otherwise there is no way it could call itself again.
That remark, along with the problem of strict positivity (which arises even if there are no memory reads), makes me think that there might be no way around forbidding the storage of a closure that reads memory in a field. But that's not enough: John's example stores closures that read memory indirectly. How will you detect an attempt to store a closure in memory? Look at this example. I wonder if there are more complicated ones, but it illustrates that it might not be as simple as you think to detect lambdas stored on the heap. I think it's doable, though, because I don't see extremely hard counter-examples.
```dafny
datatype Pack<T> = Pack(ghost c: T ~> bool) // Not on the heap, yay!

method m()
  ensures false
{
  var r := new (Pack<()>)[1]; // Ouch, how do we detect simply that we are storing a lambda that reads the memory on the heap?
  r[0] := Pack(() => false);
  var tf := Pack(() reads r, r[0].c.reads => (
    if r[0].c.requires() then !r[0].c() else false
  ));
  r[0] := tf;
  assert r[0].c() == !r[0].c();
  assert false;
}
```
Forbidding closures that can read memory as fields of objects is a pity, though, because when we model JavaScript, it's extremely common to store closures that can read the heap on an object; we do this all the time. To me, although I might be wrong, it still feels like a closure stored on the heap should be equivalent to a function.
Here are alternatives:
- Because, to be able to invoke a lambda reading memory recursively, one has to prove that its `.requires()` holds, we could also prevent storing lambdas that read the heap only if they are ghost. That would not solve the use case I had in mind, but it would not prevent using lambdas that read memory in compiled code.
- Double ghost. Could it be that ghost code is not allowed to call double-ghost code? It looks like if `r[0].c` is ghost, then `r[0].c.requires` is "double ghost", meaning we shouldn't have the right to call it in ghost code. But that's not a real argument, just a gut feeling.
While we discuss potential solutions, I would also like to understand what the fundamental problem is. Note that non-termination does not, in general, lead to inconsistencies. For example, the following code is valid and reasonable Dafny code:
```dafny
method f(x: nat) returns (y: nat)
  decreases *
  ensures false
{
  y := f(x);
}
```
Because of the `decreases *` clause, we only get partial correctness: if the program terminates, then it will satisfy the postcondition. Note that any method that calls f must have a `decreases *` clause.
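A minimal sketch of that propagation rule (`g` is a hypothetical caller):

```dafny
method g(x: nat) returns (y: nat)
  decreases * // required, because g calls the possibly non-terminating f
{
  y := f(x);
}
```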
What we do need to be careful about is that any term that can be used as part of a specification must be strongly normalizing. f is not a problem, because Dafny forbids using f as a term in a specification. Likewise, creating a Landin's knot is not inherently problematic. The actual problem is when it ends up being used as part of a specification. So, preventing Landin's knot from being defined might be a solution, but perhaps it is also worth considering whether we should expect code that defines a Landin's knot to come with a `decreases *` clause.
What ways do you think we could prevent Landin's knot from being defined?
Also, you might be suggesting something along the lines of adding `decreases *` to closures that can read the heap? In this case, `decreases *` would make a closure immediately and irremediably opaque, right? That way, we wouldn't be able to prove false in the cases above.
Q1: the best solution I have seen so far is what Matt has been sketching.
Q2: yes, that seems like an interesting idea to flesh out.
Another example from Rustan, interesting because of the different signature for the constructor:
```dafny
class Y {
  const f: Y -> nat
  constructor()
    ensures this.f == ((x: Y) => 1 + x.f(x))
  {
    this.f := (x: Y) => 1 + x.f(x);
  }
}
method Main() {
  var knot := new Y();
  var a := knot.f(knot);
  assert a == knot.f(knot);
  assert a == 1 + knot.f(knot);
  assert knot.f(knot) == 1 + knot.f(knot);
  assert 0 == 1;
  assert false;
}
```
Still catching up on this thread. But here is a verified proof of `false` that adapts the example in https://leino.science/papers/krml280.html for classes (in initial notation). Of course, it is very similar to the example(s) from John.
```dafny
class D {
  const f: D -> bool
  constructor(f: D -> bool)
    ensures this.f == f
  {
    this.f := f;
  }
}
method Main() {
  var g := (d: D) => !d.f(d);
  var dd := new D(g);
  calc {
    g(dd);
  ==
    !dd.f(dd);
  == { assert dd.f == g; }
    !g(dd);
  }
  assert false;
}
```
@mattmccutchen-amazon how would your proposal deal with the case above? It's a lambda that neither reads the heap nor has preconditions.
I actually looked at this example, and it's very interesting how it's possible to tweak it to encode mutually recursive functions without the need for any decreases clauses.
```dafny
class Y {
  const even: Y -> int -> bool
  const odd: Y -> int -> bool
  constructor()
    ensures even == ((y: Y) => (x: int) =>
      if x == 0 then true else if x > 0 then y.odd(y)(x - 1) else y.odd(y)(x + 1))
    ensures odd == ((y: Y) => (x: int) =>
      !y.even(y)(x))
  {
    even := (y: Y) => (x: int) =>
      if x == 0 then true else if x > 0 then y.odd(y)(x - 1) else y.odd(y)(x + 1);
    odd := (y: Y) => (x: int) =>
      !y.even(y)(x);
  }
}
method Main() {
  var y := new Y();
  assert y.even(y)(4);
}
```
With regular predicates, you would have to manually provide decreases clauses to check termination 😱
```dafny
predicate even(x: int)
  decreases if x > 0 then x else -x, 1
{
  if x == 0 then true else if x > 0 then odd(x - 1) else odd(x + 1)
}
predicate odd(x: int)
  decreases if x > 0 then x else -x
{
  !even(x)
}
```
Oh actually, it's another instance of https://github.com/dafny-lang/dafny/issues/1379 which has a separate fix, i.e. checking the type cardinality.
In the following example, we prove that 0 equals 1 by defining a diverging function. For explanatory purposes, the proof is much more detailed than it needs to be. The idea is to make a recursive call not syntactically obvious by defining a lambda expression that calls a method that has been defined as the lambda expression.
```dafny
class Y {
  const f: Y -> nat
  constructor(f: Y -> nat)
    ensures this.f == f
  {
    this.f := f;
  }
}
method Main() {
  var knot := new Y((x: Y) => 1 + x.f(x));
  var a := knot.f(knot);
  assert a == knot.f(knot);
  assert a == 1 + knot.f(knot);
  assert knot.f(knot) == 1 + knot.f(knot);
  assert 0 == 1;
  assert false;
}
```
@jtristan's code snippet in the style of https://github.com/dafny-lang/dafny/issues/2750#issuecomment-1402514795, i.e. https://leino.science/papers/krml280.html:
```dafny
class D {
  const f: D -> nat
  constructor(f: D -> nat)
    ensures this.f == f
  {
    this.f := f;
  }
}
method Main() {
  var g := (d: D) => 1 + d.f(d);
  var dd := new D(g);
  calc {
    g(dd);
  ==
    1 + dd.f(dd);
  == { assert dd.f == g; }
    1 + g(dd);
  }
  assert false;
}
```
Another variation of the previous example:
```dafny
trait T {
  const f: D -> bool
}
class D extends T {
  constructor(f: D -> bool)
    ensures this.f == f
  {
    this.f := f;
  }
}
method Main()
{
  var g := (d: D) => !d.f(d);
  var dd := new D(g);
  calc {
    g(dd);
  ==
    !dd.f(dd);
  == { assert dd.f == g; }
    !g(dd);
  }
  assert false;
}
```
In https://leino.science/papers/krml280.html, Rustan points out that in F* the problem is solved by omitting the injectivity property of the datatype constructor `Ctor: (D -> bool) -> D`. (Indeed, it is well-known that there is no injection from Pow(X) to X; in other words, |P(X)| > |X|.) Here is a snippet that illustrates this aspect further. Note that the existence of a left-inverse implies injectivity.
```dafny
type D
function G(d: D): (D -> bool)
function F(g: D -> bool): D
lemma Inverse(g: D -> bool)
  ensures G(F(g)) == g
method Main() {
  var g := (d: D) => !G(d)(d);
  var dd := F(g);
  calc {
    g(dd);
  ==
    !G(dd)(dd);
  == { Inverse(g); }
    !g(dd);
  }
  assert false;
}
```
Another version, which generalises the problem from `nat` and `bool` to a generic type `T` that comes with a function `H` establishing that the type is inhabited by at least 2 elements. For type `T = nat`, we have `H(x) = 1 + x`, and for type `T = bool` we have `H(x) = !x`.
```dafny
type D
type T
function method H(s: T): (t: T)
  ensures s != t
function G(d: D): (D -> T)
function F(g: D -> T): D
lemma Inverse(g: D -> T)
  ensures G(F(g)) == g
method Main() {
  var g := (d: D) => H(G(d)(d));
  var dd := F(g);
  calc {
    g(dd);
  ==
    H(G(dd)(dd));
  == { Inverse(g); }
    H(g(dd));
  }
  assert false;
}
```
Watch out, I don't think this last one is a valid example of a deficiency with Dafny. All it is showing is that if you make inconsistent assumptions, then you can prove false.
```dafny
type D
type T
function method H(s: T): (t: T)
  ensures s != t
function G(d: D): (D -> T)
function F(g: D -> T): D
lemma Inverse(g: D -> T)
  ensures G(F(g)) == g
lemma foo()
  ensures false
{
  var g := (d: D) => H(G(d)(d));
  var dd := F(g);
  calc {
    g(dd);
  ==
    H(G(dd)(dd));
  == { Inverse(g); }
    H(g(dd));
  }
}
```
To clarify, here is one of the many instantiations of the above (abstract) assumptions, which are sufficient to derive an inconsistency.
```dafny
class D {
  const f: D -> bool
  constructor(f: D -> bool)
    ensures this.f == f
  {
    this.f := f;
  }
}
type T = bool
function method H(s: T): (t: T)
  ensures s != t
{
  !s
}
function method G(d: D): (D -> T) {
  d.f
}
method F(g: D -> T) returns (d: D)
  ensures G(d) == g
{
  d := new D(g);
}
method Main() {
  var g := (d: D) => H(G(d)(d));
  var dd := F(g);
  calc {
    g(dd);
  ==
    H(G(dd)(dd));
  ==
    H(g(dd));
  }
  assert false;
}
```
I haven't followed all the details - I know Marco Servetto had a discussion of this at the Unsound workshop at OOPSLA in Auckland last month --- https://unsound-workshop.org
I wanted to reiterate @mattmccutchen-amazon's point here https://github.com/dafny-lang/dafny/issues/2750#issuecomment-1249718110
> In the meantime, is it appropriate to go ahead and impose restrictions that fix some known unsound examples, even if they might not be part of the comprehensive solution we'd choose to complete a soundness proof and might even force users to do work adapting to breaking changes that could have been avoided if we went right to the comprehensive solution?
I don't think it's just "appropriate" - it's essential to at least have the option.
I'm working with Dafny and for various reasons may be pushing the envelope (or not, I don't know enough to know). I know there's no proof, but for me often it's critical that Dafny not give out false positives - or else why bother? Especially if I'm doing weird s***. I don't want to wander accidentally into unsoundness. Where there are known unsoundnesses(sic) I need to know. If there are overly-conservative checks that rule out lots of stuff, well I'd rather either turn the checks off, or rewrite my code to avoid the problem, than have Dafny tell me something verifies when it doesn't.
If there are too many restrictions, or they're too tight, turn them off by default, or give people the option of {:termination false} or whatever.
(and btw, since I primarily work with objects, if I needed to use lambdas - so far I don't - they'll need to be able to "read the heap")
You say that it is essential to have the option and I agree, but there's actually more than 1 option, and so I am curious to hear which one you would prefer. For concreteness, let's consider the very first example in this thread. The first question is: what is the problem? Is the problem that the proof should have been rejected because the method was not declared with a decreases clause and/or we consider it to be invalid, or is the problem that Dafny should have required the declaration of a decreases clause?
With that in mind, the option we could provide could be one of two things:
- reject such programs outright, or
- force the user to be explicit (e.g., with a `decreases *` clause) that the specification is partial.

Both have pros and cons, and far-reaching consequences, but I am curious to know which one you would find more satisfying and why.
I'm not sure I can see the difference between those two options: forcing "the user" to be explicit that the specification is partial seems to require rejecting the program if they're not so explicit..?
Or do you mean banning even partial specifications with no opt-out?
Hopefully there's no need to choose (given the plethora of options, attributes, command-line switches, and now cli verb options Dafny as grown :-). But if I have to choose, for what I'm doing, I'd rather err on the side of soundness.
(I want to send papers in to POPL* saying I've got a Dafny model that verifies - attaching the code as an artefact - but not having to somehow identify, enumerate, and construct arguments to dismiss a list of unsoundnesses...)
*POPL - not really POPL (that would have to be Coq) but OOPSLA at least.
Two more examples.
First, an example that hints that we could have arbitrarily long cycles:
```dafny
class Ref {
  ghost var hogp: () ~> int
}
ghost method LogicKnot1'(r1: Ref, r2: Ref)
  modifies r1, r2
  ensures false
{
  r1.hogp := () reads r2, r2.hogp.reads() => if r2.hogp.requires() then 1 + r2.hogp() else 0;
  r2.hogp := () reads r1, r1.hogp.reads() => if r1.hogp.requires() then 1 + r1.hogp() else 0;
}
```
Second, an example that shows that it is possible to create a code knot even without a constructor.
```dafny
class Test {
  var f: () ~> bool
  ghost predicate Invariant()
    reads this, f.reads()
  {
    f.requires()
  }
  method update()
    modifies this
    requires Invariant()
    ensures false
  {
    f := () reads this, f.reads() requires f.requires() => !f();
    assert f.requires() == old(f).requires();
    var b := f();
  }
}
```
Another simple example, useful to keep in mind when devising a solution:
```dafny
class Ref {
  ghost var hogp: () ~> int
}
ghost method LogicKnot1(r1: Ref, r2: Ref)
  modifies r1
  ensures r1.hogp == (() reads r2, r2.hogp.reads() => if r2.hogp.requires() then 1 + r2.hogp() else 0)
{
  r1.hogp := () reads r2, r2.hogp.reads() => if r2.hogp.requires() then 1 + r2.hogp() else 0;
}
ghost method LogicKnot2(r1: Ref, r2: Ref)
  modifies r2
  ensures r2.hogp == (() reads r1, r1.hogp.reads() => if r1.hogp.requires() then 1 + r1.hogp() else 0)
{
  r2.hogp := () reads r1, r1.hogp.reads() => if r1.hogp.requires() then 1 + r1.hogp() else 0;
}
ghost method Main()
  ensures false
{
  var r1 := new Ref;
  var r2 := new Ref;
  LogicKnot1(r1, r2);
  LogicKnot2(r1, r2);
}
```
A solution was implemented in #4348 . It is possible to relax the checks to reject less programs, but it's unclear to me that there is any demand for such expressiveness at this point.