AdaCore / ada-spark-rfcs

Platform to submit RFCs for the Ada & SPARK languages

[feature] Allow handling of Storage_Error in Spark #78

Closed mhatzl closed 3 years ago

mhatzl commented 3 years ago

Summary

Provide a possibility to handle Storage_Error in Spark_Mode.

Motivation

Currently, exception handling is not allowed in SPARK. Since the absence of all run-time errors except Storage_Error can be proven, I don't see a way to write a complete program in Spark_Mode.
If one writes most code in Spark_Mode and handles a Storage_Error only at the program entry point, one might lose the possibility to safely recover from the exception.

See: How to check for Storage_Error in Spark_Ada for further motivation

Caveats and alternatives

Adding full exception handling support is probably beyond any achievable scope, but perhaps an alternative can be found, since only one specific exception is involved.
As an alternative, some kind of size check could be used to compare the available stack/heap size with the size to be allocated, but this might result in a lot of if-checks.

yannickmoy commented 3 years ago

Storage_Error is related to the hardware/OS configuration, and so it cannot be analyzed by a tool like GNATprove, which takes only the software as input. It's similar to other things at the boundary of software, like the addresses you use for certain memory-mapped variables. For storage errors, it's rarely the case that you can recover at an arbitrary point in the code; could you tell us more about your use case?

glacambre commented 3 years ago

@yannickmoy I don't know how that relates to @mhatzl's use case, but being able to recover from allocation failures is very important in the Linux kernel and is in fact one of the major arguments against using Rust in that setting (see https://lkml.org/lkml/2021/4/14/1099 ).

yannickmoy commented 3 years ago

@glacambre In that case, the solution outlined in How to check for Storage_Error in Spark_Ada seems to be adequate, with minimal use of SPARK_Mode=>Off just inside the allocation function allowed to fail. Note that you could also return a possibly null pointer instead of a discriminated record if you prefer, which would mimic the C interface for allocation. And GNATprove will check that you never dereference a null pointer in the caller.
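For reference, the discriminated-record shape mentioned above could look roughly like the following. This is only a sketch of the idea; the names (Alloc_Result, Try_Alloc, Int_Acc) are illustrative and not taken from the linked post.

```ada
package Alloc_Result_Example
  with SPARK_Mode
is
   type Int_Acc is access Integer;

   --  The discriminant records whether the allocation succeeded,
   --  so callers must check Success before touching Ptr.
   type Alloc_Result (Success : Boolean := False) is record
      case Success is
         when True  => Ptr : Int_Acc;
         when False => null;
      end case;
   end record;

   function Try_Alloc return Alloc_Result;

end Alloc_Result_Example;
```

Returning a possibly null Int_Acc directly, as suggested above, removes the wrapper record and mimics the C malloc interface, at the price of relying on GNATprove's null-dereference checks in every caller instead of an explicit Success flag.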

mhatzl commented 3 years ago

Thanks for your fast responses. As for the use case, I was thinking about programming an unbounded (generic) AVL tree in SPARK at gold/platinum level as my bachelor thesis, since I have not yet found any unbounded implementation online (not yet sure how "doable" this is). Since I want the tree to be unbounded, I have to use dynamic allocation.
And with dynamic allocation, I was looking for a way to handle the case when there is no memory left on the heap, which got me to the Storage_Error exception. The example given at How to check for Storage_Error in Spark_Ada is a good way to solve this.
However, I thought it would be nice to have a way to write everything in Spark_Mode. If I am correct, with SPARK and GNATstack a stack overflow cannot happen, which would leave the new keyword as the only place where a Storage_Error could occur.

Would it be possible to have specific code blocks in which only allocations are made, and in which handling Storage_Error would be allowed with Spark_Mode?

As my use case is purely personal, it has absolutely no priority. I just wanted to better understand Ada and Spark and hear other opinions.

Fabien-Chouteau commented 3 years ago

Would it be possible to have specific code blocks, where only allocations are made and in this code block,

I was going to suggest that: you can put the allocation/deallocation in a specific part of your code that will handle Storage_Error.

handling Storage_Error would be allowed with Spark_Mode?

I don't think so, but mixing SPARK and Ada code is actually a good strategy for this kind of situation. You can probably have a Spark_Mode => On specification for the subprogram that handles allocation and then just the implementation (body) in Spark_Mode => Off. This implementation will have to be verified through unit testing, for instance, but it should be a very small piece of code, so that is not too much of an issue.

Something like: [edit: as suggested by Claire, only the implementation of Alloc has to be in SPARK_Mode => Off] [edit2: fix Alloc as suggested by @mhatzl] [edit3: make it generic] [edit4: fix Free () data flow]

generic
   type Object is limited private;
   type Name is access Object;
package SPARK_Alloc
  with SPARK_Mode
is

   function Alloc return Name;

   procedure Free (Ptr : in out Name)
     with Post    => Ptr = null,
          Depends => (Ptr  => null,
                      null => Ptr);

end SPARK_Alloc;

with Ada.Unchecked_Deallocation;

package body SPARK_Alloc
  with SPARK_Mode
is

   function Alloc return Name is
      pragma SPARK_Mode (Off); -- Only the allocation has to be in SPARK off
   begin
      return new Object;
   exception
      when Storage_Error =>
         return null;
   end Alloc;

   procedure Free (Ptr : in out Name) is
      procedure Dealloc is new Ada.Unchecked_Deallocation (Object, Name);
   begin
      Dealloc (Ptr);
   end Free;
end SPARK_Alloc;

Then when you use this allocator in your code GNATprove will make sure that you don't de-reference a null access. For instance, this:

with Ada.Text_IO; use Ada.Text_IO;
with SPARK_Alloc;

procedure Main is
   pragma SPARK_Mode (On);

   type Int_Acc is access Integer;
   package SPARK_Alloc_Int is new SPARK_Alloc (Integer, Int_Acc);
   Ptr : constant Int_Acc := SPARK_Alloc_Int.Alloc;
begin
   Ptr.all := 42;
   Put_Line (Ptr.all'Img);
end Main;

will produce a GNATprove message:

main.adb:9:04: medium: memory leak might occur at end of scope[#4]
    9 |   Ptr : constant Int_Acc := SPARK_Alloc_Int.Alloc;
      |   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

main.adb:11:08: medium: pointer dereference check might fail[#0]
   11 |   Ptr.all := 42;
      |   ~~~~^~~
  e.g. when Ptr = null[#1]

So SPARK will make sure that you properly handle the case where the allocation fails.
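For completeness, one way to discharge both messages is to test the result for null and to free the allocation before the end of its scope. This is only a sketch building on the SPARK_Alloc package above; Main_Checked is an illustrative name.

```ada
with Ada.Text_IO; use Ada.Text_IO;
with SPARK_Alloc;

procedure Main_Checked is
   pragma SPARK_Mode (On);

   type Int_Acc is access Integer;
   package SPARK_Alloc_Int is new SPARK_Alloc (Integer, Int_Acc);

   --  Ptr must be a variable (not a constant) so that Free can set
   --  it back to null, as required by its postcondition.
   Ptr : Int_Acc := SPARK_Alloc_Int.Alloc;
begin
   if Ptr /= null then          --  discharges the dereference check
      Ptr.all := 42;
      Put_Line (Ptr.all'Img);
      SPARK_Alloc_Int.Free (Ptr);  --  discharges the leak check
   end if;
end Main_Checked;
```

In the else branch Ptr is already null, so no memory can leak there either.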

mhatzl commented 3 years ago

Thanks again for the response. I will try to use this approach in my implementations.

Since handling of Storage_Error inside Spark will not be implemented, the issue can be closed from my point of view.

yannickmoy commented 3 years ago

Thanks @mhatzl for the feedback, I'll close that Issue then.

mhatzl commented 2 years ago

I have implemented and tested the sample as described by @Fabien-Chouteau and noticed that the allocation was inside the declare block.
To be able to handle the exception, the allocation must be moved inside the begin part, like

...
declare
    Ptr : Int_Acc;
begin
    Ptr := new Integer;
    return Ptr;
exception
...

The issue can remain closed; this is just for info.

Fabien-Chouteau commented 2 years ago

Thanks for the comment/bugfix @mhatzl. You can see that I don't use Ada exceptions a lot, and that it was a good idea to recommend testing the non-SPARK code ^^

mhatzl commented 2 years ago

Thanks @Fabien-Chouteau for your edits :) Indeed it was a good idea to test it :)