seahorn / clam

Static Analyzer for LLVM bitcode based on Abstract Interpretation
Apache License 2.0

Are there any artifacts or repositories that use clam to test SV-COMP benchmark? #95

Closed · ghost closed 8 months ago

ghost commented 8 months ago

I want to use Clam to run some tests on the SV-COMP 2024 benchmarks, but I ran into some problems:

  1. I used svcomp19.yaml to test SV-COMP 2024, but Clam cannot handle the __VERIFIER_assert function and therefore cannot check assertions (the typical harness these benchmarks assume is sketched below).
  2. Clam only accepts an analysis time limit and a memory limit, but it cannot print the actual time and memory usage. ......

Since Clam and Crab are modular, many users may implement only one module of the whole tool, such as adding an abstract domain, and thus may be unfamiliar with the rest of it. Therefore, are there any artifacts or repositories that use Clam to test the SV-COMP benchmarks? I think such a template example could greatly improve the efficiency of development and research based on Clam/Crab.
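For context, a typical reachability task in the SV-COMP suite defines its harness roughly as follows. This is an illustrative sketch rather than an exact copy of any benchmark; details vary per task:

    #include <assert.h>
    #include <stdlib.h>

    /* The property to verify is that reach_error is unreachable. */
    void reach_error(void) { assert(0); }

    /* Benchmarks call this instead of a plain assert; a verifier that
       does not recognize it sees only an ordinary external call. */
    void __VERIFIER_assert(int cond) {
      if (!cond) {
        reach_error();
        abort();
      }
    }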
caballa commented 8 months ago

I've just taken a look, and it seems that for reachability there is a special function called reach_error which is not recognized by the Clam front-end.

I think it should suffice to modify this code https://github.com/seahorn/clam/blob/master/lib/Clam/CfgBuilderUtils.cc#L211 to also consider reach_error. The only thing you need to be careful about is making sure that the function is not inlined by LLVM. I would be very happy to merge a PR from you.
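A minimal sketch of the kind of change meant here. The helper name isErrorFn and its body are illustrative assumptions; the real code at CfgBuilderUtils.cc#L211 may be organized differently:

    #include "llvm/IR/Function.h"

    // Hypothetical helper: treat SV-COMP's reach_error the same way the
    // front-end treats its other known error/verifier entry points.
    static bool isErrorFn(const llvm::Function &F) {
      if (!F.hasName()) return false;
      return F.getName() == "__VERIFIER_error" ||
             F.getName() == "reach_error"; // recognize SV-COMP tasks
    }

To keep the call visible to this check, reach_error must also survive LLVM's inlining, e.g. by marking its definition __attribute__((noinline)) or by disabling inlining in the preprocessing pipeline.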

Regarding printing analysis time and memory usage: when you run clam.py it prints something like

 BRUNCH_STAT Clam 0.01
 BRUNCH_STAT ClamPP 0.00 
 BRUNCH_STAT Clang 0.02

The first one includes both the translation from LLVM to Crab and the abstract interpretation itself. However, Clam doesn't print memory usage because it is hard to do in a portable way.
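To illustrate the portability problem: one POSIX-only way to report a peak-memory figure is getrusage. This sketch is not part of Clam, and even here the units are platform-dependent:

    #include <sys/resource.h>
    #include <cstdio>

    // Print the process's peak resident set size. POSIX-only, and the
    // units of ru_maxrss differ: kilobytes on Linux, bytes on macOS.
    void printPeakRSS() {
      struct rusage ru;
      if (getrusage(RUSAGE_SELF, &ru) == 0) {
        std::printf("BRUNCH_STAT PeakRSS %ld\n", (long)ru.ru_maxrss);
      }
    }

Measuring from outside the process, e.g. with GNU time (/usr/bin/time -v) on Linux, sidesteps the issue without touching Clam at all.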

ghost commented 8 months ago

I would be happy to solve this problem, but I don't have much time to spend on it; I am very sorry. Instead, I have decided to convert the programs in the benchmark to a format supported by Clam.
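One way to do such a conversion is to give the SV-COMP entry points definitions in terms of a primitive the Clam front-end already understands. A sketch, assuming __CRAB_assert is recognized as in the Clam example programs:

    // Hypothetical shim prepended to each benchmark. __VERIFIER_assert
    // comes from SV-COMP; __CRAB_assert is assumed to be the assertion
    // intrinsic recognized by Clam, as in its examples.
    extern void __CRAB_assert(int);

    void __VERIFIER_assert(int cond) {
      // Express the SV-COMP property as a Crab assertion.
      __CRAB_assert(cond);
    }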