julienfreche opened this issue 1 month ago
I have used Sail-generated code as a library somewhat recently. For example, https://github.com/CTSRD-CHERI/cheri-compressed-cap/blob/master/test/contrib/sail_compression_128.c contains Sail-generated code that I used to compare against a C implementation of the same logic.
To generate it I used https://github.com/CTSRD-CHERI/cheri-compressed-cap/blob/master/test/regen_sail_c_code.sh: the flags -c -c_no_main -c_prefix sailgen_ -c_specialize seemed sufficient to get it working. That said, those files are linked into a test binary rather than built as a standalone .so, so it's possible there are some things that are broken.
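For anyone trying the same thing, a minimal sketch of a consumer of that output might look like the snippet below. The declarations are hypothetical placeholders standing in for whatever the -c_prefix sailgen_ flag actually emits for a given model; the real names and signatures come from the generated .c file.

```cpp
#include <cstdint>

// Hypothetical declarations for code generated with
// -c -c_no_main -c_prefix sailgen_ -c_specialize.
// The real names and signatures depend on the Sail model; these are placeholders.
extern "C" {
void sailgen_model_init(void);              // assumed: set up generated state
void sailgen_model_fini(void);              // assumed: tear down generated state
uint64_t sailgen_compress(uint64_t value);  // assumed: one exported model function
}

int main() {
    sailgen_model_init();
    uint64_t compressed = sailgen_compress(0x1234);
    sailgen_model_fini();
    return compressed == 0 ? 1 : 0;
}
```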
Thank you for the quick reply and for giving this example. That's a start, but it does not seem to cover a few features that are very useful for a VMM use case, like:
or am I mistaken?
Maintaining two separate copies of the Sail C backend was too much work with the code duplication involved. I naively thought at the time that I could transition uses of the older C backend to the new one and fix the code duplication by removing the older backend, but that just wasn't realistic. When I switched the build system to dune (roughly two years ago) and moved each Sail backend into a separate plugin, I wasn't sure if anyone was actively using the c2 output, and it didn't fit neatly into the more cleanly separated architecture, so I decided to drop it, thinking that if anyone was using it they would probably notice and raise an issue. I see that has finally happened!
My plan after that was to add these features back in by parameterising the original backend, preserving backwards-compatibility, but I just never had either the time or a pressing use-case to actually implement that.
There is definitely interest on our side, as we still use Sail to perform emulation in certain cases. I only recently looked into upgrading our ARM model to a newer version, hence the belated feedback. Thanks for the explanation.
We also have people here who are interested in Sail generating C libraries (with a header file used for FFI calls), maybe related to #215.
@julienfreche By the way, how do you feel about C++? I think I can do something quite a bit nicer (and easier) if I don't restrict myself to just plain C, i.e. rather than changing the compilation to thread all the state through manually, I could make a C++ class that encapsulates all the state. That might be a nicer interface for using the model in general.
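To make the idea concrete, here is a rough sketch of what such a wrapper class could look like; the class name, members, and methods are purely illustrative assumptions, not a design that exists in Sail today.

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch only: a generated C++ class that owns all of the
// model state the current C backend keeps in globals, so several
// instances can coexist in one process (useful inside an emulator/VMM).
class SailModel {
public:
    SailModel() { reset(); }

    void reset() {
        // assumed: re-initialise registers and memory to the model's reset state
        pc_ = 0;
        regs_.assign(32, 0);
    }

    void step() {
        // assumed: fetch, decode, and execute one instruction of the model
    }

    uint64_t pc() const { return pc_; }

private:
    uint64_t pc_ = 0;
    std::vector<uint64_t> regs_;
};
```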
We use C++ on our side so that would make my life even easier :) thank you for asking!
C++ would be much nicer! Last year I started trying to use Boost.Multiprecision for the larger arithmetic, but never got around to fully implementing that. When profiling, I saw that most of the time was being spent in malloc/free for GMP, and using a header-only C++ library would allow inlining all those functions to avoid heap allocations for smaller integers.
For example, https://godbolt.org/z/j3hjWhr4K shows that the compiler is able to remove all the logic to perform uint65 arithmetic, whereas constant_fold_gmp still involves GMP API calls even for constants.
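As an illustration of the kind of code that makes this possible, the sketch below uses a fixed-width Boost.Multiprecision type with no allocator, so a 65-bit addition stays on the stack and can be inlined and constant-folded by the compiler. This is only a sketch of the approach, not what Sail currently generates.

```cpp
#include <boost/multiprecision/cpp_int.hpp>

namespace mp = boost::multiprecision;

// 65-bit unsigned integer with a fixed-width, allocator-free backend:
// values live on the stack, so there are no malloc/free calls as with GMP.
using uint65 = mp::number<
    mp::cpp_int_backend<65, 65, mp::unsigned_magnitude, mp::unchecked, void>>;

// Header-only arithmetic the compiler can inline and constant-fold,
// in contrast to out-of-line GMP API calls.
uint65 add65(const uint65& a, const uint65& b) {
    return a + b;
}
```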
My experiments are consistent with your observations. Tackling those two issues, even partially, would lead to much better performance.
In the past, we worked on the ability to generate a C library with Sail rather than a C executable. That was useful for using the generated code in the context of a full emulator or a VMM.
However, it looks like refactorings of the C backend removed that option. Was that intentional, and/or is there still a desire from the maintainers to keep this as a working option in the future?
For example, these changes were lost: https://github.com/rems-project/sail/pull/81 https://github.com/rems-project/sail/pull/83 https://github.com/rems-project/sail/pull/164