Closed: huitseeker closed this pull request 7 months ago.
Also, I think the message direction you implemented is backwards. The main thread - the one doing the folding - should be the one requesting witnesses to be cached upfront.
Right before folding starts at index i, we send that message via the mpsc API. Then, in the witness caching thread, we simply need to cache the witness of the `MultiFrame` at index i + 1.
This is already what's happening: as the main thread pulls item i, the witness caching thread starts preparing item (i + channel_capacity).
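For concreteness, here is a minimal sketch of that bounded look-ahead using `std::sync::mpsc::sync_channel`; the `Frame`, `CachedWitness`, and `compute_witness` names are illustrative stand-ins, not the actual `MultiFrame` / witness-caching API:

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Hypothetical stand-ins for the real MultiFrame / witness types.
struct Frame {
    index: usize,
}
struct CachedWitness {
    index: usize,
}

// Placeholder for the (expensive) witness computation.
fn compute_witness(frame: &Frame) -> CachedWitness {
    CachedWitness { index: frame.index }
}

fn main() {
    let frames: Vec<Frame> = (0..10).map(|index| Frame { index }).collect();

    // The channel buffer bounds how many cached witnesses can be waiting,
    // so the caching thread only runs a bounded number of frames ahead of
    // the folding thread.
    const CHANNEL_CAPACITY: usize = 3;
    let (tx, rx) = sync_channel::<CachedWitness>(CHANNEL_CAPACITY);

    // Witness-caching thread: sends witnesses in order, blocking whenever
    // the buffer is full.
    let cacher = thread::spawn(move || {
        for frame in &frames {
            tx.send(compute_witness(frame)).expect("folding thread hung up");
        }
    });

    // Main (folding) thread: as it pulls witness i, the caching thread is
    // freed to start preparing witness i + CHANNEL_CAPACITY.
    for witness in rx {
        println!("folding step {}", witness.index);
    }

    cacher.join().unwrap();
}
```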
@arthurpaulino Insisting on this:

> I wouldn't touch this until we have proper benchmarks for SuperNova folding.
What's the issue tracking this, describing the gap precisely?
If we have doubts regarding performance optimizations, it would be helpful to have benchmarks that justify those optimizations - and, until we do, a description of the work needed for those benchmarks to exist.
I opened #1219
What's in this PR?
This removes the assumption that `prove_recursively` has access to the whole set of multiframes, by having:

- the `Arc` removed,
- `prove_recursively` operate on an `Iterator` of `MultiFrame`, opening the way for that iterator to eventually be lazy (see the sketch after this list),
- `prove_recursively`'s witness caching only run ahead of proving by at most a fixed number of frames (set to 1000).
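For illustration, a minimal sketch of that API shape, with hypothetical `Frame` and `Proof` types standing in for the real `MultiFrame` and proof types (not the actual signatures in the codebase):

```rust
use std::sync::Arc;

// Hypothetical stand-ins for the real MultiFrame / proof types.
struct Frame;
struct Proof;

// Before: the prover assumed it could see the whole set of frames at once.
fn prove_recursively_eager(_frames: Arc<Vec<Frame>>) -> Proof {
    Proof
}

// After: the prover only needs an iterator, so frames can be produced
// lazily and dropped as soon as they have been folded.
fn prove_recursively_lazy(frames: impl IntoIterator<Item = Frame>) -> Proof {
    for _frame in frames {
        // fold this frame into the running proof, then let it drop
    }
    Proof
}

fn main() {
    let _ = prove_recursively_eager(Arc::new(Vec::new()));
    let _ = prove_recursively_lazy(std::iter::empty());
}
```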
Next Steps

- Transferring the above memory-sparse semantics to `prove_from_frames`, `evaluate_and_prove` and other transitive consumers of `prove_recursively`. In other words, evaluation should, as much as is possible, produce an iterator of frames (see the sketch below).
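For the "iterator of frames" direction, here is a rough sketch under the assumption of a simple step function (`Frame` and `next_frame` are hypothetical, not the actual evaluator API), using `std::iter::successors` to produce frames lazily:

```rust
// Hypothetical sketch: evaluation as a lazy iterator of frames, so the prover
// pulls frames on demand instead of receiving a pre-built Vec.
struct Frame {
    step: usize,
}

impl Frame {
    // Stand-in for one evaluation step; `None` means evaluation has halted.
    fn next_frame(&self, max_steps: usize) -> Option<Frame> {
        (self.step + 1 < max_steps).then(|| Frame { step: self.step + 1 })
    }
}

// Produces frames lazily: each frame is computed only when the consumer asks.
fn frame_stream(initial: Frame, max_steps: usize) -> impl Iterator<Item = Frame> {
    std::iter::successors(Some(initial), move |frame| frame.next_frame(max_steps))
}

fn main() {
    // Only the frame currently being proven needs to be alive at once.
    for frame in frame_stream(Frame { step: 0 }, 5) {
        println!("proving frame {}", frame.step);
    }
}
```

A prover shaped like the earlier sketch could then consume such a stream directly, keeping memory usage bounded by the witness-caching look-ahead window.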