Open ColonelPhantom opened 6 months ago
Thanks a lot. To me, this is a clear improvement. A slight alternative could be to have this as an optional argument -- but I think i would dislike the extra code complexity that would add. @tquatmann thoughts?
I think we have discussed in other threads that it makes sense to create a class/type for submatrices, which would also avoid further copies.
Iirc, the original idea was to allow re-using backward transitions for cases where one solves multiple step-bounded reachability queries on the same matrix. However, I don't think that this is currently used somewhere (unless maybe through stormpy?), so I support the suggested changes.
Ah yeah, that would be a valid alternative. Doing that would probably be pretty easy: just store it as a `std::optional<>` member. But personally I think dropping it like this to save memory is more valuable, since transposing should be a fairly cheap operation compared to the achieved reduction in memory usage.
Currently, Storm produces many copies of the transition matrix for the sparse engine. Concretely, there are 2 copies of the original matrix and 2 copies of the P=? submatrix resident at peak RSS.
These are:
By deferring the creation of (2), its scope is closed earlier and it is freed earlier, which allows the operating system to reuse its memory for (3) and (4). As such, peak memory usage is determined by `max(m1 + m2, m1 + m3 + m4)` instead of `m1 + m2 + m3 + m4`. (Additionally, (4) can be omitted entirely by using the Storm native solver, which is already a setting that can be changed.)
Other sparse model checking functions have the same weakness, but I only changed this one as a proof of concept. Applying this patch and using the native solver nearly halves Storm's memory usage for the factories problem (f=12) from the Rubicon paper, from 1 GB to circa 0.5 GB; either change by itself reduces it to 0.75 GB. This is because, for this problem, the submatrix has nearly the same size as the original matrix.
An alternative that could be considered is to keep the backward transitions around in the original model. Currently, I believe that Storm recomputes the transpose over and over. Keeping it around would reduce runtime slightly, whereas dropping it early reduces memory usage.