As in the title: it's possible to use a torch shared-memory tensor to supply mux weights and change them in a way that syncs across processes.
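A minimal sketch of what this enables. Note the exact `mux` signature for tensor weights is an assumption here (the established API accepts a list of weights), so treat this as illustrative rather than definitive:

```python
import torch
from lhotse import CutSet

cuts_a = CutSet.from_file("cuts_a.jsonl.gz")
cuts_b = CutSet.from_file("cuts_b.jsonl.gz")

# Move the weights tensor into shared memory so that dataloader worker
# processes forked afterwards observe in-place updates from the main process.
weights = torch.tensor([0.7, 0.3])
weights.share_memory_()

# Assumed usage: passing the shared tensor where a list of weights would go.
mixed = CutSet.mux(cuts_a, cuts_b, weights=weights)

# Later (e.g., as a sampling curriculum progresses), update the weights
# in-place; the change syncs to the dataloading processes via shared memory.
weights[:] = torch.tensor([0.5, 0.5])
```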
Simplified `DurationBatcher` sampling logic (no changes in sampling behavior).
Fixed an inconsistency between the time constraint's `exceeded()` and `close_to_exceeding()` (I think it was reported in some issue).
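For context, a sketch of the intended contract between the two methods, assuming Lhotse's `TimeConstraint` API (details of the sampler's actual loop may differ):

```python
from lhotse import CutSet
from lhotse.dataset.sampling.base import TimeConstraint

cuts = CutSet.from_file("cuts.jsonl.gz")

constraint = TimeConstraint(max_duration=100.0)  # batch budget in seconds
batch = []
for cut in cuts:
    constraint.add(cut)
    if constraint.exceeded():
        # The last cut pushed the batch over budget: emit it without that cut.
        break
    batch.append(cut)
    if constraint.close_to_exceeding():
        # Adding another typical cut would likely go over budget:
        # emit the batch including this cut. The fix keeps this prediction
        # consistent with what exceeded() would actually report.
        break
```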
Leveraging `dill` for CutSet/Sampler inter-process serialization now has to be explicitly enabled with `LHOTSE_DILL_ENABLED=1`. The library is now less dependent on `dill` for making `CutSet` transforms work across the main and dataloading processes: you only need it if you, as the user, provide lambdas instead of global functions / partials to map/filter-style functions.
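In practice that means module-level functions and `functools.partial` objects keep working out of the box, while lambdas need the opt-in; a sketch:

```python
from functools import partial

from lhotse import CutSet

cuts = CutSet.from_file("cuts.jsonl.gz")

# Works without dill: module-level functions and partials are picklable
# with the stdlib pickle, so they survive the trip to dataloader workers.
def has_min_duration(cut, min_duration):
    return cut.duration >= min_duration

cuts = cuts.filter(partial(has_min_duration, min_duration=2.0))

# Needs the opt-in, because lambdas are not picklable with stdlib pickle:
# set LHOTSE_DILL_ENABLED=1 in the environment before the process starts,
# e.g. `LHOTSE_DILL_ENABLED=1 python train.py`, and then:
# cuts = cuts.filter(lambda cut: cut.duration >= 2.0)
```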