For the last couple of days, I have been reducing a file with an original size of ~250KB. I have tried different grizzly.reduce strategies, and most of them can initially remove pretty large chunks, but then spend ages churning through the last passes, especially when removing one line allows removal of another line on the next round, causing another 4k executions.
I think it would be beneficial if we could use the --repeat, --min, --max and --chunk-size arguments from lithium to have more fine-grained control over test case reduction. This would allow initial manual sweeps with different strategies using mid-size chunk sizes before running full sweeps, potentially cutting a large number of executions from the small-chunk-size passes of each strategy.
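To illustrate why a configurable minimum chunk size helps, here is a minimal sketch of a ddmin-style line reducer. This is not lithium's or grizzly-reduce's actual code; the names `is_interesting` and `min_chunk_size` are purely illustrative. The point is that most target executions happen in the final small-chunk passes, so being able to stop early trades a slightly larger result for far fewer runs.

```python
from typing import Callable, List


def reduce_lines(
    lines: List[str],
    is_interesting: Callable[[List[str]], bool],
    min_chunk_size: int = 1,
) -> List[str]:
    """Repeatedly try to drop chunks of lines, halving the chunk size.

    Stopping at min_chunk_size > 1 skips the long tail of single-line
    attempts, which is where the bulk of executions are spent.
    """
    # Start with the largest power-of-two chunk that fits the testcase.
    chunk = 1
    while chunk * 2 <= len(lines):
        chunk *= 2

    while chunk >= min_chunk_size:
        i = 0
        while i < len(lines):
            candidate = lines[:i] + lines[i + chunk:]
            if candidate and is_interesting(candidate):
                lines = candidate  # chunk removed, retry at the same offset
            else:
                i += chunk  # chunk is needed, move to the next one
        chunk //= 2

    return lines
```

With something like this exposed on the command line, a first manual sweep could stop at, say, chunk size 8 with one strategy, then a later full sweep could finish the job, instead of every strategy paying for its own single-line passes.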
Yes, I agree grizzly-reduce needs some work here. We have had some discussions internally as well. Hopefully I can schedule some time to work on this; knowing others care about this issue is helpful.