Closed sf-issues closed 10 years ago
Submitted by ceball Date: 2011-11-22 17:37 GMT
And maybe run_batch could be more verbose if requested? It might be worth e.g. adding some debug statements to report things like "Running for 100 iterations...", "Calling analysis_fn x...", etc. I have batch runs that appear to have stopped mid-run, but I can't tell how far they actually got.
Submitted by ceball Date: 2011-11-23 16:27 GMT
(request for timing info has already been made in https://sourceforge.net/tracker/index.php?func=detail&aid=1432101&group_id=53602&atid=470932)
I think we can close this issue: Collector implements a progress bar while running, and measurements also have their own progress bars. When running batch jobs, you can use RemoteProgress (in dataviews/ipython/widgets) to monitor the progress of run_batch when it is run in a separate process (e.g. using Lancet). The option to show a progress bar on stdout, broadcast the progress value on a TCP port, or disable it entirely can be specified on run_batch using the progress_bar parameter.

Note that if a progress bar is enabled, the progress advances at a regular interval, regardless of when the measurements are run. As such, I think this addresses the issue described.
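The key point above is that progress is reported on a fixed interval, decoupled from measurement times. A minimal sketch of that mechanism (this is illustrative only; `IntervalProgress` is a hypothetical helper, not the actual Collector/RemoteProgress API) could look like:

```python
import threading


class IntervalProgress:
    """Report percentage progress at a fixed wall-clock interval,
    independent of when measurements run.

    Hypothetical sketch -- not the real run_batch progress_bar
    implementation. `report` is any callable taking a float in [0, 100],
    e.g. print, a stdout bar renderer, or a TCP broadcaster.
    """

    def __init__(self, total, interval=0.05, report=print):
        self.total = total
        self.current = 0
        self.interval = interval
        self.report = report
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._tick, daemon=True)

    def _tick(self):
        # Tick on a regular interval, regardless of measurement timing.
        while not self._stop.is_set():
            self.report(100.0 * self.current / self.total)
            self._stop.wait(self.interval)

    def advance(self, n):
        # Called by the simulation loop as iterations complete.
        self.current = min(self.total, self.current + n)

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()
        self.report(100.0 * self.current / self.total)  # final 100%


if __name__ == "__main__":
    import time

    with IntervalProgress(total=200, interval=0.01) as progress:
        for _ in range(4):
            time.sleep(0.02)        # stand-in for simulation work
            progress.advance(50)    # 50 iterations completed
```

Because the reporter runs on its own timer, the user sees steady updates even if the analysis_fn is only scheduled at one very late time.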
Converted from SourceForge issue 3441172, submitted by ceball Submit Date: 2011-11-22 17:26 GMT
Right now, if you specify e.g. times=[100,200,300] to run_batch(), your analysis_fn is called at 100, 200, and 300, and the elapsed time is printed at each of these points. But what if you only want your analysis_fn to be called at e.g. 10000? Then you don't see any timing information at all. I think people have handled this before by adding scheduled commands to their simulations (in script files). What about having some kind of option for run_batch() to cause timing info to be printed periodically, independently of the analysis times?