So this is somewhat along the lines of what I was thinking, except that it prints to stderr and there appears to be no easy way to intercept that (it looks like I can silence it totally with `print_progress`, but then we're back to no info).
I think a preferable solution would be to store those messages in a `_status` attribute of the class, which would be printed if `print_progress = True`, but otherwise not. Also, the loops that set/print those statuses should contain `yield` statements matching the tuples yielded in the later loops; otherwise there's no way to access the `_status` message during those first loops within the `sample_batch` generator. A rough sketch of what I mean is below.
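Something along these lines (a minimal sketch only; the class, attribute names, and tuple contents here are placeholders based on this discussion, not dynesty's actual internals):

```python
# Rough sketch only; names and tuple contents are placeholders,
# not dynesty's actual internals.
import sys


class SamplerSketch:
    def __init__(self, nlive=5, print_progress=True):
        self.nlive = nlive
        self.print_progress = print_progress
        self._status = ""  # latest human-readable status message

    def sample_batch(self):
        # "Prepwork" loop: allocating new live points for the batch.
        for i in range(self.nlive):
            self._status = "Allocating live point {0}/{1}".format(i + 1, self.nlive)
            if self.print_progress:
                sys.stderr.write("\r" + self._status)
            # Yield a tuple with the same shape as the main sampling loop
            # so existing consumers don't break.
            yield (i, None, -float("inf"), 1)

        # Main sampling loop (dummy values stand in for real results).
        for it in range(self.nlive):
            self._status = "Iteration {0}".format(it + 1)
            if self.print_progress:
                sys.stderr.write("\r" + self._status)
            yield (it, None, 0.0, 1)
```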
Yeah, this was just a quick hack to get something working. I agree that the `_status` solution seems cleaner and it should ideally yield a similar tuple. I'll try and implement a better fix soon.
Sorry, just a quick follow-up to this: technically the regeneration of live points should not yield a tuple in the same way, because those points only become samples once they are "discarded". This happens intrinsically within the remainder of the `sample_batch` code (which deals with the remaining points). That said, users should still be able to get status updates while this "prepwork" is happening and should (ideally) be able to print out whatever they'd like, so I'll see what I can do.
I was thinking of yielding the same tuple just so you don't break anyone's workflow, even if it's unmodified from the previous return. Ideally, though, you'd yield a dictionary instead, which could contain extra keys if needed, but this would break people's code that uses the current generator. Something like the sketch below.
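For instance, a hypothetical dict-yielding variant (the keys here are purely illustrative, not dynesty's API) might look like:

```python
def sample_batch_dicts(nlive=3):
    """Hypothetical dict-yielding variant; keys are illustrative, not dynesty's API."""
    for i in range(nlive):
        yield {
            "worst": i,        # index of the discarded live point
            "loglstar": 0.0,   # its log-likelihood
            "nc": 1,           # number of likelihood calls
            "status": "iteration {0}".format(i + 1),  # extra key, easy to add later
        }


for res in sample_batch_dicts():
    print(res["status"])
```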
Okay -- I think this change should be good enough for most users. The generator now returns all points used to initialize a batch within `sample_batch`. They are demarcated using a negative index for `worst` so that they can be ignored when running the actual sampling, while still allowing the user to see what's happening and/or manipulate/store the results accordingly.
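A rough consumer-side sketch of what that looks like (the stand-in generator and abbreviated tuple below are purely illustrative; the real `sample_batch` yields more fields):

```python
# Purely illustrative stand-in for sampler.sample_batch(); the real generator
# yields more fields, but the negative-index convention for `worst` is the point.
def fake_sample_batch(nlive_new=3, niter=4):
    # Initialization phase: negative indices flag the new live points.
    for i in range(nlive_new):
        yield (-(i + 1), -float("inf"), 1)
    # Regular sampling phase: non-negative indices as before.
    for it in range(niter):
        yield (it, float(it), 1)


for worst, loglstar, nc in fake_sample_batch():
    if worst < 0:
        # A live point used to initialize the batch: report progress,
        # but don't treat it as a posterior sample.
        print("initializing live point", -worst)
        continue
    print("sample", worst, "logl =", loglstar)
```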
Thanks @joshspeagle, I'm still giving it a try, but it looks like a good solution.
Currently, when allocating new live points (during the `sample_batch` step) nothing is output to the user, which can lead to misconceptions that the code has stalled (as pointed out by @guillochon). This should be fixed so that new live points are also written out, even if other aspects of the state cannot update.