Do we need to update eiger-detector to work with this?
No, due to the way EigerMetaWriter is constructed I think it should work without change.
I've just realised that by removing the code that allowed a writer to call into the parent to clean itself up after it has been closed, the meta writer has reverted to not allowing the same acquisition ID to be executed twice in a row. @GDYendell, although you thought the clear-writer request from the writer into the parent was messy, it is the only way to guarantee the writer is removed from the store as soon as it is finished (leaving behind only the status object in case a client is monitoring). So I need to undo the changes you requested unless we come up with another way of doing this, because not being able to start the same acquisition twice is a show stopper for any manual execution.
I think if we call clear_writers after every meta message, that will also pick up the writer as soon as it is finished. This removes the need for the writer to know about the listener class and doesn't add too much overhead. I'll try this before reverting any of the changes.
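Roughly what I mean, as a minimal sketch only (the `MetaListener` shape, the `finished` flag and `status()` method are assumptions for illustration, not the actual odin-data API):

```python
class MetaListener:
    """Receives meta messages and dispatches them to per-acquisition writers."""

    def __init__(self):
        self.writers = {}            # acquisition ID -> active writer
        self.finished_statuses = {}  # acquisition ID -> final status of a finished writer

    def handle_message(self, message):
        writer = self.writers.get(message["acquisition_id"])
        if writer is not None:
            writer.process_message(message)
        # Sweep after every message so a finished writer is removed immediately,
        # without the writer needing a reference back to this listener
        self.clear_writers()

    def clear_writers(self):
        for acq_id, writer in list(self.writers.items()):
            if writer.finished:
                # Keep the final status so a monitoring client can still read it
                self.finished_statuses[acq_id] = writer.status()
                del self.writers[acq_id]
```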
OK @GDYendell, I've pushed the updates...
Looks good! I tried running against the eiger simulator and it works fine. I set the block size to 3 so I could see it working:
[D 210510 18:08:18 hdf5dataset:244] offset | Writing cache to dataset
[D 210510 18:08:18 hdf5dataset:68] [offset] Highest recorded index: 9
[D 210510 18:08:18 hdf5dataset:72] [offset] Flushing: total number of blocks to check: 4
[D 210510 18:08:18 hdf5dataset:103] [offset] Writing block 0 [0:3] = [0 1 2]
[D 210510 18:08:18 hdf5dataset:103] [offset] Writing block 1 [3:6] = [3 4 5]
[D 210510 18:08:18 hdf5dataset:103] [offset] Writing block 2 [6:9] = [6 7 8]
[D 210510 18:08:18 hdf5dataset:103] [offset] Writing block 3 [9:10] = [ 9 -1 -1]
[D 210510 18:08:18 hdf5dataset:86] [offset] Flushing complete - flushed 4 blocks: [0, 1, 2, 3]
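For reference, the block numbering and slices in that log follow directly from the block size; a small illustrative sketch of the arithmetic (names here are mine, not the actual hdf5dataset code; the -1 entries in the final block are just unfilled padding in the cache):

```python
BLOCK_SIZE = 3
highest_index = 9  # highest recorded index from the log above

# Blocks to check must cover indices 0..highest_index
num_blocks = highest_index // BLOCK_SIZE + 1  # -> 4

for block in range(num_blocks):
    start = block * BLOCK_SIZE
    # The final block is truncated at the highest recorded index + 1
    stop = min(start + BLOCK_SIZE, highest_index + 1)
    print(f"Writing block {block} [{start}:{stop}]")
```

Running this reproduces the slices in the log: [0:3], [3:6], [6:9] and the partial final block [9:10].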
OK great, so I think it's ready now; I've just updated the unit test to work with the recent changes. Approve it whenever you like and I'm happy. If @timcnicholls has no comments I will probably remove him from the approval list.
Oh, the tests are failing on the Slack notification. Is that a new problem?
The failure is in the Slack notification step.
Any ideas @GDYendell or @timcnicholls
[ERROR] Secret SLACK_WEBHOOK is missing. Please add it to this action for proper execution. Refer https://github.com/rtCamp/action-slack-notify for more information.
The repo secret is set correctly, but it's failing because secrets are not passed to action environments running against PRs from forks. This is a security control; otherwise anyone could harvest secrets by opening a PR from an arbitrary fork. You really ought to push your branch to the main repo and PR from there; it also allows others to use/test/validate the PR branch.
Oh I see, OK I can do that. I guess I'll close this one down and re-open from odin-detector.
The tests themselves are passing OK so it's fine to merge this without the action completing if you and @GDYendell are happy.
OK @timcnicholls I'll merge this one this time and then from now on I'll create any PRs directly from branches on odin-detector.
Created a cache object that grows dynamically for use within the meta writer application. Added a unit test for the object. Provided status for previously completed acquisitions.
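For illustration of the dynamically growing cache idea, a minimal sketch only (the `GrowingCache` name and API are assumptions, not the implementation added in this PR):

```python
import numpy as np


class GrowingCache:
    """Cache backed by a numpy array that grows as higher indices arrive."""

    def __init__(self, block_size=1000, fill_value=-1, dtype=np.int64):
        self.block_size = block_size
        self.fill_value = fill_value
        self.data = np.full(block_size, fill_value, dtype=dtype)
        self.highest_index = -1

    def add(self, index, value):
        # Extend the backing array until the requested index fits
        while index >= self.data.size:
            extension = np.full(self.data.size, self.fill_value, dtype=self.data.dtype)
            self.data = np.concatenate([self.data, extension])
        self.data[index] = value
        self.highest_index = max(self.highest_index, index)
```

With fill_value=-1, unwritten slots stay at -1, which is the padding visible in the partial final block of the flush log above.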