Adding another thing we have found that needs to get fixed with this table. Currently we are running all the Frame queries by `Frames.time`. However, this is often not the order the frames appear in the file. We're seeing that the status dump frame is the second frame in the file, but by time it comes after the first Scan frame. Ex:
```python
import so3g
from spt3g import core as spt3g_core

reader = so3g.G3IndexedReader('/data/timestreams/16392/ufm_cv4/1639265437_000.g3')
while True:
    db_frame_offset = reader.Tell()
    frames = reader.Process(None)
    if not frames:
        break
    frame = frames[0]
    print(frame.type, frame['time'].time / spt3g_core.G3Units.s)
```
Outputs:
```
Observation 1639265437.997705
Wiring 1639265438.1342428   <--- this is the status dump
Wiring 1639265438.018063
Scan 1639265438.062377
Wiring 1639265439.548448
Wiring 1639265440.548037
Wiring 1639265442.536509
Scan 1639265443.0629652
Wiring 1639265444.9414828
Wiring 1639265444.945008
Wiring 1639265445.100959
Wiring 1639265445.105672
Wiring 1639265445.110287
Scan 1639265448.055509
```
We need to fix our querying strategy for getting the status dump frame for an observation; the time doesn't actually matter.
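For reference, a minimal sketch of what that query could look like. This assumes G3tSmurf-style SQLAlchemy models where `Frames` has `frame_idx` (position in file), `type_name`, and `file_id` columns, and that a `session` and `file_id` are already in hand; the column names should be checked against the actual schema:

```python
# Sketch only: pick the status dump by position in the file instead of
# by Frames.time. In the example above, the dump is the first Wiring
# frame by frame index, even though its timestamp sorts later.
status_dump = (
    session.query(Frames)
    .filter(Frames.file_id == file_id, Frames.type_name == 'Wiring')
    .order_by(Frames.frame_idx)
    .first()
)
```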
I'm going to close this issue in favor of some others I'm about to make. The level 2 databases aren't going to be as persistent as the level 3 ones, and I think that instead of revamping how we do frames, we want to build in methods for deleting entries once they have been bookbound and removed from whatever file system we're using to create these databases.
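As a rough illustration of what such a deletion method might look like (a sketch only: the `Files`/`Frames` model names follow the G3tSmurf schema as I understand it, and the `name`/`id`/`file_id` columns and the archive path are assumptions to verify):

```python
import os
from sotodlib.io.load_smurf import G3tSmurf, Files, Frames

# Hypothetical cleanup pass: once level 2 files have been bookbound and
# deleted from disk, drop their database entries as well.
SMURF = G3tSmurf(archive_path='/data/timestreams')  # path is illustrative
session = SMURF.Session()

for f in session.query(Files).all():
    if not os.path.exists(f.name):
        # remove frame entries tied to this file, then the file row itself
        session.query(Frames).filter(Frames.file_id == f.id).delete()
        session.delete(f)
session.commit()
```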
The Frames table in G3tSmurf is a bit different from the design of the Context frame_offsets table.
Looking at moving G3tSmurf toward use in the field: when building G3tSmurf off our level 2 data, one frame per second per wafer is going to make the table very large very quickly, and there's not a whole lot of motivation to have that many entries. Building that sort of frames table off the book-bound (level 3) data makes more sense: it will be more widely used and won't get as massive, since the frames are much larger. But a level 2 G3tSmurf database is still useful for things like the QDS monitors. That's where we will make our first CalDbs at the site, and having the G3tSmurf infrastructure there will be critical.
My current thought is to add the ability to build a table of just the status frames, since those will remain useful at either level (see the sketch below). I'm also debating whether we want a setup that uses the level 2 smurf folder data as well as file/frame indexing for both level 2 and level 3 timestreams.
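A sketch of what a status-only indexing pass could look like, reusing the `G3IndexedReader` pattern from the example above (the `'dump'` key marking a full status dump is an assumption about the smurf stream contents and should be confirmed):

```python
import so3g
from spt3g import core as spt3g_core

# Sketch: index only the status (Wiring) frames from a level 2 file,
# recording byte offsets so they can be loaded directly later.
status_index = []
reader = so3g.G3IndexedReader('/data/timestreams/16392/ufm_cv4/1639265437_000.g3')
while True:
    offset = reader.Tell()
    frames = reader.Process(None)
    if not frames:
        break
    frame = frames[0]
    if frame.type == spt3g_core.G3FrameType.Wiring:
        status_index.append({
            'offset': offset,
            'time': frame['time'].time / spt3g_core.G3Units.s,
            # assumed: a full status dump is flagged with a 'dump' key
            'is_dump': 'dump' in frame and bool(frame['dump']),
        })
```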