SheffieldMLtracking / bee_track

Bumblebee Tracking System to run on Raspberry Pi
https://docs.google.com/document/d/1502XEZiBlq8zBkCZ7OQDWJYq8gfGQxDF87-0LBQ3b8Q/edit?usp=sharing
GNU General Public License v3.0

Indices: trying to match photos with triggers #9

Open lionfish0 opened 3 months ago

lionfish0 commented 3 months ago

> "In beetrack each trigger and camera has an 'index' (number attribute) -- what is this and what is it used for?"

This is a good question! The 'index' values are my desperate attempt to match up which photo is associated with which trigger.

In trigger.py, the value in self.index.value is saved as metadata in the newest item of the self.record list [link]; the index value is then incremented.

In camera.py, self.record should be the same list as the one in the trigger object, because the camera's constructor takes that list as an argument. (Unless I've done something that isn't thread safe...?)

Example from core.py of the camera being created with the record list passed in: `camera = AravisCamera(message_queue, trigger.record, cam_trigger, cam_id=cam_id)`
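To make the scheme concrete, here is a minimal single-process sketch of the shared record-list/index idea described above (the real code uses multiprocessing and shared `Value` counters; the `Trigger`/`Camera` classes and field names here are simplified stand-ins, not the actual bee_track classes):

```python
# Minimal sketch of the shared-record/index scheme. Assumption:
# simplified single-process stand-ins for the real multiprocessing code.

class Trigger:
    def __init__(self):
        self.record = []   # this list is shared with the camera
        self.index = 0

    def fire(self):
        # Save metadata for this trigger under the current index,
        # then increment the counter.
        self.record.append({'index': self.index, 'triggertime': 123.0})
        self.index += 1


class Camera:
    def __init__(self, record):
        self.record = record   # same list object as trigger.record
        self.index = 0

    def on_photo(self, photo):
        # Find the trigger record whose index matches this photo's index.
        rec = None
        for r in self.record:
            if r['index'] == self.index:
                rec = r
                break
        if rec is None:
            print("WARNING: Failed to find associated photo record")
        photo_object = {'index': self.index, 'record': rec, 'photo': photo}
        self.index += 1   # counts every incoming photo, even failed ones
        return photo_object


trig = Trigger()
cam = Camera(trig.record)   # record list passed in, as in core.py
trig.fire()
obj = cam.on_photo("img0")
```

The key point is that trigger and camera never communicate directly: they only agree because both counters started at zero and (hopefully) advance in lockstep.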

Anyway, in camera.py we look through the list for the item whose index equals the camera's internal 'index':

    rec = None
    for r in self.record:
        if r['index'] == self.index.value:
            rec = r
            break
    if rec is None:
        print("WARNING: Failed to find associated photo record")

    photo_object = {'index': self.index.value, 'record': rec}

The camera's internal index counts how many photos have come in, including failed ones (incremented here).

The idea is that, hopefully, the number of triggers and the number of incoming images match. This was obviously too optimistic, and for whatever reason the counts slip... maybe the camera sometimes just doesn't trigger, or an image is lost entirely to a buffer overflow...

So what I do is assume that all triggered images have been read by the time the /start API call is made, and at that point I force the indices to match [code link].
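As a toy illustration of what such a forced resync might look like (purely hypothetical helper, not the actual bee_track code; taking the larger of the two counters is an assumption made here so that neither side can reuse a stale index):

```python
# Hypothetical sketch of forcing the trigger and camera indices back
# into agreement, once all pending images are assumed to have been read.

def resync_indices(trigger_index, camera_index):
    """Set both counters to the same value. Assumption: use the larger
    of the two, so neither side hands out an index already consumed."""
    new_index = max(trigger_index, camera_index)
    return new_index, new_index


t_idx, c_idx = resync_indices(17, 15)  # both become 17
```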

There has got to be a less harrowing way to do all this. Maybe there's metadata in the image from the camera I could make use of? (I think there might be a clock onboard the camera, which might also help solve the problem of the time 'jitter'.)

lionfish0 commented 1 month ago

This is basically solved by using the actual timestamp from the photo/camera, so we no longer even need to keep track of which trigger goes with which photo -- we can fire-and-forget.

See this issue.

We still need to remove the legacy code described here.
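For reference, fire-and-forget matching by timestamp could be sketched like this (a hypothetical helper, not the bee_track implementation; the `tolerance` parameter and nearest-neighbour pairing are assumptions):

```python
# Hypothetical sketch: pair each photo with the nearest trigger by
# timestamp instead of by a running index (fire-and-forget matching).

def match_by_timestamp(trigger_times, photo_times, tolerance=0.05):
    """Pair each photo time with the closest trigger time within
    `tolerance` seconds; photos with no close trigger map to None."""
    pairs = []
    for pt in photo_times:
        best = min(trigger_times, key=lambda tt: abs(tt - pt), default=None)
        if best is not None and abs(best - pt) <= tolerance:
            pairs.append((pt, best))
        else:
            pairs.append((pt, None))
    return pairs


pairs = match_by_timestamp([0.0, 1.0, 2.0], [0.01, 1.02, 2.9])
# the photo at 2.9 s is too far from any trigger and pairs with None
```

Because the pairing is recomputed from the timestamps themselves, a dropped trigger or lost image no longer desynchronises everything that follows it.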

lionfish0 commented 1 month ago

During operation we can call `buffer.get_timestamp()` after `raw = np.frombuffer(buffer.get_data(), dtype=np.uint8).astype(float)`. About here.

I think these numbers are the times the images were captured in nanoseconds since power-up.
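If these really are nanoseconds since power-up, converting them to wall-clock time needs a reference pair of (camera timestamp, host time) taken at some known moment. A sketch of that conversion (my own illustration, assuming a fixed offset and ignoring clock drift):

```python
# Hypothetical sketch: map "nanoseconds since camera power-up" onto
# host wall-clock seconds using one (camera_ts, host_time) reference
# pair. Assumes a fixed offset and no clock drift between the two.

def make_converter(ref_camera_ns, ref_host_s):
    """Return a function converting camera nanoseconds to host seconds."""
    offset = ref_host_s - ref_camera_ns * 1e-9
    def to_wall_clock(camera_ns):
        return camera_ns * 1e-9 + offset
    return to_wall_clock


# Reference pair: camera reported 5e9 ns when the host clock read
# 1_700_000_000.0 s (illustrative values only).
conv = make_converter(ref_camera_ns=5_000_000_000, ref_host_s=1_700_000_000.0)
t = conv(6_000_000_000)  # one second later on the camera clock
```

A single offset like this would also give every photo a consistent clock, which is what removes the time 'jitter' mentioned earlier.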

lionfish0 commented 1 month ago

Euan/Joe -- could you test the latest version? There should be a big number in each file now (nanoseconds since the camera was turned on).