pupil-labs / pl-neon-recording

An API for working with raw data from Neon recordings
MIT License

Eye state data is empty #22

Closed: qian-chu closed this issue 2 months ago

qian-chu commented 2 months ago

I had the same intention of playing with the sample data, using minimal code:

from pupil_labs.neon_recording import load

# data_path points to the unpacked native recording folder
recording = load(data_path)

print(f"Gaze data is OK:\n{recording.gaze.data}")
print(f"Eye state data is empty:\n{recording.eye_state.data}")

The output shows that the eye state data is absent while the gaze data loads fine:

Gaze data is OK:
[(1.68085799e+09, 684.4744 , 482.6422 )
 (1.68085799e+09, 684.25995, 482.6473 )
 (1.68085799e+09, 684.4015 , 483.29782) ...
 (1.68085813e+09, 796.1253 , 908.0255 )
 (1.68085813e+09, 796.1386 , 908.64374)
 (1.68085813e+09, 796.12115, 910.4585 )]
Eye state data is empty:
[]

The problem seems to be that the native recording contains no .raw and .time file pair for eye state, while EyeStateStream expects them: https://github.com/pupil-labs/pl-neon-recording/blob/f6a2b38cc960941156012086de9b2f8b96777418/src/pupil_labs/neon_recording/stream/eye_state_stream.py#L38-L39
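A quick way to confirm this diagnosis is to check the recording folder for the expected file pair before loading. This is only a sketch: the helper name is made up, and the `eye_state` file prefix is an assumption based on the stream name, so check your own recording folder for the actual naming.

```python
from pathlib import Path


def has_raw_time_pair(rec_dir, prefix):
    """Return True if at least one <prefix>*.raw file in rec_dir
    has a matching .time file next to it."""
    rec_dir = Path(rec_dir)
    for raw in rec_dir.glob(f"{prefix}*.raw"):
        # The stream loader expects a .time file alongside each .raw file
        if raw.with_suffix(".time").exists():
            return True
    return False
```

If this returns False for the eye state prefix, the empty array above is expected: the data was never written during the recording.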

domstoppable commented 2 months ago

Those missing files would indicate that eye state was not calculated at the time of the recording. Please make sure you have the latest version of the Neon Companion app and that "Compute eye state" is enabled in the settings.

qian-chu commented 2 months ago

> Those missing files would indicate that eye state was not calculated at the time of the recording. Please make sure you have the latest version of the Neon Companion app and that "Compute eye state" is enabled in the settings.

I see! So eye state is not available in the native data if it wasn't enabled on the Companion device. But is there a way to still get that data if it wasn't enabled during recording? The "Timeseries Data + Scene Video" download option does include 3d_eye_states.csv, so I imagine that information is derived in the cloud.

domstoppable commented 2 months ago

Yes, with the option enabled, eye state is computed from the eye cameras in real-time and saved in the raw format.

IIRC, when the recording is uploaded to cloud, the raw files are ignored and eye state is computed (or re-computed) from the eye videos in the cloud and saved as CSV. It's the same algorithm on the device and in the cloud.
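For completeness, that cloud-exported 3d_eye_states.csv can be read with nothing but the standard library. This is only a sketch: the function name is made up, and the exact column headers depend on the export version, so rows are kept as plain dicts keyed by whatever header line the file actually contains.

```python
import csv


def read_eye_states_csv(csv_path):
    """Read all rows from a cloud-exported eye states CSV.

    Each row is returned as a dict keyed by the file's own header
    line, since column names vary across export versions.
    """
    with open(csv_path, newline="") as f:
        return list(csv.DictReader(f))
```

This gives you the cloud-computed eye state even when the native recording has no .raw/.time pair for it.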

I will close this issue now, but if you have other questions, please feel free to post in our Discord server 😄

qian-chu commented 2 months ago

Cool, I think that answers my question. Thanks! But I do think there is a user scenario where people forget to enable eye state computation in real time but still want eye state afterwards. Since the library only reads native data, eye state would be unavailable for those recordings.