sccn / labstreaminglayer

LabStreamingLayer super repository comprising submodules for LSL and associated apps.

using LSL for coding a live experiment #119

Closed Niloofar-GH closed 3 months ago

Niloofar-GH commented 3 months ago

Hi,

I have one question regarding using LSL. My experiment is live in the lab, and there is no stimulus on the screen. I have to record the whole session, then code the behaviors, and then use the codes as markers for the EEG. I know that this is not a good approach because the codes cannot be synchronized with the EEG data. I want to know whether I can use LSL and press the button at the beginning of each event in a live experiment. Can LSL be precise when the experimenter presses the key as she sees the events in the live experiment?

Kind regards, Niloofar

cboulay commented 3 months ago

Download and extract this: https://github.com/labstreaminglayer/App-Input/releases. Run it. Make sure LabRecorder is collecting data from your EEG stream and from this Input stream. Press buttons on your keyboard -- they should end up as events in the XDF file that LabRecorder saves.
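To sanity-check the result offline, you can open the saved file with pyxdf and confirm that the key events landed alongside your EEG. A minimal sketch, assuming the recording is saved as `recording.xdf` and the Input app's stream is named `Keyboard` -- verify both names in your own file:

```python
# Minimal sketch: inspect an XDF recording with pyxdf. The filename and
# the "Keyboard" stream name are assumptions -- check them in your file.
import pyxdf

streams, header = pyxdf.load_xdf("recording.xdf")

for stream in streams:
    name = stream["info"]["name"][0]
    print(name, "-", len(stream["time_series"]), "samples")
    if name == "Keyboard":  # assumed name of the Input app's stream
        # These timestamps are on the same LSL clock as the EEG stream,
        # so the events can be aligned with the EEG directly.
        for ts, event in zip(stream["time_stamps"], stream["time_series"]):
            print(ts, event)
```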

Please close the issue after you have tested this solution and determined that it works for you.

Niloofar-GH commented 3 months ago

Hi,

Thank you very much for your response. I am preparing my experiment now, and I am planning to use the Recorder from BrainVision. May I ask whether it is precise when I press the key?

cboulay commented 3 months ago

"precise" and "press the button at the beginning of each event in a live experiment" are mutually exclusive. Human jitter while pressing the button will be >50x greater than the jitter from the LSL app.

If you need precise timing, use a stimulus presentation system that publishes triggers when the stimuli are presented. If the stimulus presentation system uses LSL on a local network, then LSL will contribute < 1 ms of jitter to the stimulus estimate -- your computer's video buffer or sound buffer will contribute more than that. I recommend NeuroBehavioralSystems Presentation because it has some premade experiments and excellent timing, but if you're on a budget or just tinkering then you can use PsychoPy to control your stimuli and send markers over an LSL outlet.
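For illustration, here is a minimal marker-outlet sketch with pylsl; the stream name and marker label are placeholders, not anything prescribed by LSL, and in PsychoPy you would push the sample at the moment the stimulus actually appears (e.g. on the screen flip):

```python
# Minimal pylsl marker outlet. "ExperimentMarkers" and "stim_onset"
# are placeholder names chosen for this example.
from pylsl import StreamInfo, StreamOutlet, IRREGULAR_RATE

# One string channel at an irregular rate -- the usual setup for events.
info = StreamInfo(name="ExperimentMarkers", type="Markers",
                  channel_count=1, nominal_srate=IRREGULAR_RATE,
                  channel_format="string", source_id="marker_outlet_001")
outlet = StreamOutlet(info)

# Call this when the event occurs; the sample is stamped on the shared
# LSL clock, so LabRecorder can align it with the EEG stream.
outlet.push_sample(["stim_onset"])
```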

Even better is if the stimulus presentation system has hardware triggers that they've already worked out the latency on, and you can feed those hardware triggers into your EEG system. This is as accurate as you can get.

But it sounds like you're not presenting stimuli:

I have to code the behaviors and then use the code as a marker for EEG.

So you're doing this manual coding, and you want to do it live? You're going to make mistakes in your live behaviour coding... make sure you add an event for "discard last event".
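To make "discard last event" concrete, here is a hypothetical sketch of a live coding console: every label you type is pushed as a behavior marker, and a reserved "undo" entry emits a DISCARD_LAST marker so the preceding event can be dropped during offline analysis. All names are made up for illustration:

```python
# Hypothetical live behavior-coding console over an LSL marker outlet.
from pylsl import StreamInfo, StreamOutlet, IRREGULAR_RATE

info = StreamInfo(name="BehaviorCodes", type="Markers", channel_count=1,
                  nominal_srate=IRREGULAR_RATE, channel_format="string",
                  source_id="behavior_coder_001")
outlet = StreamOutlet(info)

while True:
    label = input("behavior> ").strip()
    if label == "quit":
        break
    if label == "undo":
        # Flag the previous marker as a mistake; drop it in analysis.
        outlet.push_sample(["DISCARD_LAST"])
    elif label:
        outlet.push_sample([label])
```

(Keep in mind the typing latency here is exactly the human jitter discussed above; this only helps with correcting mistakes, not with timing.)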

But if you're recording video, there are systems that pull kinematic skeletons out of videos of behaviour; maybe you can automate your coding with one of those. See e.g. https://www.mackenziemathislab.org/deeplabcut

I've never tried one of these systems directly.

Niloofar-GH commented 3 months ago

Yes. I will not show any stimuli on the screen; my experiment is a game, and it has to be done live in the lab. You are right that pressing the key cannot be precise.

I am planning to record a video of the whole game and then code the video. After that, I can use the codes as markers and insert them in MATLAB.

You offered me two ways: adding an event for "discard last event" and using a system like the skeleton tracking. I checked the skeleton tracking; I think it will help me code my video more precisely than by hand, but I am wondering what you mean by adding an event for "discard last event". Could you give me an example, or could you refer me to some articles where I can learn more about it?

Thank you very much for your time and help.

cboulay commented 3 months ago

I still don't quite understand.

If indeed you will record the video first, and code it after, then you shouldn't use LSL for this. LSL's primary purpose is to synchronize multiple "live" data sources. Coding pre-recorded video is the opposite of this. You should have some way to synchronize the frames in your video with your EEG recordings, and you'll have to leverage that synchronization when creating events in your offline analysis.
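As a hedged illustration of that offline alignment: suppose one sync event (say, an LED flash or a clap) is visible in the video at a known frame and is also marked in the EEG recording. Every coded frame then maps to EEG time by simple arithmetic. All numbers below are hypothetical:

```python
# Hypothetical mapping from video frames to EEG-clock time, given one
# sync event visible in both recordings.
FPS = 30.0              # video frame rate -- check your camera's actual rate
SYNC_FRAME = 152        # video frame where the sync event appears
SYNC_EEG_TIME = 12.480  # time (s) of the same event on the EEG clock

def frame_to_eeg_time(frame: int) -> float:
    """Map a video frame number onto the EEG recording's clock."""
    return SYNC_EEG_TIME + (frame - SYNC_FRAME) / FPS

print(frame_to_eeg_time(4500))  # EEG time of a behavior coded at frame 4500
```

Camera clocks drift over long sessions, so a sync event at both the start and the end of the recording, with interpolation between them, is safer than a single one.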

I'm closing this issue because as far as I can tell this doesn't have anything to do with LSL.