NeuroTechX / eeg-notebooks_v0.1

Previous version of eeg-notebooks
https://neurotechx.github.io/eeg-notebooks
BSD 3-Clause "New" or "Revised" License

video? #16

Closed rschmaelzle closed 4 years ago

rschmaelzle commented 5 years ago

Hello, I was able to get the scripts to run - such a cool project, thank you! I tried to get a video presentation to work, but failed. After overcoming some psychopy-video difficulties, I can run a video-presentation script on its own (essentially the version in psychopy's examples), but when I try to interface that script with the run_experiment.py files, which call the multiprocessing/Pool functions, it fails. I tried to refactor the code, but it seems that video + multiprocessing don't work together (I have an intuition why that might be the case, but couldn't find much on the web). Does anybody have deeper insights or ideas for how to make it work?
best r

JohnGriffiths commented 5 years ago

Hi Ralf.

Thanks for taking this up. Movie-based paradigms would be a fantastic addition to eeg-notebooks.

I don't have any magic-bullet answers for you at this point, but happy to help and brainstorm potential solutions.

It sounds like you understand the core code and have done all the right things so far. So you probably know this already, but for reference: generally speaking for developing new experiments you should confirm that the psychopy part is working on its own before moving to the multiproc calls. So you want to be able to do the analogue of something like

```python
from stimulus_presentation import n170
n170.present(15)
```

Which would execute a 15 second single-process test run of the stimulus presentation for the N170 experiment that is easier to debug etc. For example, running the stim pres function directly like this will let you see the terminal stdouts + stderrs if the script crashes, and use e.g. the ipython debugger in the terminal to investigate the cause of the problem. Unfortunately when running with multiproc the terminals will close immediately upon crashing, which makes debugging quite hard.

As for your specific multiproc Q: sounds like you know more about this (movies in psychopy) than I do right now. One thing to note with psychopy, though, is that the behaviour can change or break significantly depending on the backend used (e.g. pygame vs. not pygame). We have also found some differences in behaviour due to the operating system. So if you are able, try things out on both Linux/Mac and Windows machines. If you don't see the problem on e.g. Linux, then you know it is specifically to do with multiprocessing's behaviour on Windows.
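For instance, the default multiprocessing start method itself differs by OS, which alone can change behaviour; a quick way to check what you are running under:

```python
import multiprocessing as mp

# The default start method differs by OS: 'fork' on Linux, 'spawn' on
# Windows (and on macOS from Python 3.8 onward). Under 'spawn' each worker
# re-imports your module, so any module-level psychopy setup runs once per
# worker process.
print(mp.get_start_method())
```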

Perhaps you could post some code snippets so we can have a clearer idea of what you're trying to do?

rschmaelzle commented 5 years ago

Thanks, John. Great to hear that there is interest in this! I totally see the larger vision here and it is wonderful. I did confirm that the core psychopy code shows the movie, though not exactly the way you describe. Doing that helped me overcome some issues, though some remain (they will take added time to solve, so I am posting code now and will revisit later).

This is the run_exp script (note: I use the mac variant with some tweaks):

```python
import ctypes, os
libc = ctypes.CDLL('/usr/lib/libc.dylib')
print(libc)

#os.environ['DYLD_FALLBACK_LIBRARY_PATH'] = '/lib:/usr/lib:/usr/bin/lib:/' + os.environ['DYLD_FALLBACK_LIBRARY_PATH'];

from muselsl import stream, list_muses, view, record
from multiprocessing import freeze_support, set_start_method, Process, Pool
from mne import Epochs, find_events
from time import time, strftime, gmtime
from utils import utils
from collections import OrderedDict
import sys
from optparse import OptionParser
import warnings
warnings.filterwarnings('ignore')

parser = OptionParser()

parser.add_option("-d", "--duration",
                  dest="duration", type='int', default=400,
                  help="duration of the recording in seconds")
parser.add_option("-s", "--subject",
                  dest="subject", type='int', default=1,
                  help="subject number: must be an integer")
parser.add_option("-r", "--run",
                  dest="run", type='int', default=1,
                  help="run (session) number: must be an integer")
parser.add_option("-e", "--experiment",
                  dest="experiment", type='string', default="n170",
                  help="name of experiment from stimulus_presentation folder")

(options, args) = parser.parse_args()

duration   = options.duration
subject    = options.subject
session    = options.run
experiment = options.experiment
expprez    = experiment + '.present'

exec('from stimulus_presentation import ' + experiment)

recording_path = os.path.join(os.path.expanduser("~"), "eeg-notebooks", "data", "visual", "clip",
                              "subject" + str(subject), "session" + str(session),
                              ("recording_%s.csv" % strftime("%Y-%m-%d-%H.%M.%S", gmtime())))

#import pygame
#pygame.init()

# Leftover from an earlier attempt; these Process objects are never started
# (the Pool below does the actual work).
stimulus = Process(target=eval(expprez), args=(duration,))
recording = Process(target=record, args=(duration, recording_path))

#stimulus.daemon=True
#recording.daemon=True

if __name__ == '__main__':
    #freeze_support()
    set_start_method('spawn', force=True)
    pool = Pool(processes=4)

    pool.apply_async(eval(expprez), args=(duration,))
    pool.apply_async(record, args=(duration, recording_path))

    pool.close()
    pool.join()
```
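As an aside, the `exec`/`eval` pattern above can be done without string evaluation via `importlib`; a sketch of the same pattern (demonstrated on a stdlib module, since `stimulus_presentation` is only importable inside the repo):

```python
import importlib

def resolve(module_path, attr):
    """Import a module by name and return one of its attributes.
    Same effect as exec('from stimulus_presentation import ' + experiment)
    followed by eval(experiment + '.present'), but without exec/eval."""
    module = importlib.import_module(module_path)
    return getattr(module, attr)

# In run_exp this would be:
#   present = resolve('stimulus_presentation.' + experiment, 'present')
# Demonstrated here on the stdlib instead:
sqrt = resolve('math', 'sqrt')
print(sqrt(9.0))  # 3.0
```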

This is the modified experiment

```python
from __future__ import division
from __future__ import print_function
from builtins import str
from builtins import range
from psychopy import visual, core
from time import time
from optparse import OptionParser
from glob import glob
from pylsl import StreamInfo, StreamOutlet

win = visual.Window((800, 600))
mov = visual.MovieStim3(win, '.movies/mymovie.mov',
                        size=(320, 240),
                        flipVert=False,
                        flipHoriz=False)

def present(duration):  # renamed from 'time', which shadowed the time import
    globalClock = core.Clock()
    while mov.status != visual.FINISHED:
        mov.draw()
        win.flip()
    win.close()
    core.quit()

def main():
    #while globalClock.getTime() < (mov.duration + 1.0):
    present(15)

if __name__ == '__main__':
    main()
```

Now if I run

```
python mac_run_exp_movie2.py --d 15 --s 77 --r 1 --e clip
```

I've been able to see the movie, but multiproc opens many windows (currently 4), and the video only plays in one. So I need to look into this. Perhaps video only works on a single stream - I don't know how that will affect the EEG streaming/recording, and I am too little of an engineer/CS person to figure out the details. But it doesn't seem insurmountable... Thanks again. I'll keep you posted - perhaps somebody has an idea right away.
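One guess at the cause (a sketch under assumptions, with psychopy replaced by a counter so it runs anywhere): `Pool(processes=4)` starts four workers, and under the `spawn` start method each worker re-imports the experiment module, so a module-level `visual.Window(...)` call runs once per worker, giving one window per process. Deferring the window creation into `present()` confines it to the process that actually presents:

```python
# Sketch: defer side effects from import time to call time.
# 'open_window' is a stand-in for psychopy's visual.Window((800, 600)).

opened_windows = 0

def open_window():
    global opened_windows
    opened_windows += 1

# BAD: a module-level call like `win = open_window()` here would run on
# every import, i.e. once per spawned worker -> one window per process.

def present(duration=15):
    # GOOD: the window exists only in the process that calls present().
    open_window()
    return opened_windows

print(present())
```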

rschmaelzle commented 5 years ago

I've been able to get it done. The code is not yet pretty, but since I cannot finesse it right now, I am posting it anyway. I'll try to fix it properly and then add it (though since I am working on a mac, that might change things). Anyway, if somebody is interested...

This went into the presentation function:

```python
def present(duration=20):

    # Create markers stream outlet
    info = StreamInfo('Markers', 'Markers', 1, 0, 'int32', 'myuidw43536')
    outlet = StreamOutlet(info)

    start = time()

    win = visual.Window((800, 600))
    mov = visual.MovieStim3(win, '/Users/Ralf/Desktop/muse/amanda/eeg-notebooks/notebooks/stimulus_presentation/movies/mymovie.mov',
                            size=(320, 240),
                            flipVert=False,
                            flipHoriz=False)

    globalClock = core.Clock()

    timestamp = time()
    outlet.push_sample([1], timestamp)

    # Send a wave of spikes before and after the movie to check timing
    for i in range(15):
        # Intertrial interval
        waitdur = 0.1
        core.wait(waitdur)
        # Send marker
        timestamp = time()
        outlet.push_sample([1], timestamp)

    starttime = globalClock.getTime()

    #mov.play()
    while globalClock.getTime() < (starttime + 9):
        mov.draw()
        win.flip()

    mov.pause()

    for i in range(15):
        # Intertrial interval
        waitdur = 0.1
        core.wait(waitdur)
        # Send marker
        timestamp = time()
        outlet.push_sample([1], timestamp)

    win.close()
```

Otherwise things didn't change (contrary to what I thought, multiprocessing was able to work with the movie - but there seem to be weird contingencies that I still don't fully understand). Timing seemed ok: if I wanted 5 seconds, it took 5.1 s; if 9 seconds, 9.1 s. Not perfect, but doable... perhaps this can be improved with preloading.
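One possible contributor to the ~0.1 s overrun, sketched in plain Python (only a guess; movie loading/decoding latency is at least as likely, since a draw/flip period is usually closer to 16 ms): a loop like `while globalClock.getTime() < starttime + 9` only checks the clock between frames, so it always runs at least the requested duration and overshoots by up to one frame's worth of work.

```python
from time import time, sleep

def timed_loop(duration, frame_period=0.1):
    """Mimic the draw/flip loop: the clock is only checked between frames."""
    start = time()
    while time() < start + duration:
        sleep(frame_period)  # stand-in for mov.draw(); win.flip()
    return time() - start

# Requesting 0.45 s with a 0.1 s "frame" takes about 0.5 s: the loop can
# only stop at a frame boundary, so elapsed >= duration, never less.
elapsed = timed_loop(0.45)
print(round(elapsed, 2))
```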

JohnGriffiths commented 5 years ago

Awesome work!

Seems you have this in hand.

When you are comfortable with the code go ahead and submit a pull request from your eeg-notebooks fork.

That will be the easiest way for others to test + work together on this.

Hopefully this functionality will be available to eeg-notebooks users soon.

Just out of interest: what is the scientific context you are working from here? How are you planning on using movie data in your EEG analyses?

rschmaelzle commented 5 years ago

@JohnGriffiths thanks, will do! As for the scientific context: my background is in EEG/ERP, but I have moved towards more natural, continuous stimuli (mostly media messages - movies, stories). I did a lot of work on inter-subject correlation analysis with fMRI, and also a bit on auditory entrainment with EEG. Please shoot me an email if you want to talk more - I'd be happy to discuss potential overlap/mutual interests.