Closed by isolver 10 years ago
I am now testing with a two-PC setup instead of running both apps on one PC like I was yesterday and this morning.
In this case the video delay is < 0.5 seconds at worst, so the lag seems to be related to running both apps on one PC more than anything; it is likely not an issue at all.
I think this issue (at least on my setup) was partly due to some javascript debugging console logging that was happening 60 times / sec. The code you have also has this happening. You can check by right clicking on the web app, opening "Inspect Element", and then selecting the Console tab. If it is full of object console messages then you know the issue is occurring.
Anyhow, with that cleaned up, I can run everything on my desktop PC, including the SMI eye tracker server, at ~50% CPU load when web surfing with no videos, rising to ~70% with a youtube video playing on about 1/4 of the desktop area.
I also put a big countdown timer on my desktop to look at the delay between when the timer changes by 1 second and when the change appears in the web app. Eyeballing it, the difference is <= 1 sec.
I also have:
What do you think? Should we update your system and try out the latest software?
Frankly, my impression is that we should close this issue. It is something we always need to keep in mind, but I am seeing no issues even when running everything on a single $1000, 2-year-old desktop computer.
Another observation: the 40-minute recording that was ~8 GB had a video playing the whole time, covering about 1/4 of the screen area.
I just processed more recordings; a 30-minute recording during which I was web surfing and entering the previous comment is 1.7 GB in size.
So video size is obviously highly dependent on the amount of frame-to-frame change on the screen.
I agree we should close it, but I'd like to ask for the flexibility to control FFMPEG parameters to suit different projects / hardware setups. This could be done in a yaml file somewhere, allowing raw FFMPEG tags to be changed. Or we could have some recommended "profiles", e.g., fast but large, static screenshots, etc., with each profile linked to a set of tuned FFMPEG parameters.
For most data collection, 1-2 GB per subject is acceptable; 10 GB will run into issues with video processing, data backup, etc.
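The "profiles" idea could be sketched as a yaml section like the following. The profile names, keys, and values here are purely illustrative assumptions, not the project's actual settings:

```yaml
# Hypothetical profiles section; each profile bundles tuned ffmpeg params.
screen_capture_profiles:
    realtime_fast:      # fast encode, large files
        preset: ultrafast
        crf: 0
    small_archive:      # slower encode, much smaller files
        preset: medium
        crf: 23
    static_screens:     # mostly still content, longer GOP
        preset: veryfast
        crf: 28
        g: 600
```

A session config would then only need to name a profile, with any per-project overrides applied on top.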
All the parameters that are currently used to define the ffmpeg CLI are in the app_config.yaml. The param names basically match the ffmpeg cli setting keys, or are more human readable for the settings I actually understand. ;)

This is the section of the yaml that has the ffmpeg related settings, along with other settings in the same config sections. The ones used by the ffmpeg cli are marked with a trailing `# *`:

```yaml
screen_capture:
    screen_index: 0
    screen_resolution: [1920, 1080]
    ffmpeg:
        path: '......\bin\ffmpeg\bin'   # *
        exe: ffmpeg.exe                 # *
        stdout_file: ffmpeg_stdout      # *
        stderr_file: ffmpeg_stderr      # *
        dshow_filters:
            # Screen capture frames are taken using the Screen Capture Recorder
            # software. The installer is in the dependencies folder of the
            # project. This filter MUST be installed and configured or User
            # Monitor will not work.
            video: screen-capture-recorder  # *
            # audio options:
            #   leave blank: no audio saved to the screen cap video
            #   virtual-audio-capturer: save the sound from the computer audio out
            #   Microphone: save what comes in on the default audio input selected
            #     by / within the OS settings (does not work on my PC but works
            #     on yours I guess)
            audio: virtual-audio-capturer
        ffmpeg_settings:
            # real time buffer size (in K)
            rtbufsize: 1404000  # 2097152 K
        # params related to saving the screen stream to file
        media_file:
            name: screen_capture
            extension: mkv
            ffmpeg_settings:
                codec: libx264      # *
                pix_fmt: yuv420p    # *
                crf: 0  # 18.0      # *
                preset: ultrafast   # *
                g: 250              # *
                threads: 0          # *
        # params related to the realtime stream to the View web app
        http_stream:
            host: 127.0.0.1     # *
            # host: 192.168.1.22
            write_port: 8082    # *
            read_port: 8084
            uri: screenstream   # *
            ffmpeg_settings:
                scale: 1.0      # *
                threads: 0      # *
                # frame rate
                r: 30           # *
                # bitrates (in K)
                b:              # *
                    # video
                    v: 1600     # *
```
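As a rough illustration of how settings like these could drive the ffmpeg CLI, here is a minimal sketch (not the project's actual code; the helper name and the assumption that every yaml key maps 1:1 to an ffmpeg flag are mine):

```python
# Sketch: turn a dict of ffmpeg settings, as they might be loaded from
# app_config.yaml, into a flat list of CLI arguments.

def ffmpeg_args(settings):
    """Flatten {key: value} settings into ['-key', 'value', ...] pairs."""
    args = []
    for key, value in settings.items():
        args.append('-' + key)
        args.append(str(value))
    return args

# Example values taken from the media_file section above.
media_file_settings = {
    'codec': 'libx264',
    'pix_fmt': 'yuv420p',
    'crf': 0,
    'preset': 'ultrafast',
    'g': 250,
    'threads': 0,
}

cmd = ['ffmpeg.exe'] + ffmpeg_args(media_file_settings) + ['screen_capture.mkv']
print(' '.join(cmd))
```

In reality some keys (paths, dshow filter names) need special handling rather than a plain 1:1 flag mapping, so this is only the shape of the idea.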
This made me think: should we make a copy of the app_config and iohub_config files used for each session and save them in the session-specific data folder? That way there would be an easy way to look at what settings were used at a later date.
What do you think?
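The copy-per-session idea could be sketched like this (a minimal sketch with hypothetical function and folder names, appending a timestamp so earlier copies are never overwritten):

```python
import shutil
import time
from pathlib import Path

def archive_configs(session_dir,
                    config_files=('app_config.yaml', 'iohub_config.yaml')):
    """Copy each existing config file into session_dir, appending a
    date/time stamp to the name so a history of settings is kept."""
    session_dir = Path(session_dir)
    session_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime('%Y%m%d_%H%M%S')
    copies = []
    for cfg in config_files:
        cfg = Path(cfg)
        if cfg.exists():
            dest = session_dir / f'{cfg.stem}_{stamp}{cfg.suffix}'
            shutil.copy2(cfg, dest)  # copy2 preserves file mtime metadata
            copies.append(dest)
    return copies
```

Calling this once at session start would leave e.g. `app_config_20140508_143000.yaml` in the session data folder.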
I think it makes a lot of sense to have session-specific config files. One approach is to allow placing app_config and iohub_config yaml files inside a session folder, which then override the defaults at the default locations. I would also like the yaml files to be echoed somewhere into the log file(s), so that we can make sense of them outside of the folders. Does this make sense?
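The override approach above amounts to a deep merge of the session yaml over the default yaml. A minimal sketch, assuming both files have already been parsed into dicts (the function name and example keys are hypothetical):

```python
# Sketch: apply session-specific settings on top of the default config,
# recursing into nested dicts so unspecified defaults are preserved.

def deep_merge(default, override):
    """Return default updated with override, recursing into nested dicts."""
    merged = dict(default)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

defaults = {'screen_capture': {'screen_index': 0,
                               'ffmpeg_settings': {'crf': 0}}}
session = {'screen_capture': {'ffmpeg_settings': {'crf': 18}}}
config = deep_merge(defaults, session)
# screen_index is kept from the defaults; crf comes from the session file
```

The session folder would then only need to contain the handful of keys that differ from the defaults.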
Regarding running different yaml config files:
Regarding saving config file settings:
Is that OK? Are you worried about files getting lost or edited by mistake?
Another option would be to:
Please let me know what you think.
sorry, wrong issue #
Right, I replied in the other issue. I suggested copying to the session folder but renaming with the date/time appended, so that we keep a history of the yaml files for each study/session. Putting them in the HDF5 file is the same idea, but less obvious.
Re the logic of loading the config files: we may want to revisit it, if we have time, in V2.0. It would be nice to separate the logic for loading the View webserver/webapp from loading the Track side of things. But that's a low-priority item.
I can actually see a delay of ~2 seconds between the screen state and the video stream state, especially when using non-scaled 1920x1080 frames. However the mouse overlay is very fast, with < 0.5 sec delay. The javascript frame decoding must be taking a significant amount of time.
So this means that right now the mouse / gaze overlay and event data tables are probably 1.5 sec ahead of the video frame they are being overlayed on.
We could cache the last few seconds of all of this in the browser, and then use data x seconds old for the overlay / info. The x seconds would likely need to be a config param so it can be manually tweaked to minimize subjective differences between the video and the overlay data, for example.
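The caching idea could be sketched as follows. This is an assumption-laden illustration (class and parameter names are mine, and the real implementation would live in the browser's javascript, not Python): buffer recent samples, then look up the one whose timestamp best matches `now - overlay_delay`.

```python
import bisect
from collections import deque

class OverlayBuffer:
    """Cache recent mouse/gaze samples and serve time-shifted lookups."""

    def __init__(self, overlay_delay=1.5, max_samples=600):
        self.overlay_delay = overlay_delay   # seconds; the tweakable config param
        self.samples = deque(maxlen=max_samples)  # (timestamp, data) pairs, in order

    def add(self, timestamp, data):
        self.samples.append((timestamp, data))

    def sample_for(self, now):
        """Return the buffered sample closest to now - overlay_delay."""
        if not self.samples:
            return None
        target = now - self.overlay_delay
        times = [t for t, _ in self.samples]
        i = bisect.bisect_left(times, target)
        if i == 0:
            return self.samples[0][1]
        if i == len(times):
            return self.samples[-1][1]
        before, after = self.samples[i - 1], self.samples[i]
        # pick whichever neighbor is closer in time to the target
        return before[1] if target - before[0] <= after[0] - target else after[1]
```

At 60 samples/sec a `max_samples` of 600 holds ~10 seconds of history, which comfortably covers a 1.5-2 second video delay.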
Not sure how big an issue this really is for the realtime feedback overlay though. Need your input on that.