ChildMindInstitute / mindlogger-app

MindLogger (React Native) data collection app

Flanker: Pull Timestamps from Native for Better Accuracy #2563

Open WorldImpex opened 2 years ago

WorldImpex commented 2 years ago

As an admin, I want more accurate timestamps so that I can know exactly when a user sees the screen.

Requirements

- `experiement_start_timestamp`: timestamp for the start of the flanker activity
- `block_start_timestamp`: timestamp for the start of a block (test or trial), set by tapping the Next button in the mobile app
- `trial_start_timestamp`: timestamp for the start of each trial (= each question); each trial has 3 events (fixation, stimulus, and response)
- `event_start_timestamp`: timestamp for the start of an event
- `video_display_request_timestamp`: timestamp when the request to draw the screen for an event is sent to the native side
- `response_touch_timestamp`: time when the user taps the < or > button on mobile
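A minimal sketch (not the app's actual data model) of the per-trial record these requirements imply, with field names copied from the issue text as written; the `makeTrialRecord` helper and its offsets are hypothetical, used only to illustrate the expected ordering of the timestamps:

```typescript
type FlankerEventType = "fixation" | "stimulus" | "response";

// Field names mirror the requirements above (including "experiement_" as written).
interface FlankerTrialRecord {
  experiement_start_timestamp: number;    // start of the flanker activity (ms)
  block_start_timestamp: number;          // start of a block, on Next tap
  trial_start_timestamp: number;          // start of this trial (= one question)
  events: Array<{
    type: FlankerEventType;
    event_start_timestamp: number;           // start of the event
    video_display_request_timestamp: number; // draw request sent to native side
  }>;
  response_touch_timestamp: number;       // user tapped < or >
}

// Hypothetical constructor with made-up offsets, to show the invariants only.
function makeTrialRecord(base: number): FlankerTrialRecord {
  return {
    experiement_start_timestamp: base,
    block_start_timestamp: base + 500,
    trial_start_timestamp: base + 1000,
    events: [
      { type: "fixation", event_start_timestamp: base + 1000, video_display_request_timestamp: base + 998 },
      { type: "stimulus", event_start_timestamp: base + 1500, video_display_request_timestamp: base + 1498 },
      { type: "response", event_start_timestamp: base + 1500, video_display_request_timestamp: base + 1498 },
    ],
    response_touch_timestamp: base + 1830,
  };
}

const rec = makeTrialRecord(Date.now());
console.log(rec.events.length); // 3 events per trial: fixation, stimulus, response
```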

natalia-muzyka commented 2 years ago

@devbtech @WorldImpex the app crashes on Android, checked with the existing and new Flanker applet. Video: https://www.screencast.com/t/zHXBM6G8O7 Crash log: https://drive.google.com/file/d/1JiD1s3Q64GMbWkqEhzvxc3Cy30yNWZKA/view?usp=sharing

Environment: ML v0.20.14 staging Samsung Galaxy S7 // Android 8 Google Pixel 5a // Android 11 flanker_test@mail.com / 123456

WorldImpex commented 2 years ago

@natalia-muzyka That is expected for now (the Android implementation still needs to be created later).

natalia-muzyka commented 2 years ago

thank you @WorldImpex, also there is an export issue: after completing the activity on iOS I can't get the flanker ZIP, plain report, or activity journey CSV: https://www.screencast.com/t/1fOElWLPB

natalia-muzyka commented 2 years ago

Blocker issues:

- Fixed for iOS: [Flanker Native] The application is crashed after starting "Flanker_360" applet #2657
- Fixed for iOS: [Flanker Native timestamps] Flanker ZIP, activity journey CSV and report CSV are not downloaded after clicking the export button #1621

natalia-muzyka commented 2 years ago

Implemented for iOS, needs to be implemented for Android later.

Environment: ML v0.21.48 iPhone 7 // iOS 13.1.1 Apple iPad 9th gen (2021) / iOS 15.5

anq83 commented 1 year ago

Discussed with @natalia-muzyka. The other issues mentioned, except Timestamps Accuracy, are no longer relevant.

The Timestamps Accuracy should be discussed with @binarybottle

As @natalia-muzyka told me: a developer from another team worked on this in the past. He switched iOS from the web view to a native Swift solution, but he didn't do the same for Android. We need to know why, and whether he tried to do anything with the web-view solution: did he try to improve web-view timer accuracy, or did he conclude that it is not possible?

anq83 commented 1 year ago

Hi @binarybottle, I've reviewed the Slack conversations with you, Wil, and the other team from about half a year ago, along with the measurements, reports, etc.

I have some doubts and questions.

  1. In the task description: "I want more accurate timestamps...". Can you confirm that you compare the events (timestamps) on the 240 fps video with the corresponding data in the output data report from the mobile app?

Just for example, on video:

then you go to output report and observe:

that means discrepancies of +30 ms, -20 ms, +20 ms.

And the idea is to decrease such discrepancies?
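To make the idea concrete, here is a tiny sketch with made-up numbers (the actual video and report values were in the attached images): event times read off the 240 fps video next to the times the app's report claims, with the discrepancy being the per-event difference.

```typescript
// Hypothetical values, for illustration only.
const videoTimesMs = [0, 500, 1000];   // when events become visible on the slow-mo video
const reportTimesMs = [30, 480, 1020]; // what the app's output report claims

// Per-event discrepancy: report minus video.
const discrepanciesMs = reportTimesMs.map((t, i) => t - videoTimesMs[i]);
console.log(discrepanciesMs); // [30, -20, 20]
```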

  2. These graphs mentioned in Slack:

image.png

and this one (the previous dev team compared the native Swift component with the web-view solution):

image.png

And here I have a question:

Considering that iPhones before v14 and most Android devices have a screen refresh rate of 60 Hz (60 refreshes per second), how can they give us precision like 4-6 milliseconds? How is that possible when, during the tests, the camera captured a device screen running at 60 Hz?

The precision would be ~20 ms.
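The arithmetic behind this objection can be written down directly: a display refreshing at 60 Hz only changes every ~16.7 ms, so an on-screen event can only be located to within one frame period, even though a 240 fps camera samples every ~4.2 ms.

```typescript
// Frame period in milliseconds for a given refresh/sampling rate.
const framePeriodMs = (hz: number): number => 1000 / hz;

console.log(framePeriodMs(60).toFixed(1));  // "16.7" - display refresh period
console.log(framePeriodMs(240).toFixed(1)); // "4.2"  - camera sampling period
```

The camera can tell you in which ~4.2 ms window a new frame appeared, but the screen content itself is quantized to 16.7 ms steps, which is why sub-frame precision claims need justification.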

  3. Does it make practical sense to pursue precision <50 ms, considering that human reaction time averages 150-300 ms?

https://www.scientificamerican.com/article/bring-science-home-reaction-time/#:~:text=On%20average%2C%20reaction%20time%20takes,happen%20for%20you%20to%20react.

  4. Initially, on April 7, you described:

image.png

So, the touch-to-fixation interval was the main issue to improve? How did you measure the touch moment? Maybe you captured the button-down animation (the first color change on tap)? Or did you observe a greyscale pixel?

If it was the animation, that would not be correct, because the button-down handler in code fires much earlier, and more precisely, than the animation appears.
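A minimal sketch of why the animation lags the handler (all numbers and the `animationVisibleAt` helper are assumptions, not app code): the handler fires at the touch time, but the pressed state only becomes visible at the next 60 Hz refresh, so a video-based measurement of the animation overstates the touch time by up to one frame.

```typescript
const FRAME_MS = 1000 / 60; // ~16.7 ms per display refresh at 60 Hz

// Time when the frame showing the pressed state can first appear on screen:
// the next refresh boundary at or after the touch (simplified model).
function animationVisibleAt(touchMs: number): number {
  return Math.ceil(touchMs / FRAME_MS) * FRAME_MS;
}

const touchMs = 103; // hypothetical handler timestamp
const seenMs = animationVisibleAt(touchMs);
console.log((seenMs - touchMs).toFixed(1)); // error a video measurement would add
```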

  5. Considering the answers to 1-4, and maybe some measurement tests, we'll decide whether to keep the current solution (web view + Swift + React Native integration layers), add a native Android component and go with Kotlin + Swift + React Native integration layers, or make a prototype in pure React Native and see if we can switch to it and remove the other implementations. I like the last option most.

  6. It is worth mentioning two things:

a. React Native, starting from v0.69, allows using the new architecture, based on a new communication layer to native OS libraries that uses direct C++ communication instead of the bridge's async, JSON-serialized messaging. That could give us better application performance and, in theory, better timestamps. We cannot switch to v0.69+ before refactoring, but we could check this concept with prototypes.

b. We rely on operating systems that are not real-time: https://www.quora.com/Is-iOS-a-real-time-operating-system https://www.quora.com/Is-Android-a-real-time-operating-system
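The non-real-time point can be demonstrated with a tiny sketch: even a tight busy-wait can be preempted by the scheduler, so the elapsed time always meets or exceeds the target by some unbounded, load-dependent overshoot.

```typescript
// Busy-wait for ms milliseconds and return the overshoot. On a non-real-time
// OS the loop can be preempted at any point, so the overshoot is >= 0 but
// has no upper bound; it grows under load.
function busyWaitMs(ms: number): number {
  const start = Date.now();
  while (Date.now() - start < ms) {
    // spin until the target elapsed time is reached
  }
  return Date.now() - start - ms;
}

console.log(busyWaitMs(5) >= 0); // true; the exact overshoot varies per run
```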

anq83 commented 1 year ago

All questions discussed yesterday via MS Teams.

I've tested 240 fps slow-mo video recording on an iPhone X again. This time I pointed it at a 60 Hz monitor, used Windows 10, and wrote a simple console program in C#:

while (true) {
    Console.WriteLine(DateTime.Now.Millisecond);
}

And I observed results like 100, 116, 133, 149, 165, etc. Each step is ~16.5 ms, which corresponds exactly to the 60 Hz screen refresh rate. But about every 10th reading I observed a 32 ms jump, e.g. 165 -> 191. Maybe this is because the OS is not real-time: either Win10 or iOS, where the camera was running. Still, I think we can trust the iOS camera to measure app behavior in the next tests.

First, I'm going to run tests using a React Native flanker-imitation component. It will not contain the full logic; it will just be an imitation with predefined images in async storage (different sizes and resolutions), predefined timings, and two buttons, measuring tap-down reactions, moving through steps, changing images, measuring when they appear, etc.

I'll test it on different iOS and Android devices with different batches of open programs (to try to keep the processor busy). Maybe also test on a clean project with R.N. v>=0.69 (the new architecture, which promises much better performance).
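The measurement harness for these tests could be sketched as follows; the `StepMeasurement` shape and `lagReport` helper are hypothetical names, not the app's real API. The idea is to log each step's planned time next to the time it was actually observed, so per-step lag can be exported and compared across devices and load conditions.

```typescript
interface StepMeasurement {
  step: string;      // e.g. "fixation", "stimulus"
  plannedMs: number; // when the step was scheduled to occur
  actualMs: number;  // when it was observed to occur
}

// Collapse a run of measurements into a per-step lag report (actual - planned).
function lagReport(measurements: StepMeasurement[]): Record<string, number> {
  const report: Record<string, number> = {};
  for (const m of measurements) {
    report[m.step] = m.actualMs - m.plannedMs;
  }
  return report;
}

// Hypothetical sample from one run of the imitation component.
const sample: StepMeasurement[] = [
  { step: "fixation", plannedMs: 0, actualMs: 4 },
  { step: "stimulus", plannedMs: 500, actualMs: 511 },
  { step: "response-buttons", plannedMs: 500, actualMs: 512 },
];
console.log(lagReport(sample)); // { fixation: 4, stimulus: 11, "response-buttons": 12 }
```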