psychoinformatics-de / remodnav

Robust Eye Movement Detection for Natural Viewing

Good to see you; I want to ask about px2deg in your algorithm #39

Closed JunsangJasonPark closed 2 years ago

JunsangJasonPark commented 2 years ago

It was my great pleasure to read your article and the code for your eye movement classification algorithm. They have been very helpful for developing my own algorithm, which will be applied in a product.

I have a question about the process of converting pixels to degrees.

I think you assume that pixels are directly proportional to degrees. For example, your equation `degrees(atan2(.5 * screen_size, viewing_distance)) / (.5 * screen_resolution)` calculates the visual angle of a single pixel, and in the code you then compute angular velocity by multiplying pixel distances by the px2deg attribute. This implies that the visual angle of two pixels equals twice the visual angle of one pixel. However, as you know, it does not: because the tangent is not linear, the distance between two points on the screen is not directly proportional to the angle it subtends at the eye.
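To make that concrete, this is how I read the per-pixel factor (a small Python sketch of the equation quoted above; the parameter names and example values are mine, not from remodnav):

```python
import math

def px2deg(screen_size_cm, viewing_distance_cm, screen_resolution_px):
    # Visual angle covered by a single pixel, assuming a flat screen
    # viewed orthogonally at its center (the equation quoted above).
    return math.degrees(
        math.atan2(0.5 * screen_size_cm, viewing_distance_cm)
    ) / (0.5 * screen_resolution_px)

# e.g. a 38 cm wide screen, 1280 px across, viewed at 60 cm:
factor = px2deg(38.0, 60.0, 1280)  # ~0.0275 deg/px
```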

So my question is: is it acceptable, or robust, to use px2deg to convert pixels to degrees when calculating angular velocity?

Instead, how about calculating the angular velocity between each pair of points with an arctangent every time?
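What I have in mind is something like this minimal sketch, assuming a flat screen viewed orthogonally at its center and horizontal positions measured in pixels from that center (all names here are illustrative):

```python
import math

def offset_deg_linear(dx_px, px2deg_factor):
    # constant-factor approximation: pixels scaled by a fixed px2deg
    return dx_px * px2deg_factor

def angle_between_px(x1_px, x2_px, px_size_cm, viewing_distance_cm):
    # exact angle subtended between two horizontal pixel positions,
    # both measured from the screen center, via arctangents
    a1 = math.atan2(x1_px * px_size_cm, viewing_distance_cm)
    a2 = math.atan2(x2_px * px_size_cm, viewing_distance_cm)
    return math.degrees(abs(a2 - a1))
```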

I have no background in eye movement research and am still teaching myself, so my assumptions or assertions may not make sense. Please forgive the silly question.

Sincerely, Junsang Park

JunsangJasonPark commented 2 years ago

I did some simple math myself and found that although the two approaches differ, the difference is not that big. So it seems plausible to use the px2deg variable for cleaner code.

For example, comparing distances of 1 px and 1000 px, the difference between the exact degrees and the px2deg approximation is around 1.4 degrees, which is similar to the accuracy of eye-tracking hardware.

Am I right...?
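For reference, this is the kind of quick check I did, under one illustrative geometry (a 38 cm wide screen, 1280 px across, viewed at 60 cm; these example values are my assumption and are not necessarily the ones behind the 1.4-degree figure):

```python
import math

screen_cm, res_px, dist_cm = 38.0, 1280, 60.0
px_cm = screen_cm / res_px
factor = math.degrees(math.atan2(0.5 * screen_cm, dist_cm)) / (0.5 * res_px)

for d_px in (1, 1000):
    linear = d_px * factor
    exact = math.degrees(math.atan2(d_px * px_cm, dist_cm))
    print(f"{d_px:>4} px  linear={linear:7.3f}  exact={exact:7.3f}  "
          f"diff={linear - exact:+.3f} deg")
# With this geometry the discrepancy at 1000 px is on the order of a
# degree, the same ballpark as the ~1.4 deg mentioned above.
```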

adswa commented 2 years ago

Hi, I think you're right - I do not remember exactly how the px2deg factor was established (it might actually be based on how the original Nyström & Holmqvist algorithm was implemented), but I think that we regarded it as "accurate-enough-and-simple", as you say, too.

joriswvanrijn commented 2 years ago

Dear @JunsangJasonPark, are you still encountering this problem, or have you implemented a different solution? See https://github.com/psychoinformatics-de/remodnav/issues/40 for the problems we ran into with px2deg at larger viewing angles.

I'm curious to see if you have implemented a different solution.

JunsangJasonPark commented 2 years ago

Dear @joriswvanrijn, thank you for your reply. I just checked #40, which you mentioned. It is amazing! May I ask what the research subject is? I cannot imagine what research would need a 1.55 m wide screen and allow participants to move their heads. It is literally large-scale research, haha.

Unfortunately, as you know, px2deg is not a major problem for small screens, so I did not try to find another solution. But I can tell you the solution I used before px2deg.

My solution was quite simple: at each timestamp, I calculated the angle between the two dwell points using vectors and an arctangent. This means you have to compute it as many times as there are timestamps in the dataset, which can be costly in time and space. However, if you do not need to process the data online (immediately), that does not matter.
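In code, the idea looks roughly like this (a sketch under simplifying assumptions: gaze coordinates in pixels from the screen center, a flat screen viewed orthogonally, a fixed head; the function and parameter names are illustrative):

```python
import numpy as np

def angular_velocity_deg_s(x_px, y_px, px_size_cm,
                           viewing_distance_cm, sampling_rate_hz):
    # Build a 3D vector from the eye to each on-screen gaze sample.
    x = np.asarray(x_px, dtype=float)
    y = np.asarray(y_px, dtype=float)
    v = np.stack([x * px_size_cm,
                  y * px_size_cm,
                  np.full_like(x, viewing_distance_cm)], axis=1)
    a, b = v[:-1], v[1:]
    # Angle between consecutive vectors via atan2(|a x b|, a . b),
    # which stays numerically stable for very small angles.
    cross = np.linalg.norm(np.cross(a, b), axis=1)
    dot = np.einsum('ij,ij->i', a, b)
    angles_deg = np.degrees(np.arctan2(cross, dot))
    # One angle per inter-sample interval; scale by the sampling rate.
    return angles_deg * sampling_rate_hz
```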

Even though I did not thoroughly check its reliability, because my project did not strictly require methodological robustness, I suspect my solution rests on an assumption that is problematic in your research setting: when a participant moves their head, it is not valid to calculate the angle between two dwell points, because each dwell point lies in a different coordinate plane, so computing a vector and arctangent between the two may not be possible. Of course, if the tracker detects dwell points accurately and participants move their heads only within a small range, this may be a minor problem. If I come up with another solution, I will comment here. I am sorry I cannot be more helpful, and I wish you good luck with your project.

Because you are allowing participants to move their heads, I think the calibration needs to be more detailed and carefully validated. I am curious how you plan to calibrate in that environment.

By the way, your profile homepage is fantastic! Have a nice day!

Sincerely, Junsang Park