marcus-nystrom / Titta

Python and PsychoPy interface to Tobii eye trackers using Tobii Pro SDK

Recording gaze data with dynamic AOIs #50

Open jojoschmitt opened 8 months ago

jojoschmitt commented 8 months ago

Hey again,

this time we are wondering whether it is possible to use Titta to record gaze data and to add dynamic AOIs as well as custom events.

In a standard Tobii Pro Lab project (i.e., one without the external presenter), it is possible to make a screen recording and to add dynamic AOIs and custom events manually after the recording. However, when the dynamic AOIs have to follow movable dialogs, as in our case, doing this manually is infeasible: moving one or several AOIs frame by frame for each participant simply costs too much time.

Short Question

Is there a way to start a screen recording using Titta, add custom markers (for when a dialog is visible on the scene), and add dynamic AOIs (based on the position of the dialog)?

Long Question

As we have not found a way to do screen recording with Titta, we tried to record only the gaze data, because we are not actually interested in the scene the participants are looking at; we only want to know whether participants look into the AOIs. However, when not presenting any stimuli, we cannot add AOIs, since an AOI is always attached to a stimulus. So we tried to make Tobii Pro Lab believe that we were presenting a video: we uploaded a video and sent a stimulus event to Tobii Pro Lab so that it knows the timing and duration of the video. We chose the video to be at least as long as the actual experiment. This video is never shown to the participants, who see the actual stimulus scene as they should; it merely gives us a handle to attach AOIs to (see the sketch below). Now there are several problems:

1) send_custom_event() is not implemented in Titta's Python version (this was not a problem for us to implement ourselves).

2) add_aois_to_video() is not implemented in Titta's Python version either, which turns out to be a bigger problem. We can delegate the task to add_aois_to_image(), which does add the defined AOIs to the video, yet only as static AOIs. This means that whenever we add the same AOI again with a different position, only the last position we defined for that AOI remains in the video timeline. We assume there is either a completely different operation to call on Tobii Pro Lab that Titta does not cover (e.g. AddDynamicAois instead of AddAois), or that the merge_mode parameter of AddAois needs to be changed from replace_aois to something else. Unfortunately, we have no documentation of the external presenter API that could help us solve this problem, even though we own an external presenter license.
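To make the workaround concrete, it looks roughly like this in pseudo-Python (all names are placeholders and may not match the actual TalkToProLab API):

# Rough sketch of the fake-video workaround; import path and method
# names are placeholders and may differ from the actual Python API.
from titta import TalkToProLab

ttl = TalkToProLab()  # connect to Pro Lab's external presenter service

# Upload a video at least as long as the experiment. It is never shown
# to the participants; it only serves as a handle to attach AOIs to.
media = ttl.upload_media('placeholder_video.mp4', 'video')

# Send a stimulus event so Pro Lab knows the timing and duration of
# the "video", mapping the recorded gaze data onto it.
ttl.send_stimulus_event(ttl.get_time_stamp(), media['media_id'])

# ... run the actual experiment while gaze data is being recorded ...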

Can any of you help us with this problem, or do you know a better, simpler solution for reaching our underlying objective?

Thanks in advance!

dcnieho commented 8 months ago

Thanks for writing in. As far as I understand your problem, I indeed think sending an empty video is a useful approach. The only thing I do wonder about is why you are using Pro Lab at all at this point. Is it an option to just use Titta and write your own analysis in Python or so?

That said, I have the API reference doc; I indeed do not see it on their website anymore. Just email me and I'll send it. You are looking for the AddAois action described on page 19.

There is indeed a call to add AOIs to video, see: https://github.com/dcnieho/Titta/blob/master/talkToProLab/TalkToProLab.m#L337. You will not like the comment in there, however. I vaguely remember I left it like that because I had no good idea of what the API should look like for my function to be useful to people. What you need is basically the vertices of the AOI at different key frames (note that a static AOI is sent as a single key frame). On second thought, with fresh eyes, it is pretty obvious how it should be implemented: it should be a version of attachAOIToImage that takes multiple lists of vertices, the associated key-frame moments (in seconds, that is what Pro Lab wants), and a list of associated boolean values indicating whether the AOI is active at each moment (see the API reference). I'll look into implementing this in my MATLAB version of Talk to Pro Lab.
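For the Python version, such an interface could look roughly like this (a sketch only; attach_aoi_to_video is a hypothetical name, and the exact key-frame time format Pro Lab expects is worked out further down in this thread):

def attach_aoi_to_video(media_id, aoi_name, aoi_color,
                        key_frame_vertices, key_frame_seconds,
                        key_frame_active):
    """Attach a dynamic AOI to an uploaded video.

    key_frame_vertices -- one list of (x, y) vertex tuples per key frame
    key_frame_seconds  -- key-frame times, in seconds from video onset
    key_frame_active   -- per key frame, whether the AOI is active
    """
    key_frames = [
        {'is_active': active,
         'seconds': t,  # caution: see below for what Pro Lab actually expects here
         'vertices': [{'x': x, 'y': y} for x, y in vertices]}
        for vertices, t, active in zip(key_frame_vertices,
                                       key_frame_seconds,
                                       key_frame_active)
    ]
    # ... wrap key_frames in an AddAois operation and send it to Pro Lab
    return key_frames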

The merge mode is not useful here: the only other option completely deletes all AOIs from the media, instead of just replacing the one with the same name, if there is one.

Lastly, please do contribute your implementation of send_custom_event (and see mine for reference, to make sure you do all the required processing, such as removing newlines and tabs: https://github.com/dcnieho/Titta/blob/master/talkToProLab/TalkToProLab.m#L483). Any pull requests are of course very welcome in general, if you run into anything that can be better implemented or documented.
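In Python, that processing could be as simple as the following (a sketch; sanitize_event_text is a hypothetical helper, and the exact replacement rules should be checked against the MATLAB implementation linked above):

def sanitize_event_text(value):
    # Pro Lab does not accept newlines or tabs in custom event strings,
    # so replace them with spaces before sending the event.
    return str(value).replace('\n', ' ').replace('\r', ' ').replace('\t', ' ')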

andreu4448 commented 8 months ago

Hi Jojo,

I was going to suggest something similar to what dcnieho just mentioned. First, in this situation Pro Lab is maybe not the best option. However, an intermediate step before going all out with your own code is to use the PsychoPy tools to generate the areas of interest as transparent polygons. You can reach out to their team for some consultation if necessary.

That said, there is a tool in Pro Lab that could be helpful. You can export and import AOIs from a file, as described on page 131 of the manual: https://connect.tobii.com/s/lab-downloads?language=en_US

To do that, however, you will need the timestamps of the key frames and the vertices of the AOI, as dcnieho was saying, all in the structure of the import file Pro Lab is expecting. My suggestion would be to draw an AOI on a section of a test recording and export the file to see what it looks like, and then evaluate whether that is something you can extract easily out of your stimulus setup.

jojoschmitt commented 8 months ago

Thanks for all the suggestions. We will evaluate which approach suits us best and let you know. We will try to implement a solution and share it, so that future recordings of this kind can be handled more easily.

Regards, John

jojoschmitt commented 8 months ago

I quickly want to respond to the question of why we want to use Tobii Pro Lab. It simply provides a huge variety of preexisting analysis tools: all kinds of statistics on fixations within AOIs, not only for individual participants but also aggregated. Indeed, we could code all of this functionality ourselves, but since it is already there, why not use it. Obviously, it takes some effort to get to the point where we can use these tools.

Now a new question that is important to us for evaluating which approach to take: can we only assign new positions to an AOI once every full second using the external presenter? (Or can we use decimal places to make the key frames more precise?)

@dcnieho you mentioned this in your reply, and it is also what is written in your README. However, a temporal resolution of one full second is not fine-grained enough to catch all fixations on moving stimuli. So even if we implemented dynamic AOIs for a video, it would not help us.

@andreu4448 in case the import of dynamic AOIs works with more fine-grained timestamps, this is an option. I will take a look into this.

dcnieho commented 8 months ago

@jojoschmitt I just mean that the unit is seconds; I assume timing is more granular, so e.g. t=3.45 s is possible. Where is that written in my README? I don't see it.

jojoschmitt commented 8 months ago

@dcnieho Sorry, it is just the variable name key_frame_seconds in the README that suggested it. Then this approach is also a viable option, but I will still take a closer look at the API docs.

jojoschmitt commented 8 months ago

The documentation is pretty clear to me. I will try to implement an example scenario for adding dynamic AOIs on Monday.

dcnieho commented 8 months ago

@jojoschmitt: no worries, didn't catch that. Looking forward to hearing how it works out!

jojoschmitt commented 8 months ago

To test whether adding dynamic AOIs works by adding multiple key frames, we prepared the following static operation (with the exception of the media ID).

operation = '''{
    "operation": "AddAois",
    "media_id": "%s",
    "aois": [{
        "name": "test_aoi",
        "color": "AAC333",
        "key_frames": [
            {
                "is_active": true,
                "seconds": 0.0,
                "vertices": [
                    {"x": 100, "y": 100},
                    {"x": 100, "y": 200},
                    {"x": 200, "y": 200},
                    {"x": 200, "y": 100}
                ]
            },
            {
                "is_active": true,
                "seconds": 1.1,
                "vertices": [
                    {"x": 200, "y": 200},
                    {"x": 200, "y": 300},
                    {"x": 300, "y": 300},
                    {"x": 300, "y": 200}
                ]
            }
        ],
        "tags": [
            {
                "group_name": "test_group",
                "tag_name": "test_tag"
            }
        ]
    }],
    "merge_mode": "replace_aois"
}'''

It seems, though, that Tobii Pro Lab won't accept it; the response is:

{'reason': 'Keyframe 1 of AOI [test_aoi] must heave greater time value than previous', 'status_code': 101, 'operation': 'AddAois'}

I do not quite understand the response, because the seconds parameter is indeed greater for the second key frame. Also, the video we are using is about 10 s long, so both key frames fall within the video duration.

Status code 101 just means 'Invalid parameter', according to the API documentation.

We have also tried all kinds of alterations to get it to work. First of all, with only one key frame the operation works. We tried reversing the order of the key frames in case Tobii reads the list from the back. We tried different combinations of the is_active parameter, and bigger or smaller intervals between the key frames by adjusting the seconds parameter. We also tried to send the operation both outside of and during the actual recording.

In the end, we also exported manually created AOIs from Tobii Pro Lab. The key frames of the exported AOIs actually have the same structure as the key_frames parameter described in the API documentation. But even copying an exported dynamic AOI (with two key frames) into the operation string above did not work; deleting one of the key frames made it work again, though.

It has always been the same error response. Maybe you have an idea why this happens; if not, we will continue working on the export/import idea.

dcnieho commented 8 months ago

@jojoschmitt: I do not know. Let me try to reach out to someone at Tobii.

jonashogstrom commented 8 months ago

I'm not sure, but I think there might be a typo in the reference documentation. Try changing "seconds" to "time".

jonashogstrom commented 8 months ago

Another discrepancy between the documentation and reality appears to be that the value of the "time" property should be in microseconds, not seconds (data type int64).

dcnieho commented 8 months ago

Thanks @jonashogstrom: you are correct on both counts. The following request works; it shows the AOI for one second and then switches it off.

{
  "operation": "AddAois",
  "media_id": "9ed058e9-d551-452f-9ff2-1ae8a6986414",
  "aois": [
    {
      "name": "test5",
      "color": "FF0000",
      "key_frames": [
        {
          "is_active": true,
          "time": 0,
          "vertices": [
            {
              "x": 500,
              "y": 500
            },
            {
              "x": 600,
              "y": 500
            },
            {
              "x": 600,
              "y": 600
            },
            {
              "x": 500,
              "y": 600
            }
          ]
        },
        {
          "is_active": false,
          "time": 1000000,
          "vertices": [
            {
              "x": 500,
              "y": 500
            },
            {
              "x": 600,
              "y": 500
            },
            {
              "x": 600,
              "y": 600
            },
            {
              "x": 500,
              "y": 600
            }
          ]
        }
      ],
      "tags": []
    }
  ],
  "merge_mode": "replace_aois"
}

It seems that if the time field is not present in the request, it defaults to 0, hence that specific error. Unknown fields (such as seconds) appear to be ignored.
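For a Python implementation, the conversion that tripped things up above could be wrapped like this (a minimal sketch; make_key_frame and make_add_aois_operation are hypothetical helpers that only assemble the payload structure confirmed above):

import json

def make_key_frame(t_seconds, vertices, is_active=True):
    # Pro Lab expects the key-frame time as int64 microseconds in a
    # field named "time", not fractional seconds in a field "seconds"
    # as the API reference doc suggests.
    return {
        'is_active': is_active,
        'time': int(round(t_seconds * 1e6)),
        'vertices': [{'x': x, 'y': y} for x, y in vertices],
    }

def make_add_aois_operation(media_id, name, color, key_frames):
    # Assembles the AddAois payload in the structure shown above.
    return json.dumps({
        'operation': 'AddAois',
        'media_id': media_id,
        'aois': [{
            'name': name,
            'color': color,
            'key_frames': key_frames,
            'tags': [],
        }],
        'merge_mode': 'replace_aois',
    })

# Example: show a square AOI for one second, then switch it off.
square = [(500, 500), (600, 500), (600, 600), (500, 600)]
op = make_add_aois_operation(
    '9ed058e9-d551-452f-9ff2-1ae8a6986414', 'test5', 'FF0000',
    [make_key_frame(0.0, square),
     make_key_frame(1.0, square, is_active=False)])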

marcus-nystrom commented 8 months ago

Committed a fix to add one AOI to a video (untested)

jojoschmitt commented 8 months ago

Great! Thank you both for helping us out with this. My time as a research assistant has unfortunately come to an end; a colleague will take over from here and test it for our purposes.

I will keep following the progress; however, I will not be able to make contributions myself, since I no longer have access to Tobii Pro Lab to test new implementations.