HARPLab / DReyeVR

VR driving 🚙 + eye tracking 👀 simulator based on CARLA for driving interaction research
https://arxiv.org/abs/2201.01931
MIT License

Can we draw AOIs? #138

Closed: Nitro60zeus closed this issue 8 months ago

Nitro60zeus commented 1 year ago

Is it possible to have certain areas of interest? For example, an AOI on the rearview mirror, so that we can count the number of times the participant looks at it.

ajdroid commented 1 year ago

Not exactly by drawing an AOI but we do provide the tickfocus_hitpt in the recording which will tell you what 3D mesh the gaze was intersecting with at each point. The logic uses a collision channel to figure out what to include and/or ignore (we ignore the ego vehicle typically). Relevant code is here: https://github.com/HARPLab/DReyeVR/blob/57109001edcfe0d91a6f814cf9c3287ff2bb57e4/DReyeVR/EgoSensor.cpp#L287

Nitro60zeus commented 1 year ago

Oh, that's great! Thanks for your response. But how do I obtain that data for analysis? For example, how do I know whether the gaze intersected with the traffic lights? Or, say I have a sign board and I want to know whether the gaze intersects with that sign board.

Nitro60zeus commented 12 months ago

Ok, I see this. I assume this is what you meant... (got this txt file from /show_recorder_file_info.py -a -f /PATH/TO/RECORDER-FILE > recorder.txt )

[screenshot of the recorder.txt output]

I wrote a script to extract the names. Hope this helps anyone looking to do the same:


import re
import csv

# Open the input text file
with open('4a.txt', 'r') as file:
    content = file.read()

# Use regular expressions to find all relevant values
timestamp_carla_values = re.findall(r'TimestampCarla:(\d+)', content)
pupil_diameter_values = re.findall(r'PupilDiameter:(-?[\d.]+)', content)
eye_openness_values = re.findall(r'EyeOpenness:(-?[\d.]+)', content)
hit_values = re.findall(r'FocusInfo:Hit:(\d+)', content)
actor_name_values = re.findall(r'ActorName:([\w]+)', content)

# Open a CSV file for writing
with open('z.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)

    # Write the header row to the CSV file
    writer.writerow(['TimestampCarla', 'PupilDiameter A', 'PupilDiameter B', 'EyeOpenness A', 'EyeOpenness B', 'Hit', 'ActorName'])

    # Determine the minimum length of the lists to avoid index errors
    min_length = min(len(timestamp_carla_values), len(pupil_diameter_values), len(eye_openness_values), len(hit_values), len(actor_name_values))

    # Write the values to the CSV file, alternating EyeOpenness values
    for i in range(min_length):
        row = [
            timestamp_carla_values[i],
            pupil_diameter_values[i * 2] if i * 2 < len(pupil_diameter_values) else '',
            pupil_diameter_values[i * 2 + 1] if i * 2 + 1 < len(pupil_diameter_values) else '',
            eye_openness_values[i * 2] if i * 2 < len(eye_openness_values) else '',
            eye_openness_values[i * 2 + 1] if i * 2 + 1 < len(eye_openness_values) else '',
            hit_values[i] if i < len(hit_values) else '',
            actor_name_values[i] if i < len(actor_name_values) else ''
        ]
        writer.writerow(row)

print("CSV file 'z.csv' has been created with the specified values.")
Nitro60zeus commented 12 months ago

> Not exactly by drawing an AOI but we do provide the tickfocus_hitpt in the recording which will tell you what 3D mesh the gaze was intersecting with at each point. The logic uses a collision channel to figure out what to include and/or ignore (we ignore the ego vehicle typically). Relevant code is here: https://github.com/HARPLab/DReyeVR/blob/57109001edcfe0d91a6f814cf9c3287ff2bb57e4/DReyeVR/EgoSensor.cpp#L287

Hey, I have a small follow-up question. How many seconds or milliseconds of gaze collision with an object mesh (e.g. a cyclist) did you require for it to count as a successful fixation? (When I move my eyes I may glide over a number of objects, but only when I stop and look at something for some duration, say 100 milliseconds, have I actually looked at the object, i.e. registered it in my brain, or fixated on it.)

ajdroid commented 11 months ago

Hi, we don't consider fixations in this part of the pipeline -- it just gives you instantaneous gaze hit locations. We filter for fixations post-parsing.

ajdroid commented 11 months ago

We use this code for gaze event classification: https://github.com/HARPLab/ibmmpy

Nitro60zeus commented 11 months ago

> Hi, we don't consider fixations in this part of the pipeline -- it just gives you instantaneous gaze hit locations. We filter for fixations post-parsing.

I am extremely interested in the fixations. Can you tell me how you filter for fixations post-parsing? It would be extremely helpful. I tried the DReyeVR parser, which is a really great tool, but I don't see any output related to fixations. Thanks a lot btw!

ajdroid commented 8 months ago

As I said above, we use the ibmmpy gaze event classifier to do fixation detection (it essentially uses the instantaneous velocity of gaze to form two clusters: slow fixations and fast saccades).
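
For readers who only want the gist rather than the ibmmpy API (not reproduced here), a minimal sketch of this velocity-clustering idea could look like the following. The gaze direction vectors, timestamps, and the 2-component Gaussian mixture are illustrative assumptions, not DReyeVR's or ibmmpy's actual implementation.

import numpy as np
from sklearn.mixture import GaussianMixture

def classify_fixations(gaze_dirs, timestamps):
    """Label each gaze sample as fixation-like (slow) or saccade-like (fast).

    gaze_dirs: (N, 3) array of unit gaze direction vectors (illustrative input)
    timestamps: (N,) array of sample times in seconds
    Returns a boolean array of length N, True for the slow (fixation) cluster.
    """
    # Angular velocity between consecutive samples, in degrees per second
    dots = np.clip(np.sum(gaze_dirs[1:] * gaze_dirs[:-1], axis=1), -1.0, 1.0)
    angles = np.degrees(np.arccos(dots))
    velocity = angles / np.maximum(np.diff(timestamps), 1e-6)

    # Two 1-D Gaussian clusters: slow samples (fixations) vs fast ones (saccades)
    gmm = GaussianMixture(n_components=2, random_state=0)
    labels = gmm.fit_predict(velocity.reshape(-1, 1))
    slow_cluster = np.argmin(gmm.means_.ravel())

    # The first sample has no velocity estimate; reuse its neighbour's label
    is_slow = labels == slow_cluster
    return np.concatenate([is_slow[:1], is_slow])

With per-sample labels like these, one could then require a minimum dwell (the ~100 ms mentioned above) on the same ActorName before counting a look at that object.
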

StijnOosterlinck commented 5 months ago

@Nitro60zeus have you been able to detect gazes at the rearview mirror?