Hi @JamesQFreeman ,
I am working on a similar problem but with a slightly different application. The goal of my project is to identify damage on a structure for visual building inspection. I collected the data using Tobii Pro Glasses from 10 participants, 3 of whom were recorded at two building sites. The data consists of videos (3–5 minutes long) with all the gaze information embedded.
You used a screen-based eye tracker for your project, whereas I used wearable glasses, so I was wondering whether there is a way to take the data I have collected and replicate it with your method. Also, while setting up the environment, I ran into a problem with Tobii's Stream Engine API: it is not publicly available, and Tobii only provides the Pro SDK. Could you advise how I can work around this, or how I can use my data directly with your method to generate attention maps of a building inspector's visual attention?
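For context, here is a minimal sketch of what I am hoping to produce from the gaze data. It is not your method, just a simple illustration: it accumulates gaze samples (assumed to be exported as normalized x/y coordinates, which is how I understand the Tobii export; the sample data here is made up) into a Gaussian-blurred attention map.

```python
import numpy as np

def gaze_heatmap(points, width, height, sigma=20):
    """Accumulate normalized gaze points (x, y in [0, 1]) into a
    Gaussian-blurred attention map of shape (height, width)."""
    grid = np.zeros((height, width), dtype=np.float64)
    for x, y in points:
        col = min(int(x * width), width - 1)
        row = min(int(y * height), height - 1)
        grid[row, col] += 1.0
    # Separable Gaussian blur, done with plain NumPy to avoid a
    # scipy dependency: blur rows, then columns.
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-(t ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, grid)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
    # Normalize to [0, 1] so the map can be overlaid on a video frame.
    if blurred.max() > 0:
        blurred /= blurred.max()
    return blurred

# Hypothetical gaze samples (normalized coordinates from an export).
samples = [(0.50, 0.50), (0.52, 0.48), (0.10, 0.90)]
heatmap = gaze_heatmap(samples, width=320, height=240)
print(heatmap.shape)  # (240, 320)
```

My question is essentially how to get from the glasses' scene-camera videos plus gaze streams to inputs like these for your pipeline.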
Thank you so much for your help.