Hi @VisionResearchBlog, the error was probably caused by pressing the Delete button multiple times while the calibration was still being computed.
The calibration is computed in the foreground since it usually takes only a few seconds; during this period, the UI of Pupil Player becomes unresponsive. You probably clicked Delete multiple times because nothing seemed to be happening during that time.
We will include a fix preventing this crash in the next release. In the meantime, please be careful not to press the Delete button multiple times while Player is unresponsive.
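For illustration only, here is a minimal sketch of the kind of guard such a fix could use; this is not the actual Player code, and `compute_calibration` / `CalibrationController` are hypothetical names. The idea is that Delete clicks queued up during the freeze should not be able to delete the same calibration twice:

```python
import time

def compute_calibration(pupil_data):
    time.sleep(3)  # stand-in for the few seconds the real fit takes
    return {"params": None}

class CalibrationController:
    def __init__(self):
        self.calibration = None

    def on_recalculate_clicked(self, pupil_data):
        # Runs on the UI thread, so the window is unresponsive until this returns.
        self.calibration = compute_calibration(pupil_data)

    def on_delete_clicked(self):
        # Clicks queued up during the freeze arrive here afterwards; ignore
        # them once the calibration has already been removed.
        if self.calibration is None:
            return
        self.calibration = None
```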
Hi @pfaion, thank you for looking into this; I will follow your advice about the Delete button.
Can you follow up on the other two issues? (1) The Minimum Pupil Confidence slider does not seem to change the parameter: the on-screen text always reports 0.8, regardless of the setting, when I press Recalculate Calibration.
(2) Can you confirm whether you can get a decent calibration using the eye data from the 4 markers shown from 00:02-00:13? I followed the instructions in the documentation and YouTube video, but the resulting track is always worse than the one I am trying to improve on. Does the tracker need more than 4 calibration points for offline calibration?
@VisionResearchBlog
(1) I have been able to reproduce the non-working Minimum Pupil Confidence slider. We will look into this issue and prepare a bugfix. Thank you for reporting it!
(2) Looking at your procedure in more detail, a couple of things come to mind.
Your calibration points should span the entire field of view that your subjects will be looking at. In your case you are using a 4 point calibration only, which covers only a very limited part of the field of view. You will notice that we removed the 4 point calibration from later versions of Pupil and now only use it for validation, since it would often not produce optimal results. For the same reason, screen-based calibration is best suited for screen-based experiments. For experiments e.g. on a table, as in your case, a better approach is a physical-marker based calibration such as Single Marker (or, in previous versions, Manual Marker). I am aware that you might not be able to influence these factors anymore if all of your recordings have already been finished, but please keep these tips in mind for future recordings.
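To make the coverage point concrete, here is a minimal sketch with made-up marker positions (not the ones from your recording), assuming normalized (x, y) scene-camera coordinates, showing how little of the scene a small on-screen 4-marker arrangement typically spans:

```python
# Estimate how much of the scene the calibration markers actually cover.
def marker_coverage(norm_positions):
    """norm_positions: list of (x, y) with x, y in [0, 1] scene coordinates."""
    xs = [p[0] for p in norm_positions]
    ys = [p[1] for p in norm_positions]
    span_x = max(xs) - min(xs)
    span_y = max(ys) - min(ys)
    return span_x * span_y  # fraction of the scene area spanned by the markers

# Four markers clustered on a screen cover only a small fraction of the scene,
# so gaze outside that region has to be extrapolated.
print(marker_coverage([(0.35, 0.2), (0.65, 0.2), (0.35, 0.45), (0.65, 0.45)]))  # ~0.075
```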
An option to improve the gaze accuracy for your existing recordings might be to try the 2D pipeline. Please read the best practices section on the gaze mapping pipelines for a more detailed explanation; generally I would recommend reading the entire best practices page. If your recordings do not contain much slippage, this might be a viable option. The fact that you recorded a validation session at the end of the recording will come in very handy for checking this! I ran a quick 2D calibration and gaze mapping on the recording you sent to info@pupil-labs.com and, calibrating only on the initial section, was able to get down to 1.8 degrees of error in the final validation. You will need to run a post-hoc pupil detection for this to work; the detection might look bad because of the 3D model, but the 2D gaze mapping does not take the 3D eye model into account.
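For reference, the validation error quoted above is an angular error. Here is a minimal sketch of how such an error can be computed from unit direction vectors in scene-camera coordinates; the gaze and target directions below are hypothetical, not values from your recording:

```python
import numpy as np

def angular_error_deg(gaze_dirs, target_dirs):
    # Normalize both sets of direction vectors, then take the angle between them.
    gaze = gaze_dirs / np.linalg.norm(gaze_dirs, axis=1, keepdims=True)
    target = target_dirs / np.linalg.norm(target_dirs, axis=1, keepdims=True)
    cos = np.clip(np.sum(gaze * target, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))  # per-sample error in degrees

gaze = np.array([[0.02, 0.01, 1.0], [0.0, 0.03, 1.0]])
target = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
print(angular_error_deg(gaze, target).mean())
```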
Another note: I noticed you have multiple post-hoc gaze mappers set up. If these have overlapping time ranges, this can also hurt the results, since you will get duplicated gaze in those sections. Make sure the gaze mappers have non-overlapping ranges, or disable all but one.
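If it helps, here is a minimal sketch, using hypothetical mapper names and time ranges, of how you could check a set of gaze mapper sections for overlaps:

```python
def overlapping_mappers(mappers):
    """mappers: dict mapping mapper name -> (start_s, end_s) in recording time."""
    names = list(mappers)
    overlaps = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            start_a, end_a = mappers[a]
            start_b, end_b = mappers[b]
            # Two ranges overlap if each starts before the other ends.
            if start_a < end_b and start_b < end_a:
                overlaps.append((a, b))
    return overlaps

print(overlapping_mappers({"Gaze Mapper 1": (0.0, 60.0), "Gaze Mapper 2": (45.0, 120.0)}))
# -> [('Gaze Mapper 1', 'Gaze Mapper 2')]
```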
@pfaion Thanks for your advice, all very helpful! Addressing your points in #2: (1) Unfortunately we only recorded the validation, but I will keep this in mind for future recordings. (2) I will try the 2D mapping as you suggest and keep the best practices in mind: record an acclimation period for the tracker, then a calibration, then a validation (not just a validation, as we had done). (3) I did not realize multiple gaze mappers could be active at the same time; I will keep this in mind to avoid conflicts.
Closing this issue now, as the fix for the crash has been released with Pupil v2.1 and I hope all other questions were answered. Feel free to reopen if anything is left to clarify!
I have been trying to re-calibrate a participant and the results have been very poor.
I will send the project to data@pupil-labs.com and describe the problem here: in this project the original eye gaze is far from the calibration points, so I want to re-calibrate and maybe re-detect the pupil.
I deleted the old calibration and set the software to detect on-screen markers from 00:02-00:13. The recalculation then completes successfully. Note that the Minimum Pupil Confidence slider does not seem to change the parameter: the on-screen text always reports 0.8, regardless of the setting, when I press Recalculate Calibration.
When I then use this new calibration with a gaze mapper and click recalculate, the resulting eye track is far worse than the original. I don't understand this, since the eye images appear to be tracked successfully.
I tried several different settings for calibration, gaze mapping, and pupil detection, all with poor results. Eventually the program crashed; please see the screenshot and log file below. This may have been after deleting a calibration and then running a remapping, but I am not sure.
player.log