CNCLgithub / mot

Model implementation for "Adaptive computation as a new mechanism of human attention"

probe placement update #28

Closed eivinasbutkus closed 3 years ago

eivinasbutkus commented 4 years ago

Here are some stimuli with probe placement (probes are rings now): https://yale.app.box.com/folder/121883170205

NB: there are multiple shortcomings and open questions, so I don't think the probe placement will make much sense yet, but you can get an idea of how the rings look (if you can spot them, haha).

I generated 10 trials with the same motion parameters (human accuracy was 82% in this setting). I then z-scored the attention exerted by the model across all trials, determined 5 quantiles of the z-scored attention, and, for each trial and each quantile, sampled uniformly from the moments where attention falls in that quantile to place the probe.
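For concreteness, here is a minimal sketch of that placement procedure, assuming attention arrives as a (trials × frames) array; the function name and input format are illustrative, not the repo's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)

def place_probes(attention, n_quantiles=5):
    """Sample one probe frame per attention quantile for each trial.

    attention: array of shape (n_trials, n_frames), the model's
    attention trace per trial (hypothetical input format).
    """
    # z-score pooled over all trials and frames
    z = (attention - attention.mean()) / attention.std()
    # equal-mass bin edges over the pooled z-scores
    edges = np.quantile(z, np.linspace(0.0, 1.0, n_quantiles + 1))
    probes = []
    for trial in z:
        frames = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            # moments whose z-scored attention falls in this quantile bin
            candidates = np.flatnonzero((trial >= lo) & (trial <= hi))
            # sample uniformly among those moments (None if the bin is empty)
            frames.append(int(rng.choice(candidates)) if candidates.size else None)
        probes.append(frames)
    return probes
```

With this shape, `place_probes(attention)[t][q]` would give the probe frame for trial `t` at quantile `q`.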

Noteworthy changes to sensitivity and tracker masks:

  1. Got rid of the Gaussian component in the masks. With ancestral sampling, we're able to track without the Gaussian component, which itself had some strange effects (e.g. pushing the belief off the observed mask); see the sketch after this list.
  2. The attention weights produced by sensitivity are now smoother.
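To make the mask change concrete, here is a toy sketch of the two variants. `tracker_mask`, its parameters, and the blending rule are assumptions for illustration, not the repo's actual mask code:

```python
import numpy as np

def tracker_mask(center, radius, size=(120, 120), gaussian_sigma=None):
    """Toy illustration of a tracker mask with/without a Gaussian component.

    With gaussian_sigma=None the mask is purely binary (the new setting);
    with a sigma, a Gaussian falloff is blended in around the tracker
    (the old setting, which could pull belief off the observed mask).
    """
    ys, xs = np.mgrid[0:size[0], 0:size[1]]
    dist = np.hypot(ys - center[0], xs - center[1])
    mask = (dist <= radius).astype(float)
    if gaussian_sigma is not None:
        # blend in a Gaussian falloff centered on the tracker
        mask = np.maximum(mask, np.exp(-0.5 * (dist / gaussian_sigma) ** 2))
    return mask
```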

However, I think it would be good to rerun probe placement with the settings that produced our best sensitivity results on exp0 (with the Gaussian mask component and without smoothed sensitivity). Or perhaps at least validate the new model?

The main problem I see with sensitivity is that there is a lot of correlation between the attention given to different trackers. The model "gets" the overall difficulty of certain moments and allocates attention accordingly, but then for some reason each tracker receives very similar attention. I'm not sure why this is the case, but I don't think attention should be correlated across all trackers.
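One way to quantify this claim (a hypothetical diagnostic, not something in the repo) is the pairwise correlation of per-tracker attention traces:

```python
import numpy as np

def tracker_attention_correlation(attention):
    """attention: array of shape (n_trackers, n_frames).

    Returns the pairwise correlation matrix of the trackers'
    attention traces, plus the mean off-diagonal value, which
    summarizes how strongly attention is shared across trackers.
    """
    corr = np.corrcoef(attention)
    off_diag = corr[~np.eye(len(corr), dtype=bool)]
    return corr, off_diag.mean()
```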

Crucial TODOs:

belledon commented 4 years ago

Thanks for posting. Can you also upload the performance compute? (just to see avg accuracy)

> Got rid of the Gaussian component in the masks.

Noted. I don't think it will solve the pushing problem. That comes from the model not being able to explain "holes" in tracker masks caused by distractor occlusion.

> The main problem I see with sensitivity is that there is a lot of correlation between the attention given to different trackers. The model "gets" the overall difficulty of certain moments and allocates attention accordingly, but then for some reason each tracker receives very similar attention. I'm not sure why this is the case, but I don't think attention should be correlated across all trackers.

That shouldn't be the case, especially since we are no longer scaling the KL.

You can look at older attention maps posted in the Slack channel; there should be clear delineation between different trackers. After looking through the 10 trials, almost all of the targets clump up early on, which I think is the main cause of this.

> However, I think it would be good to rerun probe placement with the settings that produced our best sensitivity results on exp0 (with the Gaussian mask component and without smoothed sensitivity). Or perhaps at least validate the new model?

Let's take a look at the ISR motion model. Can you make another set of 10 trials that are similar to what you shared the other week, but a little slower?

eivinasbutkus commented 4 years ago

TD avg accuracy: 82%
DC avg accuracy: 81.6%