ctn-waterloo / modelling_ideas

Ideas for models that could be made with Nengo if anyone has time

Filling space with silence works worse than filling with noise #87

Open · Seanny123 opened this issue 7 years ago

Seanny123 commented 7 years ago

In "Cognitive mechanisms of memory for order in rhesus monkeys" by Victoria L. Templer and Robert R. Hampton, monkeys perform a basic recognition task: they are shown a list of images and then asked which of two images appeared in the list.

One of the experiments manipulates whether the interval between the target image and the test question is filled with more images or with silence, and has the surprising result that intervals filled with images produced better recall than intervals filled with silence.

> Experiments 2 and 3 identified temporal order as the most likely determinant of choice, so we attempted to better define what constituted temporal order in Experiment 4. We found that the order of events was better discriminated when intervening images occurred, compared to unfilled intervals of equivalent duration. So it is unlikely that classic timing mechanisms that use an internal clock or oscillator to measure elapsed time (Meck, 1983; Ortega et al., 2009; Roberts, 1981) are responsible for discrimination of temporal order in this task. It may seem counterintuitive that performance would increase when there is more that happened, and thus more to remember, as when additional images intervene between target images. The occurrence of intervening events may serve to mark the passage of time, thereby increasing the subjective separation of two events.

I don't think this would happen with @xchoo's current working memory model.

tbekolay commented 7 years ago

This is a pretty interesting result... a random thought about how @xchoo's working memory model might (?) reproduce this: the statistics of an image more closely match the statistics of the memory representation than silence does (since silence presumably means zero input, so there can be no similarity between the visual display and the memory representation). While I think the rehearsal loop in @xchoo's model currently works independently of the visual input, if there were some input from there, you might be able to improve rehearsal by virtue of any image being more similar to the images stored in memory than silence is. Of course, that assumes those statistics would stay relatively close even after compression to a semantic pointer.
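The core of that intuition can be sketched with plain dot-product similarity, the measure typically used with semantic pointers. This is a toy illustration under my own assumptions (random unit vectors standing in for compressed image representations), not anything from @xchoo's actual model:

```python
# Toy sketch: any random "image-like" vector has some similarity to stored
# memory vectors, while a silent (all-zero) input has exactly none.
# The vectors here are hypothetical stand-ins for semantic pointers.
import numpy as np

rng = np.random.default_rng(0)
D = 64  # assumed semantic pointer dimensionality


def unit(v):
    """Normalize a vector to unit length."""
    return v / np.linalg.norm(v)


# Stored list items, modeled as random unit vectors
memory_items = [unit(rng.standard_normal(D)) for _ in range(5)]

noise_input = unit(rng.standard_normal(D))  # arbitrary image-like input
silent_input = np.zeros(D)                  # silence = zero input

# Best-case similarity of each input to anything in memory
noise_sim = max(abs(np.dot(noise_input, m)) for m in memory_items)
silent_sim = max(abs(np.dot(silent_input, m)) for m in memory_items)

print(noise_sim > silent_sim)  # silence can never beat even random input
```

So if rehearsal benefits at all from input resembling memory contents, silence is the worst possible filler, which lines up with the paper's result.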

xchoo commented 7 years ago

That's correct, this doesn't currently happen in my (serial) working memory model. In fact, there isn't much of a rehearsal loop at all. The first step to modelling this phenomenon would be to put in a proper implementation of the rehearsal loop, and possibly to encode some kind of context semantic pointer in the memory representation.
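For anyone curious what "encode some kind of context semantic pointer" could look like concretely, here is a minimal numpy sketch using circular convolution, the standard binding operation in the Semantic Pointer Architecture. The names (`contexts`, `items`, `trace`) and the random-vector contexts are my own assumptions for illustration, not the model's actual implementation:

```python
# Sketch: bind each list item to a positional "context" pointer with
# circular convolution, sum the pairs into one memory trace, then unbind
# with a context to (noisily) recover the item at that position.
import numpy as np

rng = np.random.default_rng(1)
D = 128  # assumed semantic pointer dimensionality


def unit(v):
    """Normalize a vector to unit length."""
    return v / np.linalg.norm(v)


def cconv(a, b):
    """Circular convolution via FFT: binds two vectors into one."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))


def inverse(a):
    """Approximate inverse for unbinding (reverse all but first element)."""
    return np.concatenate([a[:1], a[:0:-1]])


items = [unit(rng.standard_normal(D)) for _ in range(4)]
contexts = [unit(rng.standard_normal(D)) for _ in range(4)]

# Memory trace: superposition of context-bound items
trace = sum(cconv(c, x) for c, x in zip(contexts, items))

# Unbinding with context 2 should make item 2 the closest match
recovered = cconv(trace, inverse(contexts[2]))
sims = [np.dot(unit(recovered), x) for x in items]
print(int(np.argmax(sims)))  # expect the item bound to contexts[2]
```

If the contexts drift gradually over time rather than being independent random vectors, nearby positions become partially confusable, which is one way intervening images could sharpen the subjective separation between two events.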

Although, taking a brief look at the paper, it might be that Spaun is capable of reproducing some of the results. The way they dropped images from the list (Experiment 4.1) is a little weird. It seems that in one instance they dropped an image from the middle of the list (thereby reducing a list of 5 images to a list of 4 images presented over a 5-image list duration), versus dropping an image at the start of the list (thereby reducing a list of 5 images to a list of 4 images presented over a 4-image list duration, because the blank first image would just be ignored). I would have liked to see a condition where they dropped the last image instead.