lucy3 / RRmemory

Psych 204 project

Meeting 11/2 #3

Open lucy3 opened 7 years ago

lucy3 commented 7 years ago

Maybe a separate cost for retrieving and storing, because sometimes we have a memory but are unable to retrieve it. After retrieving once, it becomes less expensive to continue storing/retrieving (i.e., memories that are constantly used are less likely to be forgotten).
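A rough WebPPL sketch of that cost structure; all names and constants here are made up for illustration, not part of the model yet:

```
// Hypothetical cost structure: storage and retrieval priced separately,
// with retrieval getting cheaper each time a memory is used.
var storeCost = 1.0;
var baseRetrieveCost = 0.5;
var retrieveCost = function(nPriorRetrievals) {
  // assumed discount: each prior retrieval cuts the next cost by 20%
  return baseRetrieveCost * Math.pow(0.8, nPriorRetrievals);
};
retrieveCost(0); // 0.5   -- first retrieval is the most expensive
retrieveCost(3); // 0.256 -- frequently used memories stay cheap
```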

lucy3 commented 7 years ago

We will structure our preliminary study around remembering a bit string: a string of coin flips. How should we structure our reward for remembering? Depending on what your reward and costs are, there is a threshold for how much you would want to pay to remember something. How much should I pay before it's no longer worth it for the reward?

The amount of noise depends on the price paid. The evaluator gives you a score, and that score minus the price paid is your net at the end. Actual sequence --> remembered sequence with noise dependent on the price paid --> evaluator takes both sequences and gives you a score, and you want to optimize your net.
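A minimal WebPPL sketch of this setup, assuming a particular noise curve and a 1-point-per-correct-bit reward (both are placeholders, not decisions we've made):

```
var trueSeq = [1, 0, 0, 1, 1];

// assumed form: more money paid, less substitution noise
var noiseLevel = function(price) {
  return 0.5 * Math.exp(-price);
};

// remembered sequence = true sequence with price-dependent bit flips
var remember = function(seq, price) {
  return map(function(bit) {
    return flip(noiseLevel(price)) ? 1 - bit : bit;
  }, seq);
};

// evaluator: 1 point per correctly recalled bit (placeholder reward)
var score = function(seq, recalled) {
  return sum(map2(function(a, b) { return a === b ? 1 : 0; }, seq, recalled));
};

var price = 1.5;
var net = score(trueSeq, remember(trueSeq, price)) - price;
```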

lucy3 commented 7 years ago

In the future we can use edit distance between the two strings as our scoring mechanism (substitution, insertion, deletion), but for now we'll just think about substitution. Given primacy and recency effects, the noise could differ between the beginning, middle, and end of the string. The scoring function could also differ across bit positions, depending on which positions are more important.
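For example, a substitution-only score with position-dependent weights, plus a U-shaped noise profile for primacy/recency, might look like this (the specific weights and noise curve are made up):

```
// assumed U-shape: low noise at the ends (primacy/recency), higher in the middle
var posNoise = function(i, n) {
  var mid = (n - 1) / 2;
  return 0.05 + 0.2 * (1 - Math.abs(i - mid) / mid);
};

// assumed weighting: e.g. the first bit counts twice as much
var posWeight = function(i, n) {
  return i === 0 ? 2 : 1;
};

// substitution-only scoring: weighted count of matching positions
var weightedScore = function(seq, recalled) {
  var n = seq.length;
  return sum(mapIndexed(function(i, bit) {
    return bit === recalled[i] ? posWeight(i, n) : 0;
  }, seq));
};
```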

lucy3 commented 7 years ago

With how we have the bit string set up, we aren't actually inferring anything. We should set it up so that it uses infer.
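In WebPPL terms, the skeleton would be something like this (sketch only; the conditioning step is the part we still need to fill in):

```
var model = function() {
  var p = uniform(0, 1);  // latent quantity we'd actually be inferring
  // ... condition on the observed bit string here ...
  return p;
};
var posterior = Infer({method: 'MCMC', samples: 1000}, model);
```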

lucy3 commented 7 years ago

Right now our initial model just uses flip, instead of drawing the probability of flips and inferring the weight of the coin we are flipping (0's and 1's are heads and tails, a sequence of observations). How should we go about doing that?
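One way to do it (a sketch; the data and inference settings are placeholders): put a prior on the coin weight and observe each bit as a draw from that coin.

```
var observations = [1, 0, 0, 1, 1, 1, 0, 1];  // made-up bit string

var model = function() {
  var p = uniform(0, 1);  // prior over the coin's weight
  map(function(obs) {
    observe(Bernoulli({p: p}), obs === 1);  // each bit is a flip of this coin
  }, observations);
  return p;
};

var posterior = Infer({method: 'MCMC', samples: 2000}, model);
expectation(posterior);  // posterior mean of the coin weight
```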

lucy3 commented 7 years ago

Our model right now is not a matter of forgetting but of misremembering: the sequence itself is altered.

lucy3 commented 7 years ago

Is "getting it right" a binary? E.g. all or nothing, $2 or $0

Also do we inject noise into the observations or into var p (the weight of the coin itself)

pakapol commented 7 years ago

We can have a cost for getting things wrong; say, cost = (p' - p)^2. Otherwise, there are many ways to measure the difference between distributions (KL divergence/sample-sort).
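Concretely, the two options might look like this (sketch; the KL is the closed form between two Bernoullis, and the example values are arbitrary):

```
// squared-error cost on the inferred weight
var squaredCost = function(pTrue, pRecalled) {
  return Math.pow(pRecalled - pTrue, 2);
};

// KL(Bernoulli(pTrue) || Bernoulli(pRecalled)), closed form
var bernoulliKL = function(pTrue, pRecalled) {
  return pTrue * Math.log(pTrue / pRecalled) +
         (1 - pTrue) * Math.log((1 - pTrue) / (1 - pRecalled));
};

squaredCost(0.7, 0.5);  // 0.04
bernoulliKL(0.7, 0.5);  // ~0.082
```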

pakapol commented 7 years ago

One more note, in addition to the things we covered today:

I think we may look at MH's model as "learning," whereas ours is "memory." Learning is a more abstract version of memory, and we don't only memorize sequences of objects.

In our model, the agent "memorizes" the sequence of observations and is asked to recall it. We can introduce noise that flips bits in the string, and we evaluate the similarity between the "recalled string" and the "input string."

So in MH's model, an agent starts with some belief about one attribute of an object. It sees a sequence of observations, then updates its belief, so we say it "learns" the parameter. In this case, we can either (i) introduce noise that interferes with the belief, or (ii) tamper with the observations themselves! In either case, we need to come up with a way to measure the difference between two distributions.
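A sketch of the two options, with assumed noise forms (Gaussian jitter on the belief, independent bit flips on the observations; both rates are placeholders):

```
var observations = [1, 0, 1, 1, 0];

// (i) noise on the belief: jitter the inferred weight, clipped to [0, 1]
var noisyBelief = function(p) {
  return Math.min(1, Math.max(0, p + gaussian(0, 0.1)));
};

// (ii) noise on the observations: flip each bit with some small probability
var noisyObs = map(function(bit) {
  return flip(0.1) ? 1 - bit : bit;
}, observations);
```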

There is a soft line between "learning" and "memory" (the type we generally see in the experimental tasks in a Learning & Memory class). As for "learning," I think Ch. 5 and Ch. 8 in probmods should be really helpful examples of how we could approach it. I would try to set aside some time to go through them again.