lucy3 / RRmemory

Psych 204 project

Meeting with MH 11/15 #6

Open lucy3 opened 7 years ago

lucy3 commented 7 years ago

After each iteration you have a posterior. What if you can pay one dollar to keep the mean, two dollars to keep the mean and standard deviation, or three dollars to keep the full distribution?

This keeps the noise very simple and makes sure we're doing the resource analysis correctly. MH wants us to figure out the structure of the resource analysis: you pay a cost to keep more information, and you also get a reward if you get the question right. (Figuring out units for these so that the trade-off matters would be great.) If you pay three for the full posterior, the reward should be above three so that it's sort of worth it; maybe the reward is four. I could always save the full posterior and make money, but maybe I can pay for just the mean, make good guesses, and make more money overall. You could also not pay attention at all, which costs zero. If the true weight of the coin is 0.5, then it isn't worth learning.
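A minimal sketch of the tiered scheme, assuming the model lives in WebPPL (the flip/sample/Infer vocabulary in these notes suggests it); the pay levels and the `summarize` helper are illustrative, not fixed by the notes:

```javascript
// What you keep depends on what you pay. Assumes `posterior` is a
// sample-based distribution over the coin weight p, as returned by Infer.
var summarize = function(posterior, pay) {
  var m = expectation(posterior);
  if (pay === 3) {
    return posterior;                      // keep the full distribution
  } else if (pay === 2) {
    var v = expectation(posterior, function(p) { return p * p; }) - m * m;
    return {mean: m, sd: Math.sqrt(v)};    // keep mean and standard deviation
  } else if (pay === 1) {
    return {mean: m};                      // keep only the mean
  } else {
    return null;                           // pay nothing, keep nothing
  }
};
```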

Instead of predicting the next day, you predict the next twenty days. You don't have to be in sync, but you compare the fractions of heads you predict. If you're just predicting the next one, you need less information.
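A small sketch of the twenty-day version, under the same WebPPL assumption; `predictFraction` is a hypothetical helper:

```javascript
// Predict the fraction of heads over the next n flips given a coin
// weight p drawn from whatever posterior summary survived.
var predictFraction = function(p, n) {
  var heads = repeat(n, function() { return flip(p) ? 1 : 0; });
  return sum(heads) / n;
};
// e.g. predictFraction(sample(posterior), 20); n = 1 recovers the
// single-next-flip prediction, which needs less information to get right.
```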

The biggest challenge for us is to simplify things down. Is it ever worth it to save the full posterior? If we don't have all of the information, e.g. we just paid to keep the mean, then how do we figure out alpha and beta? We would need a prior on them, say a uniform(0, 20) over both. How should you compute the variance if you're not being exact and are using Infer? You have a bag of samples... there's a built-in function for entropy.
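One way to make this concrete, again assuming WebPPL: put the uniform(0, 20) prior over both parameters and soft-condition on the remembered mean (an exact equality condition on a continuous statistic has probability zero, so a small-noise Gaussian stands in for it; the 0.01 tolerance is arbitrary):

```javascript
var rememberedMean = 0.7;   // hypothetical value we paid one dollar to keep

// Recover plausible (alpha, beta) pairs from the mean alone.
var recovered = Infer({method: 'MCMC', samples: 5000}, function() {
  var a = uniform(0, 20);
  var b = uniform(0, 20);
  observe(Gaussian({mu: a / (a + b), sigma: 0.01}), rememberedMean);
  return {a: a, b: b};
});

// For a bag of samples over p itself, the variance is just E[p^2] - E[p]^2:
var varianceOf = function(dist) {
  var m = expectation(dist);
  return expectation(dist, function(p) { return p * p; }) - m * m;
};
```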

If you have just the mean and standard deviation, how do you plug that into a model to make predictions? In the hierarchical model, the variance won't be enough. This might be too much of a new direction.
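One candidate answer, not settled in the meeting: moment-match a Beta to the remembered summary, which is valid whenever sd^2 < mean * (1 - mean):

```javascript
// Method-of-moments reconstruction of a Beta from a remembered mean and SD.
var betaFromMoments = function(m, sd) {
  var v = sd * sd;
  var nu = m * (1 - m) / v - 1;   // plays the role of alpha + beta
  return Beta({a: m * nu, b: (1 - m) * nu});
};
// Predict the next flip from the reconstruction:
// flip(sample(betaFromMoments(0.7, 0.1)))
```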

Maybe an implanted false memory, or you gradually revert back to your prior. With Betas or Dirichlets it is easier to unobserve. Right now our model is sort of like a false memory.
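A sketch of why Betas make this easy, reading "unobserve" as removing a pseudo-count (this reading is mine, not from the meeting); the reversion rate is a hypothetical parameter:

```javascript
// A Beta posterior is just pseudo-counts, so forgetting one head is
// decrementing a, and gradual reversion is shrinking both counts
// toward the prior's.
var unobserveHead = function(c) { return {a: c.a - 1, b: c.b}; };

var revertStep = function(c, prior, rate) {
  return {a: c.a + rate * (prior.a - c.a),
          b: c.b + rate * (prior.b - c.b)};
};
// e.g. repeated revertStep({a: 8, b: 3}, {a: 1, b: 1}, 0.1) calls drift
// the counts back toward the Beta(1, 1) prior.
```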

Maybe try this: flip(f(pay)) ? sample(remembered_p) : sample(prior), where f maps the amount paid to a probability of successful recall.
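Expanded into a runnable form, with recallProb as a hypothetical mapping from dollars paid to the chance the stored posterior survives (the notes only say the flip probability is some function of pay):

```javascript
var recallProb = function(pay) { return Math.min(1, pay / 3); };  // assumed shape

// With probability recallProb(pay), draw from the remembered posterior;
// otherwise fall back to the prior.
var recall = function(pay, remembered_p, prior) {
  return flip(recallProb(pay)) ? sample(remembered_p) : sample(prior);
};
// e.g. recall(2, rememberedPosterior, Beta({a: 1, b: 1}))
```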

lucy3 commented 7 years ago

Maybe instead you want to have a fixed number of true observations and then just make a prediction once at the end.

For the start of the paper, just assume you made ten observations. What are the cost and reward doing here? There are a lot of parameters in this space, and calibration problems. Then try different noise functions.
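A sketch of that simplified setup, assuming WebPPL, a discretized coin weight so enumeration works, and the cost/reward numbers floated earlier (three to keep the full posterior, reward of four):

```javascript
var trueP = 0.7;   // hypothetical true coin weight
var obs = repeat(10, function() { return flip(trueP); });  // ten true observations

var posterior = Infer({method: 'enumerate'}, function() {
  var p = uniformDraw([0.1, 0.3, 0.5, 0.7, 0.9]);   // discretized weights
  map(function(x) { observe(Bernoulli({p: p}), x); }, obs);
  return p;
});

var cost = 3;      // pay to keep the full posterior
var reward = 4;    // reward for a correct guess
var guess = flip(sample(posterior));                // one prediction at the end
var net = (guess === flip(trueP) ? reward : 0) - cost;
```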

lucy3 commented 7 years ago

It seems that noise is actually helpful when the data itself isn't stereotypical of a heavily weighted coin.

lucy3 commented 7 years ago

With noise probability that is a function of cost: if we redraw p, our rewards scatter plot shows no trend. If we instead observe the opposite of our next observation, the rewards scatter plot does show trends.
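A sketch of the second noise function (the redraw-p variant is the recall sketch above); how exactly "the opposite of the next observation" gets stored is my reading:

```javascript
var noiseProb = function(cost) { return Math.max(0, 1 - cost / 3); };  // assumed shape

// With cost-dependent probability, store the opposite of the next
// observation instead of the observation itself.
var noisyObserve = function(cost, nextObs) {
  return flip(noiseProb(cost)) ? !nextObs : nextObs;
};
// e.g. noisyObserve(1, flip(0.7))
```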