drbenvincent / delay-discounting-analysis

Hierarchical Bayesian estimation and hypothesis testing for delay discounting tasks
http://www.inferencelab.com/delay-discounting-analysis/
MIT License

plot issue #139

Closed. benepp closed this issue 7 years ago.

benepp commented 7 years ago

Hi, first of all: thanks for the nice toolbox! I have analyzed data from two groups and two conditions using the HierarchicalLogK model. The group parameters show significant differences between groups and conditions (they are actually quite substantial). However, when looking at the group plots, there are no visible differences in the point estimates; they are basically identical. The individual data do show the expected variation, so it seems to be a plotting issue that is specific to the group plot.

I have another question that may sound somewhat awkward. Is it possible to extract the trial-by-trial estimates of the discount function and the values that go into the Softmax function? The reason I am asking is that I have an interest in using these values as regressors for the analysis of physiological data.

Thanks!! ben

drbenvincent commented 7 years ago

Thanks, glad you find it useful.

Question 1: Can I just check some things? You have 2 separate groups, each with 2 conditions? Is any of this repeated measures? Are you essentially splitting the data into 2 groups, or perhaps 4 (group x condition), and fitting hierarchical models to each one?

Question 2: It is certainly possible to extract the estimated present subjective values for each trial. This would be a straightforward modification, and would be based upon the overall estimate of the discount rates.
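
To give a rough idea of what I mean, here is a minimal MATLAB sketch. It assumes the hyperbolic form used by the LogK models and a logistic choice rule purely for illustration; the variable names and trial data are made-up placeholders, not the toolbox's API, and the toolbox's actual choice function may differ.

```matlab
% Illustrative sketch only: per-trial present subjective values from a point
% estimate of log(k), assuming hyperbolic discounting V = A / (1 + k*D) and a
% logistic choice rule. The trial data below are made-up placeholders.
A_sooner  = [20 50 30];    D_sooner  = [0 0 0];     % sooner amounts / delays
A_delayed = [55 60 80];    D_delayed = [30 90 14];  % delayed amounts / delays (days)

logk = -3.5;                 % e.g. a posterior point estimate for one participant
k    = exp(logk);

V_sooner  = A_sooner  ./ (1 + k .* D_sooner);   % subjective value of sooner option
V_delayed = A_delayed ./ (1 + k .* D_delayed);  % subjective value of delayed option

beta = 2;                    % choice sensitivity (illustrative value)
P_delayed = 1 ./ (1 + exp(-beta .* (V_delayed - V_sooner)));  % P(choose delayed)
```

V_sooner and V_delayed (or their difference) are the kind of per-trial quantities you could feed in as regressors.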

But if you mean you'd want to know how the estimated discount rates 'evolve' as new trial data comes in, then this is doable but would involve a bit more coding. I think there are two options here:

  1. I could code up a MATLAB-only (i.e. no JAGS) single-participant model to produce trial-to-trial estimates of discount rates and present subjective values. This is probably an afternoon's work.
  2. The alternative would be to use the toolbox as it is, but run the entire analysis on data from all participants restricted to trial 1 only, then trials 1-2, then trials 1-3, and so on. This is fiddly to do and would take longer to compute, but it's all manageable (see the rough sketch below).
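
A very rough sketch of option 2. The helper names here (writeTruncatedDataFiles, fitHierarchicalLogK, groupLogkPointEstimate) are hypothetical placeholders standing in for however you would actually export truncated data files and call the toolbox; they are not real toolbox functions.

```matlab
% Hypothetical sketch of option 2: refit the model on an expanding window of
% trials and collect a group-level log(k) estimate after each step. All helper
% names below are placeholders for your actual export / fitting / accessor code.
nTrials = 100;                       % total trials per participant (example)
logk_over_time = nan(nTrials, 1);

for t = 1:nTrials
    dataPath = writeTruncatedDataFiles('rawdata', t);            % export trials 1..t
    savePath = fullfile('output', sprintf('trials_1_to_%d', t)); % separate folder per fit
    model    = fitHierarchicalLogK(dataPath, savePath);          % run the usual analysis
    logk_over_time(t) = groupLogkPointEstimate(model);           % store the group estimate
end
```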

Anyway, let's try to pin down exactly what is what so I can figure out what to change.

benepp commented 7 years ago

Thanks for the quick response!

Question 1: It's a 2x2 design, with one between-subjects factor (age group) and one within-subjects factor (condition, repeated measures). I am splitting the data into the 4 cells and fitting the model to each one (this seemed the most straightforward analysis to begin with).

Question 2: I am most interested in the subjective value regressor, because this is what most people have looked at in the past and the predictions about the underlying neural correlates are straightforward (ventromedial PFC). Regarding the time-evolving discount rate, it's not really clear what the prediction would be or what such a regressor would look like. Typically k is thought of as an aggregate measure that reflects an individual-differences characteristic, but obviously it should fluctuate as a function of task characteristics. I will try option 2 with a few participants and see what this looks like.

ben

drbenvincent commented 7 years ago

In terms of your second question, I've made a note of this in a separate issue #140 and hope to get to it soon.

Question 1... Yep, fitting 4 models is the best way to do this currently. Dealing with different experiment designs is a possible extension for future work.

When you run each model, you can set a meaningful savePath, so you'll end up with 4 of these. All of the participant and group level estimates and plots should just be saved into there, so I'm a bit confused that you seem to be getting the same group level plots out. Each model object should only have access to its own inferences, so I'm a bit unsure why you might be getting repeated group level plots.
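
Roughly, the pattern I have in mind is one fit per group x condition cell, something along these lines. The constructor names and arguments here are illustrative only (check your local demo scripts for the exact signatures), and the cell/folder names are just examples.

```matlab
% Rough sketch of the four-model pattern: one fit per group x condition cell,
% each with its own savePath so estimates and plots never overwrite each other.
% Constructor names/arguments are illustrative; check the demo scripts for the
% exact signatures. Folder names ('young_condA', ...) are just examples.
cells  = {'young_condA', 'young_condB', 'old_condA', 'old_condB'};
models = cell(size(cells));

for i = 1:numel(cells)
    datapath  = fullfile('data', cells{i});
    models{i} = ModelHierarchicalLogK( ...
        Data(datapath, 'files', allFilesInFolder(datapath, 'txt')), ...
        'savePath',   fullfile('output', cells{i}), ...          % one folder per model
        'mcmcParams', struct('nsamples', 10000, 'nburnin', 2000, 'nchains', 4));
end
```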

I've just run the HierarchicalLogK models on 2 different sets of data and the group level plots (both the univariate summary and the multi-panel plot with discount function etc.) seem to be fine, so I'm having trouble replicating the error. Although I'm working off the very latest code in the dev branch.

Feel free to paste in your analysis script and I can see if there's perhaps some mistake. Either that or email it to me. I'm assuming you might not want to send the raw data, but I'll see what I can figure out from the analysis script alone.

benepp commented 7 years ago

Attached please find the script as well as two plots. One shows the difference in logK as a function of group and condition. The effects are not huge, but there is a significant main effect of group and an interaction between group and condition. The other plot shows the resulting discount rates for the four conditions overlaid on top of each other. Note that I had to change the output format in myExport to .ps because otherwise it does not print true vector graphics (see attached). Thanks for the help!

ben
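
In case it helps others: this is not the toolbox's myExport, just a plain MATLAB sketch of one way to force true vector output, using the painters renderer with a vector driver (file name is an example).

```matlab
% Plain MATLAB sketch (not toolbox-specific): force true vector output by
% using the painters renderer together with a vector graphics driver.
set(gcf, 'Renderer', 'painters');
print(gcf, '-dpsc2', fullfile('output', 'group_discount_functions.ps'));  % colour PostScript
% alternatives: print(gcf, '-depsc2', ...) for EPS, or print(gcf, '-dpdf', ...) for PDF
```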

drbenvincent commented 7 years ago

I don't think attachments come through when replying by email here. Feel free to send them to my university address: b dot t dot vincent at dundee dot ac dot uk

drbenvincent commented 7 years ago

Based on offline discussions, this was a misunderstanding rather than a bug, but the issues will be pursued in #140 and #141.