Dear Alizee, Maybe you could send us your observation function, as well as the inputs and a sample of the observed data (and the specification of your priors), so that we can try to reproduce the problem? Best, Jean.
Hi,
You'll find what you asked for below:
Here is my full model:
```matlab
function [gx] = choice_model_weighted_with_bias(x_t, P, u_t, in)
b1      = P(1);
thetav1 = P(2);
thetav2 = P(3);
d       = P(4);
V1 = u_t(1,:); % rating of item 1
V2 = u_t(2,:); % rating of item 2
b  = exp(b1);
% choice probability:
gx = 1./(1 + exp(-((thetav1*V1) - (thetav2*V2) + d)./b));
```
with priors:
```matlab
prior = [0 1 1 0];
var_prior = [100 0 0 0; 0 100 0 0; 0 0 100 0; 0 0 0 100];
```
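For reference, the inversion itself boils down to something like the following sketch, assuming the standard VBA_NLStateSpaceModel interface (the variable and field names here are illustrative, not taken from my actual scripts):
```matlab
% Minimal inversion sketch (assumes the VBA toolbox is on the MATLAB path).
y = Choice;                 % 1 x nTrials binary choices (data given below)
u = [V1; V2];               % 2 x nTrials inputs: the two option ratings
dim.n       = 0;            % no hidden states: static observation model
dim.n_theta = 0;            % no evolution parameters
dim.n_phi   = 4;            % observation parameters: b1, thetav1, thetav2, d
options.binomial = 1;       % Bernoulli likelihood for binary choices
options.priors.muPhi    = [0; 1; 1; 0]; % prior means
options.priors.SigmaPhi = 100*eye(4);   % prior covariance
[posterior, out] = VBA_NLStateSpaceModel(y, u, [], ...
    @choice_model_weighted_with_bias, dim, options);
% out.F is the (approximate) log model evidence used for comparison
```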
And here is the winning model from the classical comparison:
```matlab
function [gx] = choice_model(x_t, P, u_t, in)
b1 = P(1);
d  = P(2);
V1 = u_t(1,:); % rating of item 1
V2 = u_t(2,:); % rating of item 2
b  = exp(b1);
% choice probability:
gx = 1./(1 + exp(-(V1 - V2 + d)./b));
```
with priors:
```matlab
prior = [0 0];
var_prior = [100 0; 0 100];
```
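For the Savage-Dickey comparison, the same reduced model can also be expressed within the full model's parameterization by pinning thetav1 and thetav2 to 1 with zero prior variance; a sketch, building on the options structure above:
```matlab
% Reduced priors for Savage-Dickey: fix thetav1 = thetav2 = 1 exactly
% by giving them zero prior variance (sketch; names are illustrative).
priors_reduced = options.priors;               % start from the full-model priors
priors_reduced.muPhi    = [0; 1; 1; 0];        % b1, thetav1, thetav2, d
priors_reduced.SigmaPhi = diag([100 0 0 100]); % zero variance = fixed parameter
```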
Here are V1 (option value 1), V2 (option value 2), and the choices (1 when option 1 was chosen) from one subject:
V1=[58 34 94 100 41 28 76 88 80 100 97 82 68 94 77 54 68 95 75 0 100 69 83 28 50 100 56 83 58 30 59 67 50 64 100 14 74 69 32 74 61 100 100 86 0 73 8 70 63 60 59 59 76 57 60 100 93 100 78 100 90 81 68 34 59 20 87 60 76 56 89 92 100 97 0 50 75 84 55 71 0 55 90 61 83 100 0 91 57 50 76 0 94 76 35 61 59 70 50 74 0 100 56 32 100 35 100 100 80 86 35 100 18 50 92 86 100 100 60 14 75 27 65 31 100 59 79 81 64 64 69 94 40 61 0 64 58 58 91 57 57 57 75 0 43 96 55 96 69 58 57 61 87 55 84 100 0 100 100 0 27 82 90 68 100 55 55 100 97 94 100 100 60 100 58 80 10 41 58 100 93 85 50 91 40 57 66 50 64 100 100 61 100 81 89 90 93 60 61 59 0 60 100 100 100 74 64 18 93 100 26 61 64 89 56 82 100 100 90 55 59 27 61 0 50 43 50 76 91 35 100 50 100 3 100 55 100 91 100 58 54 88 89 58 100 76 69 24 64 100 28 61 83 31 78 50 38 60 0 100 69 100 59 96 79 96 56 57 50 0 24 58 20 56 87 63 78 93 74 100 61 54 100 68 100 100 91 89 26 71 38 78 97 31 57 61 59 50 58 57 59 65 67 71 0 57 43 60 100 87 100 8 66 3 79 92 100 0 54 80 86 50 24 79 93 0 78 74 94 88 74 78 63 88 100 77 82 89 58 54 100 95 37 50 79 85 93 0 87 66 66 100 76 92 58 10 69 60 78 89 100 90 100 55 100 100 100 78 68 64 37 69 79 59 0 81 63 73 87 58 55 51 43 100 54 24 100 56 28 73 27 83 100 59 57 100 64 100 58 58 30 66 59 63 64 93 89 64 50 83 18 73 60 56 74 51 59 100 86 86 100 18 50 59 60 100 65 63 65 71 31 66];
V2=[14 0 0 100 91 95 35 18 25 7 57 27 68 55 67 100 7 37 45 5 100 60 31 84 75 18 38 78 25 95 29 18 11 27 0 100 54 50 87 51 54 92 0 21 0 69 7 45 63 67 71 38 75 31 22 37 59 100 36 80 61 67 24 0 10 74 28 27 14 8 32 8 100 9 0 65 42 68 21 50 100 32 6 19 41 0 0 62 9 42 62 0 93 25 85 44 40 42 69 61 0 3 67 100 23 22 95 100 55 57 5 4 100 7 17 37 100 74 17 74 8 0 12 0 71 21 31 27 60 58 65 100 84 50 5 35 35 57 94 19 14 62 12 100 94 25 36 73 59 64 55 62 68 31 76 5 100 100 85 100 24 36 59 74 25 100 9 35 66 63 68 62 12 74 32 12 0 42 9 0 0 71 65 76 75 21 50 16 56 58 92 73 17 58 39 51 45 5 35 19 100 25 0 97 0 32 32 0 71 76 85 51 56 14 60 25 100 0 0 71 27 0 32 0 91 25 30 36 85 93 100 75 5 100 84 2 69 54 0 87 0 56 54 28 71 22 14 0 21 0 66 23 62 13 59 16 69 56 100 0 56 69 39 68 45 7 58 27 68 100 0 71 100 11 59 27 27 78 30 23 94 100 95 26 73 17 61 62 93 57 77 69 25 25 67 58 50 8 23 10 14 24 10 32 88 28 13 58 0 9 100 0 56 100 68 35 100 100 72 63 37 24 9 59 41 88 12 19 6 10 17 24 13 64 100 72 55 57 100 0 3 57 24 77 76 81 18 100 25 60 57 94 27 12 62 0 29 57 39 64 4 50 5 100 100 21 68 18 13 51 6 57 21 60 100 14 56 59 58 56 18 13 73 5 67 2 84 40 75 57 6 21 93 44 58 0 12 0 27 12 66 28 13 14 22 81 27 60 30 75 7 65 63 71 61 26 57 17 66 57 80 100 74 32 30 97 25 27 39 64 0 59];
Choice=[0 1 1 0 0 0 1 0 0 1 1 1 1 1 0 0 1 1 0 1 0 0 1 1 0 1 0 1 0 0 1 0 1 1 1 0 1 1 0 0 0 0 1 1 1 1 0 1 0 1 0 0 1 1 1 1 1 1 1 1 0 0 1 0 1 0 1 1 1 0 1 1 0 1 1 0 0 1 0 1 0 1 1 1 1 1 1 1 1 1 0 0 1 1 0 0 1 0 0 1 0 1 0 0 1 1 1 0 1 0 1 1 0 1 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 0 1 1 0 0 0 1 1 1 0 1 0 0 1 0 0 1 0 0 1 1 0 0 0 1 0 1 0 1 0 0 0 1 1 0 1 0 0 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 0 1 1 0 1 1 1 1 1 1 0 1 1 0 0 1 0 1 1 0 1 1 1 1 0 0 1 1 0 0 1 0 1 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 0 1 1 0 1 0 0 0 1 1 1 0 0 1 0 0 0 1 1 1 1 1 0 0 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 1 0 1 1 1 0 0 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 0 1 1 1 0 1 1 0 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 0 1 0 1 0 0 1 1 1 1 1 1 1 1 1 0 1 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 1 1 0 0 0 0 0 1 1 1 1 1 1 1 1 1];
Sorry for all the details in this post, but I can't attach files other than images.
Best,
Alizée
Hi Jean,
I want to explain choices between two options, and a bias that exists in those choices. To do that, I start from a classical softmax over the two option values:

P(A) = 1/(1 + exp(-(V_A - V_B)/β))

In order to find out how the bias toward option A is implemented, I defined a full model with 3 more parameters:

P(A) = 1/(1 + exp(-(γa·V_A - γb·V_B + d)/β))

with the following hypotheses: each of the 8 candidate models corresponds to fixing or freeing a subset of (γa, γb, d).
I inverted the 8 possible models with your toolbox, and the winning model of the comparison with VBA_groupBMC is the one with only the parameter d free, with a posterior probability of 1.
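In case it helps, the group comparison amounts to something like the sketch below, assuming the standard VBA_groupBMC call (L and nSubjects are illustrative names):
```matlab
% Random-effects group-level model comparison (sketch).
% L is an 8 x nSubjects matrix of log-evidences (out.F from each inversion).
L = zeros(8, nSubjects);
% ... fill L(m, s) with out.F for model m, subject s ...
[p_bmc, o_bmc] = VBA_groupBMC(L);
% o_bmc.ep gives the exceedance probability of each model
```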
When I make the comparison by computing the log-evidences of the reduced models using Savage-Dickey ratios (VBA_SavageDickey), I find a totally different result: now it is the model with only γa and γb allowed to vary that wins the comparison, again with a posterior probability of 1.
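A sketch of that call, assuming the standard VBA_SavageDickey signature and reusing the full-model posterior, priors, and log-evidence from above (priors_reduced is the pinned prior structure sketched earlier):
```matlab
% Log-evidence of a reduced model via the Savage-Dickey ratio (sketch).
% The reduced model is defined purely by its priors: parameters fixed in
% the reduced model get zero prior variance.
[F_reduced, post_reduced] = VBA_SavageDickey(posterior, ...
    options.priors, out.F, dim, priors_reduced);
```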
I can't figure out why these two kinds of inversion give me such different results. Do you have an idea? Is this not the right way to test these hypotheses? Which result should I take into account?
If you need more information, let me know,
Best,