Rohan2821999 / MathCog_Modelling

A cognitive model to understand basic math acuity in children using Weber fraction and numerical distance effect based algorithms

Graphs for Ratio vs N1 and Ratio vs P_Acc #1

Closed · Rohan2821999 closed this issue 8 years ago

Rohan2821999 commented 8 years ago

@cbattista Below are the graphs (colormapped and normalized):

Ratio vs N1: [figure]

Ratio vs P_Acc: [figure]

cbattista commented 8 years ago

wow nice, now we're talking!

so now, increase the value of w' and we can see how the graph changes as individual ability decreases...


Rohan2821999 commented 8 years ago

Cool... The graphs above were for w = 0.15. Below is the graph for w = 0.20 (Ratio vs n1):

[figure: Ratio vs n1, w = 0.2]

The colors seem to shift towards the left (probably indicating an increase in difficulty).

There didn't seem to be a very visible difference for w = 0.17, so I didn't post it here!

cbattista commented 8 years ago

Cool, yeah, seems to be working as planned. Nice job. Now, we can pretty clearly see that it's a vertical wall, rather than a diagonal one as I was assuming. Increase the w' to something really high like .3 or .5 and let's see what happens.

-C


Rohan2821999 commented 8 years ago

Yeah. I think that means the difficulty depends solely on the ratio and not on either n1 or n2, as previously hypothesized.
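(A quick sanity check on that reading: in the accuracy formula used later in this thread, substituting $r = n_2/n_1$ cancels $n_1$, so only the ratio survives:)

$$P_{\mathrm{err}} = \tfrac{1}{2}\,\operatorname{erfc}\!\left(\frac{|n_1 - n_2|}{\sqrt{2}\,w\,\sqrt{n_1^2 + n_2^2}}\right) = \tfrac{1}{2}\,\operatorname{erfc}\!\left(\frac{|1 - r|}{\sqrt{2}\,w\,\sqrt{1 + r^2}}\right)$$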

Graph for w = 0.3: [figure]

Graph for w = 0.5: [figure]

Rohan2821999 commented 8 years ago

Here is the linear regression for the RT vs Distance model. Doesn't look too good! :p

[figure: rt_model]

cbattista commented 8 years ago

Still gives us a slope to begin with though, right? And a bad model is better than no model. However, check out this paper for some more real world data on the numerical distance effect ... http://www.sciencedirect.com/science/article/pii/S0022096508000520

-C


cbattista commented 8 years ago

@Rohan2821999 - also, keep in mind that this slope isn't estimated assuming there are multiple subjects. As you can see in the paper above, there are individual differences in the numerical distance effect. To get a handle on this you'd run a hierarchical linear model to determine the amount of individual variability in the intercept/slope of the fit lines. But no need to do that now for our purposes...

Rohan2821999 commented 8 years ago

Alright, I'll keep that in mind! So what should I be doing now with the RT vs Distance graph?

cbattista commented 8 years ago

The slope and intercept of the fit line can be used to get an estimate of the mean RT for each n1/n2. But to do our simulation, we will want to use a normal distribution (well, probably skewed normal, but let's start with normal) to determine the RT for a given trial. So, we will need a standard deviation as well. Using the real data, compute the SD for each distance, and let me know whether the SDs change considerably as distance increases. For simplicity's sake, hopefully they are similar, but let's find out.

By the way, now might be a good time to put a bit of time into learning pandas - http://pandas.pydata.org/pandas-docs/stable/ - which is super useful for sorting/aggregating/grouping data for analysis. Ever used this library before?


Rohan2821999 commented 8 years ago

What do you mean by SD for each distance? I was thinking of the SD of all the samples, so I am not sure how I would get different SDs for each distance. The way I am thinking of it is to use the sample of all distances to compute the mean, then take the sum of squared errors over all possible distances and divide by the number of samples.

So you don't want me to sum over all values of distances, but just compute: mean - distance(i)?

Nope, haven't used pandas before (I briefly read about it), but I would certainly be glad to explore and learn this library.

Rohan2821999 commented 8 years ago

The above computation would just be the variance for each sample; do you want me to do that?

cbattista commented 8 years ago

Yep, what you propose is correct. But, no need to write out the function yourself, just use http://docs.scipy.org/doc/numpy/reference/generated/numpy.std.html or http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.std.html .

Pandas is pretty cool. It lets you put a table of data into an object called a DataFrame which can then be used to organize things. Here's some pseudocode...

# load your data into a dataframe
df = pandas.read_csv(filename...)

# get a list of the unique distances (assuming we have a column called
# distance, would have to make one if you don't, which i will leave as a
# challenge to you)
distances = list(set(df['distance']))

# iterate over distances and compute an SD for each
for dist in distances:
    # get all the RTs for a given distance
    RTs = df[df[distance==dist]]['RT']
    # compute SD and mean for that distance
    SD = numpy.std(RTs)
    M = numpy.mean(RTs)
    print dist, M, SD

So here we're just trying to figure out whether the SD changes much for each distance. Once we know that, we can determine how we want to generate our normal distribution to represent the RTs for each distance, which is the next step in our simulation process.
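(For what it's worth, pandas can do the same grouping in one call; a minimal sketch, assuming the same 'distance' and 'RT' column names. Note pandas computes the sample SD with ddof=1, unlike numpy.std's default.)

# mean and SD of RT for every distance at once
print df.groupby('distance')['RT'].agg(['mean', 'std'])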


cbattista commented 8 years ago

whoops sorry this line - RTs = df[df[distance==dist]]['RT'] - is malformed, should be...

RTs = df[df['distance'] == dist]['RT']


Rohan2821999 commented 8 years ago

I couldn't get the code you sent me working (a few errors which I couldn't fix), so I coded it my own way (a bit sloppy and long) using looping.

The standard deviations also vary a lot for a small change in distance.

For a distance of 2, SD = 415.2 and mean = 980.4; for a distance of 3, SD = 3503.9 and mean = 920.

cbattista commented 8 years ago

Hah wow - but have you excluded those crazy long RTs you found yesterday? Also if Jonathan is around he may be able to assist with pandas. ...


Rohan2821999 commented 8 years ago

Oh, I forgot about that :p. Okay, so the revised SDs for RT<=200 are as follows:

distance 2 --> 415
distance 3 --> 351
distance 4 --> 410
distance 5 --> 392

cbattista commented 8 years ago

Great, let's just use 400 as the SD then...


Rohan2821999 commented 8 years ago

Cool - I guess we could use 370 as the SD (I averaged over all distances). The slope of the line is -12.7 and the y-intercept is 1091.8.

cbattista commented 8 years ago

OK cool. So now you can simulate the expected RT for a single trial. You'll simulate a distribution of possible RTs, and sample from it for each trial. Use numpy to get yourself a normal distribution to sample from - this function will need two parameters - a mean and an SD. For each distance, calculate the mean using your linear equation, and use 370 as the SD (in the function I think these are called the loc and scale parameters). Then make a graph of all the simulated data, just like you did for the accuracy (so use ratio and size on the x and y axes). Normalize the colors from 1s-2s so we can see how the time increases. I wonder if this line will be diagonal...
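(A minimal sketch of that sampling step, using the slope, intercept, and SD quoted in this thread; simulate_rt is a hypothetical helper name:)

import numpy as np

slope, intercept, sd = -12.7, 1091.8, 370.0  # from the RT fit above

def simulate_rt(distance):
    # mean RT from the linear fit; loc/scale are numpy's mean/SD parameters
    return np.random.normal(loc=slope * distance + intercept, scale=sd)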


Rohan2821999 commented 8 years ago

Okay. So this is what I did:

cmap = cm.RdYlGn
c = cm.ScalarMappable(cmap=cmap, norm=mpl.colors.Normalize(vmin=1, vmax=2))

for i in xrange(len(ratio)):
    val = np.random.normal(910, 370)  # 910 is the mean averaged over all distances, 370 the averaged std
    colors = c.to_rgba(val)
    plt.scatter(ratio[i], n1[i], color=colors)

plt.xlabel('Ratio')
plt.ylabel('n1')
plt.show()

Is this correct? If so, the graph output of this code is:

[figure]

Rohan2821999 commented 8 years ago

Hey Christian, I was trying to correct my P(Acc) code, but the zip command for iterating over n1 with the corresponding n2 doesn't seem to work as expected. I was wondering how I could correct the code below:

for ratio in ratios:
    for n1,n2 in zip(n1s,n2s): # Zip doesn't seem to work as expected (lacks correspondence of n1&n2)
        numerator = abs(n1-n2)
        denominator = (math.sqrt(2)*w*(((n1**2)+(n2**2))**0.5))
        P_Error = 0.5*math.erfc(numerator/denominator)
        P_Acc = 1 - P_Error
        colors = c.to_rgba(P_Acc)
        plt.scatter([ratio], [n1],color = colors)

Any clues from you would be really helpful. Thanks!

cbattista commented 8 years ago

OK, so the first thing to ask is: how is n2 generated? If you

print n1
print n2

what is the result?


Rohan2821999 commented 8 years ago

printing n1 and n2 would give me all unique values of n1 and n2 in ascending order (from lowest to greatest). But for my graph I would like to have n1 values and the corresponding n2 values.

cbattista commented 8 years ago

Ah, if you're asking what I think you're asking, you want:

for n1 in n1s:
    for n2 in n2s:
        print n1, n2

This will give you all possible combinations of n1 and n2.
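(Side note: itertools gives the same Cartesian product without writing the nested loops:)

from itertools import product

for n1, n2 in product(n1s, n2s):
    print n1, n2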


Rohan2821999 commented 8 years ago

I don't want all possible combinations of n1s and n2s; I just want the combinations that actually exist in the raw data file, and exactly in that order.

How I actually want to make it work is something like below:

for ratio in ratios:
    for i in xrange(len(n1s)):
        numerator = abs(n1s[i] - n2s[i])
        # and the rest of the code

But this gives me memory errors because of the huge number of iterations within my nested for loop, and my plotting function is inside the loop as well. To avoid this I tried using zip over the sorted lists of n1s and n2s, but that doesn't work the way I expect.

Rohan2821999 commented 8 years ago

In the above code:

n1s = (data['n1'])
n2s = (data['n2'])

# This gives me an array of all n1s and n2s; iterating and plotting over all of
# these in the above nested loop takes a lot of memory

So, then I tried:

n1s = list(sort(data['n1']))
n2s = list(sort(data['n2']))

# and then used zip over these, but this doesn't work like the code above where
# I iterate over n1s[i] and n2s[i].
cbattista commented 8 years ago

Oh I get it now, you are trying to isolate the unique n1-n2 pairings. Here's how I would approach that (a little 'brute force' but hey that's the fun of computing in 2016).

n1s = data['n1']
n2s = data['n2']

ns = []

for n1, n2 in zip(n1s, n2s):
    ns.append((n1, n2))  # use tuples so the set() below can hash them

ns = list(set(ns))

# This should give you the list of unique n1/n2 pairings, which you should
# be able to iterate over...

for n1, n2 in ns:
    print n1, n2  # pretty sure this should work...
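(A pandas shortcut for the same thing, assuming data is the same DataFrame as above:)

pairs = data[['n1', 'n2']].drop_duplicates()
for n1, n2 in zip(pairs['n1'], pairs['n2']):
    print n1, n2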


Rohan2821999 commented 8 years ago

Regarding sampling from the P_Acc values using a Gaussian distribution, do you want me to sample from the normal distribution (something like below)?

vals = np.random.normal(P_Acc)
colors = c.to_rgba(vals)
plt.scatter([ratio], [n1], color=colors)

If so, what do you want the std of the normal to be - should it be a super small number (like 0.000001)?
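(An alternative to adding Gaussian noise is to sample a discrete correct/incorrect outcome directly, which is what the Ea function later in this thread ends up doing; a one-line sketch:)

# 1 (correct) with probability P_Acc, 0 (error) otherwise
acc = np.random.choice([0, 1], p=[1 - P_Acc, P_Acc])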

Rohan2821999 commented 8 years ago

The graph after this, for std 0.2 (vals = np.random.normal(P_Acc, 0.2)), is:

[figure]

Rohan2821999 commented 8 years ago

Graph after sampling from the discrete values array (it would be different every time, but more or less like this): [figure]

Rohan2821999 commented 8 years ago

I have got a list of Easiness values using the formula, and they are almost identical (except for a very few) to the easiness values in the raw data file. Should I be plotting these?

Rohan2821999 commented 8 years ago

I did restructure my code a bit using functions; however, that doesn't resolve the issue of plotting the easiness graph over all unique possibilities.

I figured out why there is a memory error. The zip over the combinations of n1 and RT yields around 3500 unique combinations, and when I iterate over these inside the unique-ratios loop the number of iterations multiplies by 20. So the iteration count is huge and therefore gives a memory error.

Previously (in the RT model) we were using zip over n1 and n2, and those had only 25 unique combinations. That's why the nested loop within the ratio loop worked in the other script.

Any hints on what could be done here?

cbattista commented 8 years ago

OK, let me give you an outline here to try to make things clear

note: n1s and ratios are sets

for ratio in ratios:
    for n1 in n1s:
        E = simulate(n1, ratio)
        scatter([ratio], [n1], color=colormap(E))

You will write the function simulate that takes n1 and ratio as arguments, and it should return the easiness value for that n1/ratio pairing.

-C


Rohan2821999 commented 8 years ago

But to get values of E, I have to use the formula E = Acc - (RT / 2 seconds), and to obtain all values of E I would have to iterate over all unique RTs.

cbattista commented 8 years ago

heheh just make the code snippet I sent you work


Rohan2821999 commented 8 years ago

Oh Wait, I think I get what you mean. I'll try that..

Rohan2821999 commented 8 years ago

Doesn't work for me, my code snippet is below:

def Ea(n_1, n_2, r, rt):
    numer = abs(n_1 - n_2)
    deno = math.sqrt(2) * w * (((n_1 ** 2) + (n_2 ** 2)) ** 0.5)
    P_Err = 0.5 * math.erfc(numer / deno)
    P_A = 1 - P_Err
    # draw a correct/incorrect outcome: 1 with probability P_A, 0 otherwise
    array_poss = np.random.choice([0, 1], size=10, p=[1 - P_A, P_A])
    val = np.random.choice(array_poss)
    return val - (rt / 2000.0)  # 2000.0 avoids integer division in Python 2

def Plot_E_Space(c = cm.ScalarMappable(cmap=cmap, norm = mpl.colors.Normalize(vmin=-1,vmax=1))):
    for ratio in ratios:
        for n1,n2,RT in zip(n1s,n2s,Reaction_T):
            Easiness = Ea(n1,n2,ratio,RT)
            #print(Easiness)
            colors = c.to_rgba(Easiness)
            plt.scatter([ratio], [n1],color = colors)

Is it because I am using zip over n1, n2, and RT and then iterating over the combinations generated by all three? Using just:

for n1 in ns:
    # rest of code

I am unable to obtain the RT values required for the easiness computation.

cbattista commented 8 years ago

Check something into git that has the structure I previously sent you, and we can go from there...

This is the function you are making; it doesn't take any arguments but n1 and ratio.

E = simulate(n1, ratio)

-C
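(For reference, a minimal sketch of what such a simulate function could look like, stitched together from the pieces in this thread: the erfc accuracy model, the RT line with slope -12.7, intercept 1091.8, and SD 370, and E = Acc - RT/2000. How n2 and distance are recovered from n1 and ratio is an assumption here; n2 = n1 * ratio and distance = |n1 - n2| may not match how the raw data defines them.)

import math
import numpy as np

w = 0.15                                     # Weber fraction
slope, intercept, sd = -12.7, 1091.8, 370.0  # from the RT fit above

def simulate(n1, ratio):
    n2 = n1 * ratio  # assumption: ratio is defined as n2/n1
    # accuracy from the Weber-fraction model
    p_err = 0.5 * math.erfc(abs(n1 - n2) / (math.sqrt(2) * w * math.hypot(n1, n2)))
    acc = np.random.choice([0, 1], p=[p_err, 1 - p_err])
    # RT from the linear distance fit plus normal noise
    rt = np.random.normal(slope * abs(n1 - n2) + intercept, sd)
    return acc - rt / 2000.0  # easiness: E = Acc - RT/2 seconds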


Rohan2821999 commented 8 years ago

Here are the plots for Easiness over the entire space:

w = 0.15: [figure]

w = 0.2: [figure]

w = 0.5: [figure]

Rohan2821999 commented 8 years ago

Hey Christian,

I have added a class to my Easiness_Mapping code for creating separate child and adult subjects. You could have a look at the code in my MathCog_Modelling repo; the script name is Easiness_Map.py.

What should my next steps be? How many different instances of children and adults should I simulate? And how should I tweak the Weber fraction, slope, and intercept parameters for different subjects? (Should I just keep increasing w and decreasing the slope for a child?)

Rohan2821999 commented 8 years ago

Also, should I create a txt document that contains the various subjects (simulated children and adults) and read their slopes, w's, and intercepts like dictionary items into the Python file? I think this could work like a small UI (reading data from a text file would probably make things simpler).
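(A minimal sketch of that text-file idea; subjects.txt and its column names are hypothetical:)

import pandas as pd

# hypothetical subjects.txt, one line per simulated subject:
# subject,w,slope,intercept
subjects = pd.read_csv('subjects.txt')

for _, row in subjects.iterrows():
    print row['subject'], row['w'], row['slope'], row['intercept']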

Rohan2821999 commented 8 years ago

Here are the two graphs (without guessing, with guessing).

Without guessing: [figure]

With guessing (10% probability of guessing): [figure]

I think it looks decent. Or should I be changing the probability of guessing?

cbattista commented 8 years ago

Yeah, looks good to me... -C


Rohan2821999 commented 8 years ago

I have added a graph of actual easiness values vs simulated values for three subjects, age range 16-18.

[figure]

Is the correlation established in the graph decent enough? (The cluster in the top right seems kinda good.) Next, I will run a Pearson correlation test to get a more accurate measure.

cbattista commented 8 years ago

Neato. So the top right corner is doing really well, but the top right is, if anything, a negative correlation, which is pretty bad model performance - so from this data I'd expect the overall correlation to be really close to zero, actually. But this is only 3 out of 150+ subjects, so let's see what happens when you run the Pearson correlation on the whole set.

It would also be informative to plot the whole data set, assuming it doesn't take forever to draw.
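(The whole-set test is a single call; a sketch assuming actual and simulated are equal-length arrays of easiness values:)

from scipy.stats import pearsonr

r, p = pearsonr(actual, simulated)  # correlation coefficient and two-sided p-value
print r, p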


cbattista commented 8 years ago

Whoops sorry, that should say 'the top left is, if anything, a negative correlation...'
