AllenDowney / ThinkBayes2

Text and code for the forthcoming second edition of Think Bayes, by Allen Downey.
http://allendowney.github.io/ThinkBayes2/
MIT License

include penalties in SAT problem #29


francescolosterzo commented 4 years ago

Hello!

After going through the two models of the SAT problem in Chapter 12 (in version 1.0.9 of the book), I was wondering whether it is possible to go a step further and include the penalties for wrong answers in the problem.

Here is what I did (the full version is here; just look for "Scenario 3").

To me, the main points of the more advanced model proposed in section 12.5 are (see the sketch after this list):

1) For a given efficacy and difficulty pair, the probability of answering a question correctly (ProbCorrect) is computed.
2) ProbCorrect is used to compute the binary PMF for a single question with sat.BinaryPmf(...). Since the raw score is assumed to be the same as the number of correct answers, this binary PMF has only the values 0 and 1.
3) The PMFs for all the questions are summed together.
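
For concreteness, points 1–3 correspond roughly to the following pipeline from sat.py (paraphrased from memory, so names and signatures may differ slightly from the book's code):

import math
import thinkbayes2

def ProbCorrect(efficacy, difficulty, a=1):
    """Item-response model: probability of answering one question correctly."""
    return 1 / (1 + math.exp(-a * (efficacy - difficulty)))

def BinaryPmf(p):
    """Per-question PMF: value 1 (correct) with probability p, else 0 (wrong)."""
    pmf = thinkbayes2.Pmf()
    pmf.Set(1, p)
    pmf.Set(0, 1 - p)
    return pmf

def PmfCorrect(efficacy, difficulties):
    """Distribution of the number of correct answers: the sum of the per-question PMFs."""
    pmfs = [BinaryPmf(ProbCorrect(efficacy, d)) for d in difficulties]
    return sum(pmfs, thinkbayes2.Pmf([0]))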

Starting from the above, I simply introduced a different function to compute the "binary" PMF:

import thinkbayes2

def BinaryPmfWithPenalty(p, penalty=-0.25):
    """Makes a Pmf with value 1 (correct answer) or `penalty` (wrong answer).

    p: probability of answering the question correctly
    penalty: score contribution of a wrong answer
    """
    pmf = thinkbayes2.Pmf()
    pmf.Set(1, p)
    pmf.Set(penalty, 1 - p)
    return pmf
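
A quick check of what a single-question PMF looks like now (the exact Print() output format is from memory and may differ):

pmf = BinaryPmfWithPenalty(0.8)
pmf.Print()
# prints, roughly:
# -0.25 0.2
# 1 0.8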

I then use this new function in place of sat.BinaryPmf(...) when computing the per-question PMFs.
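
Concretely, the only change to the pipeline sketched above is which per-question PMF gets summed; a minimal sketch (the name PmfCorrectWithPenalty is mine, not from the book):

def PmfCorrectWithPenalty(efficacy, difficulties, penalty=-0.25):
    """Distribution of the penalized raw score: sum of the per-question PMFs."""
    pmfs = [BinaryPmfWithPenalty(ProbCorrect(efficacy, d), penalty)
            for d in difficulties]
    return sum(pmfs, thinkbayes2.Pmf([0]))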

An additional modification was needed because the raw score is no longer the number of correct answers: the outcome of Exam.Reverse(...) is the number of correct answers, while the outcome of PmfCorrect(...) is now a raw score that includes the penalties (e.g. for an exam with 53 correct answers and 1 wrong answer the raw score is 52.75). This mismatch is handled inside the Likelihood(...) method.
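
Roughly, that conversion inside Likelihood looks like this (a sketch only: it reuses PmfCorrectWithPenalty from above and assumes the Exam object exposes its list of difficulties, one per question, as in sat.py):

def Likelihood(self, data, hypo):
    """Likelihood of an observed exam score given an efficacy (the hypothesis)."""
    efficacy = hypo
    score = data
    # exam.Reverse() maps the scaled score back to the number of correct answers ...
    correct = self.exam.Reverse(score)
    # ... but the per-question PMFs now sum to a penalized raw score, so convert,
    # assuming every question was answered: each wrong answer costs 0.25 points.
    wrong = len(self.exam.difficulties) - correct
    raw_with_penalty = correct - 0.25 * wrong
    pmf = PmfCorrectWithPenalty(efficacy, self.exam.difficulties)
    return pmf.Prob(raw_with_penalty)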

Did anyone else try to include the penalties? If so, what did you do?