At step k of the user simulation, we compute P(k) · rel(k) and save it in a list, where P(k) is the precision at step k and rel(k) is 1 when the item at step k is a true positive and 0 otherwise. When the item at step k is a true positive, P(k) · rel(k) = (# of true positives found so far)/(# of cells inspected so far). When the item at step k is a false positive, P(k) · rel(k) = 0. To compute the average precision, we sum P(k) · rel(k) over all k and divide by the total number of errors.
The problem was that the denominator should have been the total number of errors, not the total number of cells inspected.
See: http://en.wikipedia.org/wiki/Information_retrieval#Average_precision
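A minimal sketch of the corrected computation, assuming the inspection sequence is available as a list of booleans (the function and argument names here are illustrative, not taken from the actual code):

```python
def average_precision(is_true_positive, total_errors):
    """Average precision over a user-simulation inspection sequence.

    is_true_positive: one boolean per inspected cell, in inspection
        order; True when the cell at that step is a real error.
    total_errors: total number of errors in the data set. This is the
        corrected denominator; the buggy version divided by the number
        of cells inspected instead.
    """
    true_positives_so_far = 0
    total = 0.0  # running sum of P(k) * rel(k)
    for k, hit in enumerate(is_true_positive, start=1):
        if hit:
            true_positives_so_far += 1
            # rel(k) = 1, so the term is the precision at step k
            total += true_positives_so_far / k
        # when rel(k) = 0 the term contributes nothing
    return total / total_errors
```

For example, average_precision([True, False, True], total_errors=2) returns (1/1 + 0 + 2/3) / 2 ≈ 0.833; the buggy denominator would have divided by 3 (the number of cells inspected) instead of 2.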