Some very quick notes:
This metric is often used to compare predicted and observed PK parameters (e.g. clearance, AUC_inf). Fold error is defined as [Predicted Value]/[Observed Value]. The percentage of model-predicted PK parameters falling within 2-fold of the observed PK parameters is then calculated.
The upper bound of the interval is defined by multiplying by the fold error and the lower bound by dividing by the fold error. As examples (and commonly used metrics): 2-fold prediction error is defined as the interval from 0.5- to 2.0-fold error, 1.5-fold error as the interval from 0.667- to 1.5-fold error, 1.33-fold error as the interval from 0.75- to 1.33-fold error, and so on.
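As a quick sketch (not from the original comments), these definitions map directly to code; the function names and values below are hypothetical:

```python
# Minimal sketch of the fold-error definitions above; assumes plain positive floats.
def fold_error(predicted, observed):
    """Fold error = predicted / observed."""
    return predicted / observed

def within_fold(predicted, observed, fold):
    """True if predicted lies in [observed / fold, observed * fold],
    e.g. fold=2.0 gives the 0.5- to 2.0-fold interval."""
    return observed / fold <= predicted <= observed * fold

# Hypothetical example: a prediction of 12 against an observation of 10
print(fold_error(12.0, 10.0))         # 1.2
print(within_fold(12.0, 10.0, 1.33))  # True
```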
A 2-fold error on PK parameters is generally no longer acceptable, except in the world of toxicology. In pharmacology, 1.5-fold error is a standard metric for evaluating predictive utility when extrapolating between species, and 1.33-fold error is a standard metric for evaluating predictive utility when extrapolating within species (e.g. from adults to children).
It is also of interest to apply this method to the actual observed concentrations in the PK profile, rather than just the PK parameters. While we know what level of accuracy is indicated by 80% of predicted clearance values falling within 1.5-fold of the observed clearances, we do not yet know what level of accuracy is indicated by 80% of predicted concentrations falling within 1.5-fold of the observed values (whether that represents more or less accuracy remains to be seen).
These data are most often presented in a table where the percentage of model predictions within 2-fold, 1.5-fold and 1.33- (or 1.3-) fold error is shown for comparison (obviously these percentages will decrease as the interval narrows).
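A minimal sketch of such a summary (the arrays, and therefore the percentages, are purely illustrative):

```python
# Illustrative sketch: percentage of predictions within 2.0-, 1.5- and 1.33-fold
# of the observations, using made-up predicted/observed values.
import numpy as np

predicted = np.array([1.1, 0.4, 2.3, 5.0, 0.14])
observed  = np.array([1.0, 0.5, 2.0, 3.0, 0.10])

for fold in (2.0, 1.5, 1.33):
    inside = (predicted >= observed / fold) & (predicted <= observed * fold)
    print(f"{fold}-fold: {100 * inside.mean():.0f}% of predictions within bounds")
```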
Another useful plot is a comparison of predicted vs. observed values on a log-log x-y plot, where the fold limits can be plotted on either side of the line of equality.
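A minimal matplotlib sketch of such a plot (hypothetical data, with 2.0-fold boundaries drawn around the line of equality):

```python
# Illustrative sketch: predicted vs. observed on log-log axes with the line of
# equality and dashed 2.0-fold error boundaries; the data points are made up.
import numpy as np
import matplotlib.pyplot as plt

observed  = np.array([0.05, 0.1, 0.3, 1.0, 3.0, 10.0])
predicted = np.array([0.04, 0.18, 0.35, 0.9, 2.0, 12.0])

fig, ax = plt.subplots()
ax.scatter(observed, predicted)

line = np.logspace(-2, 2, 50)
ax.plot(line, line, "k-", label="line of equality")
ax.plot(line, 2.0 * line, "k--", label="2.0-fold boundaries")  # upper bound
ax.plot(line, line / 2.0, "k--")                               # lower bound

ax.set_xscale("log")
ax.set_yscale("log")
ax.set_xlabel("Observed concentration")
ax.set_ylabel("Predicted concentration")
ax.legend()
plt.show()
```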
Here's an example: this paper is coming out in a few days in JCP. The predicted concentrations fall within 2.0-fold, 1.5-fold and 1.33-fold of the observed values 93%, 90% and 80% of the time, respectively (if I recall correctly). A figure like the following clearly shows where points fall outside the 2.0-fold error interval and where they fall within the bounds:
Dear @prvmalik , thanks for your detailed explanation. In the last figure you presented, the value 0.1 on the Y-axis corresponds to values of 0.05 and 0.2 on the X-axis, judging from the two dashed lines. That represents the 2-fold error interval, right? So 93% of the predicted data fall within this interval, right?
Hi Wangwei, yes I checked the code and the dashed lines in the 6th element are drawing the 2.0-fold error boundaries rather than the 1.5-fold error boundaries.
@prvmalik I was just wondering which source you used for your following statement:
"2-fold error on PK parameters is not acceptable much anymore, except in the world of toxicology. In pharmacology, 1.5-fold error is a standard metric for evaluating predictive utility when extrapolating between species and 1.33-fold error is a standard metric for evaluating predictive utility when extrapolating within species (e.g. from adults to children)."
This would be useful for referencing if you could share it with us. For example, recent papers on pediatric PBPK translation from adults apply 2-fold as the outer error boundary:
https://www.ncbi.nlm.nih.gov/pubmed/31405354 https://www.ncbi.nlm.nih.gov/pubmed/29027194 https://www.ncbi.nlm.nih.gov/pubmed/27566992
Cheers
It might be interesting to have a look at the regulatory standpoint on this. You might find this FDA white paper useful:
Read the full text here:
Best, Tobias
Hi all, I always see sentences like 'the predicted result is within 2-fold of observed data' in the literature. And in one paper (DOI: 10.1002/cpt.1013, Epub 2018 Feb 2), it says 'When evaluating the accuracy and acceptability of predictions, a commonly applied criteria is for values to be within 2-fold of the observed values'. So how do you evaluate the performance of a prediction if its result is lower than the observed value? Is '50% of the observed result' the lower limit applied here? By the way, how was this '2-fold' criterion established?