LAPKB / Pmetrics

https://lapkb.github.io/Pmetrics/

NPAG report not being created #154

Closed CDarlow closed 1 year ago

CDarlow commented 1 year ago

Hi,

I'm currently modelling some PKPD data, and the NPAG report sometimes gets created and sometimes doesn't; I'm not sure why. The datafile/model is identical (save some adjustments to variable search windows) between runs that create one and runs that don't, and in all cases the model successfully converges with a good fit. I've noticed this sometimes happens for colleagues working on other datasets too.

This isn't a huge issue, as I can replicate the figures separately, but it saves time when NPAG reports are created automatically.

In case it helps, the console output when it is creating the output files raises the following error message:

```
Saving R data objects to [xxxx]......
Use PM_load() to load them.
The following objects have been saved:
NPdata: All output from NPAG
pop: Population predictions at regular, frequent intervals
post: Posterior predictions at regular, frequent intervals
final: Final cycle parameters and summary statistics
cycle: Cycle information
op: Observed vs. population and posterior predicted
cov: Individual covariates and Bayesian posterior parameters
mdata: The data file used for the run
Error in data.frame(x = model.frame(x, p$x$visdat[[1]]()), y = model.frame(y, :
  arguments imply differing number of rows: 26, 27
In addition: Warning messages:
1: In stringr::str_split(err[i + 1], ",")[[1]] %>% as.numeric() :
  NAs introduced by coercion
2: In stringr::str_split(err[i + 1], ",")[[1]] %>% as.numeric() :
  NAs introduced by coercion
```

Thank you

mnneely commented 1 year ago

Thanks for reporting. There are two issues arising here. 1) There is an error calculating the linear regression of obs vs. pred for some reason. Perhaps a prediction was made that was not a number (e.g. from trying to take a log of a negative value) and it was removed, causing obs and pred to no longer match and the rows to differ. 2) There may be a malformed error block.
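The row mismatch described in issue 1 can be reproduced in miniature. Below is a hypothetical Python sketch (not Pmetrics code; all names are illustrative) of how dropping a non-finite prediction desynchronizes obs and pred:

```python
import math

# Hypothetical data: observations and raw model predictions of equal length.
obs = [1.2, 0.8, 0.5, 0.3]
raw_pred = [1.1, 0.9, -0.02, 0.4]  # one prediction dipped below zero

# Log-transforming a negative value is undefined, so such rows get dropped,
# leaving the two vectors with differing lengths -- the analogue of
# "arguments imply differing number of rows: 26, 27" above.
log_pred = [math.log10(p) for p in raw_pred if p > 0]

print(len(obs), len(log_pred))  # 4 vs. 3: the regression can no longer pair them
```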

Would you be willing to post your model file content here as a start?

CDarlow commented 1 year ago

Thank you Michael. Happy to share my model file below (with values edited out)

```
#Pri
V1,xx,xx
Cl1,xx,xx
Ka,xx,xx
KPC,xx,xx
KCP,xx,xx
Kgs,xx,xx
Kks,xx,xx
E50_1s,xx,xx
H1s,xx,xx
IC,xx,xx
POPmax,xx!

#Sec
Ke1 = Cl1/V1
HEs = ((X(2)/V1)**H1s)/((E50_1s**H1s)+((X(2)/V1)**H1s))

#INIT
X(4) = IC

#Out
Y(1) = X(2)/V1
Y(2) = DLOG10(X(4))

#Err
G=5
0.05,0.1,0,0!
1.4,0.15,0,0!

#DIFF
XP(1) = -Ka*X(1)
XP(2) = Ka*X(1) - Ke1*X(2) - KCP*X(2) + KPC*X(3)
XP(3) = KCP*X(2) - KPC*X(3)
XP(4) = (Kgs*X(4)*(1-(X(4)/POPmax))) - (Kks*X(4)*HEs)
```

mnneely commented 1 year ago

Thanks, that's helpful. We need to correct the R6 PM_model code to handle the fixed coefficients properly. That's the source of error 2 related to str_split. In the meantime, if you don't have C0, C1, C2, C3 in your data file, you can just remove the "!" from the #Err coefficient lines to suppress that error, since Pmetrics will use the values in the model file when they are missing in the data file. If they are in the data file, without the "!" in the model file, Pmetrics will use the ones in the data file. I'll work on the fix and upload to the v2_dev branch for now, before integrating into the next release.
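For context, the C0–C3 coefficients discussed here define the assay error polynomial, SD = C0 + C1·obs + C2·obs² + C3·obs³. A minimal illustrative sketch (not Pmetrics internals) using the coefficients from the model file's first output equation:

```python
def assay_sd(obs, c0, c1, c2, c3):
    """Assay error polynomial: SD = C0 + C1*obs + C2*obs**2 + C3*obs**3."""
    return c0 + c1 * obs + c2 * obs**2 + c3 * obs**3

# Coefficients 0.05, 0.1, 0, 0 (the first #Err coefficient line above):
sd = assay_sd(2.0, 0.05, 0.1, 0.0, 0.0)  # SD of roughly 0.25 at an observation of 2.0
```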

The other error I'm convinced is due to generating negative values for X(4) and trying to take the log. You probably predict very low values, and then your error model randomly makes them negative. Try putting a catch in at the end of the #DIFF block, like:

```
&IF(X(4) < 0) X(4) = 0.01
```

Or choose some other lower boundary value for X(4) that makes sense. Let me know if that all helped. There's nothing I can do in the code about that, because it's a feature of your model and data.
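The same clamp-then-transform idea can be sketched outside the model file (hypothetical Python, mirroring the &IF catch above; the floor value is illustrative):

```python
import math

def safe_log10_state(x4, floor=0.01):
    """Clamp a state variable to a small positive floor before log-transforming,
    mirroring the &IF(X(4) < 0) X(4) = 0.01 catch in the #DIFF block."""
    return math.log10(max(x4, floor))

# A slightly negative state no longer breaks the log transform:
value = safe_log10_state(-0.5)  # clamped to 0.01, so roughly -2
```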

mnneely commented 1 year ago

Fix for the assay error polynomial mishandling when fixed with "!" is now pushed to the v2_dev branch. Beyond the suggestion above, I can't help with models that generate negative predictions that are then log-transformed. :) We will be switching to a new report format and will include more robust code to protect against parts that fail without crashing the entire report.