Closed: Dorota-D closed this issue 3 years ago.
It is OK with the combined model.
SAEM didn't swap out a variance of 0 for 1 the way focei did; this has since been changed, because swapping these with 1 is common practice. With smaller datasets like theo_sd, the proportional-only error can over-inflate the residuals where the prediction is zero, so the optimizer ends up chasing that value. As always, only accept a fit if the diagnostics are reasonable. (A sketch of a combined-error variant is shown after the reprex below.)
library(nlmixr)
one.cmt <- function() {
  ini({
    tka <- 0.45
    tcl <- log(c(0, 2.7, 100))
    tv <- 3.45
    eta.ka ~ 0.6
    eta.cl ~ 0.3
    eta.v ~ 0.1
    prop.sd <- 0.1
  })
  model({
    ka <- exp(tka + eta.ka)
    cl <- exp(tcl + eta.cl)
    v <- exp(tv + eta.v)
    linCmt() ~ prop(prop.sd)
  })
}
fit <- nlmixr(one.cmt, theo_sd, list(print=0), est="saem")
#> RxODE 1.0.5 using 4 threads (see ?getRxThreads)
#> Calculating covariance matrix
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> Error : Error calculating covariance via linearization
#> Calculating residuals/tables
#> done
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod
#> = FALSE, : Covariance matrix non-positive definite, corrected by sqrtm(fim %*%
#> fim)
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod =
#> FALSE, : Linearization of FIM could not be used to calculate covariance.
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod =
#> FALSE, : NAs introduced by coercion
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod =
#> FALSE, : Bounds are ignored in SAEM
print(fit)
#> ── nlmixr SAEM(Solved); OBJF not calculated fit ────────────────────────────────
#>
#> Gaussian/Laplacian Likelihoods: AIC() or $objf etc.
#> FOCEi CWRES & Likelihoods: addCwres()
#>
#> ── Time (sec $time): ───────────────────────────────────────────────────────────
#>
#> saem setup table covariance other
#> elapsed 0.734 0.879383 0.008 0.006 0.386617
#>
#> ── Population Parameters ($parFixed or $parFixedDf): ───────────────────────────
#>
#> Est. SE %RSE Back-transformed(95%CI) BSV(CV%) Shrink(SD)%
#> tka 0.406 0.191 47 1.5 (1.03, 2.18) 70.3 0.684%
#> tcl 1.01 0.071 7.03 2.75 (2.39, 3.16) 24.3 0.705%
#> tv 3.47 0.0374 1.08 32.2 (29.9, 34.7) 11.3 20.6%
#> prop.sd 0.171 0.171
#>
#> Covariance Type ($covMethod): |fim|
#> Fixed parameter correlations in $cor
#> No correlations in between subject variability (BSV) matrix
#> Full BSV covariance ($omega) or correlation ($omegaR; diagonals=SDs)
#> Distribution stats (mean/skewness/kurtosis/p-value) available in $shrink
#>
#> ── Fit Data (object is a modified tibble): ─────────────────────────────────────
#> # A tibble: 132 x 16
#> ID TIME DV PRED RES IPRED IRES IWRES eta.ka eta.cl eta.v ka
#> <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1 0 0.74 0 0.74 0 0.74 0.74 0.0871 -0.500 -0.0546 1.64
#> 2 1 0.25 2.84 3.07 -0.233 3.50 -0.659 -1.10 0.0871 -0.500 -0.0546 1.64
#> 3 1 0.570 6.57 5.56 1.01 6.25 0.317 0.297 0.0871 -0.500 -0.0546 1.64
#> # … with 129 more rows, and 4 more variables: cl <dbl>, v <dbl>, tad <dbl>,
#> # dosenum <dbl>
fit <- nlmixr(one.cmt, theo_sd, list(print=0), est="focei")
#> calculating covariance matrix
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> done
#> Calculating residuals/tables
#> done
#> Warning in (function (uif, data, est = NULL, control = list(), ...,
#> sum.prod = FALSE, : Hessian reset during optimization; (can control by
#> foceiControl(resetHessianAndEta=.))
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod
#> = FALSE, : initial ETAs were nudged; (can control by foceiControl(etaNudge=.,
#> etaNudge2=))
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod
#> = FALSE, : ETAs were reset to zero during optimization; (Can control by
#> foceiControl(resetEtaP=.))
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod
#> = FALSE, : last objective function was not at minimum, possible problems in
#> optimization
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod =
#> FALSE, : S matrix non-positive definite
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod =
#> FALSE, : The variance of all elements are unreasonably small, <1e-7
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod =
#> FALSE, : covariance step failed
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod =
#> FALSE, : gradient problems with initial estimate and covariance; see $scaleInfo
print(fit)
#> ── nlmixr FOCEi (outer: nlminb) fit ────────────────────────────────────────────
#>
#> OBJF AIC BIC Log-likelihood
#> FOCEi 232.9682 489.5679 509.7476 -237.784
#>
#> ── Time (sec $time): ───────────────────────────────────────────────────────────
#>
#> setup optimize covariance table other
#> elapsed 0.264582 0.758061 0.758063 0.017 0.777294
#>
#> ── Population Parameters ($parFixed or $parFixedDf): ───────────────────────────
#>
#> Est. Back-transformed BSV(CV%) Shrink(SD)%
#> tka 0.45 1.57 90.7 12.9%
#> tcl 0.868 2.38 59.1 21.3%
#> tv 3.43 31 32.4 -0.121%
#> prop.sd 0.1 0.1
#>
#> Covariance Type ($covMethod): failed
#> No correlations in between subject variability (BSV) matrix
#> Full BSV covariance ($omega) or correlation ($omegaR; diagonals=SDs)
#> Distribution stats (mean/skewness/kurtosis/p-value) available in $shrink
#> Minimization message ($message):
#> false convergence (8)
#> In an ODE system, false convergence may mean "useless" evaluations were performed.
#> See https://tinyurl.com/yyrrwkce
#> It could also mean the convergence is poor, check results before accepting fit
#> You may also try a good derivative free optimization:
#> nlmixr(...,control=list(outerOpt="bobyqa"))
#>
#> ── Fit Data (object is a modified tibble): ─────────────────────────────────────
#> # A tibble: 132 x 20
#> ID TIME DV PRED RES WRES IPRED IRES IWRES CPRED
#> <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1 0 0.74 1.83e-15 0.740 4.41e+14 0 0.74 0.74 -3.29e-16
#> 2 1 0.25 2.84 3.31e+ 0 -0.473 -2.00e- 1 3.43 -0.588 -1.72 3.31e+ 0
#> 3 1 0.570 6.57 5.95e+ 0 0.623 1.82e- 1 6.08 0.486 0.799 5.95e+ 0
#> # … with 129 more rows, and 10 more variables: CRES <dbl>, CWRES <dbl>,
#> # eta.ka <dbl>, eta.cl <dbl>, eta.v <dbl>, ka <dbl>, cl <dbl>, v <dbl>,
#> # tad <dbl>, dosenum <dbl>
Created on 2021-03-09 by the reprex package (v1.0.0)
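Since the combined model behaves better here, below is a minimal sketch of the same one-compartment model with a combined additive + proportional residual error; the add.sd and prop.sd starting values are illustrative assumptions, not recommendations.

library(nlmixr)

## Sketch: same structural model, but with a combined (additive + proportional)
## residual error so the residual variance does not collapse when PRED is 0.
one.cmt.comb <- function() {
  ini({
    tka <- 0.45
    tcl <- log(c(0, 2.7, 100))
    tv <- 3.45
    eta.ka ~ 0.6
    eta.cl ~ 0.3
    eta.v ~ 0.1
    add.sd <- 0.7   # illustrative starting value
    prop.sd <- 0.1  # illustrative starting value
  })
  model({
    ka <- exp(tka + eta.ka)
    cl <- exp(tcl + eta.cl)
    v <- exp(tv + eta.v)
    linCmt() ~ add(add.sd) + prop(prop.sd)
  })
}

fitComb <- nlmixr(one.cmt.comb, theo_sd, list(print = 0), est = "saem")
plot(fitComb)  # check the goodness-of-fit diagnostics before accepting the fit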
@mattfidler Thank you for the reply. Indeed, the proportional error was the worst one, and even in Monolix it did not provide realistic results. I have another question about handling BLQ data and censoring. In the RxODE documentation, I found that the 'CENS' and 'LIMIT' columns in the dataset should be recognized, but I could not confirm this in the nlmixr documentation. If they are supported, will all algorithms handle them, or is this limited to 'focei'?
focei and saem both support the cens and limit columns in the dataset. The nlmixr documentation is lagging behind the RxODE documentation, but it is on my todo list. nlme, on the other hand, does not support cens or limit.
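For illustration only, a censored (BLQ) record might be set up like this; the values are hypothetical, and the column meanings follow my reading of the RxODE documentation (CENS = 1 flags an observation censored below the limit of quantification carried in DV, and LIMIT gives the other bound of the censoring interval).

## Hypothetical example: one dose, one quantifiable sample, one BLQ sample.
lloq <- 0.5  # assumed lower limit of quantification

blqData <- data.frame(
  ID    = 1,
  TIME  = c(0, 1, 24),
  AMT   = c(100, 0, 0),
  EVID  = c(1, 0, 0),
  DV    = c(NA, 3.2, lloq),  # the censored row carries the LOQ in DV
  CENS  = c(0, 0, 1),        # 1 = observation is censored below DV
  LIMIT = c(NA, NA, 0)       # other bound of the censoring interval
)

## Both focei and saem should then honor the censoring, e.g.:
## fit <- nlmixr(one.cmt, blqData, list(print = 0), est = "saem")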
Thank you so much! I'm looking forward to seeing this project grow.
Great
Hello,
I have some problems fitting a model with a proportional error using saem. At first, I thought it was a problem with my data, but it also occurs with the theo_sd dataset. I've used a simple 1-cmt model for this example:
one.cmt <- function() {
  ini({
    tka <- 0.45
    tcl <- log(c(0, 2.7, 100))
    tv <- 3.45
    eta.ka ~ 0.6
    eta.cl ~ 0.3
    eta.v ~ 0.1
    prop.sd <- 0.7
  })
  model({
    ka <- exp(tka + eta.ka)
    cl <- exp(tcl + eta.cl)
    v <- exp(tv + eta.v)
    linCmt() ~ prop(prop.sd)
  })
}

fit <- nlmixr(one.cmt, theo_sd, list(print=5), est="saem")

The first couple of iterations look like this:

 1:  0.1287  1.2322  3.6174  0.5700  0.2850  0.0950  inf
 5:  0.1287  1.2322  3.6174  0.4643  0.2321  0.0774  nan
10:  0.1287  1.2322  3.6174  0.3618  0.1796  0.0602  nan
Interestingly, the proportional error works fine with focei (at least for theo_sd; my data is more complex). The additive error seems to be OK. When I tried a combined error, it converged, and the parameters were more or less similar to the ones obtained in Monolix. However, when I observed the gradients, I noticed that the proportional error estimates are updated last. This is the output from theo_sd with the 1-cmt model and the combined error:
 1:  -0.1035  1.2020  3.5329  0.5700  0.2850  0.1008  8.7751  1.0000
 5:  -0.2479  1.9252  3.8812  0.6075  0.2904  0.0821  4.6754  1.0000
10:  -0.0070  1.9399  3.8867  0.4998  0.2247  0.0957  1.5933  1.4400
15:  -0.2010  1.9586  4.1084  0.4667  0.1871  0.0984  0.7435  1.6900
20:  -0.1121  1.8804  4.1319  0.3718  0.1447  0.0761  0.5811  1.6900
Is this OK, or are there some problems with the proportional error and saem in the combined model too?
I really appreciate any help you can provide.