In nlmixr this is:
transform(y) = transform(f) + (a + b*f^c)*eps
When transform() is the identity transform and c=1, this is equivalent to Monolix:
Concentration = Cc + (a + b*Cc) * e
For something like cp ~ add(add.sd) + prop(prop.sd), these are equivalent when using combined1.
When using a lognormal error like:
cp ~ lnorm(add.sd)
the transformation is applied to both sides, as in the following article:
https://dx.doi.org/10.1007%2Fs10928-015-9460-y
Note that the likelihoods are computed on the untransformed scale, so likelihoods on a lognormal scale can be compared to likelihoods on untransformed scales.
The transformation can be Box-Cox, Yeo-Johnson, probit, or logit; for a partial list see:
https://github.com/nlmixrdevelopment/nlmixr/blob/master/vignettes/residualErrors.csv
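To make the transform-both-sides idea concrete, here is a minimal simulation sketch (the values of f, add.sd, and the sample size are illustrative, not taken from any fit): on the log scale the error is additive, which is the same as a multiplicative error on the natural scale.

set.seed(42)
f      <- 10                     # model prediction (illustrative)
add.sd <- 0.2                    # residual SD on the log scale (illustrative)
eps    <- rnorm(1000)            # eps ~ N(0, 1)

## transform-both-sides: log(y) = log(f) + add.sd * eps
y <- exp(log(f) + add.sd * eps)  # equivalently y = f * exp(add.sd * eps)

## on the natural scale the error acts multiplicatively
summary(y / f)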
Note there is also an undocumented propT, which uses transform(f) instead of f for proportional errors.
The combined2 formula should read:
transform(y) = transform(f) + sqrt(a^2 + b^2*f^(2*c))*eps
which is equivalent to Monolix's combined2, with the same caveats as above.
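As a quick numeric illustration of the difference between the two forms (identity transform; the a, b, and c values below are made up for illustration):

f <- c(1, 10, 100)                        # predictions (illustrative)
a <- 0.5; b <- 0.1; c <- 1                # error-model constants (illustrative)

sd.combined1 <- a + b * f^c               # combined1: residual SD grows linearly with f
sd.combined2 <- sqrt(a^2 + b^2 * f^(2*c)) # combined2: components add in quadrature

cbind(f, sd.combined1, sd.combined2)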
Note that Monolix also applies the transformation in certain circumstances, though it describes it later in the documentation.
Does it mean that nlmixR/RxODE log-transforms F?
Not unless requested.
I assume that in nlmixR, a and b are constants, and eps follows a typical distribution with mean 0 and sd = 1
Yes, a, b, and possibly c are estimated constants, and in most cases eps is assumed to follow a normal distribution with mean 0 and variance/SD of 1. With Yeo-Johnson and Box-Cox transformations the lambda parameter is also estimated.
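Put differently, for an untransformed combined1-type error the observation can be simulated as below (a sketch with made-up constants; eps itself is not estimated):

set.seed(7)
f   <- 10                    # individual prediction (illustrative)
a   <- 0.5                   # additive constant (estimated in a real fit)
b   <- 0.1                   # proportional constant (estimated in a real fit)
eps <- rnorm(1000)           # eps ~ N(0, 1)

y <- f + (a + b * f) * eps   # combined1-type residual error with c = 1
sd(y - f)                    # close to a + b*f = 1.5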
Also, I've found information that back-transformed parameters are converted to %CV.
This is incorrect. Back-transformed parameters are translated to their untransformed scale.
Let's look at the theophylline (theo_sd) example:
library(nlmixr)

one.cmt <- function() {
  ini({
    ## You may label each parameter with a comment
    tka <- 0.45 # Log Ka
    tcl <- log(c(0, 2.7, 100)) # Log Cl
    ## This works with interactive models
    ## You may also label the preceding line with label("label text")
    tv <- 3.45; label("log V")
    ## the label("Label name") works with all models
    eta.ka ~ 0.6
    eta.cl ~ 0.3
    eta.v ~ 0.1
    add.sd <- 0.7
  })
  model({
    ka <- exp(tka + eta.ka)
    cl <- exp(tcl + eta.cl)
    v <- exp(tv + eta.v)
    linCmt() ~ add(add.sd)
  })
}
f <- nlmixr(one.cmt, theo_sd, "focei", control=list(print=0))
#> ℹ parameter labels from comments will be replaced by 'label()'
#> RxODE 1.0.6 using 4 threads (see ?getRxThreads)
#> calculating covariance matrix
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> done
#> Calculating residuals/tables
#> done
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod
#> = FALSE, : initial ETAs were nudged; (can control by foceiControl(etaNudge=.,
#> etaNudge2=))
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod
#> = FALSE, : ETAs were reset to zero during optimization; (Can control by
#> foceiControl(resetEtaP=.))
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod
#> = FALSE, : last objective function was not at minimum, possible problems in
#> optimization
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod =
#> FALSE, : gradient problems with initial estimate and covariance; see $scaleInfo
print(f)
#> ── nlmixr FOCEi (outer: nlminb) fit ────────────────────────────────────────────
#>
#> OBJF AIC BIC Log-likelihood Condition Number
#> FOCEi 116.8035 373.4033 393.5829 -179.7016 68.42019
#>
#> ── Time (sec $time): ───────────────────────────────────────────────────────────
#>
#> setup optimize covariance table other
#> elapsed 0.880506 0.115779 0.115785 0.014 0.44693
#>
#> ── Population Parameters ($parFixed or $parFixedDf): ───────────────────────────
#>
#> Parameter Est. SE %RSE Back-transformed(95%CI) BSV(CV%) Shrink(SD)%
#> tka Log Ka 0.464 0.195 42.1 1.59 (1.08, 2.33) 70.4 1.81%
#> tcl Log Cl 1.01 0.0751 7.42 2.75 (2.37, 3.19) 26.8 3.87%
#> tv log V 3.46 0.0437 1.26 31.8 (29.2, 34.6) 13.9 10.3%
#> add.sd 0.695 0.695
#>
#> Covariance Type ($covMethod): r,s
#> Some strong fixed parameter correlations exist ($cor) :
#> cor:tcl,tka cor:tv,tka cor:tv,tcl
#> 0.158 0.422 0.744
#>
#>
#> No correlations in between subject variability (BSV) matrix
#> Full BSV covariance ($omega) or correlation ($omegaR; diagonals=SDs)
#> Distribution stats (mean/skewness/kurtosis/p-value) available in $shrink
#> Minimization message ($message):
#> relative convergence (4)
#>
#> ── Fit Data (object is a modified tibble): ─────────────────────────────────────
#> # A tibble: 132 x 20
#> ID TIME DV PRED RES WRES IPRED IRES IWRES CPRED CRES
#> <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1 0 0.74 1.79e-15 0.740 1.07 0 0.74 1.07 -5.61e-16 0.74
#> 2 1 0.25 2.84 3.26e+ 0 -0.423 -0.225 3.85 -1.01 -1.45 3.22e+ 0 -0.379
#> 3 1 0.570 6.57 5.83e+ 0 0.740 0.297 6.78 -0.215 -0.309 5.77e+ 0 0.795
#> # … with 129 more rows, and 9 more variables: CWRES <dbl>, eta.ka <dbl>,
#> # eta.cl <dbl>, eta.v <dbl>, ka <dbl>, cl <dbl>, v <dbl>, tad <dbl>,
#> # dosenum <dbl>
Created on 2021-03-16 by the reprex package (v1.0.0)
Here the back-transformed V is on the normal scale (31.8) instead of the estimated log-scale value (3.46).
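The back-transformation here is just the exponential of the log-scale estimate:

exp(3.46)   # ~31.8, the Back-transformed value shown above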
%CV
You can see that the between-subject variability estimates are transformed based on the log-normal transformation; with a lognormal transformation the coefficient of variation is constant and given by:
cv% = 100 * sqrt(exp(var(bsv)) - 1)
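As a rough check against the first fit (the exact eta.ka variance lives in f$omega and is not printed above, so an approximate value is used here):

## CV% of a log-normally distributed parameter, e.g. ka = exp(tka + eta.ka)
var.eta.ka <- 0.40               # approximate variance of eta.ka (illustrative)
100 * sqrt(exp(var.eta.ka) - 1)  # ~70, in line with BSV(CV%) for tka above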
nlmixr detects that these are lognormally distributed because they are of the form:
ka = exp(tka + eta.ka)
However, if they are not log-normally distributed, the SD is reported instead; take for instance changing v = exp(tv + eta.v) to v = tv + eta.v.
In the output below the BSV(%CV) column is changed to BSV(CV% or SD).
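For a normally distributed (additive) eta the reported value is just the standard deviation, sqrt(var(bsv)); a sketch with an illustrative variance:

## SD of a normally distributed parameter, e.g. v = tv + eta.v
var.eta.v <- 0.5      # illustrative variance of eta.v
sqrt(var.eta.v)       # 0.707, the kind of value reported in BSV(CV% or SD) below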
library(nlmixr)

one.cmt <- function() {
  ini({
    ## You may label each parameter with a comment
    tka <- 0.45 # Log Ka
    tcl <- log(c(0, 2.7, 100)) # Log Cl
    ## This works with interactive models
    ## You may also label the preceding line with label("label text")
    tv <- 30; label("log V")
    ## the label("Label name") works with all models
    eta.ka ~ 0.6
    eta.cl ~ 0.3
    eta.v ~ 0.1
    add.sd <- 0.7
  })
  model({
    ka <- exp(tka + eta.ka)
    cl <- exp(tcl + eta.cl)
    v <- tv + eta.v
    linCmt() ~ add(add.sd)
  })
}
f <- nlmixr(one.cmt, theo_sd, "focei", control=list(print=0))
#> ℹ parameter labels from comments will be replaced by 'label()'
#> → creating full model...
#> → pruning branches (`if`/`else`)...
#> ✔ done
#> → loading into symengine environment...
#> ✔ done
#> → creating full model...
#> → pruning branches (`if`/`else`)...
#> ✔ done
#> → loading into symengine environment...
#> ✔ done
#> → calculate jacobian
#> → calculate ∂(f)/∂(η)
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> → calculate ∂(R²)/∂(η)
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> → finding duplicate expressions in inner model...
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> → optimizing duplicate expressions in inner model...
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> → finding duplicate expressions in EBE model...
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> → optimizing duplicate expressions in EBE model...
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> → compiling inner model...
#> ✔ done
#> → compiling EBE model...
#> ✔ done
#> RxODE 1.0.6 using 4 threads (see ?getRxThreads)
#> calculating covariance matrix
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> done
#> Calculating residuals/tables
#> done
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod
#> = FALSE, : initial ETAs were nudged; (can control by foceiControl(etaNudge=.,
#> etaNudge2=))
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod
#> = FALSE, : ETAs were reset to zero during optimization; (Can control by
#> foceiControl(resetEtaP=.))
#> Warning in (function (uif, data, est = NULL, control = list(), ..., sum.prod =
#> FALSE, : gradient problems with initial estimate and covariance; see $scaleInfo
print(f)
#> ── nlmixr FOCEi (outer: nlminb) fit ────────────────────────────────────────────
#>
#> OBJF AIC BIC Log-likelihood Condition Number
#> FOCEi 128.7837 385.3835 405.5631 -185.6917 550.3686
#>
#> ── Time (sec $time): ───────────────────────────────────────────────────────────
#>
#> setup optimize covariance table other
#> elapsed 3.310947 0.134588 0.134593 0.013 0.857872
#>
#> ── Population Parameters ($parFixed or $parFixedDf): ───────────────────────────
#>
#> Parameter Est. SE %RSE Back-transformed(95%CI) BSV(CV% or SD)
#> tka Log Ka 0.408 0.2 49 1.5 (1.02, 2.23) 63.2
#> tcl Log Cl 1.03 0.0804 7.82 2.79 (2.39, 3.27) 35.2
#> tv log V 30.9 1.22 3.95 30.9 (28.6, 33.3) 0.707
#> add.sd 0.77 0.77
#> Shrink(SD)%
#> tka 0.533%
#> tcl 0.933%
#> tv 53.9%
#> add.sd
#>
#> Covariance Type ($covMethod): r,s
#> Fixed parameter correlations in $cor
#> No correlations in between subject variability (BSV) matrix
#> Full BSV covariance ($omega) or correlation ($omegaR; diagonals=SDs)
#> Distribution stats (mean/skewness/kurtosis/p-value) available in $shrink
#> Minimization message ($message):
#> relative convergence (4)
#>
#> ── Fit Data (object is a modified tibble): ─────────────────────────────────────
#> # A tibble: 132 x 20
#> ID TIME DV PRED RES WRES IPRED IRES IWRES CPRED CRES
#> <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1 0 0.74 0 0.74 0.961 0 0.74 0.961 -8.52e-16 0.74
#> 2 1 0.25 2.84 3.20 -0.363 -0.212 3.89 -1.05 -1.36 3.16e+ 0 -0.316
#> 3 1 0.570 6.57 5.78 0.788 0.351 6.75 -0.179 -0.233 5.78e+ 0 0.789
#> # … with 129 more rows, and 9 more variables: CWRES <dbl>, eta.ka <dbl>,
#> # eta.cl <dbl>, eta.v <dbl>, ka <dbl>, cl <dbl>, v <dbl>, tad <dbl>,
#> # dosenum <dbl>
Created on 2021-03-16 by the reprex package (v1.0.0)
Hello, I have some trouble interpreting the values of residual error returned by nlmixR. According to the RxODE documentation, the combined1 and combined2 error models are as follows:
combined1: transform(y) = transform(f) + (a + b*f^c)*eps
combined2: transform(y) = transform(f) + (a^2 + b^2*f^(2*c))*eps
However, in the Monolix documentation, the same errors are presented without transformation:
combined1: Concentration = Cc + (a + b*Cc) * e
combined2: Concentration = Cc + sqrt(a^2 + (b*Cc)^2) * e
Does it mean that nlmixR/RxODE log-transforms F? I assume that in nlmixR, a and b are constants, and eps follows a typical distribution with mean 0 and sd = 1. Or is it fixed to 1, while the residual error elements have mean = 0 and sd = a or b?
Also, I've found information that back-transformed parameters are converted to %CV. Is it still correct? Sorry to bother you, but I am confused.