As you can tell from the previous two issues I just submitted, I'm having a lot of thoughts about steady state. I'm running a clinical trial simulation where the system comes to steady state in about 2 weeks. After that, the only differences between measurements are the time within the dosing interval and residual variability.
It would be handy if I could tell mrgsolve to try to find steady-state automatically in a simple way.
What I have done for the current simulation is a lot of mucking about with the time variable so that the simulation gets the speed of the steady-state calculation while keeping the data management simple.
The specific algorithm I'm thinking of is something like the following (with an indicator of `SS = -1` in the dataset to opt into the automatic behavior):

- If `SS == 0` or `SS == 1`, behave as currently specified.
- If `SS == -1`:
  - From the beginning until steady state is reached, simulate as normal (i.e., allow the system to approach steady state and report the values along the way, as though `SS == 0`).
  - From when steady state is achieved (using the current steady-state detection rules for `SS == 1`) through the end of the individual's dataset, or until the first record where `SS != -1`, assume that the only difference among the remaining records is the time after dose. Simulate a single dosing interval containing all of those time-after-dose values, and resample from the residual error distribution for each additional value.
  - If there are additional values after the last dosing interval at steady state, simulate from the end of that interval to the last sample as though `SS == 0`.
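A minimal sketch of the proposed dispatch, in Python for concreteness (the function name, arguments, and the regular dose-schedule assumption are all mine, not mrgsolve's):

```python
# Illustrative sketch only -- not mrgsolve code. It partitions one
# individual's observation times into the three phases described above,
# assuming doses at t = 0, ii, 2*ii, ... through last_dose_time, and
# that the steady-state detection rule fires at ss_time.

def partition_records(times, ss_time, ii, last_dose_time):
    """Split sorted observation times per the proposed SS == -1 rules."""
    approach, interval, trailing = [], [], []
    for t in times:
        if t < ss_time:
            # Phase 1: approach to steady state, simulated as SS == 0.
            approach.append(t)
        elif t < last_dose_time + ii:
            # Phase 2: only time after dose matters; fold the record
            # into a single steady-state dosing interval. Residual
            # error would still be resampled for every record.
            interval.append(t % ii)
        else:
            # Phase 3: samples after the final interval, simulated as
            # SS == 0 from the end of that interval onward.
            trailing.append(t)
    return approach, interval, trailing
```

For example, with `ii = 24` h and steady state detected at t = 336 h (2 weeks), observations at 336 h and 348 h collapse to times after dose of 0 h and 12 h within a single simulated interval.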
The first issue I see with this is something like auto-regressive residual error, but that would be a problem for normal `SS == 1` records anyway, so it shouldn't be a specific issue here.