patrick-kidger opened this issue 3 years ago
@lxuechen Benchmarks look rather encouraging for the BrownianInterval; e.g. see below. (Every benchmark exhibits the same behaviour.)
I would note that the number of time steps in the benchmarks is very low (only 100). I expect that a larger number of time steps will result in the BrownianPath overtaking the BrownianInterval.
This is encouraging! Would you have an idea as to where the gain is coming from?
I would guess it's coming from 1) using __slots__, and 2) explicitly modeling increments.
How much different are the plots when these bms are plugged in the solvers?
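(For reference, the __slots__ gain can be sketched in isolation. The toy classes below are hypothetical illustrations, not torchsde code.)

```python
import sys

class WithDict:
    def __init__(self, t0, t1, w):
        self.t0, self.t1, self.w = t0, t1, w

class WithSlots:
    # __slots__ removes the per-instance __dict__, shrinking each object
    # and making attribute access slightly faster.
    __slots__ = ("t0", "t1", "w")

    def __init__(self, t0, t1, w):
        self.t0, self.t1, self.w = t0, t1, w

a = WithDict(0.0, 1.0, 0.5)
b = WithSlots(0.0, 1.0, 0.5)
print(hasattr(a, "__dict__"), hasattr(b, "__dict__"))  # True False
```

The per-instance savings are small, but they add up when many small objects (e.g. one per cached interval) are created in an inner loop.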
I can think of a couple of other things for certain cases:

- For sequential access: the algorithm used in BrownianInterval is reasonably smart about where it starts searching; IIRC BrownianPath doesn't use any hinting about where to start searching.
- For random access with BrownianPath: not having to do list inserts, which are O(n) on average.
- For the GPU: not having to do a .to() copy, and instead initialising right on the GPU via device=. TBH this is really just a bug in the other implementations though.

The equivalent plots for the solvers look pretty much identical; I just picked one at random.
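(To illustrate the sequential-access point: a minimal sketch, not the actual torchsde search code, of why remembering where the last query landed helps when queries arrive in roughly increasing order.)

```python
import bisect

class HintedSearch:
    """Toy sorted-time lookup that remembers where the last query landed.

    An SDE solver usually queries times sequentially, so the next query is
    typically in the same or the next interval as the previous one; checking
    the cached index first usually avoids a fresh binary search.
    """

    def __init__(self, ts):
        self.ts = ts
        self.last = 0  # hint: index returned by the previous query

    def find(self, t):
        i = self.last
        if i + 1 < len(self.ts) and self.ts[i] <= t <= self.ts[i + 1]:
            pass                 # fast path: same interval as last time
        elif i + 2 < len(self.ts) and self.ts[i + 1] <= t <= self.ts[i + 2]:
            i += 1               # fast path: the next interval
        else:
            i = bisect.bisect_right(self.ts, t) - 1  # fall back to binary search
        self.last = i
        return i

ts = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
s = HintedSearch(ts)
print([s.find(t) for t in (0.05, 0.15, 0.15, 0.45)])  # [0, 1, 1, 4]
```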
Several side comments:
Switch to interval interface.
This breaks backwards compatibility. It'd be nice if we could have tb as an optional argument for BPath and BTree for now.
Removed 'trapezoidal_approx' as option from SRK solvers
This line might be wrong if the second output of bm
is H in James' thesis, since I_k0 isn't exactly the space-time Levy area.
A side comment is that I don't totally support merging the SRK methods for now. But I think we should be able to get rid of the tableaus folder if you find it disturbing.
Also the docstring needs to change for that file.
Several side comments:
Switch to interval interface.
This breaks backwards compatibility. It'd be nice if we could have tb as an optional argument for BPath and BTree for now.
Done.
Removed 'trapezoidal_approx' as option from SRK solvers
This line might be wrong if the second output of bm is H in James' thesis, since I_k0 isn't exactly the space-time Levy area.
I'll add that to the bug list!
A side comment is that I don't totally support merging the SRK methods for now. But I think we should be able to get rid of the tableaus folder if you find it disturbing.
The SRK methods actually aren't merged at the moment; for now at least I've taken the reasonably straightforward approach of unifying as much of the easy code as I can and leaving their steps separate. The tableaus folder doesn't bother me. :)
Also the docstring needs to change for that file.
Done!
I can think of a couple of other things for certain cases:

- For sequential access: the algorithm used in BrownianInterval is reasonably smart about where it starts searching; IIRC BrownianPath doesn't use any hinting about where to start searching.
- For random access with BrownianPath: not having to do list inserts, which are O(n) on average.
- For the GPU: not having to do a .to() copy, and instead initialising right on the GPU via device=. TBH this is really just a bug in the other implementations though.

The equivalent plots for the solvers look pretty much identical; I just picked one at random.
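(To make the device= point concrete: a minimal sketch assuming PyTorch is available; not torchsde code.)

```python
import torch

# Pick whatever accelerator is available; fall back to the CPU.
target = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Wasteful pattern: sample on the CPU first, then copy to the target device.
w_copy = torch.randn(5).to(target)

# Cheap pattern: sample directly on the target device -- one allocation, no copy.
w_direct = torch.randn(5, device=target)

assert w_copy.device.type == w_direct.device.type == target.type
```

When a Brownian object is constructed on the CPU and then queried from GPU code, every sample incurs the extra allocation plus a host-to-device transfer, which is why initialising in place matters.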
I see. For GPUs the copying part makes sense. The list-inserts argument isn't the most convincing to me, since blist should have O(log n) complexity.
Overall seems like awesome progress!
Good point about the blist.
Thanks. I'm expecting to try and tackle most of the bugs first. I've just seen your email so I'll continue responding over there.
I've left the test and examples alone as I suspect you'll be much more familiar with how those should go than I am.
I'd recommend we first test BrownianInterval and check its correctness. This actually should have been done before rewriting the solvers. The problem is that now I can't run the rate inspect diagnostics, since the BInterval is raising errors.
I think all the 'SDE' things are the bits that I'm least familiar with, and that you're most familiar with. Can you handle them?
I can do this.
I've left the test and examples alone as I suspect you'll be much more familiar with how those should go than I am.
I'd recommend we first test BrownianInterval and check its correctness. This actually should have been done before rewriting the solvers. The problem is that now I can't run the rate inspect diagnostics, since the BInterval is raising errors.
One of the reasons I had it in a separate branch originally! What error is being raised?
I think all the 'SDE' things are the bits that I'm least familiar with, and that you're most familiar with. Can you handle them?
I can do this.
Excellent, thank you!
If my derivation is correct then I think the term needed in the SRK should be:
I_k0 = (t - s) * (H_{s,t} + 0.5 W_{s,t})
But it's also 1am where I am. Does that look about right to you?
If my derivation is correct then I think the term needed in the SRK should be:
I_k0 = (t - s) * (H_{s,t} + 0.5 W_{s,t})
But it's also 1am where I am. Does that look about right to you?
Yes, this is right. I_k0 is exactly the U_{s,t} term in the note I sent you, and the conversion is eq (4) in that note.
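(A quick numerical sanity check of this conversion, as a numpy sketch rather than torchsde code. It assumes the usual conventions U_{s,t} = int_s^t (W_r - W_s) dr and H_{s,t} = U_{s,t}/(t-s) - W_{s,t}/2, so I_k0 = U_{s,t} = (t-s)(H_{s,t} + 0.5 W_{s,t}).)

```python
import numpy as np

rng = np.random.default_rng(0)
M, n = 10_000, 200            # number of sample paths, steps per path
s, t = 0.0, 1.0
dt = (t - s) / n

# Simulate M Brownian paths on [s, t].
dW = rng.normal(0.0, np.sqrt(dt), size=(M, n))
W = np.cumsum(dW, axis=1)                      # W[:, k] = W_{t_{k+1}} - W_s

# I_k0 = int_s^t (W_r - W_s) dr, by a left-point Riemann sum.
W_left = np.concatenate([np.zeros((M, 1)), W[:, :-1]], axis=1)
I_k0 = dt * W_left.sum(axis=1)

W_st = W[:, -1]                                # the increment W_{s,t}
H = I_k0 / (t - s) - 0.5 * W_st                # space-time Levy area H_{s,t}

# The conversion is then exact by construction:
assert np.allclose(I_k0, (t - s) * (H + 0.5 * W_st))

# Moment sanity checks: Var[I_k0] ~ (t-s)^3/3, Cov[I_k0, W_st] ~ (t-s)^2/2,
# and H should be (nearly) uncorrelated with the increment W_st.
print(I_k0.var(), np.cov(I_k0, W_st)[0, 1], np.cov(H, W_st)[0, 1])
```

The moment checks are the useful part: they match what one gets by treating H as N(0, (t-s)/12) independent of the increment, which is consistent with the conversion above.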
I think, for the moment, I wouldn't worry too much about rewriting/refactoring more of the existing code.
The thing that I'm worried more about is whether the new BrownianInterval is resilient enough to stand up to numerical tests. The idea here is that I'm seeing subtle issues introduced by this new data structure and the various rewrites, and I don't want to move forward before we properly test BrownianInterval.
This also relates to a habit of mine where I'd like to run the test suite every time I make a minor change, so that I know exactly what could go wrong.
Here are tests that I think would benefit us as we go forward with BrownianInterval (and even for BPath and BTree when I add in the L.A. approximations):

- .to(), i.e. converting to a new device or dtype, should transfer/convert all existing tensors -- including those in the cache if possible.

I think once these are in place, I'd have much more confidence in going forward.
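(The proposed .to() test could look something like the following sketch. ToyBrownian and its attributes are hypothetical stand-ins, not the actual torchsde API; PyTorch is assumed.)

```python
import torch

class ToyBrownian:
    """Hypothetical stand-in for a Brownian object that caches tensors."""

    def __init__(self):
        self.w0 = torch.randn(3)
        self._cache = {0.5: torch.randn(3)}   # cached intermediate increments

    def to(self, *args, **kwargs):
        # Transfer/convert *all* held tensors, including cached ones.
        self.w0 = self.w0.to(*args, **kwargs)
        self._cache = {k: v.to(*args, **kwargs) for k, v in self._cache.items()}
        return self

# The test: after .to(), no tensor -- cached or otherwise -- is left behind
# on the old dtype/device.
bm = ToyBrownian().to(torch.float64)
assert bm.w0.dtype == torch.float64
assert all(v.dtype == torch.float64 for v in bm._cache.values())
```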
Another note: is there a way we can simplify the interface? Right now the user is basically forced to manually supply levy_area_approx='davie'/'foster'/'space-time' just in order to use SRK. Using a high-order scheme shouldn't be this difficult!
Also, does the Davie approximation have any advantages over Foster? I thought Foster's version is more accurate, at the cost of one extra random number generation, which is totally reasonable considering the costs of the other components.
Your concerns about tests are reasonable - bugs and tests are definitely the next things to handle.
I agree that having to specify levy area to use SRK should be fixed; this is already on the bug list. I'm not sure what the best way to handle this is.
I think my preferred fix would be to upgrade the default sdeint(..., bm=None) case to initialise a Brownian object with the appropriate levy area, and regard the explicit bm=something case as more of a power user's case, for which it's up to them to do things correctly.
I also like the ability to then switch between davie/foster approximation independently of the solver, in particular as better approximations may be introduced in the future.
The alternative option is slightly messier, but still doable:
Brownian should be fixed to a particular choice of levy area approximation, as doing otherwise introduces headaches: what if it's queried with one choice, and then a different one?
Having the choice of levy area approximation be specifiable by the solver, rather than on __init__, then means that some sort of "deferred __init__" is necessary, which is called when Brownian is passed into sdeint. I think that would only be used to set the levy area approximation flag, and nothing else, so this is definitely doable, but I'm not a fan of having partially initialised objects wandering around.
Care to make a judgement one way or the other?
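(The preferred option could be sketched roughly as below. All names here -- _REQUIRED_LEVY_AREA, the method strings, the Brownian stub -- are hypothetical illustrations, not the actual torchsde API.)

```python
# If the user doesn't pass `bm`, construct one with whatever levy area
# approximation the chosen method needs; an explicit `bm` is used as given.

_REQUIRED_LEVY_AREA = {
    "euler": "none",
    "milstein": "none",
    "srk": "space-time",   # SRK needs (at least) space-time Levy area
}

class Brownian:
    def __init__(self, t0, t1, levy_area_approximation):
        self.t0, self.t1 = t0, t1
        self.levy_area_approximation = levy_area_approximation

def sdeint(sde, y0, ts, method="srk", bm=None):
    if bm is None:
        # Default path: pick the right levy area automatically.
        bm = Brownian(ts[0], ts[-1],
                      levy_area_approximation=_REQUIRED_LEVY_AREA[method])
    # Power-user path: an explicit `bm` is the caller's responsibility;
    # (solver machinery would go here).
    return bm

bm = sdeint(None, None, [0.0, 1.0], method="srk")
print(bm.levy_area_approximation)  # space-time
```

This keeps the davie/foster choice orthogonal to the solver, so a better approximation can be slotted in later without touching the solver interface.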
Davie's approximation is slightly faster than Foster, by a bit more than just the random number generation (there's a bit of algebra too), but AFAIK there's not a big difference between them however you slice it. But for forward compatibility with even better approximations in the future, I'd avoid code layouts for which Foster gets baked in as the only option.
I think my preferred fix would be to upgrade the default sdeint(..., bm=None) case to initialise a Brownian object with the appropriate levy area, and regard the explicit bm=something case as more of a power user's case, for which it's up to them to do things correctly. I also like the ability to then switch between davie/foster approximation independently of the solver, in particular as better approximations may be introduced in the future.
I agree with this, and the plan seems nice.
Davie's approximation is slightly faster than Foster, by a bit more than just the random number generation (there's a bit of algebra too), but AFAIK there's not a big difference between them however you slice it. But for forward compatibility with even better approximations in the future, I'd avoid code layouts for which Foster gets baked in as the only option.
SGTM.
@lxuechen This is labelled as Ito-specific: https://github.com/google-research/torchsde/blob/133a74570affed019006ce89bb0135ef220f697a/torchsde/_core/base_sde.py#L53 Is there any part of that which actually relies on that though? It looks like it's just general drift/diffusion operations.
Is there any part of that which actually relies on that though? It looks like it's just general drift/diffusion operations.
This is removed in #20 and #21.
To-do list so far:

Brownian motions:
- Support single-point evaluation calls in examples/demo.ipynb (as of #19).
- Support single-point evaluation calls in tests/problems.py (as of #19).
- tests/test_brownian_path.py
- tests/test_brownian_tree.py
- __init__: t outside of t0, t1. (Patrick) (#9, #10, #28)

Solvers:
- integrate_logqp and interp_logqp

SDEs:
- gdg_prod for the adjoint. (Chen)
- sde_type and the method.

Misc:
- python_requires.
- diagnostics/ito_scalar.py appears to be using a huge amount of memory, and is also using a diagonal (and thus not compatible) problem.
- sdeint, sdeint_adjoint to automatically select the probably-best solver for a given noise type / SDE type combination. (Rather than just always using SRK.) (#73)

Tests:
- test_sdeint.py, test_adjoint.py, test_adjoint_logqp.py (Chen) (#21)
- test_brownian_interval.py::test_normality_conditional (#44)
- test_sdeint.py to pytest. (#73)

Bugs to fix:
- a from space-time Levy area to Levy area is of the wrong shape. (Patrick) (#28)
- I_k0 in SRK.
- method='srk' and levy_area_approximation='none' are incompatible with one another. (Done in #19)
- If bm isn't passed then it defaults to being a BrownianPath on the CPU, potentially causing device errors later. (Done in #19)