google-research / torchsde

Differentiable SDE solvers with GPU support and efficient sensitivity analysis.
Apache License 2.0

0.2.0 TODO list #15

Open patrick-kidger opened 3 years ago

patrick-kidger commented 3 years ago

To-do list so far:

Brownian motions:

Solvers:

SDEs:

Misc:

Tests:

Bugs to fix:

patrick-kidger commented 3 years ago

@lxuechen Benchmarks look rather encouraging for the BrownianInterval. e.g. see below. (Every benchmark exhibits the same behaviour.)

I would note that the number of time steps in the benchmarks is very low (only 100). I expect that with a larger number of time steps the BrownianPath will overtake the BrownianInterval.

[Figure: benchmark timings, BrownianInterval vs BrownianPath]

lxuechen commented 3 years ago

This is encouraging! Would you have an idea as to where the gain is coming from?

I would guess it's coming from 1) using __slots__, and 2) explicitly modeling increments.

How different are the plots when these bms are plugged into the solvers?
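For reference, the __slots__ point in a minimal illustration (a hypothetical node class, not the actual BrownianInterval code):

```python
class Interval:
    # Declaring __slots__ removes the per-instance __dict__, shrinking
    # each node; this adds up when the interval tree allocates many
    # small objects during a solve.
    __slots__ = ('start', 'end', 'parent', 'left', 'right')

    def __init__(self, start, end, parent=None):
        self.start = start
        self.end = end
        self.parent = parent
        self.left = None
        self.right = None
```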

patrick-kidger commented 3 years ago

I can think of a couple other things for certain cases:

• For sequential access: the algorithm used in BrownianInterval is reasonably smart about where it starts searching; IIRC BrownianPath doesn't use any hinting about where to start searching.
• For random access with BrownianPath: not having to do list inserts, which are O(n) on average.
• For the GPU: not having to do a .to() copy, and instead initialising right on the GPU via device= (see the sketch below). TBH this is really just a bug in the other implementations though.

The equivalent plots for the solvers look pretty much identical; I just picked one at random.
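To illustrate the device= point, a generic PyTorch sketch (not torchsde code):

```python
import torch

# Initialising directly on the GPU: a single device-side allocation.
w1 = torch.randn(1000, 1000, device='cuda')

# Initialising on the CPU and then copying: allocates host memory,
# fills it, and pays a host-to-device transfer on top.
w2 = torch.randn(1000, 1000).to('cuda')
```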

lxuechen commented 3 years ago

Several side comments:

> Switch to interval interface.

This breaks backwards compatibility. It'd be nice if we could have tb as an optional argument for BPath and BTree for now.

> Removed 'trapezoidal_approx' as option from SRK solvers

This line might be wrong if the second output of bm is H in James' thesis, since I_k0 isn't exactly the space-time Levy area.

A side comment is that I don't totally support merging the SRK methods for now. But I think we should be able to get rid of the tableaus folder if you find it disturbing.

Also, the docstring needs to change for that file.

patrick-kidger commented 3 years ago

> Several side comments:
>
> > Switch to interval interface.
>
> This breaks backwards compatibility. It'd be nice if we could have tb as an optional argument for BPath and BTree for now.

Done.

> > Removed 'trapezoidal_approx' as option from SRK solvers
>
> This line might be wrong if the second output of bm is H in James' thesis, since I_k0 isn't exactly the space-time Levy area.

I'll add that to the bug list!

> A side comment is that I don't totally support merging the SRK methods for now. But I think we should be able to get rid of the tableaus folder if you find it disturbing.

The SRK methods actually aren't merged at the moment; for now at least I've taken the reasonably straightforward approach of unifying as much of the easy code as I can and leaving their steps separate. The tableaus folder doesn't bother me. :)

> Also, the docstring needs to change for that file.

Done!

lxuechen commented 3 years ago

> I can think of a couple other things for certain cases:
>
> • For sequential access: the algorithm used in BrownianInterval is reasonably smart about where it starts searching; IIRC BrownianPath doesn't use any hinting about where to start searching.
> • For random access with BrownianPath: not having to do list inserts, which are O(n) on average.
> • For the GPU: not having to do a .to() copy, and instead initialising right on the GPU via device=. TBH this is really just a bug in the other implementations though.
>
> The equivalent plots for the solvers look pretty much identical; I just picked one at random.

I see. For GPUs the copying part makes sense. The list-inserts argument doesn't seem the most convincing to me, since blist should have O(log n) complexity.
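To spell out the complexity point (a hypothetical cache, not the actual BrownianPath implementation):

```python
import bisect

# A sorted cache of (time, sample) pairs, as a BrownianPath-style class
# might keep. bisect locates the slot in O(log n), but list.insert then
# shifts the tail, so the overall insertion is O(n); a tree-backed list
# such as blist brings the insert itself down to O(log n).
cache_t, cache_w = [], []

def record(t, w):
    i = bisect.bisect_left(cache_t, t)
    cache_t.insert(i, t)
    cache_w.insert(i, w)
```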

lxuechen commented 3 years ago

Overall seems like awesome progress!

patrick-kidger commented 3 years ago

Good point about the blist.

Thanks. I'm planning to tackle most of the bugs first. I've just seen your email, so I'll continue responding over there.

lxuechen commented 3 years ago

> I've left the tests and examples alone as I suspect you'll be much more familiar with how those should go than I am.

I'd recommend we first test BrownianInterval and check its correctness. This really should have been done before rewriting the solvers. The problem is that right now I can't run the rate-inspection diagnostics, since the BInterval is raising errors.

> I think all the 'SDE' things are the bits that I'm least familiar with, and that you're most familiar with. Can you handle them?

I can do this.
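As a concrete example, a minimal consistency test could look like this (a sketch, assuming the new interval-query interface):

```python
import torch
from torchsde import BrownianInterval  # assumes the interval interface

def test_increment_additivity():
    bm = BrownianInterval(t0=0.0, t1=1.0, size=(16, 3))
    s, u, t = 0.1, 0.4, 0.9
    # Increments over the adjacent intervals [s, u] and [u, t] must sum
    # to the increment over [s, t], whatever internal splitting occurred.
    assert torch.allclose(bm(s, u) + bm(u, t), bm(s, t), atol=1e-6)
```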

patrick-kidger commented 3 years ago

> > I've left the tests and examples alone as I suspect you'll be much more familiar with how those should go than I am.
>
> I'd recommend we first test BrownianInterval and check its correctness. This really should have been done before rewriting the solvers. The problem is that right now I can't run the rate-inspection diagnostics, since the BInterval is raising errors.

One of the reasons I had it in a separate branch originally! What error is being raised?

> > I think all the 'SDE' things are the bits that I'm least familiar with, and that you're most familiar with. Can you handle them?
>
> I can do this.

Excellent, thank you!

patrick-kidger commented 3 years ago

If my derivation is correct then I think the term needed in the SRK should be:

I_k0 = (t - s) * (H_{s,t} + 0.5 W_{s,t})

But it's also 1am where I am. Does that look about right to you?

lxuechen commented 3 years ago

> If my derivation is correct then I think the term needed in the SRK should be:
>
> I_k0 = (t - s) * (H_{s,t} + 0.5 W_{s,t})
>
> But it's also 1am where I am. Does that look about right to you?

Yes, this is right. I_k0 is exactly the U_{s,t} term in the note I sent you, and the conversion is eq. (4) in that note.
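In code, the conversion is just (hypothetical helper name; h and w are the space-time Levy area and the Brownian increment over [s, t]):

```python
def ik0_from_spacetime_levy_area(h, w, s, t):
    # I_{k0} = \int_s^t (W_u - W_s) du = (t - s) * (H_{s,t} + 0.5 * W_{s,t})
    return (t - s) * (h + 0.5 * w)
```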

lxuechen commented 3 years ago

I think, for the moment, I wouldn't worry too much about rewriting/refactoring more of the existing code.

What I'm more worried about is whether the new BrownianInterval is resilient enough to stand up to numerical tests. I'm seeing subtle issues introduced by this new data structure and the various rewrites, and I don't want to move forward before we properly test BrownianInterval.

This also relates to a habit of mine: I like to run the test suite every time I make a minor change, so that I know exactly what could go wrong.

lxuechen commented 3 years ago

Here are tests that I think would benefit us going forward with BrownianInterval (and even BPath and BTree, once I add in the Levy area approximations).

I think once these are in place, I'd have much more confidence in going forward.

lxuechen commented 3 years ago

Another note: is there a way we can simplify the interface? Right now the user is basically forced to manually supply levy_area_approx='davie'/'foster'/'space-time' just to use SRK (see the sketch below). Using a high-order scheme shouldn't be this difficult!

Also, does Davie's approximation have any advantage over Foster's? I thought Foster's version is more accurate, at the cost of one extra random number generation, which seems totally reasonable considering the costs of the other components.
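For concreteness, current usage looks something like this (a sketch; the argument name follows my comment above, and the exact in-flux API may differ):

```python
import torch
import torchsde

class ToySDE(torch.nn.Module):
    # A throwaway diagonal-noise Ito SDE, only to make the example runnable.
    noise_type = 'diagonal'
    sde_type = 'ito'

    def f(self, t, y):  # drift
        return -y

    def g(self, t, y):  # diffusion
        return 0.5 * torch.ones_like(y)

y0 = torch.zeros(16, 3)
ts = torch.linspace(0., 1., 100)

# The pain point: to use SRK, the user must already know that the solver
# needs space-time Levy area, and say so when building the Brownian motion.
bm = torchsde.BrownianInterval(t0=0., t1=1., size=(16, 3),
                               levy_area_approx='space-time')
ys = torchsde.sdeint(ToySDE(), y0, ts, method='srk', bm=bm)
```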

patrick-kidger commented 3 years ago

Your concerns about tests are reasonable - bugs and tests are definitely the next things to handle.

I agree that having to specify levy area to use SRK should be fixed; this is already on the bug list. I'm not sure what the best way to handle this is. I think my preferred fix would be to upgrade the default sdeint(..., bm=None) case to initialise a Brownian object with the appropriate levy area, and regard the explicit bm=something case as more of a power user's case, for which it's up to them to do things correctly. I also like the ability to then switch between davie/foster approximation independently of the solver, in particular as better approximations may be introduced in the future.

The alternative option is slightly messier, but still doable - Brownian should be fixed to a particular choice of levy area approximation, as doing otherwise introduces headaches: what if it's queried with one choice, and then a different one? Having the choice of levy area approximation be specifiable by the solver, rather than on __init__, then means that some sort of "deferred __init__" is necessary, which is called when Brownian is passed into sdeint. I think that would only be used to set the levy area approximation flag, and nothing else, so this is definitely doable, but I'm not a fan of having partially initialised objects wandering around.

Care to make a judgement one way or the other?

Davie's approximation is slightly faster than Foster, by a bit more than just the random number generation (there's a bit of algebra too), but AFAIK there's not a big difference between them however you slice it. But for forward compatibility with even better approximations in the future, I'd avoid code layouts for which Foster gets baked in as the only option.
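As a sketch of that preferred fix (all names here, e.g. SOLVERS and required_levy_area, are illustrative rather than the actual API):

```python
def sdeint(sde, y0, ts, method='srk', bm=None, **kwargs):
    solver_cls = SOLVERS[method]  # hypothetical registry of solver classes
    if bm is None:
        # Default case: build a Brownian motion whose Levy-area
        # approximation matches what the chosen solver requires.
        bm = BrownianInterval(
            t0=ts[0], t1=ts[-1], size=y0.shape,
            dtype=y0.dtype, device=y0.device,
            levy_area_approx=solver_cls.required_levy_area,
        )
    # Explicit bm is the power user's case: matching the solver's
    # Levy-area requirements is up to the caller.
    solver = solver_cls(sde=sde, bm=bm, **kwargs)
    return solver.integrate(y0, ts)
```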

lxuechen commented 3 years ago

> I think my preferred fix would be to upgrade the default sdeint(..., bm=None) case to initialise a Brownian object with the appropriate levy area, and regard the explicit bm=something case as more of a power user's case, for which it's up to them to do things correctly. I also like the ability to then switch between davie/foster approximation independently of the solver, in particular as better approximations may be introduced in the future.

I agree with this, and the plan seems nice.

> Davie's approximation is slightly faster than Foster, by a bit more than just the random number generation (there's a bit of algebra too), but AFAIK there's not a big difference between them however you slice it. But for forward compatibility with even better approximations in the future, I'd avoid code layouts for which Foster gets baked in as the only option.

SGTM.

patrick-kidger commented 3 years ago

@lxuechen This is labelled as Ito-specific: https://github.com/google-research/torchsde/blob/133a74570affed019006ce89bb0135ef220f697a/torchsde/_core/base_sde.py#L53. Is there any part of it that actually relies on Ito, though? It looks like it's just general drift/diffusion operations.

lxuechen commented 3 years ago

> Is there any part of it that actually relies on Ito, though? It looks like it's just general drift/diffusion operations.

This is removed in #20 and #21.