Closed: L-M-Sherlock closed this issue 1 year ago
@L-M-Sherlock does it work with other backends?
It works with burn_wgpu. I haven't tested burn_tch because building torch-sys takes too much time.
I tested the tch backend. Training gets blocked while computing the loss.
Fixed with https://github.com/burn-rs/burn/issues/686
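For anyone who wants to check whether the bug is backend-specific, here is a minimal sketch of selecting the training backend through a single type alias. It assumes burn's ~0.8/0.9-era crate and type names (`WgpuBackend`, `TchBackend`, `ADBackendDecorator`; later burn versions renamed these), and the `train::<Backend>` call mentioned in the comment is a hypothetical placeholder for the project's own entry point.

```rust
// Sketch: switch the backend by changing one type alias.
// Type names assume burn ~0.8/0.9; they differ in later releases.
use burn_autodiff::ADBackendDecorator;
use burn_wgpu::{AutoGraphicsApi, WgpuBackend};
// use burn_tch::TchBackend; // requires building torch-sys, which is slow

// wgpu backend (no torch-sys build needed):
type Backend = ADBackendDecorator<WgpuBackend<AutoGraphicsApi, f32, i32>>;

// tch backend, for checking whether the bug is backend-specific:
// type Backend = ADBackendDecorator<TchBackend<f32>>;

fn main() {
    // The training entry point would be called generically over the backend,
    // e.g. train::<Backend>(config, device) in fsrs-optimizer-burn's own code
    // (hypothetical call; the real function lives in that project).
}
```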
Describe the bug
My model trains correctly when the batch size is one, but a fatal error occurs when I set batch_size = 2.
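For context, here is a minimal sketch of the point where the batch size is configured, assuming the dataloader is built with burn's `DataLoaderBuilder` (the `Item`, `MyBatch`, and `MyBatcher` types are hypothetical stand-ins for the project's own types, and the `Batcher` signature shown matches burn ~0.8/0.9):

```rust
use burn::data::dataloader::batcher::Batcher;
use burn::data::dataloader::DataLoaderBuilder;
use burn::data::dataset::InMemDataset;

// Hypothetical item and batch types standing in for the project's own.
#[derive(Clone, Debug)]
struct Item(f32);

#[derive(Clone, Debug)]
struct MyBatch(Vec<f32>);

#[derive(Clone)]
struct MyBatcher;

impl Batcher<Item, MyBatch> for MyBatcher {
    // Collate a list of items into one batch.
    fn batch(&self, items: Vec<Item>) -> MyBatch {
        MyBatch(items.into_iter().map(|item| item.0).collect())
    }
}

fn main() {
    let dataset = InMemDataset::new(vec![Item(0.0), Item(1.0), Item(2.0), Item(3.0)]);

    // batch_size = 1 trains fine in the report; batch_size = 2 triggers the error.
    let dataloader = DataLoaderBuilder::new(MyBatcher)
        .batch_size(2)
        .build(dataset);

    for batch in dataloader.iter() {
        println!("{:?}", batch);
    }
}
```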
To Reproduce
My model is complicated, so I haven't found a minimal script that reproduces the error.
For details, please see: https://github.com/open-spaced-repetition/fsrs-optimizer-burn/pull/16#issuecomment-1689326326
The code that reproduces the error: https://github.com/open-spaced-repetition/fsrs-optimizer-burn/pull/16/commits/23b8772eb2e8c56b1cda40cb66b08aa5d20772c8