Hi, indeed we usually see a slowdown when using multiple workers. In the next release (or on the current main branch), we will add a persistent-workers flag. We haven't benchmarked in depth how the speed compares with this setting, so if you try it out, please report back. As for the JAX warning, you can ignore it: JAX is not called during training, so it shouldn't increase your runtime. To sum up, PyTorch Lightning suggests some settings to maximize performance that don't necessarily improve performance in practice.
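To clarify what persistent workers do, here is a minimal plain-PyTorch sketch (this is not the scvi-tools flag itself, only an illustration of the underlying `DataLoader` option): `persistent_workers=True` keeps the loader worker processes alive between epochs instead of re-spawning them, which is the per-epoch overhead that usually causes the slowdown.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset standing in for the AnnData-backed loaders scvi-tools builds internally.
dataset = TensorDataset(torch.randn(1024, 50))

# num_workers > 0 spawns loader subprocesses; persistent_workers=True keeps them
# alive across epochs, avoiding the worker startup cost otherwise paid every epoch.
loader = DataLoader(
    dataset,
    batch_size=128,
    num_workers=4,
    persistent_workers=True,
)

for epoch in range(2):
    for (batch,) in loader:
        pass  # the training step would consume the batch here
```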
Hi, I am following the scRNA-seq tutorial.
After running the training step, I got a PyTorch Lightning warning suggesting that I increase the number of data-loader workers (`num_workers`).
When I set this at the very beginning, I got tons of JAX warnings, and the training speed was much slower than before (time/iter increased 5x).
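For reference, this is roughly the setup, as a minimal self-contained sketch (I use synthetic data here instead of the tutorial dataset, and I assume `scvi.settings.dl_num_workers` is the relevant knob for the worker count; the exact values differ from my actual run):

```python
import scvi

# Set at the very beginning of the notebook: number of workers for the
# PyTorch DataLoaders that scvi-tools creates (assumed knob; 4 is an example value).
scvi.settings.dl_num_workers = 4

# Synthetic stand-in data so the snippet runs on its own; the actual run
# used the dataset from the scRNA-seq tutorial.
adata = scvi.data.synthetic_iid()

scvi.model.SCVI.setup_anndata(adata)
model = scvi.model.SCVI(adata)

# With dl_num_workers > 0, this is where the JAX warnings appear and
# time/iter goes up compared to the default single-process loading.
model.train(max_epochs=5)
```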
Versions:
scvi-tools-1.1.5