I just learned (from a talk) that the backend is not actually parallelized at present, even though the method is embarrassingly parallel in principle. I would be happy to help with this if there is scope. What are the main blockers? For example, is there a reason https://pytorch.org/docs/stable/multiprocessing.html#module-torch.multiprocessing could not be used?
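To make the suggestion concrete: since `torch.multiprocessing` is a drop-in replacement for the standard-library `multiprocessing` module, an embarrassingly parallel workload could be farmed out with a plain process pool along these lines. This is only a sketch; `run_unit` is a hypothetical stand-in for whatever independent unit of work the backend runs (I don't know the actual function names in the codebase).

```python
import multiprocessing as mp
# torch.multiprocessing exposes the same API as the stdlib module,
# so `import torch.multiprocessing as mp` should work identically
# (with extra support for sharing tensors between processes).


def run_unit(seed):
    # Hypothetical placeholder for one independent unit of work
    # (e.g. one chain, particle, or replicate in the method).
    return seed * seed


if __name__ == "__main__":
    # Each unit is independent, so a Pool distributes them with no
    # inter-process communication beyond collecting results.
    with mp.Pool(processes=4) as pool:
        results = pool.map(run_unit, range(8))
    print(results)
```

If the blocker is CUDA state (forked workers cannot reuse an initialized CUDA context), the `spawn` start method (`mp.get_context("spawn")`) is the usual workaround, at the cost of re-importing in each worker.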