aywagner / DIPOLE

DIstributed Persistence-Optimized Local Embeddings
MIT License

Parallelize backend #1

Open ulupo opened 3 years ago

ulupo commented 3 years ago

I just learned (from a talk) that the backend is not actually parallelized at present, even though the method is embarrassingly parallel in principle. I am happy to help with this issue if there is scope. What are the main blockers? E.g., can https://pytorch.org/docs/stable/multiprocessing.html#module-torch.multiprocessing not be used?

ulupo commented 2 years ago

Implemented in #4 (using joblib)
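For context, the joblib pattern used for an embarrassingly parallel loop like this can be sketched as follows. This is only an illustration, not DIPOLE's actual code: `compute_local_embedding` and `patches` are hypothetical stand-ins for the per-patch work the backend performs, and `n_jobs` sets the number of workers.

```python
from joblib import Parallel, delayed

def compute_local_embedding(patch):
    # Hypothetical stand-in for the per-patch optimization step;
    # squaring each value keeps the sketch self-contained and runnable.
    return [x * x for x in patch]

patches = [[1, 2], [3, 4], [5, 6]]

# Each patch is processed independently (embarrassingly parallel),
# so the loop can be farmed out to joblib workers unchanged.
results = Parallel(n_jobs=2)(
    delayed(compute_local_embedding)(p) for p in patches
)
```

Because the patches share no state, the result is identical to the serial loop; only wall-clock time changes with `n_jobs`.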

ulupo commented 2 years ago

I incorrectly thought this was merged into main in #4; reopening.