Open forEachWhileTrue opened 1 year ago
Hi,
There shouldn't be anything preventing you from running on CPU (as long as you don't try to use any of the point cloud fusion utils or some of the visualization scripts, which might contain non-portable `.cuda()` calls, but you should probably be able to remove those without much trouble).
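The usual way to make such calls portable is to replace hard-coded `.cuda()` with `.to(device)`, where the device is chosen at runtime. A minimal sketch of that pattern (illustrative only, not the repo's actual code):

```python
import torch

# Fall back to CPU when CUDA is absent; `device` here is illustrative.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.ones(3)
# Instead of the non-portable x = x.cuda():
x = x.to(device)
print(x.device.type)
```

On a machine without CUDA this simply keeps the tensor on CPU instead of raising an error.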
Re: MPS, I haven't tried it, so I can't guarantee the operators will all work. You will definitely have to play around with the `map_location` argument to `torch.load` to get the weights onto the correct device. If you have success, let us know :)
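Here's a rough sketch of how `map_location` is typically used, assuming a checkpoint saved on a CUDA machine needs to be loaded on a CPU- or MPS-only Mac (the filename `weights.pt` is a placeholder, not a file shipped with SimpleRecon):

```python
import torch

# Pick the best available device: CUDA, then Apple's MPS, then CPU.
# The hasattr guard covers older PyTorch builds without an MPS backend.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Save a tiny tensor and reload it; map_location remaps storage saved on
# one device (e.g. a CUDA machine) onto whatever device is available here.
torch.save(torch.ones(2), "weights.pt")  # "weights.pt" is a placeholder name
weights = torch.load("weights.pt", map_location=device)
print(weights.device.type)
```

Passing `map_location="cpu"` also works if you just want everything on CPU regardless of what's available.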
I've made a start here: https://github.com/rerun-io/simplerecon/pull/1. Once I have something working, I will clean it up and turn it into a PR against this repo.
Hey there!
I'm going to write my bachelor thesis on solving the occlusion problem in AR with 3D reconstruction. There is really exciting stuff out there, like NeuralRecon and this project.
Sadly, I can't run SimpleRecon because my Mac doesn't support CUDA. Is there a way to switch the device to CPU, or even MPS? https://pytorch.org/docs/master/notes/mps.html
Otherwise I'll have to get SimpleRecon running on a remote machine that supports CUDA, and my first thought was Google Colab. I'm really new to PyTorch and neural networks in general, so maybe you can help me out here and/or give me some recommendations. That would be great!
Thanks in advance, and thanks for making this stuff open source. :)