Hi John, thanks for the report. We're aware of both problems. For the second one, there's PR #35, I just haven't found the time to merge it yet.
For the first one, we have a new branch `new-aux`, which will organize the input and output data in dicts and load them all with the same method, so you wouldn't have `z` on a different device than the rest. If you can wait a little longer, both of these problems should be fixed.
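A rough sketch of what that dict-based loading could look like (the helper and key names here are hypothetical, not the actual `new-aux` code):

```python
import torch

def to_device(batch: dict, device: torch.device) -> dict:
    """Move every tensor in the batch dict to one device in a single step."""
    return {k: v.to(device) for k, v in batch.items()}

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Keeping spectra, weights, and redshifts together means z can no longer be
# left behind on the CPU when everything else is moved to the GPU.
batch = {
    "spec": torch.randn(4, 7000),                 # example spectra (hypothetical shapes)
    "w": torch.ones(4, 7000),                     # example weights
    "z": torch.tensor([0.02, 0.05, 0.08, 0.11]),  # redshifts
}
batch = to_device(batch, device)
```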
Sounds great, and glad to hear you're on it! Will close this for now, then.
The latest `main` branch has the fix in for the RuntimeError. However, it looks like using dicts as input gives pretty poor performance for our data loader. Until we solve that, I recommend making sure that all inputs to the spender functions reside on the same device. In your case, you want to make sure that `z` is on the GPU.
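For anyone hitting this in the meantime, a minimal sketch (with a stand-in module, not spender's actual classes) of keeping `z` on the same device as `model.wave_rest`:

```python
import torch

class DummySpectrumModel(torch.nn.Module):
    """Stand-in for the spender model: only the wave_rest buffer matters here."""
    def __init__(self, n_wave: int = 7000):
        super().__init__()
        self.register_buffer("wave_rest", torch.linspace(3000.0, 10000.0, n_wave))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = DummySpectrumModel().to(device)        # buffers, including wave_rest, move to the GPU
z = torch.tensor([0.05, 0.12], device=device)  # create z directly on the same device

# No cpu/cuda mismatch when wave_rest and z are later combined.
assert z.device == model.wave_rest.device
```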
I tried running the examples in the README using PyTorch version 1.12.1 on a CUDA-enabled machine. I ran into two small issues at this point:

The first was that the `model.wave_rest` tensor was on the GPU device while `z` was on the CPU, which triggered a RuntimeError here:

This can be easily resolved with a `self.wave_rest.unsqueeze(1).cpu()` call.
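A toy reproduction of the mismatch and of that workaround (hypothetical code, not the actual line from `model.py`):

```python
import torch

assert torch.cuda.is_available()  # as in the report, this needs a CUDA-enabled machine

wave_rest = torch.linspace(3000.0, 10000.0, 7000, device="cuda")  # model buffer on the GPU
z = torch.tensor([0.1])                                           # redshift left on the CPU

try:
    wave_obs = wave_rest.unsqueeze(1) * (1 + z)  # mixes cuda and cpu tensors
except RuntimeError as e:
    print(e)  # "Expected all tensors to be on the same device ..."

# The workaround above: pull the buffer back to the CPU before combining.
wave_obs = wave_rest.unsqueeze(1).cpu() * (1 + z)
```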
Once that's fixed, we hit a second snag a few lines down (here I'm showing the entire error stack):
In the updated syntax, we can simply replace line `model.py:340` with:

and it all works.
If I have time I'll make a PR, assuming you don't get to it first.