Open DBL1997 opened 6 years ago
Hi, see the top of the README. I experienced instabilities with the Tesla V100 and P40 myself. I tried different PyTorch and cuDNN versions, but it did not help. However, I did not try CUDA 9.1; maybe that would fix this.
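(Not part of the original thread.) When the same code behaves differently across PyTorch/CUDA versions, a common first debugging step is to print the exact library versions in use and force deterministic cuDNN behavior, so that runs on one machine at least become repeatable before comparing machines. A minimal sketch, assuming a standard PyTorch install:

```python
# Hypothetical debugging snippet (not from the thread): report the exact
# PyTorch / CUDA / cuDNN versions and force deterministic cuDNN kernels.
import torch

print("PyTorch:", torch.__version__)
print("CUDA:   ", torch.version.cuda)
print("cuDNN:  ", torch.backends.cudnn.version() if torch.backends.cudnn.is_available() else None)

# Disable cuDNN autotuning and request deterministic convolution kernels,
# then fix the RNG seed so the random net input is reproducible across runs.
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
torch.manual_seed(0)
```

If two runs on the P40 with these settings still diverge, the instability is more likely numerical (e.g. a kernel change between versions) than nondeterminism.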
On Fri, May 25, 2018 at 4:22 AM DBL1997 notifications@github.com wrote:
I'm using 2 computers to run your code (denoising, F16):
- GTX 745 + PyTorch 0.2.1 + CUDA 8.0
- Tesla P40 + PyTorch 0.4.0 + CUDA 9.0

All the parameters I used are the defaults in your code, except that I enlarged the iteration number to 20000. However, the results on these two computers are very different.
The result with 0.2.1 is good: the loss decreases at a stable pace, and finally it learns all the noise. The result with 0.4.0 is not normal: the loss stays around 0.01, decreases very slowly, and sometimes just jumps from 0.01 to 0.02.
So I was wondering: should I change some parameters or make some modifications to adapt to the newer PyTorch version?
-- Best, Dmitry