Closed mhr closed 3 years ago
For me, I was able to make it work by running `pip install disent --user` to install disent globally, and then installing PyTorch in a separate virtual environment. That appears to let disent find all of its dependencies while still letting me pin the PyTorch version for my project. I realize this is probably not strictly kosher, but it does the trick.
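With a split setup like this, it can be hard to tell which environment a given package actually resolves from. As a rough sketch (using only the standard library, not disent's API), `importlib.util.find_spec` reports where a package would be imported from without actually importing it:

```python
import importlib.util
from typing import Optional

def locate_package(name: str) -> Optional[str]:
    # Return the file that would be imported for `name`,
    # or None if the package is not on this interpreter's path.
    spec = importlib.util.find_spec(name)
    return spec.origin if spec is not None else None

# Example: check which copy (if any) of each package this environment sees.
print(locate_package("json"))   # a stdlib path
print(locate_package("torch"))  # None if torch is not visible here
```

Running this inside and outside the virtual environment shows whether the user-level disent install and the venv's PyTorch are both visible to the interpreter you are using.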
I was able to run all of the import statements from https://disent.dontpanic.sh/en/latest/#metrics except for `from disent.frameworks.vae import BetaVae`, which results in some module import errors.
That's okay for my purposes, since I'm going to just use the metrics functions for my project.
Thank you for reporting this, glad you found a solution.
I unfortunately won't re-open this issue, as this should have been fixed in the development versions. I will, however, be working on trimming down the dependencies and removing unnecessary parts of the project.
Hello,
First, thank you for creating this package; it will make my life so much easier. One problem I've run into, though: since installing the package, I can only run my PyTorch models on the CPU. When I run `torch.device(0)`, it reports "cuda", but `torch.cuda.is_available()` returns False. I'm on a Windows machine. In another virtual environment, where I don't have disent installed, I can load my models on the GPU just fine. Any idea why this is happening? I'm going to keep playing around with the installation of the different packages, and if I figure it out, I'll reply to this post with the solution.
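One common cause of this symptom (not confirmed for this issue, just a guess) is that installing another package lets pip replace a CUDA build of PyTorch with a CPU-only wheel. CUDA wheels usually carry a local version tag like `1.9.0+cu111`, while CPU-only wheels use `+cpu` or no tag. A minimal heuristic check, assuming that tagging convention:

```python
def looks_cpu_only(torch_version: str) -> bool:
    # CUDA wheels tag versions like "1.9.0+cu111" ("cu" + digits);
    # CPU-only wheels use "+cpu" or omit the tag entirely.
    # This is a naming heuristic, not an official PyTorch API.
    local_tag = torch_version.split("+", 1)[1] if "+" in torch_version else ""
    return not (local_tag.startswith("cu") and local_tag[2:].isdigit())

print(looks_cpu_only("1.9.0+cpu"))    # True  (CPU-only build)
print(looks_cpu_only("1.9.0+cu111"))  # False (CUDA build)
```

In the affected environment you could pass `torch.__version__` to this check; if it reports a CPU-only build, reinstalling the matching CUDA wheel from PyTorch's install selector is the usual fix.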