Closed sweg44 closed 3 years ago
Hi! Thanks for reporting. Could you give a bit more details around your problem setup? (what OS, what task are you performing, what Python version). Could you maybe copy-paste the exact code you are running in both these settings?
Re. the first issue, it seems the build cannot find a suitable compiler. Do you have PyTorch installed with GPU support, following the instructions from the PyTorch website, as well as the full CUDA toolkit? Also, this error should only arise when you are trying to train on GPU; you should still be able to train on CPU. I'm guessing you are supplying `device: gpu` to the algorithm in this setting. Does it also fail when you try to train on CPU (by setting `device: cpu` in the parameters dict)?
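As a minimal sketch of what I mean by falling back to CPU: the key name `device` and the values `"gpu"`/`"cpu"` follow this thread, but they are assumptions about the parameters dict, not confirmed against the package's documentation.

```python
# Hypothetical parameters dict with a CPU fallback. The "device" key and the
# "gpu"/"cpu" values are assumptions based on this thread, not the exact API.
def pick_device(gpu_available: bool) -> str:
    """Return "gpu" when a usable GPU (with compiler and CUDA toolkit) is
    present, otherwise fall back to "cpu"."""
    return "gpu" if gpu_available else "cpu"

params = {"device": pick_device(gpu_available=False)}
print(params["device"])  # prints "cpu"
```

If training succeeds with `"cpu"` but fails with `"gpu"`, that would confirm the compiler/CUDA setup is the problem rather than the package itself.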
Re. the second issue, this seems odd. Which task are you performing? Did you run this on an example from the examples directory? NumPy float64 arrays support the `.repeat` operation, at least as of version 1.19.2 as far as I know. What are the data types that you supply to the algorithm? (In the Numba version it should be a tuple of NumPy arrays, whereas in the torch version it should be torch or NumPy arrays.)
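For reference, here is a minimal reproduction of the difference: NumPy arrays have `.repeat`, while plain Python floats do not, which is exactly the AttributeError you are seeing. So my guess is that a scalar `float` is reaching the algorithm where an array is expected.

```python
import numpy as np

# A NumPy float64 array supports .repeat:
x = np.array([1.0, 2.0], dtype=np.float64)
print(x.repeat(2))  # [1. 1. 2. 2.]

# A plain Python float does not, reproducing the reported error:
try:
    (1.0).repeat(2)
except AttributeError as e:
    print(e)  # 'float' object has no attribute 'repeat'
```

Wrapping your inputs with `np.asarray(...)` before passing them in would be a quick way to test this hypothesis.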
Thanks!
@sweg44 Hi, there is a new release that fixes a number of bugs I have identified (although they are unrelated to your issue); maybe that would work for you too. Let me know; otherwise I'll close this issue next week, assuming you have found a solution.
Closing this issue. Hope you found a solution!
I have tried to use this package in PyTorch mode and in Numba mode, and both fail.
For the PyTorch version I keep getting "cpp_extension.py Error Checking Compiler Version".
For the Numba version I keep getting "AttributeError: 'float' object has no attribute 'repeat'".