Hmm, that's interesting that I didn't notice this. Could you do me a favor and see if it goes away if you change `fmax` to `::fmax`, so that it uses the global CUDA version?
I changed `return fmax(0.0, z) + fmin(0.0, alpha * (exp(z) - 1.0));` in the `elu` function to `return ::fmax(0.0, z) + fmin(0.0, alpha * (exp(z) - 1.0));`, but the error didn't change. It didn't start complaining about `fmin` either.
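For reference, after that edit the whole `elu` device function looks roughly like the sketch below. The template signature is my assumption from the C++/CUDA extension tutorial; the body is just the line quoted above with the qualified calls.

```cpp
// Sketch of the elu device function with globally qualified math calls.
// The signature is assumed from the extension tutorial; only the
// ::fmax/::fmin qualification differs from the original line.
template <typename scalar_t>
__device__ __forceinline__ scalar_t elu(scalar_t z, scalar_t alpha = 1.0) {
  // ::fmax and ::fmin pick up the global CUDA device overloads rather than
  // whatever std:: versions the host headers pull in.
  return ::fmax(0.0, z) + ::fmin(0.0, alpha * (exp(z) - 1.0));
}
```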
I tried changing line 54 in the original code from `candidate_cell[index] = elu(gates[gates_row + 2 * state_size + column]);` to `candidate_cell[index] = sigmoid(gates[gates_row + 2 * state_size + column]);` so as to avoid calling `fmax` and `fmin`, and I got pages of errors as a result. Two errors that repeat several times in the printout are

`error: wrong number of template arguments (5, should be 2) return __and_<__not_<is_same<tuple<_Elements...>`

and

`error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’ return __and_<is_constructible<_Elements, _UElements&&>...>::value;`

I've attached the printout in a text file to avoid clutter. `torch.cuda.is_available()` returns `True` in Python.
Ah, I see. I was using a different Python environment from the one I normally use, so when I actually run `test = torch.FloatTensor([1]).cuda()`, I get the error

`Found GPU0 GeForce GTX 770M which is of cuda capability 3.0. PyTorch no longer supports this GPU because it is too old.`
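(Side note for anyone checking their own card: the compute capability can also be queried directly from the CUDA runtime with something like the sketch below; this standalone program is just an illustration, not part of the extension.)

```cpp
// Minimal sketch: print the compute capability of CUDA device 0
// via cudaGetDeviceProperties from the runtime API.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
  cudaDeviceProp prop;
  if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
    std::fprintf(stderr, "no CUDA device found\n");
    return 1;
  }
  // A GTX 770M reports 3.0 here, which the prebuilt PyTorch binaries
  // no longer support (hence the error above).
  std::printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);
  return 0;
}
```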
I'll let you know if this problem is fixed with PyTorch installed from source.
Sounds good, let me know.
Yep, that did the trick.
Hi, I ran into exactly the same issue when trying to compile it. I've checked that my PyTorch version is up to date (0.4.1) and my CUDA version is 9.1.
I cloned the repository, and the CPU version compiles, but I get the following error when running `python setup.py install` in the `cuda` folder. I'm using PyTorch 0.4.0 installed via conda a few weeks ago, Python 3.5, CUDA 9.0, cuDNN 7.1.4, and GCC 6.4.0.