Closed nh2 closed 5 years ago
@nh2 We already mention CUDA-capable devices, so it felt clear to us that this implied CUDA being installed, but obviously that's not the case, so any PR or feedback on pain points to improve the wording would be welcome :-)
Though, technically, CUDA is not a dependency we can install; it really depends on the user's setup. The dependencies mentioned in your quote are related to the Python wheel.
> We already mention CUDA-capable devices, it felt clear to us that this implied CUDA to be installed

You mention it only in the training section, which people who just want to use the pretrained model will not read.
And even there it only mentions CUDA devices, not which parts of the CUDA SDK are required, or which version (e.g. my current understanding is that even with a full CUDA 8 installation, it won't work because CuDNN version 9 is required).

> Though, technically, CUDA is not a dependency we can install

That is fine; as long as it's documented what the user shall provide as prerequisites, they'll likely do it. But right now it suggests that it truly takes care of installing all the required dependencies, and specifically so under the deepspeech-gpu install, which is quite misleading and makes for a bad first experience.
I'm not denying it, I'm open to any suggestion on how to reformulate / where to write that.
Right, obviously we need better doc around there, because it's the same deps as upstream TensorFlow: CUDA 9.0 and CuDNN v7.2. Though, I'm not sure how / where to document that in an efficient manner.
@nh2 Even smarter than just documenting it, I was thinking maybe we could catch the ImportError (and others) in the Python / NodeJS code and expose a better error message.
I'm still not sure where would be the easiest place to find that in the documentation, so I'm open to your feedback.
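A minimal sketch of that idea, assuming a hypothetical native module name (`deepspeech._impl` is an illustration, not the package's actual layout): wrap the native import, and when the failure is a missing shared object, re-raise with a message pointing at the CUDA / CuDNN prerequisites.

```python
import importlib


def import_native_client(module_name="deepspeech._impl"):
    """Import the native client, turning a missing-shared-object
    ImportError into an actionable message. Module name is hypothetical."""
    try:
        return importlib.import_module(module_name)
    except ImportError as e:
        # A missing native dependency surfaces as e.g.
        # "libcusolver.so.9.0: cannot open shared object file"
        if ".so" in str(e):
            raise ImportError(
                "DeepSpeech native client failed to load ({}). If you "
                "installed deepspeech-gpu, make sure CUDA 9.0 and CuDNN v7 "
                "are installed and on LD_LIBRARY_PATH; see the "
                "prerequisites in the README.".format(e)
            ) from e
        raise  # unrelated import failure: propagate unchanged
```

The same pattern would apply to the NodeJS bindings, catching the `require()` failure of the native addon.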
@lissyx This looks great, thank you!
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
The current README says:
This isn't the case: if I try it, I get ImportError: libcusolver.so.9.0: cannot open shared object file: No such file or directory, as did #1655 and #1666.
The README should mention that CUDA (and which version, and which sub-projects) is a dependency.
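Until the README documents this, a quick way for users to check the prerequisites themselves (assuming Linux with ldconfig; the library names follow the CUDA 9.0 / CuDNN v7 versions mentioned above):

```shell
# Check whether the CUDA runtime libraries the GPU wheel links against
# are visible to the dynamic loader; print a hint if they are not.
ldconfig -p 2>/dev/null | grep -E 'libcusolver\.so\.9\.0|libcudnn\.so\.7' \
  || echo "CUDA 9.0 / CuDNN 7 libraries not found on the loader path"
```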