Closed: hkitna closed this issue 5 years ago.
I was about to say that this is usually a driver error, but then I saw this line:
CUDA 8.0 (no NVIDIA driver installed, only CUDA from the official website)
Do you have any other driver installed? GPU training will not run without GPU support. Does typing
nvidia-smi
produce any information?
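If you want to script that driver check, a minimal sketch (Python standard library only; the function name is my own, not part of Marian) is to look for the nvidia-smi binary, which ships with the driver rather than with the CUDA toolkit:

```python
import shutil
import subprocess

def nvidia_driver_visible():
    """Return True if nvidia-smi is on PATH and runs successfully.

    nvidia-smi is installed by the NVIDIA driver, not by the CUDA
    toolkit, so installing only CUDA (as in this issue) leaves it missing.
    """
    smi = shutil.which("nvidia-smi")
    if smi is None:
        return False  # no driver-provided tool on PATH
    try:
        # Exit code 0 means the driver could talk to at least one GPU.
        return subprocess.run([smi], capture_output=True).returncode == 0
    except OSError:
        return False

if __name__ == "__main__":
    print("NVIDIA driver visible:", nvidia_driver_visible())
```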
And somewhat unrelated:
[2019-11-09 07:52:23] [data] Setting vocabulary size for input 1 to 410825
A vocabulary of several hundred thousand items is generally not a good idea for NMT.
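To make the point concrete, here is a toy illustration (hypothetical data, Python standard library only) of why raw, untokenized text inflates the vocabulary: punctuation and casing glued onto words turn one word type into many:

```python
import re

raw = "Costs rose. Costs fell. Costs, costs, costs. Costs?"

# Naive whitespace split: "Costs", "Costs,", "costs.", "Costs?" etc.
# all count as distinct vocabulary entries.
raw_vocab = set(raw.split())

# Minimal normalization: lowercase and split punctuation off words.
tokens = re.findall(r"[a-z]+|[^\sa-z]", raw.lower())
tok_vocab = set(tokens)

print(f"{len(raw_vocab)} raw types vs {len(tok_vocab)} tokenized types")
# → 7 raw types vs 6 tokenized types
```

On a real corpus the gap is far larger, because every word participates in many punctuation and casing combinations.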
Dear Emjotde,
Thanks for your reply. I am using an RX 5700 XT, so I only installed CUDA. Does that mean I need to find a PC with an NVIDIA graphics card?
Yes. CUDA won't work on non-NVIDIA GPUs. There are some alternatives, but hardly anyone supports them.
Oh, I see. I replaced my graphics card a few days ago.
I will try again on an NVIDIA GPU. Thanks for your answer.
And I think that is because I am using unprocessed corpus data? I probably need to tokenize and clean the corpus before using it. In any case, I will redo the training on an NVIDIA GPU.
Thanks again for your help.
Oh, definitely. Also take a look at sub-word segmentation via SentencePiece (https://github.com/google/sentencepiece) or BPE (https://github.com/rsennrich/subword-nmt) and this example/tutorial: https://github.com/marian-nmt/marian-examples/tree/master/training-basics-sentencepiece
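For intuition, the core idea behind the BPE tool linked above can be sketched in a few lines (a toy illustration of the algorithm, not the actual subword-nmt code; all function names here are my own): start from characters and repeatedly merge the most frequent adjacent symbol pair in the corpus, so frequent words collapse into single units while rare words stay decomposed.

```python
from collections import Counter

def get_pair_counts(vocab):
    # vocab maps a word (tuple of symbols) to its corpus frequency.
    pairs = Counter()
    for word, freq in vocab.items():
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(vocab, pair):
    # Replace every occurrence of the adjacent pair with one merged symbol.
    merged = {}
    for word, freq in vocab.items():
        out, i = [], 0
        while i < len(word):
            if i < len(word) - 1 and (word[i], word[i + 1]) == pair:
                out.append(word[i] + word[i + 1])
                i += 2
            else:
                out.append(word[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

def learn_bpe(words, num_merges):
    # Start from characters, with an end-of-word marker.
    vocab = Counter(tuple(w) + ("</w>",) for w in words)
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_counts(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        vocab = merge_pair(vocab, best)
        merges.append(best)
    return merges
```

The key property for NMT is that the number of merges caps the vocabulary size, so the 410k-type vocabulary from this log would shrink to whatever subword budget you choose (commonly 16k-32k).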
Dear all,
This is my first time learning NMT. I have prepared some data for training a model.
I ran ./build/marian --train-sets corpus.en corpus.ro and got the error below.
I am using: Ubuntu 16.04 LTS, g++ 5.4, CUDA 8.0 (no NVIDIA driver installed, only CUDA from the official website), Boost 1.58, CMake 3.5.1
Any suggestion/idea would be appreciated.
Thanks.
[2019-11-09 07:52:23] [data] Setting vocabulary size for input 1 to 410825
[2019-11-09 07:52:23] Error: Curand error 203 - /home/mio/marian/src/tensors/rand.cpp:75: curandCreateGenerator(&generator_, CURAND_RNG_PSEUDO_DEFAULT)
[2019-11-09 07:52:23] Error: Aborted from marian::CurandRandomGenerator::CurandRandomGenerator(size_t, marian::DeviceId) in /home/mio/marian/src/tensors/rand.cpp:75
[CALL STACK] [0x922001]
[0x922ab8]
[0x9210b4]
[0x92082c]
[0x5d380b]
[0x4efde7]
[0x4f041b]
[0x5057ac]
[0x43b134]
[0x41969a]
[0x7f2f89ca1830] __libc_start_main + 0xf0
[0x438719]
Aborted (core dumped)