loretoparisi closed this issue 7 years ago
nvcc fatal : The version ('80100') of the host compiler ('Apple clang') is not supported
There is an incompatibility between your CUDA version and compiler version. You should find a workaround by just Googling the error The version ('80100') of the host compiler ('Apple clang') is not supported.
If you do not care about CUDA support, you can also disable it:
rm -f CMakeCache.txt
cmake -DCUDA_TOOLKIT_ROOT_DIR="" ..
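The full CPU-only rebuild would then look like this (a sketch, assuming you are in an out-of-source build directory; the final `make` step and its `-j4` flag are my addition, not from the original commands):

```shell
rm -f CMakeCache.txt                 # drop the cached CUDA configuration
cmake -DCUDA_TOOLKIT_ROOT_DIR="" ..  # reconfigure without CUDA support
make -j4                             # rebuild the translate target
```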
@guillaumekln thank you, this time it worked:
[ 93%] Building CXX object cli/CMakeFiles/translate.dir/BatchReader.cc.o
[ 96%] Building CXX object cli/CMakeFiles/translate.dir/BatchWriter.cc.o
[100%] Linking CXX executable translate
[100%] Built target translate
and
[loretoparisi@:mbploreto cli]$ ./translate --help
OpenNMT Translator:
--help display available options
--model arg path to the OpenNMT model
--src arg path to the file to translate (read from the
standard input if not set)
--tgt arg path to the output file (write to the standard
output if not set)
--phrase_table arg path to the phrase table
--replace_unk replace unknown tokens by source tokens with the
highest attention
--batch_size arg (=30) batch size
--beam_size arg (=5) beam size
--max_sent_length arg (=250) maximum sentence length to produce
--time output average translation time
--profiler output per module computation time
--threads arg (=0) number of threads to use (set to 0 to use the
number defined by OpenMP)
--cuda
Let me check one thing (a tokenization question :) before closing this.
I'm trying to read from stdin and write to stdout like this, but I get a segmentation fault:
[loretoparisi@:mbploreto cli]$ ./translate --model /root/wmt14.en-fr.fconv-float/model.th7 --beam_size 5
Segmentation fault: 11
Is it a model trained with OpenNMT?
@guillaumekln Ah! that was the issue, it's a Fairseq pretrained model! I guess they both had a similar LSTM. I'm going to download from http://opennmt.net/Models/, and try again. Give me a while, thank you.
@guillaumekln ok, thanks. Now it seems to be working when feeding from stdin:
[loretoparisi@:mbploreto cli]$ echo "The quick brown fox jumps over the lazy dog" | ./translate --model /root/onmt_baseline_wmt15-all.en-de_epoch13_7.19_release.t7 --beam_size 5
Der <unk> Fuchs springt über den faulen Hund
Are the `<unk>` tokens not available in this pretrained dict (in this case `quick`)? Is `translate` applying the BPE tokenization, or do I have to use `tokenize` before it?
> Are the `<unk>` tokens not available in this pretrained dict (in this case `quick`)?
Yes, `<unk>` are out-of-vocabulary words. This model generates them often as the target is non-subtokenized German.
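Note also the `--replace_unk` option from the help output above: per its description, it replaces unknown tokens with the source token that received the highest attention. A hypothetical invocation (same command as before, just adding the flag):

```shell
echo "The quick brown fox jumps over the lazy dog" | \
  ./translate --model /root/onmt_baseline_wmt15-all.en-de_epoch13_7.19_release.t7 \
              --beam_size 5 --replace_unk
```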
> Is `translate` applying the BPE tokenization, or do I have to use `tokenize` before it?
It expects space-separated tokens after tokenization.
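For instance, "space-separated tokens" means punctuation must be split off into its own tokens. A rough illustration with `sed` (not the real tokenizer, just to show the input format):

```shell
# Not the real tokenizer: a sed sketch of the space-separated
# format that translate expects on stdin.
echo "Hello, world!" | sed -E 's/([,.!?])/ \1/g'
# -> Hello , world !
```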
@guillaumekln Thank you. Okay, so I have to tokenize with the `tokenize` CLI before calling `translate`. Is this way correct:
[loretoparisi@:mbploreto build]$ echo "The quick brown fox jumps over the lazy dog" | \
./lib/tokenizer/cli/tokenize | \
./cli/translate --model /root/onmt_baseline_wmt15-all.en-de_epoch13_7.19_release.t7 --beam_size 5
Yes. More generally, you should apply the same tokenization you used on your training data.
@guillaumekln ok, that's clear. So considering these pre-trained models, are they tokenized with BPE?
It's described in the "Corpus Prep" column.
Got it, thanks. In this case it would be the `aggressive` mode then!
echo "The quick brown fox jumps over the lazy dog" | \
./lib/tokenizer/cli/tokenize --mode aggressive | \
./cli/translate --model /root/onmt_baseline_wmt15-all.en-de_epoch13_7.19_release.t7 --beam_size 5
Der <unk> Fuchs springt über den faulen Hund
while with the multilingual model it fails:
echo "Le renard brun rapide saute sur le chien paresseux" | ./lib/tokenizer/cli/tokenize --mode aggressive | ./cli/translate --model /root/onmt_esfritptro-4-1000-600_epoch13_3.12_release.t7 --beam_size 5
$ Error: Assertion `thtensor' failed. at /Users/loretoparisi/Documents/Projects/AI/CTranslate/src/th/Obj.cc:378
but it could be a GPU-trained model.
@guillaumekln Was the ``Assertion `thtensor' failed.`` error due to the GPU/CPU model? Thanks, I'm closing this then.
Yes, because it was a GPU model.
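For completeness: if I remember correctly, the OpenNMT Lua toolkit includes a `tools/release_model.lua` script that converts a GPU-trained checkpoint into a CPU-usable "released" model (hence the `_release.t7` suffix on the working models above). A hypothetical invocation, with flags to be checked against the script's own help:

```shell
# Hypothetical; run with -h to confirm the exact options.
th tools/release_model.lua -model model_gpu.t7 -gpuid 1
```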
Ok, thank you, I'm going to close this one.
The `cmake ..` step was successful, but running `make` failed with the `nvcc fatal` error above.