Closed ztcintokyo closed 4 years ago
Hi, thanks for your interest!
I am not sure how to speed this up; however, here are some ideas that might help.
The generation uses the `fairseq-generate` command in the `_fairseq_generate` function.
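One possible lever, assuming the wrapper forwards extra arguments down to `fairseq-generate`, is to tune fairseq's standard decoding flags. The flags below are real `fairseq-generate` options, but whether ACCESS's `generate.py` exposes them is an assumption; the paths and values are placeholders:

```shell
# Sketch only: standard fairseq-generate knobs that trade quality for speed.
# DATA_DIR and CHECKPOINT are placeholders for your preprocessed data and model.
fairseq-generate DATA_DIR \
  --path CHECKPOINT \
  --beam 1 \            # smaller beam -> faster decoding
  --max-tokens 8000 \   # larger batches per step (bounded by GPU memory)
  --fp16                # half precision on GPUs that support it
```

Splitting the 1M-sentence input into shards and running one process per GPU is another common approach that requires no code changes.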
Good luck, Best, Louis
Hi, thank you for your reply. I'm wondering how to use a GPU for generation. I tried `CUDA_VISIBLE_DEVICES=3 python scripts/generate.py < train-1-en.txt > train-1-en-simp.txt`, but nvidia-smi shows no activity on that GPU. Maybe it's a silly question; hoping for your reply, thank you!
Can you try `CUDA_VISIBLE_DEVICES=0 python -c 'import torch; print(torch.cuda.device_count())'` as per this issue?
Maybe it's a PyTorch problem.
I tried `CUDA_VISIBLE_DEVICES=3 python -c 'import torch; print(torch.cuda.device_count())'` and the output is 1.
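That output of 1 is expected: `CUDA_VISIBLE_DEVICES` masks and renumbers GPUs, so with `CUDA_VISIBLE_DEVICES=3` the process sees exactly one device, and PyTorch's `cuda:0` refers to physical GPU 3. A minimal illustration of the remapping, using only the environment variable (no CUDA required):

```python
import os

# CUDA_VISIBLE_DEVICES lists the physical GPUs a process may use;
# the CUDA runtime renumbers them from 0 in that order.
os.environ["CUDA_VISIBLE_DEVICES"] = "3"

visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
print(len(visible))  # number of devices the process can see, as in torch.cuda.device_count()
print(visible[0])    # the physical GPU backing logical device cuda:0
```

So `device_count()` printing 1 here means the GPU is visible; inside the process it is simply addressed as device 0.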
Hi @ztcintokyo
I am not sure what the problem is then. Maybe you should drop into pdb, check that `use_cuda` is set to True here, and verify that during generation the tensors are indeed on the GPU.
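The checks above can be sketched as follows; this is not ACCESS's actual code, just a generic PyTorch snippet for verifying that CUDA is available and that a tensor actually lives on the GPU:

```python
import torch

# Does PyTorch see a usable GPU? (False would explain idle nvidia-smi.)
use_cuda = torch.cuda.is_available()
print("use_cuda:", use_cuda)

# Tensors start on the CPU; they reach the GPU only if explicitly moved.
t = torch.zeros(2, 2)
if use_cuda:
    t = t.cuda()
print("tensor device:", t.device)  # "cuda:0" when on the GPU, else "cpu"
```

During generation you can print (or inspect in pdb) the `.device` attribute of the model's input tensors the same way.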
I have solved the problem. Thank you for your patience!
Hi, I would like to generate 1M sentences using your ACCESS. However, `generate.py` seems slow; I'd appreciate any ideas on how to accelerate it. Thank you!