Open ngohlong opened 2 years ago
Hi @ngohlong , Thank you for your interest in our work!
It says that the GPU is out of memory. Which kind of GPU are you using?
By the way, our newer code2seq model is more memory efficient, but takes longer to train.
Uri
Hi @urialon, Thank you so much for your quick response.
I am using a Quadro M4000 GPU of 8GB. From the above lines, could you please tell me which one says that the GPU is out of memory? I tried to reduce the batch size but it did not work.
I will consider your code2seq model, but I still want to study the code2vec one.
Long
The `OOM when allocating tensor` message means that the GPU is Out Of Memory.
The model itself is very memory hungry, because it has huge vocabularies, and thus huge embedding matrices.
Google Colab may offer GPUs with more memory for free.
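To get a feel for why large vocabularies dominate GPU memory, here is a minimal back-of-the-envelope sketch; the vocabulary size and embedding dimension below are hypothetical illustrations, not the actual code2vec hyperparameters:

```python
def embedding_memory_mb(vocab_size, embedding_dim, bytes_per_float=4):
    """Memory in MB for one vocab_size x embedding_dim float32 embedding matrix."""
    return vocab_size * embedding_dim * bytes_per_float / (1024 ** 2)

# Hypothetical example: a 1M-entry vocabulary with 128-dim embeddings
# needs ~488 MB for the parameters of that single matrix alone,
# before gradients, optimizer state, and activations are counted.
print(round(embedding_memory_mb(1_000_000, 128)))  # → 488
```

Because this memory scales with the vocabulary size rather than the batch size, reducing the batch size alone often does not resolve the OOM on an 8 GB card.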
Best, Uri
Thank you so much for your answer, Uri. I really appreciate that.
Best regards, Long
Hello,
I would like to train the model from scratch with the java14m dataset. However, I encountered the issue below. Could you please help me solve this? Thank you so much in advance.
Best regards, Long Ngo