YoungXiyuan / DCA

This repository contains code used in the EMNLP 2019 paper "Learning Dynamic Context Augmentation for Global Entity Linking".
https://arxiv.org/abs/1909.02117

Unable to run on Google Colab #4

Closed: SravyaMadupu closed this issue 3 years ago

SravyaMadupu commented 3 years ago

I am trying to run the code on Google Colab, but it exits with a "CUDA out of memory" error. Could you please tell me which parameters I could change to avoid this error?

Result:

```
load conll at ../data/generated/test_train_data
load csv
370 United News of India
process coref
load conll
reorder mentions within the dataset
create model
tcmalloc: large alloc 1181786112 bytes == 0xb04c000 @ 0x7efca71911e7 0x7efca15535e1 0x7efca15bc90d 0x7efca15bd522 0x7efca1654bce 0x50a7f5 0x50cfd6 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 0x507f24 0x50b053 0x634dd2 0x634e87 0x63863f 0x6391e1 0x4b0dc0 0x7efca6d8eb97 0x5b26fa
--- create EDRanker model ---
prerank model
--- create NTEE model ---
--- create AbstractWordEntity model ---
main model
create new model
--- create MulRelRanker model ---
--- create LocalCtxAttRanker model ---
--- create AbstractWordEntity model ---
^C
```

YoungXiyuan commented 3 years ago

Thank you for your interest in our work.

I am sorry, but I am not familiar with Google Colab.

We trained and evaluated the DCA framework on a GeForce GTX 1080 card with 8 GB of memory, which was enough for the whole process.
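
For anyone hitting this on Colab, a minimal sketch (standard PyTorch calls, not part of the DCA codebase) to confirm that the assigned GPU actually has at least that much memory; Colab can hand out cards with less:

```python
# Hypothetical sanity check -- standard PyTorch calls, nothing DCA-specific.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}")
    print(f"Total memory: {props.total_memory / 1024**3:.1f} GiB")
else:
    print("No CUDA device visible -- check the Colab runtime type.")
```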

As for parameters that might influence memory usage: as far as I remember, the memory footprint of the DCA framework stays roughly stable no matter how the parameters are changed.
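
If you want to see where the footprint grows, here is a hedged sketch using PyTorch's standard memory-bookkeeping API (the tag names and call sites are illustrative, not taken from this repository):

```python
import torch

def log_gpu_memory(tag: str) -> None:
    # memory_allocated / max_memory_allocated are standard PyTorch calls;
    # they report tensors currently held and the peak usage since startup.
    allocated = torch.cuda.memory_allocated() / 1024**3
    peak = torch.cuda.max_memory_allocated() / 1024**3
    print(f"[{tag}] allocated: {allocated:.2f} GiB, peak: {peak:.2f} GiB")

# Illustrative usage: wrap the step suspected of exhausting memory,
# e.g. model creation or the first training batch.
# log_gpu_memory("before model creation")
# ... create EDRanker / MulRelRanker here ...
# log_gpu_memory("after model creation")
```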

Maybe you could try it on a local workstation instead, and feel free to contact me if you have any questions. (:

SravyaMadupu commented 3 years ago

I changed nothing, and all of a sudden I am able to run it now. Thank you so much for the response. :-) Closing the issue.