Open antgr opened 5 years ago
@titipata ?
Hi @antgr, I used one of the Lambda machines (https://lambdalabs.com/deep-learning/workstations/4-gpu) to train the model. It's probably the GPU memory that causes the problem for you. I'll have a more refined answer later on.
Hi @titipata, is there any workaround that I could use to train with one GPU, even if the final model will be less capable? Specifically, I would like to jointly train your model with another argumentation mining task. Do you think your model could help me on the other task?
@antgr I actually train with one GPU. However, GPU memory usage probably gets a bit high, around 6-7 GB (out of a maximum of 10 GB). I'd say the easiest workaround is to reduce the batch size or the size of the model.
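If cutting the batch size hurts results, gradient accumulation is a common way to keep the effective batch size while lowering per-step GPU memory. Below is a minimal, self-contained sketch in plain PyTorch; the model, data, and `accum_steps` value are placeholders for illustration, not code from this repo:

```python
# Gradient-accumulation sketch (plain PyTorch, not this repo's training loop):
# train with a small micro-batch and only step the optimizer every few batches,
# so per-step GPU memory stays low while the effective batch size stays large.
import torch
import torch.nn as nn

model = nn.Linear(100, 2)                        # stand-in for the real model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

micro_batch, accum_steps = 8, 4                  # effective batch size = 8 * 4 = 32

optimizer.zero_grad()
for step in range(100):
    x = torch.randn(micro_batch, 100)            # stand-in for a real mini-batch
    y = torch.randint(0, 2, (micro_batch,))
    loss = criterion(model(x), y) / accum_steps  # scale so accumulated grads average
    loss.backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()                         # weight update every accum_steps micro-batches
        optimizer.zero_grad()
```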
Definitely, I think this will help improve other tasks, especially if the argument mining task is in the science domain.
Hi, I ran the experiment on my machine and also in Colab (https://colab.research.google.com/drive/10z-ZpmTRBIegicA4p9ueA_BOLet-7fHJ), but my machine halts (1810932it [37:24, 2382.33it/s]) and so does Colab (1748626it [21:51, 1.29s/it]). So, what are the hardware requirements to run it smoothly?
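To narrow down whether system RAM or GPU memory is the limiting factor, a quick check like the one below could be run alongside the experiment. This is only a sketch, assuming `psutil` and PyTorch are installed; it is not part of this repo:

```python
# Log current system RAM and GPU memory usage (assumes psutil and PyTorch).
import psutil
import torch

ram = psutil.virtual_memory()
print(f"RAM used: {ram.used / 1e9:.1f} / {ram.total / 1e9:.1f} GB")

if torch.cuda.is_available():
    used = torch.cuda.memory_allocated() / 1e9                      # tensors currently allocated
    total = torch.cuda.get_device_properties(0).total_memory / 1e9  # total device memory
    print(f"GPU memory allocated: {used:.1f} / {total:.1f} GB")
```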