Closed: Kepler-Br closed this issue 4 years ago
Here's my Google Colab notebook: https://colab.research.google.com/drive/1EYuZXEqouYQa8VPK6jgOgdsuKuaD5Ub8?usp=sharing
Well, the problem solved itself: I was running GPT-2 on the CPU. When memory consumption exceeds the actual available memory, Colab kills the cell without saying so. But I still need help with the notebook.
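In case it helps anyone hitting the same silent kill, here is a minimal check (my own sketch, not code from the notebook) that the model actually ended up on the GPU; `gpt2` below is a placeholder checkpoint name, not the Russian model:

```python
import torch
from transformers import GPT2LMHeadModel

# On a Colab GPU runtime this should report "cuda"; if it says "cpu",
# the model runs in system RAM, and a large model can get the cell
# killed silently when memory runs out.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Running on:", device)

# "gpt2" is a placeholder checkpoint name, not the Russian model.
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.to(device)
```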
I tried to fine-tune your Russian GPT-2 on a Russian dataset, but ran into this problem:
Here's how I run run_lm_finetuning.py:
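(The original command is missing here; a typical Colab cell for this script looks roughly like the following, where the paths, model name, and flag values are my assumptions, not the original invocation:)

```
# Hypothetical invocation; paths, model name, and flag values are assumptions.
!python run_lm_finetuning.py \
    --output_dir=output \
    --model_type=gpt2 \
    --model_name_or_path=gpt2 \
    --do_train \
    --train_data_file=train.txt \
    --per_gpu_train_batch_size=1 \
    --num_train_epochs=1 \
    --warmup_steps=100 \
    --overwrite_output_dir
```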
But after I deleted `warmup_steps` from the function call, training starts successfully, but then it somehow cancels itself during the first iteration:
I'm using Google Colab for training. Even replacing `get_constant_schedule` with `get_constant_schedule_with_warmup` doesn't help: training still cancels itself with ^C. I tried different pip versions of `transformers`, but nothing works. Sampling works flawlessly, by the way. This is what happens when I try to use a TPU on Colab:

Here are my pip packages:
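For reference, here is a minimal sketch of the two scheduler calls involved, assuming the functional scheduler API that `transformers` exports (the model and optimizer below are placeholders, not taken from the notebook):

```python
import torch
from transformers import get_constant_schedule, get_constant_schedule_with_warmup

model = torch.nn.Linear(2, 2)  # placeholder model, for illustration only
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# get_constant_schedule takes no warmup argument, so calling it with
# warmup_steps=... raises a TypeError -- presumably the original error.
scheduler = get_constant_schedule(optimizer)

# The warmup-aware variant takes num_warmup_steps instead:
scheduler = get_constant_schedule_with_warmup(optimizer, num_warmup_steps=100)
```

The ^C with no traceback, by contrast, is consistent with the out-of-memory kill described in the closing comment above rather than with a scheduler problem.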