AI4Bharat / Indic-BERT-v1

Indic-BERT-v1: BERT-based Multilingual Model for 11 Indic Languages and Indian-English. For latest Indic-BERT v2, check: https://github.com/AI4Bharat/IndicBERT
https://indicnlp.ai4bharat.org
MIT License

finetuning.ipynb - Colab is broken #11

Closed jhgorse closed 3 years ago

jhgorse commented 3 years ago

Greetings,

Thank you for this excellently documented package. I am having some trouble getting the Colab notebook to run step 4, Fine-tune the Model. Here is the output:

/content/indic-bert/indic-bert
/usr/local/lib/python3.6/dist-packages/transformers/modeling_auto.py:798: FutureWarning: The class 'AutoModelWithLMHead' is deprecated and will be removed in a future version. Please use 'AutoModelForCausalLM' for causal language models, 'AutoModelForMaskedLM' for masked language models and 'AutoModelForSeq2SeqLM' for encoder-decoder models.
  FutureWarning,
Some weights of the model checkpoint at ai4bharat/indic-bert were not used when initializing AlbertForMaskedLM: ['sop_classifier.classifier.weight', 'sop_classifier.classifier.bias']
- This IS expected if you are initializing AlbertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing AlbertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
---------------------------------------------------------------------------
MisconfigurationException                 Traceback (most recent call last)
<ipython-input-4-df6be9fbd108> in <module>()
     17 ]
     18 
---> 19 finetune_main(argvec)

5 frames
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py in sanitize_gpu_ids(gpus)
    394                 You requested GPUs: {gpus}
    395                 But your machine only has: {all_available_gpus}
--> 396             """)
    397     return gpus
    398 

MisconfigurationException: 
                You requested GPUs: [0]
                But your machine only has: []

What do you think might be going wrong?

Cheers, Joe

gowtham1997 commented 3 years ago

@jhgorse

MisconfigurationException: 
                You requested GPUs: [0]
                But your machine only has: []

Looks like the GPU runtime was not enabled, or no GPU was available on Colab. The unavailability of GPUs could be due to Colab's usage limits.
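A quick way to confirm whether PyTorch can see a GPU at all (a minimal diagnostic sketch; the helper name `gpu_status` is just for illustration, and the import is guarded so the snippet also runs where torch isn't installed):

```python
def gpu_status():
    """Report whether PyTorch can see a CUDA GPU."""
    try:
        import torch
    except ImportError:
        return "pytorch not installed"
    if torch.cuda.is_available():
        # At least one GPU is visible; fine-tuning with gpus=[0] should work.
        return f"{torch.cuda.device_count()} GPU(s) available"
    # Either the Colab runtime has no GPU attached, or this is a CPU-only build.
    return "no GPU visible (CPU-only build, or no GPU runtime attached)"

print(gpu_status())
```

If this reports no GPU on Colab, enable one via Runtime → Change runtime type → Hardware accelerator → GPU, then restart and rerun the notebook.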

This could also happen if you have the CPU version of PyTorch installed and are requesting computations on the GPU. So recheck that your PyTorch installation matches your CUDA version, using https://pytorch.org/
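To tell a CPU-only PyTorch build apart from a missing GPU runtime, `torch.version.cuda` helps: it is `None` on CPU-only builds. A small sketch (the function name is illustrative, and the import is again guarded):

```python
def pytorch_build_info():
    """Distinguish a CPU-only PyTorch wheel from a CUDA-enabled one."""
    try:
        import torch
    except ImportError:
        return "pytorch not installed"
    if torch.version.cuda is None:
        # CPU-only wheel: reinstall a CUDA build from https://pytorch.org/
        return "CPU-only PyTorch build"
    # CUDA-enabled build; if no GPU is visible, the runtime has none attached.
    return f"PyTorch built against CUDA {torch.version.cuda}"

print(pytorch_build_info())
```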

jhgorse commented 3 years ago

Aha! I can see the Colab was using requirements.txt instead of requirements_colab.txt. Trying requirements_colab.txt now. It seems to work. Thank you! =)