cdqa-suite / cdQA

⛔ [NOT MAINTAINED] An End-To-End Closed Domain Question Answering System.
https://cdqa-suite.github.io/cdQA-website/
Apache License 2.0

Fine Tune model using GPU #298

Closed mmehta-navomi closed 4 years ago

mmehta-navomi commented 5 years ago

Hello, I am fine-tuning the model on GPU. I am running into a memory issue because the process uses only one of the GPUs I have. Is there a way to distribute the process across all of them?

Thank you

andrelmfarias commented 5 years ago

Hi, yes, there is a way to do it, but it's not straightforward.

Could you please try the snippet below?

import joblib
import torch

from cdqa.pipeline import QAPipeline

# Load the pre-trained reader and configure it for multi-GPU training
reader = joblib.load("bert_qa.joblib")
reader.local_rank = -1                    # -1 = no torch.distributed training
reader.device = torch.device("cuda")
reader.n_gpu = torch.cuda.device_count()  # use every GPU torch can see

# Build the pipeline around the modified reader and fine-tune it
cdqa_pipeline = QAPipeline(reader=reader)
cdqa_pipeline.fit_reader('path-to-custom-squad-like-dataset.json')
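
For reference, with local_rank at -1 and n_gpu greater than 1, a run_squad-style training loop like the reader's should wrap the model in torch.nn.DataParallel, splitting each training batch across the visible GPUs. As a minimal follow-up sketch, assuming the pipeline exposes dump_reader and using an output filename of your choosing (bert_qa_multi_gpu.joblib here is just an example), you could check the GPU count and save the fine-tuned reader afterwards:

# Continues from the snippet above.
# Sanity-check that more than one GPU was actually picked up.
print("GPUs used for fine-tuning:", reader.n_gpu)

# Persist the fine-tuned reader so it can be reloaded with joblib later.
# 'bert_qa_multi_gpu.joblib' is an example path, not a required name.
cdqa_pipeline.dump_reader('bert_qa_multi_gpu.joblib')

Note that DataParallel replicates the whole model on every GPU and only spreads the batch, so if a single batch already exhausts one card's memory you may still need to reduce the batch size or maximum sequence length.
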
mmehta-navomi commented 4 years ago

I haven't tried this snippet yet, but I will soon.

I will close this issue for now. Thanks