Closed mmehta-navomi closed 4 years ago
Hi, yes, there is a way to do it, but it's not straightforward.
Could you please try the snippet below?
```python
import joblib
import torch

# Load the previously saved BertQA reader
reader = joblib.load("bert_qa.joblib")

# Patch the device-related attributes before fine-tuning
reader.local_rank = -1                    # -1 disables distributed training
reader.device = torch.device("cuda")      # run on GPU
reader.n_gpu = torch.cuda.device_count()  # make all visible GPUs available

from cdqa.pipeline import QAPipeline

cdqa_pipeline = QAPipeline(reader=reader)
cdqa_pipeline.fit_reader('path-to-custom-squad-like-dataset.json')
```
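The attribute patching in that snippet works because `joblib.load` restores the full Python object, so its fields can be overwritten after loading. A minimal, self-contained sketch of the same pattern, using a hypothetical stand-in `Reader` class rather than cdQA's actual `BertQA` reader:

```python
import joblib

# Hypothetical stand-in for the saved reader object; the real class in
# the snippet above is cdQA's BertQA reader.
class Reader:
    def __init__(self):
        self.local_rank = -1
        self.device = "cpu"
        self.n_gpu = 0

joblib.dump(Reader(), "reader_demo.joblib")

# joblib.load restores the full object, so device-related attributes can
# be patched after loading, exactly as in the snippet above.
restored = joblib.load("reader_demo.joblib")
restored.n_gpu = 2  # e.g. point the restored reader at two GPUs
```

This is why a reader saved on one machine (say, CPU-only) can be re-pointed at the GPUs of another before fine-tuning.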
I haven't tried this snippet yet but will do soon.
I will close this issue for now. Thanks
Hello, I am fine-tuning the model on GPU. I am running into a memory issue because the process uses only one of the GPUs I have. Is there a way to distribute the process across all of them?
Thank you
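For reference, the standard way to spread a batch (and its memory load) across several GPUs in PyTorch is to wrap the model in `torch.nn.DataParallel`; setting `reader.n_gpu = torch.cuda.device_count()` as in the snippet above is presumably what triggers this inside cdQA, though I haven't verified its internals. A minimal sketch with a toy stand-in model (the real one would be the reader's BERT model):

```python
import torch
import torch.nn as nn

# Toy stand-in model; in cdQA this would be the reader's BERT model.
model = nn.Linear(8, 2)

# DataParallel replicates the model on each visible GPU and splits every
# batch across them, which also spreads the memory load. Only wrap when
# more than one GPU is actually present.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# A forward pass looks the same whether or not the model was wrapped.
out = model(torch.randn(4, 8).to(device))
```

Note that `DataParallel` reduces per-GPU activation memory by splitting the batch, but each GPU still holds a full copy of the model weights.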