Language model fine-tuning on NER with an easy interface and cross-domain evaluation. "T-NER: An All-Round Python Library for Transformer-based Named Entity Recognition, EACL 2021"
Hi @asahi417, using Ubuntu with a GPU, I have found different behavior across PyTorch versions when we call:

```python
torch.backends.mps.is_available()
```

Older versions raise an exception, while newer ones simply return `False`. So, to be more robust, I would suggest falling back to `self.device = 'cpu'` when we hit an exception. So we can have:
```python
# GPU setup
try:
    # Mac M1 support: https://github.com/asahi417/tner/issues/30
    self.device = 'mps' if torch.backends.mps.is_available() else 'cpu'
except Exception:
    self.device = 'cpu'
if self.device == 'cpu':
    self.device = 'cuda' if torch.cuda.device_count() > 0 else 'cpu'
```
Thank you!