Hi! When I set:
model_name = model_checkpoint.split("/")[-1] batch_size = 8
args = TrainingArguments( f"{model_name}-finetuned-localization", evaluation_strategy = "steps", save_strategy = "steps", learning_rate=2e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, num_train_epochs=3, weight_decay=0.01, load_best_model_at_end=True, metric_for_best_model="accuracy", push_to_hub=True, )
I get the error:
/usr/local/lib/python3.10/dist-packages/transformers/training_args.py in __post_init__(self)
   1331     self.framework == "pt"
   1332     and is_torch_available()
-> 1333     and (self.device.type != "cuda")
   1334     and (get_xla_device_type(self.device) != "GPU")
   1335     and (self.fp16 or self.fp16_full_eval)

/usr/local/lib/python3.10/dist-packages/transformers/training_args.py in device(self)
   1695         """
   1696         requires_backends(self, ["torch"])
-> 1697         return self._setup_devices
   1698
   1699     @property

/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py in __get__(self, obj, objtype)
     52         cached = getattr(obj, attr, None)
     53         if cached is None:
---> 54             cached = self.fget(obj)
     55             setattr(obj, attr, cached)
     56         return cached

/usr/local/lib/python3.10/dist-packages/transformers/training_args.py in _setup_devices(self)
   1629             self._n_gpu = 1
   1630         else:
-> 1631             self.distributed_state = PartialState(backend=self.ddp_backend)
   1632             self._n_gpu = 1
   1633         if not is_sagemaker_mp_enabled():
NameError: name 'PartialState' is not defined
I'm using Colab to fine-tune ESM models with PyTorch.
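A quick way to check what the Colab session actually has installed (this is only a guess that the NameError is related to the accelerate package, which is where PartialState is defined) would be something like:

import importlib.metadata

# Print the versions of the packages that appear in the traceback.
# Assumption: a missing or very old `accelerate` install is one possible
# cause of "NameError: name 'PartialState' is not defined".
for pkg in ("transformers", "accelerate", "torch"):
    try:
        print(pkg, importlib.metadata.version(pkg))
    except importlib.metadata.PackageNotFoundError:
        print(pkg, "not installed")

If accelerate turns out to be missing or outdated, upgrading it with !pip install -U transformers accelerate and then restarting the Colab runtime is what I would try first, but I'm not certain that's the cause here.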