[Closed] yyhuang01 closed this issue 4 months ago
After I removed `dtype=self.precision` from `with autocast(dtype=self.precision):`, i.e. changed it to `with autocast():`, training started. I now have a question: during training, GPU memory usage keeps increasing as the iteration count grows, which I have never seen before. What causes this? I am currently training on my own dataset, where every image is 1024×1024. How should I determine a suitable batch size?
Hello, I ran into the error shown above while trying to train. The full error message is as follows: File "train_net.py", line 416, in <module>
launch(
File "/home/h/deep_learn/f_d/detectron2/detectron2/engine/launch.py", line 84, in launch
main_func(*args)
File "train_net.py", line 410, in main
return trainer.train()
File "/home/h/deep_learn/f_d/detectron2/detectron2/engine/defaults.py", line 486, in train
super().train(self.start_iter, self.max_iter)
File "/home/h/deep_learn/f_d/detectron2/detectron2/engine/train_loop.py", line 155, in train
self.run_step()
File "/home/h/deep_learn/f_d/detectron2/detectron2/engine/defaults.py", line 496, in run_step
self._trainer.run_step()
File "/home/h/deep_learn/f_d/detectron2/detectron2/engine/train_loop.py", line 493, in run_step
with autocast(dtype=self.precision):
TypeError: __init__() got an unexpected keyword argument 'dtype'
How can I fix this?
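For context, the `dtype` keyword for `torch.cuda.amp.autocast` was added in PyTorch 1.10; on older versions, `autocast(dtype=...)` raises exactly this `TypeError`, so the options are to upgrade PyTorch or to call `autocast()` without `dtype`. A minimal compatibility sketch (the `make_autocast` helper is an assumption, not part of detectron2) could look like:

```python
# Hedged sketch: pass dtype to autocast only when this PyTorch version
# supports it (the keyword was added in PyTorch 1.10).
import inspect

import torch
from torch.cuda.amp import autocast


def make_autocast(precision=torch.float16):
    """Return an autocast context, forwarding `dtype` only if supported."""
    if "dtype" in inspect.signature(autocast.__init__).parameters:
        return autocast(dtype=precision)
    # Older PyTorch: autocast always uses float16 on CUDA; a non-float16
    # `precision` (e.g. bfloat16) cannot be honored here.
    return autocast()
```

A trainer's `run_step` could then use `with make_autocast(self.precision):`, which degrades gracefully on older PyTorch instead of crashing.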