zju3dv / disprcnn

Code release for Stereo 3D Object Detection via Shape Prior Guided Instance Disparity Estimation (CVPR 2020, TPAMI 2021)
Apache License 2.0
213 stars · 36 forks

CUDA error: out of memory #28

Closed pengweiweiwei closed 3 years ago

pengweiweiwei commented 3 years ago

I used eight NVIDIA TITAN V GPUs, each with 12 GB of memory. When I run `sh scripts/train_idispnet.sh`, the following error occurs after epoch 99 (the same traceback is printed twice):

```
Traceback (most recent call last):
  File "tools/kitti_object/train_idispnet_fa.py", line 90, in <module>
    main()
  File "tools/kitti_object/train_idispnet_fa.py", line 84, in main
    fit_one_cycle(learner, args.epochs, args.maxlr)
  File "/home/yhzn/anaconda3/envs/disprcnn/lib/python3.7/site-packages/fastai/train.py", line 23, in fit_one_cycle
    learn.fit(cyc_len, max_lr, wd=wd, callbacks=callbacks)
  File "/home/yhzn/anaconda3/envs/disprcnn/lib/python3.7/site-packages/fastai/basic_train.py", line 200, in fit
    fit(epochs, self, metrics=self.metrics, callbacks=self.callbacks+callbacks)
  File "/home/yhzn/anaconda3/envs/disprcnn/lib/python3.7/site-packages/fastai/basic_train.py", line 112, in fit
    finally: cb_handler.on_train_end(exception)
  File "/home/yhzn/anaconda3/envs/disprcnn/lib/python3.7/site-packages/fastai/callback.py", line 323, in on_train_end
    self('train_end', exception=exception)
  File "/home/yhzn/anaconda3/envs/disprcnn/lib/python3.7/site-packages/fastai/callback.py", line 251, in __call__
    for cb in self.callbacks: self._call_and_update(cb, cb_name, **kwargs)
  File "/home/yhzn/anaconda3/envs/disprcnn/lib/python3.7/site-packages/fastai/callback.py", line 241, in _call_and_update
    new = ifnone(getattr(cb, f'on_{cb_name}')(**self.state_dict, **kwargs), dict())
  File "/home/yhzn/anaconda3/envs/disprcnn/lib/python3.7/site-packages/fastai/callbacks/tracker.py", line 105, in on_train_end
    self.learn.load(f'{self.name}', purge=False)
  File "/home/yhzn/anaconda3/envs/disprcnn/lib/python3.7/site-packages/fastai/basic_train.py", line 269, in load
    state = torch.load(source, map_location=device)
  File "/home/yhzn/anaconda3/envs/disprcnn/lib/python3.7/site-packages/torch/serialization.py", line 386, in load
    return _load(f, map_location, pickle_module, **pickle_load_args)
  File "/home/yhzn/anaconda3/envs/disprcnn/lib/python3.7/site-packages/torch/serialization.py", line 573, in _load
    result = unpickler.load()
  File "/home/yhzn/anaconda3/envs/disprcnn/lib/python3.7/site-packages/torch/serialization.py", line 536, in persistent_load
    deserialized_objects[root_key] = restore_location(obj, location)
  File "/home/yhzn/anaconda3/envs/disprcnn/lib/python3.7/site-packages/torch/serialization.py", line 403, in restore_location
    return default_restore_location(storage, map_location)
  File "/home/yhzn/anaconda3/envs/disprcnn/lib/python3.7/site-packages/torch/serialization.py", line 119, in default_restore_location
    result = fn(storage, location)
  File "/home/yhzn/anaconda3/envs/disprcnn/lib/python3.7/site-packages/torch/serialization.py", line 99, in _cuda_deserialize
    return storage_type(obj.size())
  File "/home/yhzn/anaconda3/envs/disprcnn/lib/python3.7/site-packages/torch/cuda/__init__.py", line 615, in _lazy_new
    return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA error: out of memory
```

What should I do to solve this problem? Should I train with only one GPU? Looking forward to your reply, thanks!

pengweiweiwei commented 3 years ago

@f-sky

ootts commented 3 years ago

It is a bug in fastai; just ignore it.
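If you want to confirm that the saved weights are intact despite the error (the crash happens only while reloading the best checkpoint after training has finished), one option is to load the file onto the CPU, which needs no GPU memory. A minimal sketch; the checkpoint path is a placeholder for wherever fastai saved the model in your run:

```python
import torch

# Placeholder path: point this at the .pth file saved during iDispNet training.
ckpt_path = "models/bestmodel.pth"

# map_location="cpu" restores all tensors on the CPU, so the load succeeds
# even when the GPU is out of memory.
state = torch.load(ckpt_path, map_location="cpu")

# Quick sanity check on what was saved (typically a state_dict or a dict of dicts).
if isinstance(state, dict):
    print("top-level keys:", list(state.keys())[:10])
```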

pengweiweiwei commented 3 years ago

So that means I have finished training iDispNet? Can I continue by running `sh scripts/train_rpn.sh`?

pengweiweiwei commented 3 years ago

@f-sky

ootts commented 3 years ago

Yes.

pengweiweiwei commented 3 years ago

OK, thanks!