yuhsuanyeh / BiFuse

[CVPR2020] BiFuse: Monocular 360 Depth Estimation via Bi-Projection Fusion
MIT License
173 stars 28 forks

Out of memory #17

Open josvanr opened 2 years ago

josvanr commented 2 years ago

Hello!

I'm trying to run your script but run into memory problems. I'm not sure how to tackle this; I tried a smaller sample image, but even at 100x50 I still run out of memory, so maybe it isn't related to the image size? The error messages are quoted below, and the numbers in them don't seem to depend on the image size either. Watching GPU memory usage while the script runs, it starts at roughly zero, climbs to the 2048 MB maximum, and then the script exits.

Any suggestions are welcome!

thnx.

```
Test Data Num: 1
Load: BiFuse_Pretrained.pkl
Traceback (most recent call last):
  File "main.py", line 115, in <module>
    main()
  File "main.py", line 111, in main
    saver.LoadLatestModel(model, None)
  File "/sda1/bifuse/BiFuse/Utils/ModelSaver.py", line 33, in LoadLatestModel
    params = torch.load(name)
  File "/home/jos/.local/lib/python3.6/site-packages/torch/serialization.py", line 608, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/home/jos/.local/lib/python3.6/site-packages/torch/serialization.py", line 787, in _legacy_load
    result = unpickler.load()
  File "/home/jos/.local/lib/python3.6/site-packages/torch/serialization.py", line 743, in persistent_load
    deserialized_objects[root_key] = restore_location(obj, location)
  File "/home/jos/.local/lib/python3.6/site-packages/torch/serialization.py", line 175, in default_restore_location
    result = fn(storage, location)
  File "/home/jos/.local/lib/python3.6/site-packages/torch/serialization.py", line 155, in _cuda_deserialize
    return storage_type(obj.size())
  File "/home/jos/.local/lib/python3.6/site-packages/torch/cuda/__init__.py", line 606, in _lazy_new
    return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 1.96 GiB total capacity; 1.25 GiB already allocated; 15.56 MiB free; 1.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

josvanr commented 2 years ago

Following a suggestion from an earlier thread, I did get it to work without the GPU (using the CPU only)...
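For reference, the traceback shows the OOM happening inside `torch.load` itself: the checkpoint's CUDA storages are being restored directly onto the GPU during deserialization, which is why the image size makes no difference. The usual workaround is to pass `map_location='cpu'` to `torch.load` so the tensors land in host RAM first. A minimal sketch, assuming a tiny stand-in checkpoint (the filename and state-dict layout here are made up for illustration; in BiFuse the same change would go at the `torch.load(name)` call in `Utils/ModelSaver.py`):

```python
import torch

# Save a tiny stand-in checkpoint (stands in for BiFuse_Pretrained.pkl).
state = {"weight": torch.zeros(3, 3)}
torch.save(state, "checkpoint.pkl")

# map_location='cpu' remaps every stored tensor to host memory during
# deserialization, so no CUDA allocation happens at load time.
params = torch.load("checkpoint.pkl", map_location="cpu")

# Move the weights to the GPU afterwards only if one is available
# (and has enough free memory); otherwise keep running on the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
params = {k: v.to(device) for k, v in params.items()}
```

This only fixes the load-time failure; if the model itself does not fit in the ~2 GB of GPU memory reported above, running entirely on the CPU (as in the comment above) remains the fallback.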