TRI-ML / permatrack

Implementation for Learning to Track with Object Permanence
MIT License

Error while testing the code #11

Closed ssbilakeri closed 2 years ago

ssbilakeri commented 2 years ago

When I run the test.py file I get the error below. Please help me fix it.

```
RuntimeError: CUDA out of memory. Tried to allocate 72.00 MiB (GPU 0; 7.93 GiB total capacity; 6.65 GiB already allocated; 48.38 MiB free; 6.92 GiB reserved in total by PyTorch) (malloc at /pytorch/c10/cuda/CUDACachingAllocator.cpp:289)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7fcbce710193 in /home/mca/anaconda3/envs/CenterTrack_new/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: + 0x1bccc (0x7fcbce951ccc in /home/mca/anaconda3/envs/CenterTrack_new/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
frame #2: + 0x1cd5e (0x7fcbce952d5e in /home/mca/anaconda3/envs/CenterTrack_new/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
frame #3: at::native::empty_cuda(c10::ArrayRef, c10::TensorOptions const&, c10::optional) + 0x284 (0x7fcbd47d96b4 in /home/mca/anaconda3/envs/CenterTrack_new/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #4: + 0x45bd7d8 (0x7fcbd31207d8 in /home/mca/anaconda3/envs/CenterTrack_new/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #5: + 0x1f4fb37 (0x7fcbd0ab2b37 in /home/mca/anaconda3/envs/CenterTrack_new/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #6: + 0x3f0f795 (0x7fcbd2a72795 in /home/mca/anaconda3/envs/CenterTrack_new/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #7: + 0x1f4fb37 (0x7fcbd0ab2b37 in /home/mca/anaconda3/envs/CenterTrack_new/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #8: std::result_of<c10::Dispatcher::callUnboxedOnly<at::Tensor, c10::ArrayRef, c10::TensorOptions const&, c10::optional >(c10::OperatorHandle const&, c10::ArrayRef, c10::TensorOptions const&, c10::optional) const::{lambda(c10::DispatchTable const&)#1} (c10::DispatchTable const&)>::type c10::LeftRight::read<c10::Dispatcher::callUnboxedOnly<at::Tensor, c10::ArrayRef, c10::TensorOptions const&, c10::optional >(c10::OperatorHandle const&, c10::ArrayRef, c10::TensorOptions const&, c10::optional) const::{lambda(c10::DispatchTable const&)#1}>(c10::Dispatcher::callUnboxedOnly<at::Tensor, c10::ArrayRef, c10::TensorOptions const&, c10::optional >(c10::OperatorHandle const&, c10::ArrayRef, c10::TensorOptions const&, c10::optional) const::{lambda(c10::DispatchTable const&)#1}&&) const + 0x18c (0x7fcbcbd1481c in /home/mca/Downloads/perm-test/src/lib/model/networks/DCNv2/_ext.cpython-36m-x86_64-linux-gnu.so)
frame #9: c10::guts::infer_function_traits<c10::Dispatcher::callUnboxedOnly<at::Tensor, c10::ArrayRef, c10::TensorOptions const&, c10::optional >(c10::OperatorHandle const&, c10::ArrayRef, c10::TensorOptions const&, c10::optional) const::{lambda(c10::DispatchTable const&)#1}>::type::return_type c10::impl::OperatorEntry::readDispatchTable<c10::Dispatcher::callUnboxedOnly<at::Tensor, c10::ArrayRef, c10::TensorOptions const&, c10::optional >(c10::OperatorHandle const&, c10::ArrayRef, c10::TensorOptions const&, c10::optional) const::{lambda(c10::DispatchTable const&)#1}>(c10::Dispatcher::callUnboxedOnly<at::Tensor, c10::ArrayRef, c10::TensorOptions const&, c10::optional >(c10::OperatorHandle const&, c10::ArrayRef, c10::TensorOptions const&, c10::optional) const::{lambda(c10::DispatchTable const&)#1}&&) const + 0x4e (0x7fcbcbd2253c in /home/mca/Downloads/perm-test/src/lib/model/networks/DCNv2/_ext.cpython-36m-x86_64-linux-gnu.so)
frame #10: at::Tensor c10::Dispatcher::callUnboxedOnly<at::Tensor, c10::ArrayRef, c10::TensorOptions const&, c10::optional >(c10::OperatorHandle const&, c10::ArrayRef, c10::TensorOptions const&, c10::optional) const + 0x9d (0x7fcbcbd1fb1b in /home/mca/Downloads/perm-test/src/lib/model/networks/DCNv2/_ext.cpython-36m-x86_64-linux-gnu.so)
frame #11: + 0x5912d (0x7fcbcbd1712d in /home/mca/Downloads/perm-test/src/lib/model/networks/DCNv2/_ext.cpython-36m-x86_64-linux-gnu.so)
frame #12: dcn_v2_cuda_forward(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, int, int, int, int, int, int, int, int, int) + 0xa59 (0x7fcbcbd1808a in /home/mca/Downloads/perm-test/src/lib/model/networks/DCNv2/_ext.cpython-36m-x86_64-linux-gnu.so)
frame #13: dcn_v2_forward(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, int, int, int, int, int, int, int, int, int) + 0x143 (0x7fcbcbcf1463 in /home/mca/Downloads/perm-test/src/lib/model/networks/DCNv2/_ext.cpython-36m-x86_64-linux-gnu.so)
frame #14: + 0x3ffff (0x7fcbcbcfdfff in /home/mca/Downloads/perm-test/src/lib/model/networks/DCNv2/_ext.cpython-36m-x86_64-linux-gnu.so)
frame #15: + 0x3d6ae (0x7fcbcbcfb6ae in /home/mca/Downloads/perm-test/src/lib/model/networks/DCNv2/_ext.cpython-36m-x86_64-linux-gnu.so)

frame #21: THPFunction_apply(_object*, _object*) + 0xa8f (0x7fcc1976a82f in /home/mca/anaconda3/envs/CenterTrack_new/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
```
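The numbers in the exception message already tell the story: the card has 7.93 GiB in total, PyTorch has reserved 6.92 GiB (6.65 GiB of it in live tensors), so the 72 MiB allocation requested inside DCNv2's dcn_v2_cuda_forward cannot be served. A minimal diagnostic sketch (not part of this repository; the function name is made up for illustration) that prints the same counters so they can be compared with the message, assuming a reasonably recent PyTorch:

```python
# Hypothetical diagnostic helper, not from permatrack: print PyTorch's view of
# GPU memory on a given device to compare with the figures in the OOM message.
import torch

def report_gpu_memory(device=0):
    props = torch.cuda.get_device_properties(device)
    print(f"total capacity: {props.total_memory / 1024**3:.2f} GiB")
    # Memory occupied by live tensors ("already allocated" in the message).
    print(f"allocated:      {torch.cuda.memory_allocated(device) / 1024**3:.2f} GiB")
    # Memory held by the caching allocator ("reserved in total by PyTorch").
    # On the older PyTorch build used in this trace the call may be named
    # torch.cuda.memory_cached instead of memory_reserved.
    print(f"reserved:       {torch.cuda.memory_reserved(device) / 1024**3:.2f} GiB")
```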
pvtokmakov commented 2 years ago

Hi,

thanks for your interest in our work! This looks like an out-of-memory exception. You need to either use a GPU with more memory or decrease the input frame resolution.
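For readers hitting the same error, a plain-PyTorch sketch of the two mitigations suggested above (this is not the repository's test.py; the helper names are illustrative, and the exact resolution flags for this CenterTrack-style codebase should be checked in its own opts):

```python
# Illustrative sketch only: two standard ways to reduce GPU memory use at test
# time. Assumes `model` is a loaded network and `frame` is an NCHW float tensor.
import torch
import torch.nn.functional as F

def downscale(frame, factor=0.5):
    # Halving height and width roughly quarters the activation memory of a
    # fully convolutional network such as the DCNv2-based backbone in the trace.
    return F.interpolate(frame, scale_factor=factor,
                         mode="bilinear", align_corners=False)

def run_inference(model, frame):
    # Inference does not need gradients; torch.no_grad() stops PyTorch from
    # keeping intermediate activations for backprop and lowers peak memory.
    with torch.no_grad():
        return model(downscale(frame))
```

If the test script already runs under torch.no_grad(), lowering the input resolution or moving to a GPU with more than 8 GiB are the remaining options.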