Hi,
I tried to run bundleSDF using the two datasets provided, i.e., milk dataset, and HO3D dataset. For the milk dataset the following command was issued:
python run_custom.py --mode run_video --video_dir 2022-11-18-15-10-24_milk --out_folder bundlesdf_2022-11-18-15-10-24_milk --use_segmenter 1 --use_gui 0 --debug_level 2
For HO3D dataset the following command was issued:
python run_ho3d.py --video_dirs /mnt/9a72c439-d0a7-45e8-8d20-d7a235d02763/DATASET/HO3D_v3/evaluation/SM1 --out_dir /home/bowen/debug/ho3d_ours
In both cases, the program runs for a while but then stops with the following message:
connection reset by peer
It was preceded by "RuntimeError: Unable to find a valid cuDNN algorithm to run convolution" when training on the host machine with a GPU. Is the above error due to the low specs of my machine?
My GPU specs:
VRAM: 122 MiB / 7973 MiB (~8 GB)
PyTorch 1.11
CUDA 11.3
What are reasonable specifications for the GPU model and memory that would allow the program to run to completion?
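For reference, in case the error is memory-related, here is a minimal sketch (assuming PyTorch 1.11 with a visible CUDA device; report_gpu_memory is just an illustrative helper, not part of BundleSDF) of how the card's total and currently used VRAM can be queried from Python:

import torch

def report_gpu_memory(device_index=0):
    # Query the device name and total VRAM reported by the driver,
    # plus how much PyTorch has currently allocated/reserved.
    props = torch.cuda.get_device_properties(device_index)
    total_gb = props.total_memory / 1024**3
    allocated_gb = torch.cuda.memory_allocated(device_index) / 1024**3
    reserved_gb = torch.cuda.memory_reserved(device_index) / 1024**3
    print(f"GPU: {props.name}")
    print(f"Total VRAM: {total_gb:.2f} GB")
    print(f"Allocated:  {allocated_gb:.2f} GB")
    print(f"Reserved:   {reserved_gb:.2f} GB")

if torch.cuda.is_available():
    report_gpu_memory()
else:
    print("No CUDA device visible to PyTorch")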
thanks,