GuillaumeRochette / PanopticProcessing


Preprocessing the dataset seems cumbersome #3

Closed mioyeah closed 1 year ago

mioyeah commented 1 year ago

I encountered many difficulties in preprocessing the dataset. Can you provide a processed dataset?

GuillaumeRochette commented 1 year ago

What difficulties did you encounter? I have finished my PhD and I am no longer working at the university of Surrey, so it will be difficult for me to get access to the data.

mioyeah commented 1 year ago

I encountered an error while running the script:

```bash
src_video=/home/mio/work/project/PanopticProcessing-master/datasets/Panoptic/171026_pose1/Videos/00.mp4 && \
dst_video=/home/mio/work/project/PanopticProcessing-master/datasets/Panoptic/171026_pose1/Masks/SOLO/00.mp4 && \
policy=aggregate && \
labels= && \
SOLOv2_ROOT=$WRKSPCE/SOLOv2 && \
git clone https://github.com/GuillaumeRochette/SOLOv2.git $SOLOv2_ROOT && \
cd $SOLOv2_ROOT && \
exec python run_video.py \
    --src_video $src_video \
    --dst_video $dst_video \
    --cfg $CFG_SOLOv2_X101_DCN_3x \
    --ckpt $CKPT_SOLOv2_X101_DCN_3x \
    --policy $policy \
    --labels $labels
```

```
Cloning into '/workspace/SOLOv2'...
Matplotlib created a temporary config/cache directory at /tmp/matplotlib-ky3g5ll7 because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
0it [00:00, ?it/s]
/home/mio/work/project/PanopticProcessing-master/datasets/Panoptic/171026_pose1/Videos/00.mp4
/home/mio/work/project/PanopticProcessing-master/datasets/Panoptic/171026_pose1/Masks/SOLO/00.mp4
/workspace/SOLO/configs/solov2/solov2_x101_dcn_fpn_8gpu_3x.py
/workspace/SOLO/checkpoints/SOLOv2_X101_DCN_3x.pth
/tmp/TEMP_1692355881.2927341
Traceback (most recent call last):
  File "run_video.py", line 63, in <module>
    (result,) = inference_detector(model, image)
  File "/workspace/SOLO/mmdet/apis/inference.py", line 88, in inference_detector
    result = model(return_loss=False, rescale=True, **data)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/workspace/SOLO/mmdet/core/fp16/decorators.py", line 49, in new_func
    return old_func(*args, **kwargs)
  File "/workspace/SOLO/mmdet/models/detectors/base.py", line 144, in forward
    return self.forward_test(img, img_meta, **kwargs)
  File "/workspace/SOLO/mmdet/models/detectors/base.py", line 127, in forward_test
    return self.simple_test(imgs[0], img_metas[0], **kwargs)
  File "/workspace/SOLO/mmdet/models/detectors/single_stage_ins.py", line 82, in simple_test
    x = self.extract_feat(img)
  File "/workspace/SOLO/mmdet/models/detectors/single_stage_ins.py", line 50, in extract_feat
    x = self.backbone(img)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/workspace/SOLO/mmdet/models/backbones/resnet.py", line 499, in forward
    x = self.relu(x)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/activation.py", line 94, in forward
    return F.relu(input, inplace=self.inplace)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py", line 912, in relu
    result = torch.relu(input)
RuntimeError: CUDA error: no kernel image is available for execution on the device
```

This is perhaps because the Docker container was built with CUDA 10.1, while my graphics card is an RTX 3090; it seems CUDA 10.1 cannot target that card.
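For context: the RTX 3090 is an Ampere card with compute capability 8.6, and kernels for that capability can only be built with CUDA 11.1 or newer, so binaries compiled under CUDA 10.1 have no kernel image for it at runtime, which is exactly the error above. A minimal illustrative sketch of the version check (the capability table and function names below are my own summary, not part of this repo or any library API):

```python
# Hand-written summary of the minimum CUDA toolkit version able to emit
# kernels for a given GPU compute capability (not an official API).
MIN_CUDA_FOR_CAPABILITY = {
    (7, 0): "9.0",   # Volta (V100)
    (7, 5): "10.0",  # Turing (RTX 20xx)
    (8, 0): "11.0",  # Ampere (A100)
    (8, 6): "11.1",  # Ampere (RTX 30xx, e.g. RTX 3090)
    (8, 9): "11.8",  # Ada Lovelace (RTX 40xx)
}

def toolkit_supports(cuda_version: str, capability: tuple) -> bool:
    """Return True if a toolkit of this version can emit kernels for the GPU."""
    required = MIN_CUDA_FOR_CAPABILITY[capability]
    as_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return as_tuple(cuda_version) >= as_tuple(required)

# The container ships CUDA 10.1; an RTX 3090 reports capability (8, 6):
print(toolkit_supports("10.1", (8, 6)))  # False -> "no kernel image" at runtime
print(toolkit_supports("11.1", (8, 6)))  # True
```

Inside the container, `torch.version.cuda` and `torch.cuda.get_device_capability()` report the two sides of this mismatch directly.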

GuillaumeRochette commented 1 year ago

Yes, indeed, that's a problem ... I guess you could either not pass the GPU into the Docker container and run these on the CPU, which would be slow, or run another, maybe more recent, segmentation model on the data, such as SwinV2 or SAM. The idea there was just to segment the humans from the background by extracting binary masks; I don't think the exact same model is required.
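Since the goal is only a binary person-vs-background mask (and the command above passes `policy=aggregate`), any instance-segmentation model that outputs per-instance masks with class labels could feed a merging step like the one sketched below. This is a hypothetical NumPy sketch, not code from this repo; the function name, the threshold, and the "0 = person" class index (common in COCO-trained detector outputs, but model-dependent) are my assumptions:

```python
import numpy as np

def aggregate_person_masks(instance_masks, labels, person_label=0, threshold=0.5):
    """Merge instance-segmentation outputs into one binary person-vs-background
    mask, keeping only instances classified as 'person'.

    instance_masks: (N, H, W) array of per-instance soft masks in [0, 1].
    labels: (N,) array of predicted class indices.
    """
    masks = np.asarray(instance_masks, dtype=np.float32)
    labels = np.asarray(labels)
    keep = labels == person_label
    if not keep.any():
        # No person detected: the whole frame is background.
        return np.zeros(masks.shape[1:], dtype=np.uint8)
    # A pixel is foreground if any kept instance covers it above the threshold.
    combined = (masks[keep] >= threshold).any(axis=0)
    return combined.astype(np.uint8)

# Two detections: a person and some other object; only the person mask survives.
masks = np.zeros((2, 4, 4), dtype=np.float32)
masks[0, 1:3, 1:3] = 0.9    # person instance covering a 2x2 region
masks[1, 0, :] = 0.8        # non-person instance
labels = np.array([0, 56])  # assumed class indices: 0 = person
out = aggregate_person_masks(masks, labels)
print(out.sum())  # 4 foreground pixels
```

Running this per frame and writing the masks back out as video would reproduce the shape of the original pipeline, whichever segmentation backbone produces the instance masks.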