Open danial880 opened 4 years ago
I met the same problem
@danial880 Hi, did you solve the problem above?
@xiezhongzhao @danial880 Are you sure the name of the input video for --viz-subject (input_video.mp4) is correct, and that "squat.mp4" is the full relative path to the input video? It looks to me like "input_video.mp4" should probably be "squat.mp4." I think you just forgot to change it from the template they gave.
Tried it, it shows the same error again.
@danial880 You may want to look into the npz file you generated from the Detectron step. Try adding a line in run.py, e.g. print(keypoints.keys()), to see what keypoints.keys()
gives you.
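A minimal sketch of that check, assuming the custom archive produced in step 4 is named data/data_2d_custom_myvideos.npz (adjust the path to whatever you passed to -o):

# Inspect the 2D keypoints archive that run.py loads.
# The filename below is an assumption; use data/data_2d_custom_<your -o name>.npz.
import numpy as np

archive = np.load('data/data_2d_custom_myvideos.npz', allow_pickle=True)
keypoints = archive['positions_2d'].item()

# run.py looks up this dict by the value of --viz-subject, so the keys should
# be your video filenames (e.g. 'squat.mp4'). An empty dict means step 4
# found no detections to package.
print(list(keypoints.keys()))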
I met the same problem, and @yangchris11 you are correct, my keypoints
dict is empty. So has anyone figured out why step 4 produces an empty result?
Met the same issue, any updates so far? I can clearly see some detection results in step 2, but they go missing after step 4.
I guess the reason is that, when running step 4, /path/to/detections/output_directory
doesn't point to the output of step 3.
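One quick way to check that guess (a hypothetical snippet, not from the repo; substitute the directory you actually pass to step 4):

# Verify that the directory given to prepare_data_2d_custom.py -i contains
# the per-video .npz files written by the Detectron inference step.
import glob
detections_dir = '/path/to/detections/output_directory'  # same path used in steps 3 and 4
print(glob.glob(detections_dir + '/*.npz'))  # should list one .npz per input video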
Finally got it working after an hour or so. Here is a bash script that runs the whole pipeline on a custom video in one step. Install detectron2 with
git clone https://github.com/facebookresearch/detectron2
python -m pip install -e detectron2
Tested with:
torch 1.7.1
torchsummary 1.5.1
torchvision 0.8.2
#!/bin/bash
# Run from the VideoPose3D repository root.
input_video=my_video.mp4
input_dir=/path/to/my/video_dir/
output_dir=${input_dir}results3D/
mkdir -p $output_dir

# Infer 2D keypoints with Detectron2
python inference/infer_video_d2.py \
    --cfg COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x.yaml \
    --output-dir $output_dir \
    --image-ext mp4 \
    $input_dir

# Create the custom 2D dataset from the detections
cd data
python prepare_data_2d_custom.py -i $output_dir -o detectron2_pt_coco
cd ..

# Render the 3D prediction for the input video
python run.py -d custom \
    -k detectron2_pt_coco \
    -arc 3,3,3,3,3 \
    -c checkpoint \
    --evaluate pretrained_h36m_detectron_coco.bin \
    --render --viz-subject $input_video \
    --viz-action custom \
    --viz-camera 0 \
    --viz-video $input_dir$input_video \
    --viz-output $input_dir${input_video::-4}_results3D.mp4 \
    --viz-size 6
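Note that --viz-subject is set here to the actual video filename ($input_video): prepare_data_2d_custom.py uses the filename as the subject key, and run.py looks that key up, so a mismatch there is exactly what raises the KeyError reported in this thread.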
I'm still having this issue as well. The above bash script did not work for me.
Ubuntu 18.04, torch 1.5.0, torchvision 0.6.0
Command used: python3 run.py -d custom -k myvideos -arc 3,3,3,3,3 -c checkpoint --evaluate pretrained_h36m_detectron_coco.bin --render --viz-subject input_video.mp4 --viz-action custom --viz-camera 0 --viz-video squat.mp4 --viz-export output.mp4 --viz-size 6
ERROR:
Namespace(actions='*', architecture='3,3,3,3,3', batch_size=1024, bone_length_term=True, by_subject=False, causal=False, channels=1024, checkpoint='checkpoint', checkpoint_frequency=10, data_augmentation=True, dataset='custom', dense=False, disable_optimizations=False, downsample=1, dropout=0.25, epochs=60, evaluate='pretrained_h36m_detectron_coco.bin', export_training_curves=False, keypoints='myvideos', learning_rate=0.001, linear_projection=False, lr_decay=0.95, no_eval=False, no_proj=False, render=True, resume='', stride=1, subjects_test='S9,S11', subjects_train='S1,S5,S6,S7,S8', subjects_unlabeled='', subset=1, test_time_augmentation=True, viz_action='custom', viz_bitrate=3000, viz_camera=0, viz_downsample=1, viz_export='output.mp4', viz_limit=-1, viz_no_ground_truth=False, viz_output=None, viz_size=6, viz_skip=0, viz_subject='input_video.mp4', viz_video='squat.mp4', warmup=1)
Loading dataset...
Preparing data...
Loading 2D detections...
Traceback (most recent call last):
  File "run.py", line 169, in <module>
    cameras_valid, poses_valid, poses_valid_2d = fetch(subjects_test, action_filter)
  File "run.py", line 115, in fetch
    for action in keypoints[subject].keys():
KeyError: 'input_video.mp4'
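The Namespace output above confirms the mismatch pointed out earlier: viz_subject='input_video.mp4' while the input video appears to be 'squat.mp4', so the lookup keypoints[subject] fails. Re-running with --viz-subject squat.mp4 (the actual filename) should get past this particular KeyError, provided the detections npz is not empty.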