aixiaodewugege opened this issue 1 year ago
for example, --mot_path=/data/Dataset/mot
Thanks for your quick reply. But I think that it failed to load the config file, even if I use your settings. Did I miss anything?
Here is my error log:
(mmedit-sam) wushuchen@wushuchen:~/projects/SAM_test/Grounded-Segment-Anything$ python grounded_sam_visam.py --meta_arch motr --dataset_file e2e_dance --with_box_refine --query_interaction_layer QIMv2 --num_queries 10 --det_db det_db_motrv2.json --use_checkpoint --mot_path ./ --resume motrv2_dancetrack.pth --sam_checkpoint sam_vit_h_4b8939.pth --video_path DanceTrack/test/dancetrack0003
/home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
warnings.warn(
/home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing weights=None.
warnings.warn(msg)
Training with Self-Cross Attention.
None aaaaaaaaaaaaaaaaaaaaa
Traceback (most recent call last):
File "grounded_sam_visam.py", line 253, in
And my folder structure is as follows:
Sorry, you can use these parameters: "--meta_arch motr --dataset_file e2e_dance --with_box_refine --batch_size 1 --sample_mode random_interval --sample_interval 10 --sampler_lengths 5 --merger_dropout 0 --dropout 0 --random_drop 0.1 --fp_ratio 0.3 --query_interaction_layer QIMv2 --query_denoise 0.05 --num_queries 10 --append_crowd --det_db det_db_motrv2.json --use_checkpoint --mot_path XXX --resume motrv2_dancetrack.pth --sam_checkpoint sam_vit_h_4b8939.pth --video_path DanceTrack/test/dancetrack0003". I will update the README.
I have the same problem: TypeError: 'NoneType' object is not iterable. Thank you for your reply. Another question: what's in the e2e_dance dir?
I think it is not a dir; it just means we only detect people.
Yes, because we use the MOTRv2 weights for now. We will release our code later.
Sorry to bother again. Do you actually use groundingDINO in grounded_sam_visam.py? I can't find your code using groundingDINO detector.
No.
Ok. Thanks.
Thank you for your work, it's wonderful !
Here is my new error log when using the new command: All test image data in /vdb1/dataset/visam/mot/DanceTrack/test/dancetrack0003/img1: Looking forward to your reply~
The xxx.txt entries are in --det_db det_db_motrv2.json. You can open det_db_motrv2.json, and you will see them there.
Here, you need to delete "/vdb1/dataset/visam/mot/".
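If the problem is that the lookup paths carry an absolute prefix while the database keys are relative (or vice versa), a small helper can normalize the keys. This is a hypothetical fix-up sketch, not part of the repo; the prefix value is taken from the log above, and `strip_prefix` is my own name:

```python
import json

# Prefix seen in the user's log above (an assumption about the mismatch).
PREFIX = "/vdb1/dataset/visam/mot/"

def strip_prefix(det_db, prefix=PREFIX):
    """Return a copy of det_db whose keys no longer start with prefix."""
    return {(k[len(prefix):] if k.startswith(prefix) else k): v
            for k, v in det_db.items()}

# Usage sketch:
# with open("det_db_motrv2.json") as f:
#     det_db = json.load(f)
# with open("det_db_motrv2.json", "w") as f:
#     json.dump(strip_prefix(det_db), f)
```

Alternatively, pointing `--mot_path` at the right root so the constructed paths match the keys avoids rewriting the file at all.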
After we release our code, this problem will no longer exist.
Thanks for your guidance, I have got the final result: a "visam.avi" file~ But the end result is different from the VISAM demo in README.md.
Sure, I'm using my own weights. Looking forward to your future code.
I just want to use the demo to test my own video, so why should I download the dataset into the data folder?
So... did anyone actually get this to run? And on their own video file?
Can anybody tell me where det_db_motrv2.json is?
det_db_motrv2.json
You don't need this. If you really want to use it, you can refer to MOTRv2.
Can you provide a more specific guide on how to reproduce your demo with VISAM?