MVIG-SJTU / AlphaPose

Real-Time and Accurate Full-Body Multi-Person Pose Estimation & Tracking System
http://mvig.org/research/alphapose.html

Change yolov3 to yolov5 #712

Open pentadotddot opened 3 years ago

pentadotddot commented 3 years ago

Hi!

Outstanding implementation, thank you!

Do you think it is possible, without much effort, to change yolov3 to yolov5 in this repo, e.g. by copy-pasting it and changing a few lines in the code?

Thank you in advance!

Fang-Haoshu commented 3 years ago

Hi, we are currently using yolov3-spp, which offers almost the same accuracy-speed trade-off as yolov4 and yolov5.

btalberg commented 3 years ago

@pentadotddot - You could take a look at our fork, where we're using YoloV4. Our implementation leans on a pip package we created. This package is based on another Pytorch-YoloV4 fork we maintain.

Fang-Haoshu commented 3 years ago

Wow, awesome! @btalberg Did you compare the speed-accuracy trade-off against V3-SPP? How does yolo-v4 perform?

btalberg commented 3 years ago

Hi @Fang-Haoshu, I didn't test/profile against the official COCO dataset. We train on a custom dataset, so I've been evaluating yolo-v4 based on how well it performs on our data. FWIW, I've noticed a slight improvement in BBOX AP scores. I haven't profiled speed, but we haven't noticed any major drop (or improvement) in performance. What I'm seeing is in line with what you hypothesized.

Name         Epochs   AP @ IoU=0.5   AP @ IoU=0.75
YoloV3-SPP   3500     81.62%         23.49%
YoloV4       3500     83.54%         56.86%

Oh, and I'm doing transfer learning with a relatively small dataset (we're slowly adding more images and annotations). Right now we have Train = 149 images & 1,343 annotations, Val = 64 images & 505 annotations.

btalberg commented 3 years ago

I should add that, with more boxes identified, our keypoint AP improves marginally as well:

Name                                AP @ IoU=0.5
Fastpose_DUC_Resnet152-YoloV3-SPP   87.5%
Fastpose_DUC_Resnet152-YoloV4       88.3%

Fang-Haoshu commented 3 years ago

Oh I see. Thanks a lot!

gmt710 commented 3 years ago

@pentadotddot - You could take a look at our fork, where we're using YoloV4. Our implementation leans on a pip package we created. This package is based on another Pytorch-YoloV4 fork we maintain.

Hello @btalberg, thanks for sharing. I want to know how to use your code; would you mind sharing some ideas?

gmt710 commented 3 years ago

@Fang-Haoshu, hello, I have just finished AlphaPose-yolov5. If I want to release the code, do I need to add anything to explain or credit it?

@btalberg, hello, many thanks for sharing; it's a great convenience for me. I also use your code, so it's AlphaPose-yolovx.

btalberg commented 3 years ago

Hello @gmt710. It sounds like you got it running, but just in case: you should only need to update your config file (see below) to use the wfyolov4 detector. You'll then need to point it at a yolov4 config and weights file. I'm using custom-trained weights, but you could just download the yolov4 weights from Darknet. And make sure to specify --detector yolov4 when running inference. Apologies if any of these instructions don't work for you; it's been several months since I last had to train or run AlphaPose.

DETECTOR:
  NAME: yolov4
  CONFIG: ./data/cfgs/yolov4.wf.0.2.cfg
  WEIGHTS: ./data/cfgs/yolov4.wf.0.2.weights
  NMS_THRES: 0.6
  CONFIDENCE: 0.05
  NUM_CLASSES: 1
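
For example, an inference run against this config might look something like the following (a sketch; the pose cfg and checkpoint paths are illustrative and should match whatever pose model you're running):

python scripts/demo_inference.py \
    --cfg configs/coco/resnet/256x192_res50_lr1e-3_2x-dcn.yaml \
    --checkpoint pretrained_models/fast_dcn_res50_256x192.pth \
    --indir examples/demo/ \
    --detector yolov4 \
    --save_img --sp
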
gmt710 commented 3 years ago

@btalberg Wow, thanks a lot. Yes, your code is clear and easy to use. If I want to use your wfyolov4 detector, where can I get the weights? I haven't found them in your Pytorch-YoloV4 repo.

btalberg commented 3 years ago

You have to download it manually from Darknet: https://github.com/AlexeyAB/darknet/wiki/YOLOv4-model-zoo. You can either drop it into detector/yolo_v4/data or update your config file to point to it.
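
For instance, a config pointing at the official COCO-trained weights might look like this (a sketch; the file names assume the standard yolov4.cfg/yolov4.weights pair from the model zoo, and NUM_CLASSES becomes 80 for the COCO models, unlike the single-class config above):

DETECTOR:
  NAME: yolov4
  CONFIG: ./detector/yolo_v4/data/yolov4.cfg
  WEIGHTS: ./detector/yolo_v4/data/yolov4.weights
  NMS_THRES: 0.6
  CONFIDENCE: 0.05
  NUM_CLASSES: 80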

wsypy commented 3 years ago

Hello @gmt710. It sounds like you got it running, but just in case: you should only need to update your config file (see below) to use the wfyolov4 detector. You'll then need to point it at a yolov4 config and weights file. I'm using custom-trained weights, but you could just download the yolov4 weights from Darknet. And make sure to specify --detector yolov4 when running inference. Apologies if any of these instructions don't work for you; it's been several months since I last had to train or run AlphaPose.

DETECTOR:
  NAME: yolov4
  CONFIG: ./data/cfgs/yolov4.wf.0.2.cfg
  WEIGHTS: ./data/cfgs/yolov4.wf.0.2.weights
  NMS_THRES: 0.6
  CONFIDENCE: 0.05
  NUM_CLASSES: 1

Could you please tell me where to modify the detector settings quoted above? In which file do I make the changes, and does this apply to yolov3 or v4? Thank you.

wsypy commented 3 years ago

You have to download it manually from Darknet: https://github.com/AlexeyAB/darknet/wiki/YOLOv4-model-zoo. You can either drop it into detector/yolo_v4/data or update your config file to point to it.

Hello, when I used yolov4, the following error occurred. What is the reason? Thank you for your help:

python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_2x-dcn.yaml --checkpoint pretrained_models/fast_dcn_res50_256x192.pth --indir examples/demo/ --vis --showbox --save_img --pose_track --sp --detector yolov4

Loading YOLOv4 model..
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/media/yons/1/conda3/envs/lxyalphav4-3.6/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/media/yons/1/conda3/envs/lxyalphav4-3.6/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/media/yons/1/code/temp/AlphaPose-Yolov4/alphapose/utils/detector.py", line 223, in image_detection
    dets = self.detector.images_detection(imgs, im_dim_list)
  File "/media/yons/1/code/temp/AlphaPose-Yolov4/detector/yolov4_api.py", line 77, in images_detection
    self.load_model()
  File "/media/yons/1/code/temp/AlphaPose-Yolov4/detector/yolov4_api.py", line 49, in load_model
    self.model = Detector(configfile=self.model_cfg, weightsfile=self.model_weights, conf_threshold=self.confidence, nms_threshold=self.nms_thresh, device_ids=args.gpus, default_device=args.device)
  File "/media/yons/1/code/temp/AlphaPose-Yolov4/detector/yolo_v4/detect.py", line 24, in __init__
    self._init_detector()
  File "/media/yons/1/code/temp/AlphaPose-Yolov4/detector/yolo_v4/detect.py", line 30, in _init_detector
    detector = Darknet(self.config_file, use_cuda=self.use_cuda)
  File "/home/yons/.local/lib/python3.6/site-packages/wf_pytorch_yolo_v4-0.1.12-py3.6.egg/yolov4/tool/darknet2pytorch.py", line 136, in __init__
    self.models = self.create_network(self.blocks)  # merge conv, bn, leaky
  File "/home/yons/.local/lib/python3.6/site-packages/wf_pytorch_yolo_v4-0.1.12-py3.6.egg/yolov4/tool/darknet2pytorch.py", line 406, in create_network
    yolo_layer.scale_x_y = float(block['scale_x_y'])
KeyError: 'scale_x_y'

Loading pose model from pretrained_models/fast_dcn_res50_256x192.pth...
loading reid model from trackers/weights/osnet_ain_x1_0_msmt17_256x128_amsgrad_ep50_lr0.0015_coslr_b64_fb10_softmax_labsmth_flip_jitter.pth...
  0%| | 0/3 [00:00<?, ?it/s]
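
(For anyone hitting this: the traceback shows darknet2pytorch.py reading scale_x_y unconditionally from each [yolo] block of the cfg. yolov4 cfgs define scale_x_y, while yolov3-style cfgs omit it, so one likely cause is pointing the yolov4 detector at a cfg without that key. A possible workaround, sketched against the line in the traceback and not verified against the wf_pytorch_yolo_v4 package, is to fall back to Darknet's default of 1.0:)

# yolov4/tool/darknet2pytorch.py, in create_network() -- a sketch, not a verified patch.
# Default scale_x_y to 1.0 when the cfg's [yolo] block omits it (yolov3-style cfgs).
yolo_layer.scale_x_y = float(block.get('scale_x_y', 1.0))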

hongyaohongyao commented 3 years ago

@Fang-Haoshu, hello, I have just finished AlphaPose-yolov5. If I want to release the code, do I need to add anything to explain or credit it?

@btalberg, hello, many thanks for sharing; it's a great convenience for me. I also use your code, so it's AlphaPose-yolovx.

Have you published your code for Alphapose-yolov5?

samymdihi commented 3 years ago

Any news regarding the yolov5 implementation?

btalberg commented 3 years ago

I added support for YoloV4 in my fork but chose not to support YoloV5. I stuck with the original Yolo project maintained by AlexeyAB and trained using his Darknet framework. My understanding is that "YoloV5" is based on Yolo (same backbone, I believe) but is maintained by Ultralytics and trained in Pytorch.