Open · leiqing1 opened this issue 1 year ago
Do you have any plans to open source your code about distillation?
Hi @Errol-golang, we plan to open-source the distillation code in the YOLOv7 repo within the next two weeks.
Thanks. I will keep an eye on it.
Hi @Errol-golang, we have submitted a PR to the YOLOv7 repo and are waiting for @WongKinYiu's review: https://github.com/WongKinYiu/yolov7/pull/612
@leiqing1 Hi. Can you guide me on compressing the yolo-w6-pose model?
Hello, we have implemented a Source-Free compression training feature, and its benefits on YOLOv7 are as follows. We would like to submit a PR, is that OK?
We use knowledge distillation and PACT quantization to improve the compression efficiency of YOLOv7. The knowledge distillation technique can automatically add training logic to an AI model. First, it loads the inference model file specified by the user and copies the model in memory: the copy serves as the teacher in knowledge distillation, while the original model serves as the student. Then, the model structure is automatically analyzed to find a layer suitable for attaching a distillation loss, usually the last layer with trainable parameters. Finally, the teacher model supervises the sparse or quantized training of the original model through the distillation loss. The process is shown in the figure below, and a minimal code sketch of the idea follows.
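For illustration only, here is a minimal PyTorch sketch of the teacher/student scheme described above. The helper names (`build_distill_pair`, `distill_step`) are hypothetical and this is not the code in the PR; the actual feature picks the distillation layer automatically, while this sketch simply distills on the model's final output.

```python
import copy

import torch
import torch.nn.functional as F


def build_distill_pair(model):
    # Copy the loaded model in memory: the copy becomes the frozen
    # teacher, and the original keeps training as the student.
    teacher = copy.deepcopy(model)
    teacher.eval()
    for p in teacher.parameters():
        p.requires_grad = False
    return teacher, model


def distill_step(teacher, student, images, optimizer):
    # The teacher supervises the student through a distillation loss.
    # Here the loss is attached to the raw model output; the real
    # feature attaches it to the last trainable layer it finds.
    with torch.no_grad():
        t_out = teacher(images)
    s_out = student(images)
    loss = F.mse_loss(s_out, t_out)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the Source-Free setting this loop needs no labels: the teacher's outputs replace ground truth, so sparse or quantized training can run directly from the user's inference model file.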