Open CC-Hi opened 3 years ago
@CC-Hi I haven't tried converting pruned darknet weights before. How did you prune the model?
@david8862 https://github.com/SpursLipu/YOLOv3v4-ModelCompression-MultidatasetTraining-Multibackbone I pruned the model with this project.
I didn't try with this project
Hi @CC-Hi, I also pruned a network using the repo you mentioned and ran into the same error as you. Did you manage to solve this issue in the end?
Thanks.
Hi @CC-Hi, the error is caused by the batch_normalize=0 entry in each convolutional layer before a yolo layer in the pruned cfg file. Simply removing that line will resolve the error.
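If it helps, here is a minimal sketch of that fix as a script. The function name and file paths are illustrative, not part of any project here; it just drops every `batch_normalize=0` line from a pruned Darknet cfg while leaving `batch_normalize=1` layers untouched:

```python
def strip_bn_zero(src_path, dst_path):
    """Copy a Darknet cfg, dropping lines that set batch_normalize=0."""
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            # Normalize spacing so "batch_normalize = 0" is also caught.
            if line.strip().replace(" ", "") == "batch_normalize=0":
                continue  # skip the key that breaks the converter
            dst.write(line)

# Example (paths are placeholders):
# strip_bn_zero("yolov3-pruned.cfg", "yolov3-pruned-fixed.cfg")
```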
Thank you @lyliew . I'll try it later.
@lyliew Hi, it worked after removing the batch_normalize=0 lines, and the model compiles with Vitis-AI 1.3. However, after I used Vitis-AI 1.3 to compile the pruned cfg file (already quantized) and generated the .elf file for PYNQ-DPU 1.2, it would not run when I deployed it on PYNQ-DPU 1.2. Have you succeeded in deploying on PYNQ-DPU 1.2?
Hi @chumingqian,
I was able to quantize and compile the model and deploy it on a ZCU104; the only problem was that the loss of the compiled model was quite high when running on the system. I am currently fine-tuning the model and will update you with the latest result. AFAIK, Vitis-AI v1.3 can only compile a model to an xmodel file, and I think PYNQ-DPU v1.2 expects a model compiled by Vitis-AI v1.2. Maybe you can switch back to Vitis-AI v1.2 to compile the model and try to deploy it again?
Hi @lyliew, I deployed the .elf file (compiled with Vitis-AI 1.3) on the Ultra96-V2. I also generated the xmodel (also with Vitis-AI 1.3), but it didn't run on the Ultra96. Thank you again.
Hi @chumingqian, did you use the prebuilt img v1.3 for the Ultra96-V2? From my experience, img v1.3.0 is a little weird, because I was also unable to deploy my compiled xmodel on the Ultra96-V2 and ZCU104. For your reference, my compiled .elf model works on the Ultra96-V2 using Vitis-AI 1.2 and the prebuilt img v1.2. As for the ZCU104, I tested my xmodel using img v1.3.1 and it worked.
Hi @lyliew, I used the prebuilt image v2.6 for the Ultra96-V2 from this site: http://www.pynq.io/board.html.
Hi @chumingqian, you can try this Vitis-AI v1.2 prebuilt img for the Ultra96-V2: https://www.hackster.io/AlbertaBeef/vitis-ai-1-2-flow-for-avnet-vitis-platforms-7cb3aa Then you need to use the v1.2 workflow to compile the model into .elf format and deploy it. The pruned model should be able to run.
Hi @lyliew: I succeeded in deploying the .elf file on the Ultra96-V2, but I failed to deploy the xmodel. Thanks for your help.
Hi @chumingqian, did your pruned elf model work on ultra96v2?
Hi @lyliew: Yes, the pruned .elf model works on the Ultra96-V2, deployed following your suggestion to remove the batch_normalize=0 lines. My pruning method is channel pruning; the comparison between the unpruned and pruned results is at the end here.
```
Traceback (most recent call last):
  File "../keras-YOLOv3-model-set/tools/model_converter/convert.py", line 407, in <module>
    _main(parser.parse_args())
  File "../keras-YOLOv3-model-set/tools/model_converter/convert.py", line 193, in _main
    buffer=weights_file.read(weights_size * 4))
TypeError: buffer is too small for requested array
```
I have verified that the cfg and weights match. My original weights file is 256.4 MB; the weights file after pruning is 137.6 MB. Of course, I have two cfg files for the two weights files, and they correspond.
Does convert.py support converting a pruned network?
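For anyone hitting the same traceback: this TypeError comes from NumPy itself, when `np.ndarray(..., buffer=...)` is handed fewer bytes than the requested shape needs. A pruned .weights file has fewer filters than the original cfg declares, so the converter eventually asks for more bytes than remain in the file. A minimal reproduction of the NumPy behavior (the sizes here are made up for illustration):

```python
import numpy as np

# 8 float32 values = 32 bytes available in the buffer.
buf = np.zeros(8, dtype=np.float32).tobytes()

try:
    # Requesting 16 float32 values = 64 bytes, more than the buffer holds.
    np.ndarray(shape=(16,), dtype=np.float32, buffer=buf)
except TypeError as e:
    print(e)  # prints "buffer is too small for requested array"
```

So the fix is not in convert.py but in making the cfg describe exactly the pruned layer sizes (and, per the comments above, removing the stray batch_normalize=0 entries).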