Open · lorenzobattelli opened this issue 3 years ago
Hi @lorenzobattelli,
Is your target device running Pynq-DPU 1.2 or Pynq-DPU 1.3?
Could you paste a screenshot or a log of the error message?
Besides, after converting the darknet model to the Caffe model, did you make a copy of your original "xxx.prototxt" file, apply the following modification to that copy ("your_copy.prototxt"), and then run the quantization step with the modified prototxt?
STEP 2: MODEL Quantization
*1. Before quantizing the model, we need to make a minor modification to the .prototxt file so that it points to the calibration images. Make a new copy of the prototxt file and make the following edits:
```
name: "Darkent2Caffe"
#input_dim: 1
#input_dim: 3
#input_dim: 416
#input_dim: 416
####Change input data layer to VOC validation images #####
layer {
  name: "data"
  type: "ImageData"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mirror: false
    yolo_height: 416  # change height according to Darknet model
    yolo_width: 416   # change width according to Darknet model
  }
  image_data_param {
    source: "voc/calib.txt"  # list of calibration images
    root_folder: "images/"   # path to calibration images
    batch_size: 1
    shuffle: false
  }
}
#####No changes to the below layers#####
```
*2. Note that the calibration list file (the .txt passed as `source`) needs to be in two-column format for quantization to run: the first column is the image file name and the second column is a label. For quantize calibration, images without real labels are enough, so the second column can simply be set to 0 (see the sketch after these notes).
*3. Note that the paths to the calibration images must be valid inside the Docker environment: the tools run inside the Vitis AI container, so write the paths as they are seen from within the mounted workspace, not as they appear on your host machine.
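For reference, here is a minimal sketch of how such a two-column list could be generated. It assumes the calibration JPEGs sit in images/ (the root_folder from the prototxt above) and that the list is written to voc/calib.txt; these paths are only examples and must resolve inside the Docker container.
```sh
# Minimal sketch (not from the tutorial): build a two-column calib.txt,
# "<file relative to root_folder> <dummy label>", with the label fixed to 0.
(cd images && ls *.jpg) | awk '{print $0, 0}' > voc/calib.txt

head -3 voc/calib.txt
# Example output (file names are hypothetical):
# 000001.jpg 0
# 000002.jpg 0
# 000003.jpg 0
```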
```sh
vai_q_caffe quantize -model ../dpu1.3.2_caffe_model/v4_leacky_quanti.prototxt -keep_fixed_neuron -calib_iter 3 -weights ../dpu1.3.2_caffe_model/v4_leacky.caffemodel -sigmoided_layers layer133-conv,layer144-conv,layer155-conv -output_dir ../dpu1.3.2_caffe_model/ -method 1
```
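If quantization succeeds, the directory passed as -output_dir should contain the deploy files that the compile step below expects. A quick sanity check (a sketch only; the exact file names can vary between Vitis AI versions):
```sh
ls ../dpu1.3.2_caffe_model/
# Typically includes something like:
#   deploy.prototxt  deploy.caffemodel
#   quantize_train_test.prototxt  quantize_train_test.caffemodel
```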
STEP 3: MODEL COMPILE
```sh
vai_c_caffe --prototxt ../dpu1.3.2_caffe_model/original_model_quanti/deploy.prototxt --caffemodel ../dpu1.3.2_caffe_model/original_model_quanti/deploy.caffemodel --arch ./u96pynq_v2.json --output_dir ../dpu1.3.2_caffe_model/ --net_name dpu1-3-2_v4_voc --options "{'mode':'normal','save_kernel':''}";
```
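After compilation, the model has to be placed where the Vitis AI Library samples look for it on the target. The layout below is only a sketch of the usual Vitis AI 1.3-style install; the directory name, the .xmodel and the <net_name>.prototxt (the runtime configuration with classes, anchors and thresholds, which is not the same file as the Caffe prototxt used for quantization) all have to match the name passed to the sample. If one of these pieces is missing or the names do not match, the sample can fail at load time.
```sh
# Sketch only: deploy the compiled model on the board.
# MODEL must match the --net_name used above; paths and file names are assumptions.
MODEL=dpu1-3-2_v4_voc
sudo mkdir -p /usr/share/vitis_ai_library/models/${MODEL}
sudo cp ${MODEL}.xmodel   /usr/share/vitis_ai_library/models/${MODEL}/
sudo cp ${MODEL}.prototxt /usr/share/vitis_ai_library/models/${MODEL}/   # runtime config, not the Caffe prototxt
cd /usr/share/vitis_ai_library/samples/yolov4
./test_jpeg_yolov4 ${MODEL} sample_yolov4.jpg
```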
First of all, thank you for your fast feedback.
1) Our board is a Xilinx Zynq UltraScale+ MPSoC (DPU fingerprint DPUCZDX8G_ISA0_B4096_MAX_BG2), and we are working in C++ as the tutorial does.
2) The error message is simply "Segmentation fault" after running ./test_jpeg_yolov4 on our board (see the gdb sketch after this list).
3) Yes, I used my voc/yolov4.prototxt generated by the script bash script/darknet_convert.sh, which I ran inside the Docker container as the first step of the conversion process. I then edited that file by adding those instructions at the top, as you showed me.
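One way to get more information than a bare "Segmentation fault" is to run the sample under gdb on the board and print a backtrace. A minimal sketch, assuming gdb is available on the target; the arguments are placeholders:
```sh
# Run the sample under gdb and capture a backtrace after the crash.
gdb --args ./test_jpeg_yolov4 <model_name> <image.jpg>
# inside gdb:
#   (gdb) run
#   (gdb) bt
```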
Greetings!
Dear all, I followed this tutorial about converting a darknet-trained model into a quantized model, but when trying to run the test_jpeg command in the directory /usr/share/vitis_ai_library/samples/yolov4/, it ends with a segmentation fault. We guess it is a matter of the prototxt (?). If so, is there any way to generate it?
Thank you, my best regards