Xilinx / CHaiDNN

HLS based Deep Neural Network Accelerator Library for Xilinx Ultrascale+ MPSoCs

VGG-SSD300 quantization issue #111

Closed zjchenchujie closed 5 years ago

zjchenchujie commented 5 years ago

Hi, I have been working with CHaiDNN for a few days and have learned about the two quantization methods described in the documentation (https://github.com/Xilinx/CHaiDNN/blob/master/docs/QUANTIZATION.md). The "XportDNN" tool works fine with some nets such as GoogleNet_V1_NoLRN. However, I downloaded VGG-SSD300 from the model zoo (https://github.com/Xilinx/CHaiDNN/blob/master/docs/MODELZOO.md) and tried to quantize "VGGSSD_6Bit_deploy_CHaiDNN.prototxt" with "XportDNN" following the recommended instructions, after removing the "precision_param" blocks from the downloaded prototxt file. But then this issue comes up:

```
I1120 15:33:20.189744 23364 layer_factory.hpp:77] Creating layer conv9_2_conv9_2_relu_0_split
I1120 15:33:20.189750 23364 net.cpp:100] Creating Layer conv9_2_conv9_2_relu_0_split
I1120 15:33:20.189754 23364 net.cpp:480] conv9_2_conv9_2_relu_0_split <- conv9_2
I1120 15:33:20.189761 23364 net.cpp:454] conv9_2_conv9_2_relu_0_split -> conv9_2_conv9_2_relu_0_split_0
I1120 15:33:20.189767 23364 net.cpp:454] conv9_2_conv9_2_relu_0_split -> conv9_2_conv9_2_relu_0_split_1
I1120 15:33:20.189774 23364 net.cpp:454] conv9_2_conv9_2_relu_0_split -> conv9_2_conv9_2_relu_0_split_2
I1120 15:33:20.189782 23364 net.cpp:150] Setting up conv9_2_conv9_2_relu_0_split
I1120 15:33:20.189787 23364 net.cpp:157] Top shape: 1 256 1 1 (256)
I1120 15:33:20.189792 23364 net.cpp:157] Top shape: 1 256 1 1 (256)
I1120 15:33:20.189797 23364 net.cpp:157] Top shape: 1 256 1 1 (256)
I1120 15:33:20.189801 23364 net.cpp:165] Memory required for data: 235280384
I1120 15:33:20.189806 23364 layer_factory.hpp:77] Creating layer conv4_3_norm
F1120 15:33:20.189828 23364 layer_factory.hpp:81] Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: NormalizeRistretto (known types: AbsVal, Accuracy, AnnotatedData, ArgMax, BNLL, BatchNorm, BatchReindex, Bias, Concat, ContrastiveLoss, Convolution, ConvolutionRistretto, Crop, Data, Deconvolution, DeconvolutionRistretto, DetectionEvaluate, DetectionOutput, Dropout, DummyData, ELU, Eltwise, Embed, EuclideanLoss, Exp, FcRistretto, Filter, Flatten, HDF5Data, HDF5Output, HingeLoss, Im2col, ImageData, InfogainLoss, InnerProduct, Input, LRN, LRNRistretto, LSTM, LSTMUnit, Log, MVN, MemoryData, MultiBoxLoss, MultinomialLogisticLoss, Normalize, PReLU, Parameter, Permute, Pooling, Power, PriorBox, Python, RNN, ReLU, Reduction, Reshape, SPP, Scale, Sigmoid, SigmoidCrossEntropyLoss, Silence, Slice, SmoothL1Loss, Softmax, SoftmaxWithLoss, Split, TanH, Threshold, Tile, VideoData, WindowData)
*** Check failure stack trace: ***
```

It seems that the layer type "NormalizeRistretto" is not supported by the quantization tool. The "PriorBox" layer in the prototxt file also appears to be unsupported. However, both of these layers have quantization parameters, i.e. "precision_param" blocks, in the prototxt file.

I am confused about how the quantized model file "VGGSSD_6Bit_deploy_CHaiDNN.prototxt" in the model zoo was generated with the quantization tool. Do I need to customize the quantization tool?

Thanks in advance.

anilmartha commented 5 years ago

Hi @zjchenchujie

XportDNN supports the "Normalize" and "PriorBox" layers. Could you rename "NormalizeRistretto" to "Normalize" in your prototxt and try running XportDNN again?
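The rename can be done with a plain text substitution, since Caffe prototxt layer types appear as quoted strings (`type: "NormalizeRistretto"`). A minimal sketch (the filename and helper below are illustrative, not part of CHaiDNN):

```python
# Sketch: rewrite the unsupported Ristretto layer type to the plain
# Caffe type that XportDNN's layer registry recognizes.
def rename_layer_type(prototxt_text, old="NormalizeRistretto", new="Normalize"):
    # Layer types are quoted in prototxt, so replacing the quoted string
    # avoids accidentally touching layer names that contain the substring.
    return prototxt_text.replace(f'"{old}"', f'"{new}"')

# Hypothetical usage on the downloaded model file:
# text = open("VGGSSD_6Bit_deploy_CHaiDNN.prototxt").read()
# open("VGGSSD_6Bit_deploy_fixed.prototxt", "w").write(rename_layer_type(text))
snippet = 'layer {\n  name: "conv4_3_norm"\n  type: "NormalizeRistretto"\n}'
print(rename_layer_type(snippet))
```

This leaves the layer's name ("conv4_3_norm") and any "precision_param" block untouched; only the type string changes.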

VishalX commented 5 years ago

Closing. Please reopen if required.