Closed pbk20191 closed 5 months ago
My original model's output was not integer, and that caused this issue. After changing the model output to integer, everything works.
I have the same issue and I solved it by quantizing the output node, but then verification fails. How exactly did you convert the model output to integer?
Well, my solution is the same as yours: I specify output quantization on my model's last layer, whose output becomes the input of the TopK node.
```python
forward["conv1"] = ...
forward["relu1"] = ...
forward["pool1"] = ...
forward["conv2"] = ...
forward["relu2"] = ...
forward["pool2"] = ...
forward["flatten"] = ...
forward["fc1"] = ...
forward["relu3"] = ...
# output_quant is None by default in brevitas
forward["fc2"] = qnn.QuantLinear(4 * channel_multiplier, 10, bias=True, weight_bit_width=weight_bit_width, output_quant=Int8ActPerTensorFloat)
```
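To illustrate why the integer output matters: the conversion to LabelSelect expects TopK to operate on an integer-typed tensor. Below is a minimal numpy-only sketch (hypothetical values and scale, not the brevitas internals) of per-tensor int8 quantization of a float output followed by a top-1 selection on the integer tensor:

```python
import numpy as np

def quantize_int8(x, scale):
    # per-tensor int8 quantization: round(x / scale), clipped to the int8 range
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

# hypothetical final-layer logits of a classifier
logits = np.array([0.1, 2.5, -1.0, 0.7], dtype=np.float32)
q = quantize_int8(logits, scale=0.05)

# top-1 index computed on the integer tensor (what LabelSelect would do)
topk_idx = np.argsort(q)[::-1][:1]
print(topk_idx[0])  # → 1
```

Since int8 quantization with a positive scale is monotonic, the argmax/top-k result on the quantized tensor matches the float result as long as clipping does not merge the top values.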
dev branch: e188b4c50955105717b223862c4e26e4777852ea
Quick summary
I have a simple MNIST model and want TopK post-processing for it. However, the TopK node is not converted to LabelSelect_hls during the estimation build, while the advanced_example converts it as expected.
Details
Steps to Reproduce
I followed the steps for adding pre- and post-processing explained in
4_advanced_builder_settings:
I add the ToTensor division pre-processing and the TopK post-processing, then run the estimation build step, including the two custom steps, with the model I attached.
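For reference, the post-processing custom step can be sketched as below, modeled on the 4_advanced_builder_settings notebook. This is a build-config fragment requiring FINN/qonnx; the step name and k value are assumptions:

```python
from qonnx.transformation.insert_topk import InsertTopK

# hypothetical custom build step: append TopK post-processing to the graph
def custom_step_add_post_proc(model, cfg):
    # k=1 selects the single highest-scoring class,
    # which FINN should later convert to a LabelSelect layer
    model = model.transform(InsertTopK(k=1))
    return model
```

The custom step is then inserted into the `steps` list of the `DataflowBuildConfig` before the default `step_tidy_up` / conversion steps.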
Expected behavior
The TopK node should be converted to LabelSelect_hls.
Actual behavior
The TopK node is not converted to LabelSelect_hls and is discarded in later steps.
Attached: the ONNX model after step_convert_to_hw.
Additional context
- model without pre & post processing: mnist_model.zip
- model with pre & post processing: processed_model.zip
- estimation result: output_estimates_only.zip