Hi, while evaluating the performance of the quantized v8 models, I noticed that the current export pipeline behaves slightly differently from how the zoo models were actually exported. As the images below show, for the same model (YOLOv8m) the slicing operation is done differently, and the following conv layer is not being quantized. My guess is that it has something to do with #1497. However, I might be wrong, since it reports that 0 layers are propagated that way, and I have even tried commenting it out.
Current:![image](https://github.com/neuralmagic/sparseml/assets/26775473/cfdb8f78-bff2-4ec4-8789-551bda0ad17b)
Zoo Models:![image](https://github.com/neuralmagic/sparseml/assets/26775473/d42309a3-07af-4389-a738-a299e9dd2288)
I'm not sure why this is happening. I have tried rolling back to sparseml versions 1.5 and 1.6, but the issue remains. Is there a particular version I should roll back to? Which version of sparseml was used to export the zoo models? Any help or hints on how to fix this would be greatly appreciated! (Apologies for the spam.)