Closed — pjaholkowski closed this issue 3 years ago
This seems to be an issue with tensorflowjs.
The reason for the difference is data loss during de-quantisation of the weights, which leads to all packed bias values being zero after loading. The optimiser will then happily drop the BiasAdd operation, since it detects a no-op due to the bias values being zero.
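To make the failure mode concrete, here is a minimal sketch (not the actual tfjs code; the function and values are purely illustrative) of affine de-quantization and why a precision bug there can erase a bias vector and trigger the no-op removal:

```ts
// De-quantize uint8 values back to float: value = min + quantized * scale.
function dequantize(quantized: Uint8Array, scale: number, min: number): Float32Array {
  const out = new Float32Array(quantized.length);
  for (let i = 0; i < quantized.length; i++) {
    out[i] = min + quantized[i] * scale;
  }
  return out;
}

// If the quantization metadata is read back incorrectly (here illustrated by
// scale = 0, min = 0), every bias value becomes 0 after loading.
const bias = dequantize(new Uint8Array([12, 200, 47]), /* scale */ 0, /* min */ 0);

// An optimizer pass then sees "x + 0" and folds the BiasAdd node away.
const isNoOp = bias.every((v) => v === 0); // true -> BiasAdd gets dropped
console.log(isNoOp);
```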
I will do some further investigating and fix the problem in tensorflowjs if possible.
I have identified the problem in tensorflowjs and will submit a pull request to fix the problem.
I'll let you know once tensorflowjs has merged the changes. I could add a workaround here, but I feel it's better to fix the problem at its root, especially since that will benefit every user of tensorflowjs.
Thanks for bringing the problem to my attention, though!
OK, thank you
The fix for tensorflowjs has been approved and a future version (maybe even the next one?) will solve the issue.
It took a while due to a US holiday, but eventually the issue got sorted. It's not merged into the master branch yet, so I'm keeping this issue open.
That's good news. Thank you for letting me know
The bugfix has been merged into TFJS master just now: Pull Request.
It should be part of the next TFJS release, which would fix this issue. You can also install the TFJS master version to get the bug fixed immediately.
Hi
I have models (not mine) in tfjs graph model format: MobilenetV1 in float, and quantized to int16 and int8. The original float model works okay after conversion, but the two quantized ones are improperly converted. You can see this by comparing the original graph model with the generated model in Netron. I've attached the original graph model.
mobilenet_quant2_075_stride16.zip
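For anyone wanting to reproduce the divergence numerically rather than visually in Netron, here is a rough sketch of comparing the float and quantized graph models on the same input. It assumes @tensorflow/tfjs-node and uses hypothetical local paths and a standard 224x224 MobilenetV1 input; adjust both for your setup:

```ts
import * as tf from '@tensorflow/tfjs-node';

async function compareModels(): Promise<void> {
  // Hypothetical paths to the converted model.json files.
  const floatModel = await tf.loadGraphModel('file://./mobilenet_float/model.json');
  const quantModel = await tf.loadGraphModel('file://./mobilenet_uint8/model.json');

  // Random input is enough to spot a broken conversion: when BiasAdd ops are
  // dropped, the outputs diverge far beyond normal quantization error.
  const input = tf.randomUniform([1, 224, 224, 3]);

  const floatOut = floatModel.predict(input) as tf.Tensor;
  const quantOut = quantModel.predict(input) as tf.Tensor;

  const maxDiff = tf.max(tf.abs(tf.sub(floatOut, quantOut)));
  console.log('max abs difference:', (await maxDiff.data())[0]);
}

compareModels();
```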