Atman-Kar opened this issue 2 years ago
The conversion sounds like the easiest fix: we convert the inputs to the batchnorm layers to float, then convert them back to `int8` after batch norm. Though I am not sure what this means in terms of training or correctness.
If we go through the conversion route, we would have to make a custom layer called `BunnyBatchNorm2D` which takes `int8` input, converts it to `float32`, applies batch norm, and converts the output back to `int8` for the next layer to process. Should we do it this way?
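For reference, a minimal sketch of what such a wrapper could look like, assuming a PyTorch-style module (the layer name comes from the proposal above; the round-and-clamp step back into the `int8` range is just one possible choice, not something decided here):

```python
import torch
import torch.nn as nn

class BunnyBatchNorm2D(nn.Module):
    """Sketch: cast int8 -> float32, apply batch norm, cast back to int8."""

    def __init__(self, num_features):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features)

    def forward(self, x):
        # Up-cast the int8 activations so BatchNorm2d can operate on them.
        y = self.bn(x.to(torch.float32))
        # Round and clamp back into the int8 range for the next layer.
        return torch.clamp(torch.round(y), -128, 127).to(torch.int8)
```

Whether this round-and-clamp step loses too much precision is exactly the training/correctness question raised above.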
Note: A similar error occurs with `MaxPool2D`. However, `Conv2D` and `Linear` layers work perfectly with `int8` input types.
Find a way to allow the `batch_norm` layer to accept `int8` type input. A few options we have: