My network contains a depth_to_sparse operation, which has no learnable parameters. I would like to run quantization-aware training with this layer excluded from back-propagation (i.e., skipped during gradient computation). Can I retrain my model with Graffitist in this way?