Closed: dongwu92 closed this issue 7 years ago
Does Forge support 8-bit quantized values for models quantized with TensorFlow?
It currently does not, but you can add this yourself by writing a ParameterLoader class that de-quantizes the weights back to floats.
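For illustration, here is a minimal sketch of what such a loader could look like. It assumes TensorFlow's min/max 8-bit quantization scheme (realValue = min + quantized / 255 * (max - min)) and a ParameterData protocol exposing an UnsafeMutablePointer<Float>, as in Forge; the class name QuantizedParameterLoader, the file layout, and the init parameters are hypothetical, so check the actual Forge sources before using it:

```swift
import Foundation

// Mirrors Forge's ParameterData protocol (verify against the real definition
// in the Forge sources before relying on this).
public protocol ParameterData {
  var pointer: UnsafeMutablePointer<Float> { get }
}

// Hypothetical loader that reads raw 8-bit quantized weights from a file in
// the app bundle and expands them back to 32-bit floats using TensorFlow's
// min/max dequantization: realValue = min + Float(quantized) / 255 * (max - min)
public class QuantizedParameterLoader: ParameterData {
  private var data: UnsafeMutablePointer<Float>

  public var pointer: UnsafeMutablePointer<Float> {
    return data
  }

  /// - Parameters:
  ///   - name, ext: file in the main bundle containing the raw 8-bit weights
  ///   - count: number of weights to load
  ///   - min, max: quantization range TensorFlow stored for this tensor
  public init?(name: String, ext: String, count: Int, min: Float, max: Float) {
    guard let url = Bundle.main.url(forResource: name, withExtension: ext),
          let quantized = try? Data(contentsOf: url),
          quantized.count >= count else {
      return nil
    }

    data = UnsafeMutablePointer<Float>.allocate(capacity: count)
    let scale = (max - min) / 255

    // Expand each UInt8 back into a Float.
    for (i, byte) in quantized.prefix(count).enumerated() {
      data[i] = min + Float(byte) * scale
    }
  }

  deinit {
    data.deallocate()
  }
}
```

You would then pass this loader wherever Forge expects parameter data for a layer, keeping the per-tensor min/max values that TensorFlow's quantization tooling produced alongside the 8-bit weight files.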
Good idea! Thank you!