Open neogyk opened 1 year ago
Using half-precision speeds up DNN training and reduces memory usage.
Currently, baler can only be run with full-precision (FP32) numbers. Mixed precision does not work because of a data-type mismatch error during the matrix multiplication in the forward pass.
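A minimal sketch of how mixed precision could be enabled, assuming baler trains a standard PyTorch model; the toy autoencoder below is illustrative and not baler's actual model or API. Wrapping the forward pass in `torch.autocast` makes PyTorch cast matmul inputs to a common lower-precision dtype, which should avoid the dtype-mismatch error described above:

```python
import torch
import torch.nn as nn

# Toy stand-in for baler's autoencoder (hypothetical, for illustration only)
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 8))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(16, 8)

# autocast casts the inputs of matmul-type ops to a shared low-precision
# dtype, so FP32 weights and half-precision activations no longer clash.
# bfloat16 is used here because it is supported in CPU autocast.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)
    loss = loss_fn(out, x)

loss.backward()
opt.step()
opt.zero_grad()

print(out.dtype)  # tensors produced inside autocast are bfloat16
```

On CUDA one would typically use `device_type="cuda"` with `dtype=torch.float16` plus `torch.cuda.amp.GradScaler` to prevent gradient underflow; bfloat16 generally does not need loss scaling.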