NVIDIA produce a containerised version of PyTorch on NGC which has native automatic mixed precision (AMP) support enabled by default. AMP mixes full-precision float32 values with half-precision float16 values to speed up training and reduce the memory cost of large models. This could be an interesting way to speed up model training.
https://ngc.nvidia.com/catalog/containers/nvidia:pytorch
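For reference, a minimal sketch of what AMP usage looks like with PyTorch's built-in `torch.cuda.amp` module (the model, optimizer, and data below are placeholders for illustration, not anything NGC-specific):

```python
import torch
import torch.nn as nn

# Hypothetical setup: a tiny model, optimizer, and random data.
device = torch.device("cuda")
model = nn.Linear(512, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# GradScaler scales the loss so float16 gradients don't underflow.
scaler = torch.cuda.amp.GradScaler()

for step in range(100):
    inputs = torch.randn(32, 512, device=device)
    targets = torch.randint(0, 10, (32,), device=device)

    optimizer.zero_grad()
    # autocast runs eligible ops in float16 while keeping
    # precision-sensitive ops (e.g. reductions) in float32.
    with torch.cuda.amp.autocast():
        outputs = model(inputs)
        loss = loss_fn(outputs, targets)

    # Scale the loss, backprop, then unscale and step the optimizer.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

The key point of the pattern is that only the forward pass and loss computation sit inside `autocast`; the backward pass and optimizer step go through the `GradScaler` so gradients stay numerically stable in half precision.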