Hi @yxchng ,
Thanks for your interest in the library! Unfortunately, Sinkhorn solvers and KeOps are really not suited to mixed precision training: most of the information in the Sinkhorn algorithm is encoded in the "tail" of the Gaussian kernel, which means that we must either work with float64 numbers or with custom log-sum-exp kernels. (GeomLoss relies on the second option.) Otherwise, we quickly run into numerical instability and underflow/overflow problems.
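To illustrate the point (this is a minimal NumPy sketch of the underlying numerics, not GeomLoss code): the kernel values `exp(-d²/eps)` in the tail underflow to zero in float32, whereas a log-sum-exp reduction stays finite because it only exponentiates differences from the maximum.

```python
import numpy as np

# Tail of the Gaussian kernel: exp(-d2 / eps) underflows in float32.
d2 = np.float32(100.0)   # squared distance (illustrative value)
eps = np.float32(0.5)    # entropic regularization (illustrative value)
naive = np.exp(-d2 / eps)  # exp(-200) ~ 1e-87, below float32 range -> 0.0

# Log-domain Sinkhorn updates instead reduce log-weights with log-sum-exp:
# subtracting the max before exponentiating keeps everything representable.
v = np.array([-180.0, -200.0, -220.0], dtype=np.float32)  # log-domain terms
m = v.max()
lse = m + np.log(np.sum(np.exp(v - m)))  # finite, close to -180.0
```

The naive kernel value silently becomes exactly zero, which is what destabilizes Sinkhorn iterations in low precision; the log-sum-exp form keeps the same information as a finite log-domain number.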
Best regards, Jean