sarah-quinones / gemm


Support for Mixed precision f32-f16 #6

Open mert-kurttutan opened 1 year ago

mert-kurttutan commented 1 year ago

Hi @sarah-ek,

I have an operation where I need to multiply an f16 matrix by an f32 matrix to obtain an f32 matrix. The additions and multiplications should be done in f32.

So: Matmul(f16,f32) -> f32
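
For reference, this is the semantics I'm after (a minimal, unoptimized sketch; I'm assuming the `half` crate for the f16 type and row-major slices):

```rust
use half::f16;

/// Reference only: c = a × b, where `a` is f16 and `b`, `c` are f32.
/// Every f16 element is widened to f32 before multiplying, and all
/// accumulation happens in f32.
fn matmul_f16_f32(m: usize, k: usize, n: usize, a: &[f16], b: &[f32], c: &mut [f32]) {
    for i in 0..m {
        for j in 0..n {
            let mut acc = 0.0f32; // f32 accumulator
            for p in 0..k {
                // widen each f16 operand to f32, then multiply-add in f32
                acc += a[i * k + p].to_f32() * b[p * n + j];
            }
            c[i * n + j] = acc;
        }
    }
}
```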

To do this operation with your package, I am currently converting the f16 matrix into an f32 buffer and then using gemm for the f32×f32 matrix multiplication, roughly as sketched below.
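
The conversion step looks something like this (hypothetical helper name, again assuming the `half` crate):

```rust
use half::f16;

// Widen the whole f16 matrix into a temporary f32 buffer once; the
// buffer can then be passed to the crate's optimized f32 × f32 gemm.
fn widen_to_f32(a: &[f16]) -> Vec<f32> {
    a.iter().map(|&x| x.to_f32()).collect()
}
```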

More specifically, I took your f16 matmul code and modified parts of it so that it runs in mixed precision.

I am not sure how well optimized my code is. If possible, I would like to know whether you plan to add support for this.

This type of operation is being adopted more and more in the context of large ML models.

sarah-quinones commented 1 year ago

is it an option to convert the f16 matrix to f32 outside the matrix multiplication? then multiply the two f32 matrices together? if you can spare the memory for the conversion then this shouldn't add much overhead since the conversion is O(n^2) while the multiplication is O(n^3)
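
for example, at m = n = k = 4096, the conversion is 4096² ≈ 16.8 million element casts, while the multiplication is about 2 · 4096³ ≈ 137 billion flops, i.e. roughly 8000× more work than the conversion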