ml-explore / mlx

MLX: An array framework for Apple silicon
https://ml-explore.github.io/mlx/
MIT License

Support for mlx.float64 #799

Open kyrollosyanny opened 4 months ago

kyrollosyanny commented 4 months ago

Describe the bug
Would it be possible to support the float64 type? For some numerical simulations, float64 is important for accuracy. The goal is to use mlx for automatic differentiation in these kinds of scenarios.
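A tiny illustration (in plain NumPy, since float64 is exactly what is being requested here) of why the extra precision matters: an increment smaller than float32's machine epsilon is silently lost, while float64 still resolves it.

```python
import numpy as np

# An increment below float32's machine epsilon (~1.2e-7) is rounded away,
# while float64 (epsilon ~2.2e-16) still resolves it.
print(np.float32(1.0) + np.float32(1e-8) == np.float32(1.0))  # True
print(np.float64(1.0) + np.float64(1e-8) == np.float64(1.0))  # False
```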

awni commented 4 months ago

Double isn’t possible in Metal. In theory we could do it on the CPU only, but that is likely a lot less interesting to you?
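Until (or unless) a CPU-only float64 lands, one possible workaround is to keep a float64 "master" copy of the parameters in NumPy and use MLX in float32 only for the differentiable part. A minimal sketch, assuming a toy objective `loss_fn` (the function and shapes are made up for illustration, not part of MLX):

```python
import mlx.core as mx
import numpy as np

def loss_fn(x):
    # Hypothetical objective; any differentiable MLX function would work here.
    return mx.sum(mx.square(x - 3.0))

grad_fn = mx.grad(loss_fn)

# High-precision master copy of the parameters, kept in NumPy float64.
x64 = np.zeros(4, dtype=np.float64)
lr = 0.1

for _ in range(100):
    # Cast down to float32 for the MLX gradient evaluation...
    g32 = grad_fn(mx.array(x64.astype(np.float32)))
    # ...and accumulate the update back into the float64 copy.
    x64 -= lr * np.array(g32, dtype=np.float64)

print(x64)  # approaches [3, 3, 3, 3]
```

This keeps accumulation error in the optimizer state at float64, though the per-step gradient itself is still limited to float32 precision.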

kyrollosyanny commented 4 months ago

For some simulations and optimizations, the CPU is more than enough. If it is possible to add that support in a future version, that would be great. Thanks a lot.
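For reference, computation can already be pinned to the CPU in MLX, which is presumably where a CPU-only float64 would run. A minimal sketch (everything below is float32 today, since that is the point of this issue):

```python
import mlx.core as mx

# Option 1: route everything to the CPU by changing the default device.
mx.set_default_device(mx.cpu)
a = mx.ones((4, 4))
b = (a @ a).sum()
mx.eval(b)

# Option 2: keep the GPU as the default but run selected ops on the CPU stream.
mx.set_default_device(mx.gpu)
c = mx.ones((4, 4))
d = mx.sum(c @ c, stream=mx.cpu)
mx.eval(d)

print(b, d)
```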

awni commented 4 months ago

Sounds good, I'll leave this open for now as a possible enhancement. I don't know if we will do it, but people can comment here with use cases etc to help us prioritize.

Andyuch commented 1 month ago

It would be greatly appreciated and helpful if float64 were added; a similar issue comes up in my scientific simulations when using 'mps' in PyTorch.
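For comparison, a hedged sketch of the analogous limitation on the PyTorch side: the 'mps' backend rejects float64 tensors, so double-precision work has to stay on the CPU there as well (the exact exception type and message may vary across PyTorch versions).

```python
import torch

x64 = torch.ones(3, dtype=torch.float64)      # float64 works on the CPU
if torch.backends.mps.is_available():
    try:
        x64.to("mps")                         # expected to fail: 'mps' has no float64
    except (TypeError, RuntimeError) as e:
        print("mps rejects float64:", e)
```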