NumPower / numpower

PHP extension for efficient scientific computing and array manipulation with GPU support
https://numpower.org

Use single precision (float32) as default type #18

Closed henrique-borba closed 1 year ago

henrique-borba commented 1 year ago

Double precision is overkill for most ML tasks: it requires more memory and more CPU cycles, and it is not supported by Tensor Core computation.

Using a single-precision floating-point type is also distinctive here, because PHP itself stores every floating-point value as a double.

henrique-borba commented 1 year ago

Solved by 3b874e34