pllim opened this issue 5 years ago
Read an interesting article (and its comments) on posits, which mentions that SKA is considering this alternative floating-point format. There are also other floating-point alternatives aimed at deep learning, among them bfloat16, plus other formats with hardware implementations driven by AI. Are we in for some technology disruption in this pretty fundamental aspect of numerical computing?
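
For context, bfloat16 keeps float32's 8-bit exponent but stores only 7 mantissa bits, trading precision for range. A minimal sketch of the relationship (assuming NumPy is available; this uses plain truncation of the low mantissa bits, whereas real hardware typically rounds to nearest-even):

```python
import numpy as np

def to_bfloat16(x):
    """Drop the low 16 bits of each float32, which is all bfloat16 keeps
    (8 exponent bits, 7 mantissa bits). Truncation for simplicity;
    hardware conversions usually round to nearest-even instead."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

x = np.array([3.14159265], dtype=np.float32)
print(x, to_bfloat16(x))  # bfloat16 keeps only ~3 significant decimal digits
```

Because the exponent field is unchanged, bfloat16 covers the same dynamic range as float32, which is a big part of its appeal for deep learning workloads.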
@hcferguson volunteered to look into this. Something about getting 64-bit-accuracy results using 32-bit arithmetic only?
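
If that refers to something like compensated summation or double-single arithmetic (my assumption; the comment above doesn't say), here is a minimal sketch of Kahan summation done entirely in float32, where a running compensation term recovers the low-order bits a plain 32-bit accumulator drops:

```python
import numpy as np

def kahan_sum32(values):
    """Compensated (Kahan) summation with every operation in float32.

    The compensation term c carries the rounding error of each addition
    forward, so the total approaches float64 accuracy even though no
    64-bit arithmetic is performed."""
    s = np.float32(0.0)
    c = np.float32(0.0)  # running compensation for lost low-order bits
    for v in values:
        y = np.float32(v - c)
        t = np.float32(s + y)
        c = np.float32(np.float32(t - s) - y)
        s = t
    return s

vals = np.full(100_000, 0.1, dtype=np.float32)

s = np.float32(0.0)  # naive float32 accumulation, for comparison
for v in vals:
    s = np.float32(s + v)

print(float(s))                              # naive float32 sum drifts noticeably
print(float(kahan_sum32(vals)))              # compensated sum stays close to...
print(float(vals.astype(np.float64).sum()))  # ...the float64 reference
```

The same error-free-transformation idea underlies "double-single" (float-float) arithmetic, where a value is carried as an unevaluated sum of two float32s, which may be what was meant here.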