Closed: JobLeonard closed this issue 5 years ago
@JobLeonard, thanks for your interest, and sorry for the delayed reply. You are partially correct about the 8-bit fixed-point math. On ordinary CPUs and GPUs, where FPUs are the norm, using lower-precision operations to gain compute throughput is almost impossible. However, edge devices like the Arduino Uno or the ARM Cortex M0+ have no dedicated FPU (and are single threaded), so all floating-point ops are emulated by software libraries on top of the integer arithmetic the hardware does support. In general we observe that a single float op costs as much as 3-4 comparable integer ops, which gives us the headroom needed to keep latency within the SLA. Also, storing 8-bit integers saves Flash and even working RAM, so memory usage drops along with the compute cost.

I am not aware of posits; let me check them out. Let me know if this makes things clearer.
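To make the trade-off concrete, here is a minimal sketch of 8-bit fixed-point arithmetic in C. It assumes a Q1.6 layout (1 sign bit, 1 integer bit, 6 fractional bits) purely for illustration; the actual quantization scheme used by Bonsai/EdgeML may differ. The point is that, once weights and inputs are quantized, only integer multiplies, adds, and shifts run on the MCU, so no software float emulation is needed.

```c
/*
 * Illustrative 8-bit fixed-point arithmetic (Q1.6 is an assumed format,
 * not necessarily the one used by the library).
 */
#include <stdint.h>
#include <stdio.h>

#define FRAC_BITS 6
#define ONE       (1 << FRAC_BITS)   /* 1.0 in Q1.6 */

/* Convert a float to Q1.6; done offline, so no floats are needed at run time. */
static int8_t to_fixed(float x) {
    return (int8_t)(x * ONE + (x >= 0 ? 0.5f : -0.5f));
}

/* Multiply two Q1.6 values: widen to 16 bits, then shift back down. */
static int8_t fixed_mul(int8_t a, int8_t b) {
    int16_t wide = (int16_t)a * (int16_t)b;   /* Q2.12 intermediate */
    return (int8_t)(wide >> FRAC_BITS);       /* back to Q1.6 */
}

/* Dot product of two quantized vectors; the accumulator stays in 16 bits. */
static int16_t fixed_dot(const int8_t *a, const int8_t *b, int n) {
    int16_t acc = 0;
    for (int i = 0; i < n; ++i)
        acc += (int16_t)a[i] * (int16_t)b[i];
    return acc;                                /* result in Q2.12 */
}

int main(void) {
    int8_t w[3] = { to_fixed(0.5f), to_fixed(-0.25f), to_fixed(0.75f) };
    int8_t x[3] = { to_fixed(0.9f), to_fixed( 0.1f), to_fixed(-0.6f) };

    /* Only integer multiplies and adds are executed on the device. */
    printf("dot (Q2.12)       = %d\n", fixed_dot(w, x, 3));
    printf("0.5 * 0.9 (Q1.6)  = %d\n", fixed_mul(w[0], x[0]));
    return 0;
}
```

The trade-off mentioned above shows up as quantization error: each value only has 6 fractional bits, so results are approximate, but each weight occupies one byte of Flash and every op is a cheap integer instruction.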
Yep, it clarifies things a lot, thank you! :)
Hope posits will bring something interesting to the table, although from the sound of it their overhead might already be too great for your purposes.
And just as you close this, Facebook announced their posit-inspired format:
https://code.fb.com/ai-research/floating-point-math/
Anyway, thanks again for the answer, and keep up the inspiring research! :)
So, this is just a general curiosity question from someone who, at best, may fool around with this library on his Arduinos at some point.
I found this repo through this article on Bonsai, which linked to the paper.
The part where it mentioned that 8-bit fixed-point math was used to avoid floating-point overhead struck me as interesting. I can only assume that this comes with trade-offs, but I know next to nothing about Machine Learning, so I have no idea what those are.
However, it made me wonder whether you were aware of posits (which come in 8-bit variants as well), and how they would fit into this picture.
Cheers, Job