Harvie opened this issue 6 years ago
The big problem with using decimal is that it is a lot slower. I would also guess that many functions which have efficient CPU instructions for float (sin, sqrt, ...) will have much slower software counterparts for decimal.
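A quick way to sanity-check the speed claim with only the standard library (a rough micro-benchmark sketch, not a rigorous measurement; absolute numbers will vary by machine):

```python
import timeit
from decimal import Decimal

# Time 100k square roots with hardware floats vs. software Decimal.
float_time = timeit.timeit("math.sqrt(2.0)",
                           setup="import math", number=100000)
decimal_time = timeit.timeit("Decimal(2).sqrt()",
                             setup="from decimal import Decimal", number=100000)

print("float   sqrt: %.4fs" % float_time)
print("Decimal sqrt: %.4fs" % decimal_time)
```

On typical CPython builds the `Decimal` version is at least an order of magnitude slower, since it runs in software rather than using the FPU instruction.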
@Harvie @toktuff Where does this information come from?
file `1.py`:

```python
import math
print(math.sqrt(2))
```

```
sfinexer@sfinexer:~$ python 1.py
1.41421356237
```
accuracy higher than 1e-7
file `1.py`:

```python
import math
print(math.sqrt(2))
y = 1e11
y += 1
print y
```

```
sfinexer@sfinexer:~$ python 1.py
1.41421356237
1.00000000001e+11
```
By these tests, even at magnitudes around 1e11 the accuracy is still higher than 1e-7.
That holds only as long as you do not need to keep operating on the result. You need to understand that when you apply mathematical operations again and again, the error accumulates.
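A minimal standard-library illustration of the accumulation effect (not from the thread, just a sketch): a single 0.1 already has no exact binary representation, and repeating the operation lets the error build up.

```python
# Repeatedly applying float operations accumulates rounding error.
x = 0.0
for _ in range(10):
    x += 0.1          # 0.1 is not exactly representable in binary

print(x)              # close to 1.0, but not exactly 1.0
print(x == 1.0)       # False
print(abs(x - 1.0))   # the accumulated error, on the order of 1e-16
```

Ten additions only cost about 1e-16 here, but long chains of operations (or catastrophic cancellation when subtracting nearly equal values) can grow the error much faster.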
Currently bCNC produces g-code to 6 decimal places, and we run into problems when we try to use more. For me and my machine such precision does not make sense, but the bCNC code has to work around some precision-based problems. E.g. you can't directly compare two points to tell if they are the same; you have to check if they are less than 1e-7 mm apart instead... (this might make sense sometimes, sometimes not so much)
@sfinexer In BCNC/lib/bpath.py: `EPS = 1E-7`
I would say that in general you can't rely on equality comparisons for floating point numbers. Also, double precision is in general accurate enough (about 15 significant digits).
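To make the equality problem concrete, here is a small sketch of the epsilon-style comparison described above (the `1e-7` threshold mirrors bCNC's `EPS` constant; `math.isclose` is the stdlib helper for the same idea, available since Python 3.5):

```python
import math

a = 0.1 + 0.2
b = 0.3
print(a == b)                 # False: both sides carry rounding error
print(abs(a - b) < 1e-7)      # True: epsilon comparison, like bCNC's EPS
print(math.isclose(a, b))     # True: relative-tolerance stdlib helper
```

An absolute epsilon works well for coordinates in a bounded machine envelope; for numbers spanning many magnitudes a relative tolerance like `math.isclose` is usually safer.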
Even with higher accuracy you will still have to compare two numbers somehow. But I do not think the error accumulates that significantly. It must be remembered that the stepper motors of a 2D/3D printer impose far larger restrictions.
@sfinexer i've posted some links to study. please at least watch this: https://www.youtube.com/watch?v=PZRI1IfStY0 this is not about machine precision at all.
@Harvie I think I understand what you mean.
From what I read: the calculations are being made with an error of about 1e-11, which is what Python handles, and it is cumulative. The error that grbl uses to compare equalities is 1e-7 mm. But this is a physical machine. My question, without a firm base of knowledge, just trying to reason logically and with a good chance of being wrong: are there algorithms that risk accumulating errors up to hundredths or thousandths, or less, that would merit handling greater precision? Compared to the errors that come from the physical construction of the machine (variations with temperature, i.e. expansion or contraction, friction in the bearings, the precision of the ball screw or other transmission mechanism, up to 5% positioning error per motor step), isn't the error due to rounding negligible next to the one coming from the hardware?
I've just heard about Python decimal https://docs.python.org/2/library/decimal.html People say it's way more fit for precise decimal computations than float, as it doesn't do any weird stuff with the smallest decimal places. This could help us with that 1e-7 problem that bCNC has with precision right now, because with the decimal type you can set any precision you want (the number of significant digits is not limited, but you have to state it before you start computing). But I am not a Python expert nor a floating point magician, so I would like to hear from more experienced people on this topic.
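A short sketch of what the `decimal` module offers (the precision value of 50 is just an arbitrary example, not a suggestion for bCNC):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50        # set working precision before computing
x = Decimal(2).sqrt()
print(x)                      # sqrt(2) to 50 significant digits

# Decimal fractions behave exactly, with no binary-float surprises:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

Note that `prec` sets significant digits for the whole context, and that exactness only holds for values that are decimal fractions to begin with; irrational results like `sqrt(2)` are still rounded, just at a precision you choose.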
This is just an idea for very distant future.
Interesting topic:

- https://sixty-north.com/blog/the-folly-of-floating-point-for-robust-geometric-computation.html
- https://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf
- https://www.youtube.com/watch?v=PZRI1IfStY0