Open: krcurtis opened this issue 3 weeks ago
Hi @krcurtis,
This looks to be related to floating-point precision. I have seen similar differences between numpy's 32-bit and 64-bit representations, as well as differences across CPU architectures. There's a lot written on the subject, and you can find a Python-specific discussion here. The logicle routine involves many mathematical steps where these small errors can compound, so that would be my best guess for what you are seeing.
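As a quick illustration of the kind of compounding described above, here is a minimal numpy sketch (not taken from the logicle code itself) showing how the float32 and float64 representations of the same value differ, and how repeated arithmetic magnifies that gap:

```python
import numpy as np

# 0.1 has no exact binary representation; the float32 and float64
# approximations round it differently in the low-order bits.
x32 = np.float32(0.1)
x64 = np.float64(0.1)
print(float(np.float64(x32) - x64))  # representation gap, ~1.5e-9

# Over many arithmetic steps those tiny gaps compound, because every
# float32 addition also rounds the result to 24-bit precision:
acc32 = np.float32(0.0)
acc64 = np.float64(0.0)
for _ in range(10_000):
    acc32 = np.float32(acc32 + x32)
    acc64 = acc64 + x64
print(float(acc32), float(acc64))  # the float32 sum drifts visibly from 1000.0
```

The same mechanism applies inside any multi-step routine: each intermediate result is rounded to the working precision, and the rounding errors accumulate rather than cancel.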
Regards, Scott
Hi,
I've encountered some differences when calculating the logicle transform using different methods. I had seen that flowutils matches the GatingML 2.0 spec, but I get differences when computing with mpmath, a Python package for arbitrary-precision floating-point arithmetic.
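For reference, setting mpmath's working precision looks like this (a minimal sketch, not the full logicle script mentioned below):

```python
import math
from mpmath import mp

mp.dps = 30  # work with 30 significant decimal digits

# Compare double precision against 30-digit arithmetic for a simple value:
print(math.log(10))  # float64 carries ~16 significant digits
print(mp.log(10))    # mpmath carries the requested 30
```

`mp.dps` can be raised arbitrarily, which is what makes mpmath useful as a reference when checking a double-precision implementation.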
When I tell mpmath to use 30 digits of decimal precision, I get these results for the test values of logicle(x, 1000, 1, 4, 1) given in the GatingML 2.0 spec:
The differences seem to occur only at the negative x values, where just the first two digits after the decimal point match. I have also tried larger decimal precisions, up to 10,000 digits, but the mpmath results still do not match the GatingML 2.0 spec. Have you encountered this? I've pasted my mpmath Python script below. Any thoughts?
Thanks!