Open snowflake01986 opened 6 years ago
I suspect the discrepancies are rooted in the Excel engine. Reading this as a CSV does not exhibit the rounding problem you describe.
Actually, that assessment was incorrect. The issue persists even with CSV. The crux of the problem lies with our internal converter to float (or numeric in general):
```python
import pandas.util.testing as tm
import pandas._libs.lib as lib
import numpy as np

inp = np.array(["0.014", "0.984"], dtype=object)
exp = np.array([0.014, 0.984])
tm.assert_numpy_array_equal(lib.maybe_convert_numeric(inp, set(), False), exp)
```
```
...
AssertionError: numpy array are different

numpy array values are different (100.0 %)
[left]:  [0.013999999999999999, 0.9840000000000001]
[right]: [0.014, 0.984]
```
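The `[left]` and `[right]` values above really are different doubles, not just different printouts. A minimal sketch (nothing pandas-specific) that shows the exact binary value behind each:

```python
from decimal import Decimal

# Neither decimal string is exactly representable in binary; the two parsers
# simply landed on different nearest doubles, one ulp apart in print form.
print(Decimal(0.014))                 # the double Python's float() picks
print(Decimal(0.013999999999999999))  # the double the parser produced
```

Since `repr` distinguishes the two, parsing them back yields two distinct doubles.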
@gfyoung
The string parsing functions seem to call a custom-built `xstrtod` function, which does a fine job of evaluating the string; the issue is that it does not evaluate it exactly as Python (or NumPy) does. `float('0.014') == 0.014 == np.fromstring(b'0.014', sep=' ')[0]`, but `xstrtod('0.014') != 0.014`. For consistency's sake, I think a number read in by pandas as a string should be evaluated and written back out as the same number (currently `0.014` is written back out as `0.0139999999...` after being evaluated). It's a fluke that this issue isn't being picked up by any of the tests. For example, the following would fail if `"0.014"` were in the array:

https://github.com/pandas-dev/pandas/blob/bb43726e1f52a0ddee45fcf485690719f262870d/pandas/tests/dtypes/test_inference.py#L398-L405
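To make the consistency expectation concrete, a small sketch of the round-trip property Python's own parser provides (the sample strings here are illustrative):

```python
# Python's float parser is correctly rounded, and repr() emits the shortest
# decimal string that round-trips, so parse -> print recovers the original text.
for s in ["0.014", "0.984", "1.05153"]:
    assert repr(float(s)) == s
print("round-trip OK")
```

This is exactly the behavior a parser based on correctly rounded conversion would give the CSV reader.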
This seems like a fairly straightforward fix of replacing `xstrtod` with the Python float evaluator `PyOS_string_to_double` from `Python.h`, unless there is a good reason to stick with the original `xstrtod`?
As a side note, an almost identical `xstrtod` is defined again in `parse_helper.h`, which seems like an oversight.

https://github.com/pandas-dev/pandas/blob/bb43726e1f52a0ddee45fcf485690719f262870d/pandas/_libs/src/parse_helper.h#L148-L151
> As a side note, the almost identical xstrtod is defined again in parse_helper.h, which seems like an oversight.
Weird...I think it would be good to see if we could unify the two.
> This seems like a fairly straightforward fix of replacing xstrtod with the Python float evaluator PyOS_string_to_double from Python.h, unless there is a good reason to stick with the original xstrtod
I'm not sure what the reason was for implementing our own. However, I would encourage you to investigate the consequences of doing so, from both an accuracy and a performance perspective.
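For the performance side, a rough micro-benchmark sketch (the string set and repeat count are made up for illustration; CPython's `float()` goes through `PyOS_string_to_double` internally, so this approximates the proposed replacement's cost):

```python
import timeit

# A batch of short decimal strings, similar in shape to CSV float columns.
strings = [f"{i / 1000:.3f}" for i in range(1000)]

# Time correctly rounded parsing via Python's own converter.
t = timeit.timeit(lambda: [float(s) for s in strings], number=200)
print(f"float() on {len(strings)} strings x 200 runs: {t:.4f}s")
```

A real comparison would time the C-level `xstrtod` and `PyOS_string_to_double` directly inside the tokenizer, but this gives a quick upper-bound feel for the Python-visible cost.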
Has there been any progress on this issue? I experience the same problem when importing floats from CSV data, which makes it very difficult to verify that my data transform operations work as expected.

I'm using pandas 1.1.2.
> Have there been any progress on this issue? I experience the same issue when importing floats from CSV data. This makes it very difficult to verify that my data transform operations work as expected.
> I'm using Pandas 1.1.2
It seems I found a solution for this, at least when using the read_csv functionality: pass `float_precision='round_trip'` as an argument to `read_csv`. Based on https://stackoverflow.com/questions/36909368/precision-lost-while-using-read-csv-in-pandas

Not sure if this solves the original issue that @snowflake01986 had, though.
```python
from io import StringIO

import pandas

input_csv = StringIO('''
0.984, 1.05153
0.0134, 1.05152
''')
df = pandas.read_csv(input_csv, header=None)
print(df)
for index, row in df.iterrows():
    print(row[0], row[1])
print('pandas version: ', pandas.__version__)
```
The above code running on https://colab.research.google.com/ shows some of the original numbers being represented with a large number of decimal places after loading into a dataframe. The output is:
```
        0        1
0  0.9840  1.05153
1  0.0134  1.05152
0.9840000000000001 1.0515299999999999
0.0134 1.05152
pandas version:  1.1.5
```
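For reference, a minimal sketch of the `float_precision='round_trip'` workaround applied to the same two rows (assuming a pandas version that supports the keyword):

```python
import io
import pandas as pd

csv = io.StringIO("0.984,1.05153\n0.0134,1.05152\n")
# 'round_trip' uses Python's correctly rounded string-to-double conversion,
# so the parsed values print back exactly as they appeared in the CSV.
df = pd.read_csv(csv, header=None, float_precision="round_trip")
for _, row in df.iterrows():
    print(row[0], row[1])
```

With this option the first row prints `0.984 1.05153` rather than the long expansions above.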
When the code is executed using pandas 1.3.3 built from a forked copy of the source, the problem does not appear. The output is:
```
        0        1
0  0.9840  1.05153
1  0.0134  1.05152
0.984 1.05153
0.0134 1.05152
pandas version:  1.3.3
```
The same result is produced when reading the four numbers from a .xlsx file.
It appears that the issue has been fixed.
Code Sample, a copy-pastable example if possible
test.xlsx
Problem description
I loaded a pandas dataframe from the attached test.xlsx, of which the content is as follows:

```
  name     c1           c2
0   r1  0.014  0.000-0.054
1   r2  0.984  0.025-1.785
```

As we can see, the c1 column has been well rounded. For some reasons, I needed only the values as a numpy.ndarray, but the floating precision expands undesirably and changes a little as follows:
```
array([['r1', 0.013999999999999999, '0.000-0.054'],
       ['r2', 0.9840000000000001, '0.025-1.785']], dtype=object)
```
What is odd is that I have some other similar tables which produced the expected results, so this is really beyond me.
Expected Output

What I wanted was a perfect correspondence with the dataframe:

```
array([['r1', 0.014, '0.000-0.054'], ['r2', 0.984, '0.025-1.785']], dtype=object)
```
Output of `pd.show_versions()`