ToucanToco / fastexcel

A Python wrapper around calamine
http://fastexcel.toucantoco.dev/
MIT License

schema_sample_rows=0 results in a table filled with null values #236

Open niekrongen opened 1 month ago

niekrongen commented 1 month ago

Reading an Excel file with schema_sample_rows set to 0 results in a table of the correct height, but all values are set to null. If schema_sample_rows is set to n and the first n values in a column are empty, the remaining values in that column are also filled with null, meaning values that are actually present in the column get replaced with null. This causes data loss, which could be avoided by defaulting such columns to the string dtype.

For example, xlsx2csv does this by default as well.
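
A minimal reproduction of the reported behavior might look like this (the file name is illustrative):

import fastexcel

# Read the first sheet without sampling any rows for type inference.
# The resulting table has the right height, but every column gets a
# null dtype and all cell values come back as null.
reader = fastexcel.read_excel("data.xlsx")
df = reader.load_sheet(0, schema_sample_rows=0).to_polars()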

PrettyWood commented 1 month ago

I don't really understand. schema_sample_rows sets the number of rows used to infer the type of the columns. If you set it to 0, we can't infer anything because no data is read, hence the null type being generated.
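
In other words, only the sampled rows feed inference. A sketch of the second scenario from the issue description (the file name and sample size are illustrative):

import fastexcel

# Only the first 2 rows are sampled; a column that is empty in those
# rows is inferred as null, and its later, populated cells are nulled too.
sheet = fastexcel.read_excel("data.xlsx").load_sheet(0, schema_sample_rows=2)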

niekrongen commented 1 month ago

I understand that the type cannot be inferred, but why would you replace all the values with null then?

I have a table filled with values, but I do not want any types inferred, as I want to assign them myself. However, the columns are all filled with null values instead of the values that are present in the column. I would be fine with the column dtype being null if all the values were kept as strings, for example. That way I could cast the columns myself to the types I want and avoid the extra overhead of inferring the dtypes.
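
Sketched with Polars, the workflow being asked for (column names and target types are illustrative): if the raw cells were kept as strings, the caller could cast explicitly after the load.

import polars as pl

# Hypothetical: had fastexcel kept the raw cells as strings instead of
# nulling them, explicit casts would replace dtype inference entirely.
df = pl.DataFrame({"id": ["1", "2"], "price": ["1.50", "2.75"]})
typed = df.with_columns(
    pl.col("id").cast(pl.Int64),
    pl.col("price").cast(pl.Float64),
)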

niekrongen commented 1 month ago

For example, the following Excel sheet on the left, read with schema_sample_rows = 0, results in the table on the right, with all dtypes being null. [image]

Another example: the following Excel sheet on the left, read with schema_sample_rows = 1, results in the table on the right, with the dtypes of the first three columns being string and the last column being null. [image]

I do not see why removing all data from a column whose dtype cannot be inferred is desirable behavior. Could you please elaborate on why this is the default behavior, and on how to prevent the replacement in a way that does not require casting all 16384 columns to string via the dtypes parameter?

Tim-Kracht commented 1 month ago

I ran into this as well. I have very inconsistent Excel files, and the requirement is to load all data as strings. Since I don't know how many columns there are, I am currently using a three-step process (see the snippet below): load the first row, build a dtype map based on the width of that row, then load the sheet using the map. The time hit from the double load is noticeable for larger worksheets. I am certainly open to better solutions. I have tried some combinations of Polars and fastexcel and always run into trouble: dropped columns/null data at one end, and "unsupported column type combination: {DateTime, String}" at the other.

I was also going to open a feature request to pass a single dtype to apply to all columns.

# Step 1: load only the first row (no type inference) to get the sheet width.
ws = wb.load_sheet_by_name(name=sheet_name, header_row=None, n_rows=1, schema_sample_rows=0)
# Step 2: map each column index to the string dtype (an extra key past the last column is ignored).
type_map = {i: "string" for i in range(ws.width + 1)}
# Step 3: reload the full sheet with every column forced to string.
ws = wb.load_sheet_by_name(name=sheet_name, header_row=None, dtypes=type_map)
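
For context, a self-contained setup for the snippet above might look like this (the file path is an illustrative assumption):

import fastexcel

# Open the workbook once and pick a sheet to process by name.
wb = fastexcel.read_excel("inconsistent.xlsx")
sheet_name = wb.sheet_names[0]
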
niekrongen commented 1 month ago

The solution I have used with Polars is to create the type_map for up to 16384 columns, which is the column limit of an Excel file. In practice I do not create it for all columns; for my purposes I set it for the first 100 columns, and this returns the correct dataframe. Extra entries in the type_map are not created at all, so you can build a type_map without knowing the width of the worksheet in the first place.
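
A sketch of that approach (the file name and the 100-column bound are illustrative; as noted above, dtypes entries beyond the sheet's actual width are simply ignored):

import fastexcel

# Force the first 100 columns to string without knowing the sheet's
# real width; extra map entries past the last column are never created.
type_map = {i: "string" for i in range(100)}
reader = fastexcel.read_excel("data.xlsx")
df = reader.load_sheet(0, dtypes=type_map).to_polars()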