pandas-dev / pandas

Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
https://pandas.pydata.org
BSD 3-Clause "New" or "Revised" License

Improve type handling in read_sql and read_sql_table #13049

Open rsdenijs opened 8 years ago

rsdenijs commented 8 years ago

Problem

In pd.read_sql and pd.read_sql_table, when the chunksize parameter is set, pandas builds each DataFrame chunk with dtypes inferred from the data in that chunk. This is a problem if an INTEGER column contains null values in some chunks but not in others, leading the same column to be int64 in some chunks and float64 in others. A similar problem happens with strings.

In ETL processes, or simply when dumping large queries to disk in HDF5 format, the user currently bears the burden of explicitly handling the type conversions of potentially many columns.
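A minimal sketch of the dtype drift described above, assuming a table "events" with a nullable INTEGER column "score" (table, column, and connection string are placeholders):

    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("sqlite:///example.db")  # placeholder connection

    # Each chunk infers dtypes from its own rows only.
    dtypes_seen = set()
    for chunk in pd.read_sql_table("events", engine, chunksize=1000):
        # Chunks whose "score" values are all non-NULL come back as int64,
        # chunks containing NULLs come back as float64.
        dtypes_seen.add(str(chunk["score"].dtype))

    print(dtypes_seen)  # may show both 'int64' and 'float64' across chunks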

Solution?

Instead of guessing the type from a subset of the data, it should be possible to obtain the type information from the database and map it to the appropriate dtypes.

It is possible to obtain column information from SQLAlchemy when querying a full table by inspecting its metadata, but I was unsuccessful in finding a way to do it for a general query. Although I am unaware of all the possible type problems that can arise, the DBAPI's cursor.description does include a null_ok field indicating whether each result column is nullable. Pandas could use this information (optionally) to always interpret nullable numeric columns as floats and strings as object columns.
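For the full-table case, a rough sketch of the SQLAlchemy route mentioned above (the engine URL and table name "events" are placeholders):

    from sqlalchemy import MetaData, Table, create_engine

    engine = create_engine("sqlite:///example.db")  # placeholder connection
    table = Table("events", MetaData(), autoload_with=engine)  # reflect the table from the DB

    for column in table.columns:
        # column.type is the database-side type and column.nullable says whether
        # NULLs are allowed -- the information read_sql_table could map to dtypes.
        print(column.name, column.type, column.nullable)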

jreback commented 8 years ago

The _wrap_result needs to incorporate the meta-data from the table and cast as appropriate (or potentially just pass it directly to .from_records).

@jorisvandenbossche

jorisvandenbossche commented 8 years ago

There is already the _harmonize_columns method (https://github.com/pydata/pandas/blob/master/pandas/io/sql.py#L895) that is called in read_table after from_records is used. So the column information from the database is already used to some extent, but this method can possibly be improved.

However, the problem of e.g. possible NaNs in integer columns will not be solved by this, I think? The only way to be certain to always have a consistent dtype across different chunks is to always convert integer columns to float (unless a not-nullable constraint is put on the column). I am not sure we should do that, as in many cases we would be unnecessarily converting integer columns without NaNs to floats.

rsdenijs commented 8 years ago

@jorisvandenbossche read_table could use the nullable information provided by SQLAlchemy. An integer column that is nullable could be cast to float in pandas. In the case of read_query I did not find the column type from SQLAlchemy directly, but the type and nullable information is specified in the cursor description from the DBAPI.

Cursor attributes: .description. This read-only attribute is a sequence of 7-item sequences.

Each of these sequences contains information describing one result column:

name, type_code, display_size, internal_size, precision, scale, null_ok

The first two items (name and type_code) are mandatory, the other five are optional and are set to None if no meaningful values can be provided.

This attribute will be None for operations that do not return rows or if the cursor has not had an operation invoked via the .execute*() method yet.

The type_code can be interpreted by comparing it to the Type Objects specified in the section below.

This is supported by most major drivers, with the exception of sqlite3, for reasons I don't understand, since SQLite has no proper column types to begin with.
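A rough sketch of reading this from a DBAPI cursor (the database file, query, and column names are placeholders; as noted, sqlite3 leaves the optional fields, including null_ok, as None, while other drivers may populate them):

    import sqlite3  # any PEP 249 driver exposes cursor.description in this shape

    conn = sqlite3.connect("example.db")  # placeholder database
    cur = conn.cursor()
    cur.execute("SELECT id, comment FROM events")  # placeholder query

    for name, type_code, display_size, internal_size, precision, scale, null_ok in cur.description:
        # null_ok is the seventh field of each entry; a driver that fills it in
        # could let pandas decide between int64 and float64 up front.
        print(name, type_code, null_ok)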

jorisvandenbossche commented 8 years ago

read_table could use the nullable information provided by SQLAlchemy. An integer column that is nullable could be cast to float in pandas.

IMO the problem with this is that columns can hold NULLs by default, so I suppose in many cases people will not specify a NOT NULL constraint even though in practice their columns never hold NULLs. For all those cases the dtype of the returned column would now change, in many cases unnecessarily.

I am not saying the issue you raise is not a problem, because it certainly is, but I am considering what would be the best solution for all cases.

rsdenijs commented 8 years ago

I doubt that in serious environments non-nullable columns are left as nullable... but I guess we will never know. I think this could be handled by a keyword like use_metadata_nulls, or something with a better name.

jorisvandenbossche commented 8 years ago

@rsdenijs That is quite possible, but the fact is that there are also a lot of less experienced people using pandas/sql .. The question then of course is to what extent we have to take those into account for this issue. (and it is actually more the problem that pandas cannot have integer columns with missing values ... but that is a whole other can of worms :-))

Anyway, trying to think of other ways to deal with this issue:

Would you be interested in doing a PR for the first bullet point? This is in any case the non-controversial part I think and could already solve it for string columns (leaving only int columns to handle manually).

rsdenijs commented 8 years ago

@jorisvandenbossche Actually, I might have been confused regarding the strings. String columns are always of type object, regardless of the presence of NaNs. For some reason I thought there was an actual string type in pandas. So although I would like to take a stab at it, I'm no longer sure what the goal would be.

Regarding the int types, I think that read_table (not read_query) should always inspect whether the column is nullable from the SQLAlchemy info. We are reading the col_type anyway, so why not check if it is nullable? Specifically, I think the following part is bad when we are chunking, because we don't know whether later chunks will have nulls (in fact, I'm not sure it is ever achieving anything, as it is called after from_records, so pure int and pure bool columns should already have the right type):


                # excerpt from _harmonize_columns in pandas/io/sql.py
                elif len(df_col) == df_col.count():
                    # No NA values, can convert ints and bools
                    if col_type is np.dtype('int64') or col_type is bool:
                        self.frame[col_name] = df_col.astype(
                            col_type, copy=False)

If for some reason we cannot verify whether the column is nullable (via SQLAlchemy), then when chunking the default behaviour should IMO be to return int columns as floats.
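A hedged sketch (not pandas' actual code) of the rule proposed here: only convert a chunk's column back to int64 or bool when the database says the column is NOT NULL, so later chunks that do contain NULLs keep the same dtype:

    import numpy as np
    import pandas as pd

    def harmonize_int_column(df_col: pd.Series, col_type, nullable: bool) -> pd.Series:
        # "nullable" would come from SQLAlchemy's Column.nullable (read_sql_table)
        # or from null_ok in the DBAPI cursor description (read_sql_query).
        if not nullable and col_type in (np.dtype("int64"), np.dtype(bool)):
            if len(df_col) == df_col.count():  # no NA values in this chunk
                return df_col.astype(col_type, copy=False)
        # nullable column: leave as float/object so the dtype stays stable across chunks
        return df_col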

chananshgong commented 7 years ago

My problem is that even if the detection works, integers lose precision when cast to float, and my values are record IDs, so I need full 64-bit integer precision. Any workaround?

konstantinmiller commented 6 years ago

It would be extremely helpful to be able to specify the types of columns as read_sql() input arguments! Could we maybe have at least that for the moment?

jorisvandenbossche commented 6 years ago

Yes, we can... if somebody makes a contribution to add it! So PR welcome to add a dtype argument to read_sql.

sam-hoffman commented 4 years ago

I'm interested in taking this on! Is a fix on this still welcome?

jorisvandenbossche commented 4 years ago

@sam-hoffman contributions to improve type handling in sql reading are certainly welcome, but I am not sure there is already a clear actionable conclusion from the above discussion (if I remember correctly; I didn't yet reread the whole thread). So maybe you can first propose more concretely what you would like to change?

aaronlutz commented 4 years ago

I'm interested in taking this on! Is a fix on this still welcome?

@sam-hoffman please do!

avinashpancham commented 3 years ago

@jorisvandenbossche based on the above discussion I would propose to add a dtype arg for read_sql and read_sql_table. In #37546 I already added it for the read_sql_query function. Agree?

silverdevelopper commented 2 years ago

Hello, what is the latest situation with this issue?

eirnym commented 2 months ago

As a temporary workaround for nullable int64 types I use the following, and prefer to specify the type for each column explicitly.

dtype={
        'column': pd.Int64Dtype()
    }
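
A fuller sketch of this workaround combined with the dtype argument that #37546 added to read_sql_query (the query, engine URL, and column name "score" are placeholders):

    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("sqlite:///example.db")  # placeholder connection
    df = pd.read_sql_query(
        "SELECT id, score FROM events",  # placeholder query
        engine,
        dtype={"score": pd.Int64Dtype()},  # nullable Int64 keeps NULLs without a float cast
    )
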
tobwen commented 1 month ago

Seems like this is stale now?