pandas-dev / pandas


Uniform file IO API and consolidated codebase #15008

Open dhimmel opened 7 years ago

dhimmel commented 7 years ago

There are at least three concerns that many of the IO methods must deal with: reading from a URL, reading/writing a compressed format, and handling different text encodings. It would be great if all IO functions for which these factors are relevant could use the same code (consolidated codebase) and expose the same options (uniform API).
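For example, read_csv already exposes all three of these concerns in a single call (the URL below is only a placeholder):

```python
import pandas as pd

# One call covering all three concerns: a remote source, on-the-fly
# decompression, and an explicit text encoding.
df = pd.read_csv(
    "https://example.com/data.csv.gz",  # placeholder URL
    compression="gzip",
    encoding="utf-8",
)
```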

In https://github.com/pandas-dev/pandas/pull/14576, we consolidated the codebase, but more consolidation is possible. In io/common.py, there are three functions that must be called sequentially to get a file-like object: get_filepath_or_buffer, _infer_compression, and _get_handle. These should be consolidated into a single function, which can then delegate to sub-functions.
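To make the idea concrete, here is a minimal sketch of such a single entry point; the function name, signature, and details are hypothetical and only roughly approximate the existing helpers in io/common.py:

```python
import gzip
import io
from urllib.request import urlopen


def get_file_handle(path_or_buffer, mode="r", encoding=None, compression="infer"):
    """Hypothetical consolidated helper: resolve a path, URL, or buffer into one text handle."""
    # 1. Fetch URLs into an in-memory buffer (roughly what get_filepath_or_buffer does).
    if isinstance(path_or_buffer, str) and path_or_buffer.startswith(("http://", "https://")):
        path_or_buffer = io.BytesIO(urlopen(path_or_buffer).read())

    # 2. Infer compression from the file extension (roughly _infer_compression).
    if compression == "infer":
        is_gz = isinstance(path_or_buffer, str) and path_or_buffer.endswith(".gz")
        compression = "gzip" if is_gz else None

    # 3. Open the handle, layering decompression and text decoding (roughly _get_handle).
    if compression == "gzip":
        if isinstance(path_or_buffer, str):
            binary = gzip.open(path_or_buffer, "rb")
        else:
            binary = gzip.GzipFile(fileobj=path_or_buffer)
        return io.TextIOWrapper(binary, encoding=encoding or "utf-8")
    if isinstance(path_or_buffer, str):
        return open(path_or_buffer, mode, encoding=encoding)
    return path_or_buffer
```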

Currently, pandas supports the following io methods. First for reading:

And then for writing:

Some of these should definitely use the consolidated/uniform API, such as read_csv, read_html, read_pickle, and read_excel.

Some functions perhaps should be kept separate, such as read_feather or read_clipboard.

dhimmel commented 7 years ago

Here are my thoughts on the API.

Regarding the consolidated codebase:

jorisvandenbossche commented 7 years ago

That sounds great! If you would like to work towards this, that would be very welcome.

Regarding the py2/py3 separation, I think we should just do what is most practical here (a certain amount of separation makes the code clearer, but too much separation can make it more complex again; in any case, a few scattered if PY2 statements are also rather easy to delete). But if all related code is contained in io/common.py, it should not be too difficult to find a good balance within that one file.

One more consolidation that would be possible for read_csv is between the Python and C engines. I think the C engine still has its own logic for handling compression, and I do not think this needs to live in the Cython/C code (I don't think this is the performance-sensitive part?).
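As a self-contained illustration of that point (this is just a sketch, not how the C engine is currently wired up): decompression can happen in pure Python, with the C parser only ever seeing an already-decoded text buffer.

```python
import gzip
import io

import pandas as pd

# Build a small gzip-compressed CSV in memory so the example is self-contained.
raw = b"a,b\n1,2\n3,4\n"
buf = io.BytesIO(gzip.compress(raw))

# Decompress and decode in pure Python, then hand a plain text handle to the
# C engine; nothing about compression has to live in the Cython/C parser.
text = io.TextIOWrapper(gzip.GzipFile(fileobj=buf), encoding="utf-8")
df = pd.read_csv(text, engine="c")
print(df)
```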

dhimmel commented 7 years ago

If you would like to work towards this, that would be very welcome.

Let's wait for #13317 and any other IO PRs that I don't know about to be merged. I'm hesitant to commit, since I know it will cut into my other obligations, but if no one else is interested in implementing it, I'll consider it.

I think we should just do what is most practical here

Totally agree. There are still a few things I need to understand before I can make that call. One issue is mode in _get_handle, which is currently poorly documented. Presumably this could include t for text or b for bytes, which will have some interaction with Python 2 vs Python 3.
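For reference, a quick standard-library illustration of what the t/b distinction means under Python 3 (this is plain open(), not _get_handle itself):

```python
import tempfile

# Write a small UTF-8 file with a non-ASCII character so the encoding matters.
with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as tmp:
    tmp.write("café\n".encode("utf-8"))

with open(tmp.name, "rt", encoding="utf-8") as fh:  # text mode -> str
    print(type(fh.read()))  # <class 'str'>

with open(tmp.name, "rb") as fh:  # binary mode -> bytes
    print(type(fh.read()))  # <class 'bytes'>
```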

I think the c engine still has its own logic for handling compression, while I do not think this is needed to be in the cython/c code

Agreed that the C engine implementation should be consolidated, unless there is a major performance issue. But the functionality duplicated with _get_handle appears not to be C-optimized (I'm not sure, as I don't know Cython).

jreback commented 7 years ago

@dhimmel can you annotate the above (or maybe make it a table)?

add an x/check for whether each one supports pathlib-like things / compression / URLs

goldenbull commented 7 years ago

agree! 👍 I'm now working on #13317 and found _get_handle a bit complex to understand. _get_handle needs to deal with various situations:

It seems better to split _get_handle into two or more functions so that each individual function is simpler.
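For instance, one hypothetical decomposition along those lines (these helper names do not exist in pandas; this is only to illustrate the suggested split):

```python
import bz2
import gzip
import io


def _open_compressed(path, compression):
    """Return a binary handle, decompressing if requested."""
    if compression == "gzip":
        return gzip.open(path, "rb")
    if compression == "bz2":
        return bz2.open(path, "rb")
    return open(path, "rb")


def _wrap_text(binary_handle, encoding):
    """Layer text decoding on top of a binary handle."""
    return io.TextIOWrapper(binary_handle, encoding=encoding or "utf-8")


def _get_handle(path, compression=None, encoding=None):
    """Existing entry point, now composed from the two smaller steps."""
    return _wrap_text(_open_compressed(path, compression), encoding)
```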

jreback commented 6 years ago

@gfyoung can you evaluate this issue, e.g. close, tick boxes, etc.

gfyoung commented 6 years ago

@jreback : This looks to be a much more substantial refactoring at the moment. The checkboxes were more of an enumeration of methods than of actual tasks, AFAICT.

VelizarVESSELINOV commented 2 years ago

Request for API consistency between to_sql and to_gbq: .to_sql(index=True, ...) lets the caller choose whether to write the index, while .to_gbq has no such option and the index is ignored all the time (see the sketch after the list below).

Desired solution:

  1. To have the same option in both functions
  2. To have the same default value
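
A small sketch of the asymmetry (the to_gbq call is commented out because it needs BigQuery credentials; the "index is ignored" behavior is as reported above):

```python
import sqlite3

import pandas as pd

df = pd.DataFrame({"a": [1, 2]}, index=pd.Index([10, 20], name="id"))

# to_sql exposes an `index` keyword, so the caller decides whether to write it.
with sqlite3.connect(":memory:") as con:
    df.to_sql("my_table", con, index=True)

# to_gbq (as reported above) has no such keyword; the index is always ignored.
# Commented out because it requires BigQuery credentials:
# df.to_gbq("my_dataset.my_table", project_id="my-project")
```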

Do you prefer having a separate ticket?

dhimmel commented 2 years ago

Do you prefer having a separate ticket?

Yes, the index parameter is outside the scope of this issue, which is focused on specifying the input data location and the corresponding compression.