-
I keep bumping into this regardless of which connection method I use:
File "./.venv/lib/python3.11/site-packages/gsheet_pandas/adapter/connection.py", line 46, in _fix_dtypes
df = df.map(lamb…
-
### Description
The current date & datetime inference is somewhat odd: it allows the format/pattern of the parsed value to change between values, but that change can only happen within a YMD or DMY context.
Exam…
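A toy sketch of the kind of per-value inference described above (all names here are my own, not the library's API): each value is tried against the patterns of one family, so the matched pattern may vary between values, but only inside that family.

```python
from datetime import datetime

# One "family" of formats: all YMD, differing only in separator.
YMD_PATTERNS = ["%Y-%m-%d", "%Y/%m/%d", "%Y.%m.%d"]

def parse_ymd(value: str):
    """Try each YMD pattern in turn; the matching pattern may differ per value."""
    for pattern in YMD_PATTERNS:
        try:
            return datetime.strptime(value, pattern).date()
        except ValueError:
            continue
    raise ValueError(f"{value!r} matches no YMD pattern")

# Both values parse, even though they use different separators:
print(parse_ymd("2024-01-31"))  # 2024-01-31
print(parse_ymd("2024/01/31"))  # 2024-01-31
```

A DMY value such as `"31/01/2024"` would be rejected here, since crossing families is exactly the change the inference does not allow.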
-
I want to read/write parquet files in Go and then read/write them in Python/[polars](https://github.com/pola-rs/polars).
It seems that the nested fields (`[]int` in my example) written by one lib cannot …
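One frequent source of this kind of cross-library incompatibility is that Parquet has two common ways to encode a list column (the legacy "2-level" layout vs. the standard 3-level LIST structure), on top of the Dremel-style repetition/definition levels both use. The sketch below is purely illustrative, with a made-up function name, and shows how a `list<int>` column flattens into values plus levels:

```python
def encode_list_column(rows):
    """Toy Dremel-style encoding of a list<int> column.

    rep level 0 starts a new row, 1 continues the current list;
    def level 1 means a value is present, 0 encodes an empty list.
    """
    values, reps, defs = [], [], []
    for row in rows:
        if not row:
            values.append(None)
            reps.append(0)
            defs.append(0)
            continue
        for i, v in enumerate(row):
            values.append(v)
            reps.append(0 if i == 0 else 1)
            defs.append(1)
    return values, reps, defs

print(encode_list_column([[1, 2], [], [3]]))
# ([1, 2, None, 3], [0, 1, 0, 0], [1, 1, 0, 1])
```

A writer and reader that disagree on how the list group is annotated in the schema will interpret these same levels differently, which is consistent with the symptom described.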
-
First of all, thank you for this amazing package!
Recently, I've been loading a lot of large files, and it felt like Arrow.jl loading times are greater than Python's. I wanted to quantify this feeling…
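For a comparison like this, a best-of-N wall-clock harness is a reasonable starting point. The snippet below is a generic stdlib sketch; the lambda is a stand-in loader, to be replaced with the actual Arrow/pyarrow file read being measured:

```python
import time

def benchmark(load, repeats=5):
    """Return the best-of-N wall time for a loader callable.

    Taking the minimum over several runs reduces noise from OS caching
    and scheduling; the first (cold-cache) run is often much slower.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        load()
        best = min(best, time.perf_counter() - start)
    return best

# Stand-in loader; replace with e.g. a pyarrow table read of the real file.
elapsed = benchmark(lambda: sum(range(100_000)))
print(f"best of 5: {elapsed:.6f}s")
```

Comparing best-of-N on both sides (and stating whether the file cache is warm) makes the Julia-vs-Python numbers much easier to interpret.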
-
### Problem description
As mentioned in #11006
Altering a file while scanning it may result in unexpected behavior. We should not allow writing to a file that is being scanned.
-
### Describe the bug, including details regarding any error messages, version, and platform.
The following code produces a segmentation fault with
```
plotly==5.22.0
polars==0.20.31
pyarrow==1…
-
Hi! So I was thinking of making a very similar project with one core difference: having the validator function as a Pandas plugin that takes a Pydantic BaseModel or Dataclass as an input.
For example…
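A rough sketch of the idea (every name here is mine): validate tabular rows against a schema class. A stdlib dataclass stands in for the Pydantic BaseModel, and a list of dicts stands in for the DataFrame rows, to keep the example dependency-free.

```python
import typing
from dataclasses import dataclass

@dataclass
class User:
    name: str
    age: int

def validate_rows(rows, schema):
    """Raise TypeError on the first row whose fields don't match the schema.

    Only handles plain (non-generic) annotations; a real plugin would
    delegate per-field validation to Pydantic itself.
    """
    hints = typing.get_type_hints(schema)
    for i, row in enumerate(rows):
        for name, typ in hints.items():
            if not isinstance(row.get(name), typ):
                raise TypeError(f"row {i}: field {name!r} is not {typ.__name__}")
    return True

validate_rows([{"name": "alice", "age": 30}], User)  # passes
```

As a pandas plugin this would presumably hang off an accessor (e.g. `df.<accessor>.validate(User)`), with the schema class doing the real type work.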
-
### Checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pypi.org/project/polars/) of Polars.
### Repro…
-
### Checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pypi.org/project/polars/) of Polars.
### Reprodu…
-
Figure out the correct procedure for performing incremental writes to a partitioned parquet directory. This is necessary when working with files that exceed available memory. Dask dataframe supports "app…
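The directory-layout half of an incremental "append" can be sketched with the stdlib alone. This follows the common hive-style `key=value` partition convention and writes each chunk as a new, uniquely named file so appends never clobber earlier chunks; the plain-text write is a stand-in for a real parquet writer (e.g. `pyarrow.parquet.write_table`):

```python
import os
import uuid

def append_chunk(root, partition_key, partition_value, rows):
    """Write one chunk as a fresh uniquely-named file in its partition dir.

    Because every call creates a new file, repeated appends to the same
    partition accumulate chunk files instead of overwriting data.
    """
    part_dir = os.path.join(root, f"{partition_key}={partition_value}")
    os.makedirs(part_dir, exist_ok=True)
    path = os.path.join(part_dir, f"part-{uuid.uuid4().hex}.txt")
    with open(path, "w") as f:  # stand-in for a parquet writer
        for row in rows:
            f.write(f"{row}\n")
    return path
```

Readers that treat the partition directory as a dataset then pick up all chunk files together, which is essentially how append-style partitioned writes work in Dask and pyarrow datasets.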