May I ask why you write locally before uploading?
I read online this is due to the fact that the buffer gets closed automatically, and it can be worked around by overwriting the close() function
I'm pretty sure this was my recommendation when something like this came up in the past. You could also use the memory:// filesystem in fsspec, which contains file-like objects that don't close (and fsspec rsync might then be a natural choice to send the file to the remote).
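A minimal sketch of the memory:// approach, assuming pandas writes through fsspec here and that s3fs is installed; the bucket and key names are placeholders:

```python
import fsspec
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})  # stand-in for the real data

# Write through fsspec's in-memory filesystem; the data stays available
# under the same path even after the writer closes its file handle.
df.to_parquet("memory://output.parquet", engine="fastparquet")

# Copy the in-memory file to S3 (placeholder bucket/key).
mem = fsspec.filesystem("memory")
s3 = fsspec.filesystem("s3")
with mem.open("output.parquet", "rb") as src, s3.open("my-bucket/output.parquet", "wb") as dst:
    dst.write(src.read())
```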
I am reticent to add yet another argument to write(), which has so many already. We could in theory tailor the code to only close files that we had opened, but that's not necessarily what the caller wants, so I think the workaround is not that bad.
I'm pretty sure this was my recommendation when something like this came up in the past.
Could well be! I think I found it in another place on GitHub, or maybe on StackOverflow...
May I ask why you write locally before uploading?
To the buffer, you mean? I did not want to write the file to a local file-system and then copy the file to S3, since I'm in a Lambda here. And with limited expertise on these kinds of things, I just figured writing to a buffer was the best way to do what I wanted to do.
I can live with the workaround, but I just thought it would be nice if writing to Parquet from Pandas could be identical for PyArrow and Fastparquet, though I understand that after a change on your end this would also require applying it in Pandas.
I can live with the workaround, but I just thought it would be nice if writing to Parquet from Pandas could be identical for PyArrow and Fastparquet
I don't intend to spend time on this right now, but contributions are of course welcome. It feels niche enough not to be a priority, especially given both the workaround you found and the memory: filesystem. Furthermore, with simplecache::s3://bucket/file, you would also get the same behaviour of writing to a local file (not memory) and uploading when done.
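A sketch of that simplecache variant, assuming s3fs is installed and credentials come from the environment; the bucket and key are placeholders:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})  # stand-in for the real data

# simplecache stages the file on local disk (e.g. /tmp in a Lambda) and
# uploads it to S3 when the handle is closed.
df.to_parquet("simplecache::s3://my-bucket/output.parquet", engine="fastparquet")
```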
Hi there,
I'm building an AWS Lambda that calls an API and writes the output to an AWS S3 bucket in Parquet format. The relevant code snippets look like this:
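Simplified, it's essentially this (bucket and key are placeholders):

```python
import io
import boto3
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})  # stand-in for the API response

# Serialize the DataFrame to Parquet in an in-memory buffer.
buffer = io.BytesIO()
df.to_parquet(buffer, engine="pyarrow")

# Upload the buffer contents to S3.
s3 = boto3.client("s3")
s3.put_object(Bucket="my-bucket", Key="output.parquet", Body=buffer.getvalue())
```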
So I first create a buffer, and later write it to S3.
Now as you can see I use 'pyarrow' in this case, but I just switched to 'fastparquet' because I need to package the dependencies for the Lambda, and the footprint for 'pyarrow' is huge. When I switch the above code to 'fastparquet', though, it doesn't work.
Now I'm not exactly an expert on this, but I read online this is due to the fact that the buffer gets closed automatically, and it can be worked around by overwriting the close() function.
So I did that, and it does indeed work now, but it seems a bit of a hack to me, and I guess it would be great if 'pyarrow' and 'fastparquet' were compatible on this? Maybe something to look into?
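For completeness, the workaround looks roughly like this (the class name is just illustrative):

```python
import io
import pandas as pd

class UnclosableBytesIO(io.BytesIO):
    # fastparquet closes the file object it is handed, which discards the
    # BytesIO contents; making close() a no-op keeps the buffer readable.
    def close(self):
        pass

df = pd.DataFrame({"a": [1, 2, 3]})  # stand-in for the API response

buffer = UnclosableBytesIO()
df.to_parquet(buffer, engine="fastparquet")

data = buffer.getvalue()  # still accessible; upload this to S3 as before
```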