@sunchao Would you mind commenting on this issue? I would like to know your opinion and any suggestions you have on the write support and overall direction, or if there is anything that I missed.
I am happy to take ownership of this work; my original plan was to explore this in my branch and create incremental PRs to support write functionality.
Let me know what you think and if there are any changes you want to make. Thanks!
Thanks @sadikovi, this will be a great feature to have. I'm not very familiar with the write path so far, but I will take a look at the existing implementations and add my input. I do suggest we make this an umbrella issue and break the task into pieces (just discovered this feature in GitHub).
Sorry for the delay. I can finally start working on this; I will experiment in my branch and see what happens.
No worries at all! I was also busy with some other things recently and haven't done much on parquet-rs... I do plan to start looking at arrow-parquet integration, as well as understanding how write can be implemented for parquet-rs.
Initial write support is proposed in https://github.com/sunchao/parquet-rs/pull/127 and is considered WIP at the moment. The PR implements a low-level write API, which means we provide some basic traits and structs for the user to write data: FileWriter, RowGroupWriter, ColumnWriter, and PageWriter. The user must take care of converting nested values into flat values plus definition and repetition levels.
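To illustrate what that conversion means, here is a minimal, hedged sketch for a single OPTIONAL INT32 column. The paths and signatures follow the later released parquet crate that grew out of this work (the PR itself may differ), and write_optional_i32 is just an illustrative helper name:

```rust
use parquet::column::writer::ColumnWriterImpl;
use parquet::data_type::Int32Type;
use parquet::errors::Result;

/// Writes nullable i32 values to an OPTIONAL INT32 leaf column.
/// Nulls are not written as values; they are encoded via definition levels.
fn write_optional_i32(
    writer: &mut ColumnWriterImpl<Int32Type>,
    logical: &[Option<i32>],
) -> Result<usize> {
    // Keep only the non-null values.
    let values: Vec<i32> = logical.iter().filter_map(|v| *v).collect();
    // Definition level 1 = value present, 0 = null (max definition level is 1
    // for a top-level OPTIONAL field).
    let def_levels: Vec<i16> = logical.iter().map(|v| i16::from(v.is_some())).collect();
    // No repetition levels, since the field is not inside a repeated group.
    writer.write_batch(&values, Some(&def_levels), None)
}
```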
Unfortunately, the PR contains some other code for metadata conversion, etc., which might complicate the review process. Below I give an overview of the design and how it all works, and then list gotchas and limitations.
You can have a look at src/bin/parquet-write.rs. It contains simple code to show the workflow, but below is an explanation of what each interface does.
The user creates a FileWriter from a file, an input schema and writer properties. We assume that the file is newly created and nothing has been written to it. Currently this is not enforced; let me know if it is required.
Each FileWriter can write 0 or more row groups. For this, the user asks for a new RowGroupWriter. Note that we return the actual struct, not a reference. This is done to ease problems with lifetimes when using the API, but it creates some other problems, like tracking row groups - I added some simple code there, but I am not sure it is enough to assume that users are going to follow the convention.
All writes are sequential: when the user asks for a row group, she cannot write data in parallel; it must be done row group by row group. Each RowGroupWriter gives access to a certain number of ColumnWriters, which is determined by the number of leaf nodes in the schema.
The user is expected to write column by column; in fact, this is enforced in the implementation. Every time the user asks for a new column, we automatically close the previous one. The user does not need to close the column writer; everything is closed automatically under the hood (this applies to row groups as well).
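To make the flow concrete, here is a rough end-to-end sketch. It is written against the API lineage of the released parquet crate, so exact paths, pointer types (Rc vs Arc) and the explicit close_column/close_row_group calls are assumptions and may differ from the PR, which closes writers automatically:

```rust
use std::{fs::File, rc::Rc};

use parquet::{
    column::writer::ColumnWriter,
    file::{
        properties::WriterProperties,
        writer::{FileWriter, RowGroupWriter, SerializedFileWriter},
    },
    schema::parser::parse_message_type,
};

fn main() {
    // A single REQUIRED INT32 leaf, so each row group has exactly one column.
    let schema = Rc::new(
        parse_message_type("message sample { REQUIRED INT32 id; }").unwrap(),
    );
    let props = Rc::new(WriterProperties::builder().build());
    let file = File::create("sample.parquet").unwrap();

    // FileWriter is created from a file, an input schema and writer properties.
    let mut file_writer = SerializedFileWriter::new(file, schema, props).unwrap();

    // One row group; columns must be written one after another.
    let mut row_group_writer = file_writer.next_row_group().unwrap();
    while let Some(mut col_writer) = row_group_writer.next_column().unwrap() {
        if let ColumnWriter::Int32ColumnWriter(ref mut typed) = col_writer {
            // No definition/repetition levels because the column is REQUIRED.
            typed.write_batch(&[1, 2, 3], None, None).unwrap();
        }
        row_group_writer.close_column(col_writer).unwrap();
    }
    file_writer.close_row_group(row_group_writer).unwrap();
    file_writer.close().unwrap();
}
```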
Technically, RowGroupWriter creates ColumnWriter(PageWriter); the PageWriter is responsible for low-level writes of pages into the sink (Write + Position). We write a CompressedPage, which is a mirror of the Page enum that allows us to store the compressed buffer and the uncompressed length. The page writer also maintains several metrics, and the same goes for the row group writer.
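For context, here is a self-contained, illustrative sketch of that idea (not the PR's actual definitions): a compressed mirror of the page enum that carries the uncompressed length, written to a sink that is both Write and Position. All type names below are simplified stand-ins:

```rust
use std::io::Write;

/// Stand-in for the page enum in src/column/page.rs (variants simplified).
pub enum Page {
    DataPage { buf: Vec<u8>, num_values: u32 },
    DictionaryPage { buf: Vec<u8>, num_values: u32 },
}

/// Mirror of `Page` whose buffer has already been run through the codec;
/// the uncompressed length is kept for page headers and writer metrics.
pub struct CompressedPage {
    pub compressed_page: Page,
    pub uncompressed_size: usize,
}

/// The extra capability a page writer needs from its sink: the current byte
/// offset, so page and column chunk offsets can be recorded in metadata.
pub trait Position {
    fn pos(&self) -> u64;
}

/// A page writer can then be generic over any `Write + Position` sink.
pub fn write_compressed_page<S: Write + Position>(
    sink: &mut S,
    page: &CompressedPage,
) -> std::io::Result<u64> {
    let offset = sink.pos(); // where this page starts in the file
    match &page.compressed_page {
        Page::DataPage { buf, .. } | Page::DictionaryPage { buf, .. } => sink.write_all(buf)?,
    }
    Ok(offset)
}
```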
The overall API resembles the structure of the read path, including ColumnWriter and ColumnWriterImpl<T>.
The main files are:
- src/bin/parquet-write.rs shows the API in action and writes a simple file.
- src/file/writer.rs contains writer implementations for the file, row group and page writers.
- src/column/writer.rs contains code for the column writer.
- src/column/page.rs contains the page interfaces.
- src/file/properties.rs contains code for writer properties. I think we should have reader options for the read path as well, so the user can set the batch size, for example.
- write_batch and write_mini_batch are supported in the column writer.
- PageWriter.
- None for it.

@sunchao I opened a PR with the initial write support. Could you review it when you have time? We can discuss the details and high-level approaches in the PR. Thanks!
I updated the description with the sub-tasks.
The prototype did not use max_row_group_size to configure the size of the row group, which was a design problem. I think I will get back to it once I start working on the row group writer. We also need to make sure that we use all of the properties except max statistics size in the first version of the writer (statistics are not currently supported in writes).
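For reference, configuring this through writer properties could look roughly like the sketch below; the builder and setter names follow the released parquet crate and may not match the prototype:

```rust
use parquet::basic::Compression;
use parquet::file::properties::WriterProperties;

// Hedged sketch: cap the row group size through writer properties so the
// row group writer decides when to flush, instead of a hard-coded value.
fn writer_props() -> WriterProperties {
    WriterProperties::builder()
        .set_max_row_group_size(1024 * 1024)
        .set_compression(Compression::SNAPPY)
        .build()
}
```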
@sunchao would it be okay to open a PR with just the column writer and some changes in the page writer, and add tests in a separate pull request? It could be difficult to review them in one PR.
@sadikovi Sure. Please go ahead. Thanks.
@sunchao There are 2 stories left to address. Would you like to wait until we fix them, or would you like to close the write support issue and move those 2 stories into separate issues?
It's up to you :) We can tackle them in later releases if you think they do not affect the functionality of the writer and are just improvements.
I moved the last two tasks into separate issues, because they are merely enhancements and do not affect the core functionality of the writers.
I am going to close this issue, indicating that write support has been added and we can start writing files using column writers or building on top of it using Arrow or some other approach.
Any features or bugs that arise will be filed as separate issues.
Thanks!
This is an RFC for write support in this crate.
Prototype: PR #127. The API closely resembles the read support. See the design overview and implementation details in the comments.
Sub-tasks:
- Add reader properties
- Update the batch size (#161)