ironSource / parquetjs

fully asynchronous, pure JavaScript implementation of the Parquet file format

parquet write to s3 is not queryable by Athena #124

Open zrizvi93 opened 3 years ago

zrizvi93 commented 3 years ago

Hello,

The parquet file generated by this library is compatible with Spark but not queryable using Athena.

I wrote a file to S3, and all array columns would break with the error GENERIC_INTERNAL_ERROR: null

AWS Premium Support told me that, after doing a bit of research, the main reason for these issues is the different ways parquet files can be created. After inspecting the sample data I provided with parquet-tools, they informed me that it is written in a hybrid parquet format: parquetjs allows the final parquet file to exclude a column if that column is blank in the data. For example, if a record (row) does not have any value for the "x" column, then the "x" column is omitted from the actual parquet file itself.

Athena uses the Hive parquet SerDe (org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe), which expects all columns to be present in the source parquet file. In this case, the empty columns are not included within the record (row) of the parquet file. Unfortunately, that Hive parquet SerDe is the only parquet SerDe supported in Athena.
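
For completeness, parquetjs also lets a schema field be declared optional, in which case a null is encoded for a missing value instead of the column being dropped. Whether that alone is enough for Athena is not confirmed in this thread, so treat the following as a minimal sketch with hypothetical column names:

const parquetjs = require('parquetjs');

// Mark every column that can be blank as optional so a null is written
// for the row rather than the column being omitted from the file.
const schema = new parquetjs.ParquetSchema({
    'id': { type: 'UTF8' },
    'x': { type: 'UTF8', optional: true } // may be missing in some rows
});

(async () => {
    const writer = await parquetjs.ParquetWriter.openFile(schema, './sample.parquet');
    await writer.appendRow({ 'id': 'r1', 'x': 'present' });
    await writer.appendRow({ 'id': 'r2' }); // 'x' is encoded as null, not omitted
    await writer.close();
})();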

kyanet commented 3 years ago

I'm not sure of the details, but I had the same problem, and this option solved it for me: useDataPageV2: false

ngocdaothanh commented 3 years ago

useDataPageV2: false

Would you please give more details about that workaround? Where/how do you set that option?

kyanet commented 3 years ago

@ngocdaothanh One way to create a valid parquet file for Athena is as follows:

const parquetjs = require('parquetjs');
const data = [
    { 'col1': 'a1', 'col2': 'b1' },
    { 'col1': 'a2', 'col2': 'b2' },
    { 'col1': 'a3', 'col2': 'b3' }
];
const schema = new parquetjs.ParquetSchema({
    'col1': { type: 'UTF8' },
    'col2': { type: 'UTF8' }
});
(async () => {
    const writer = await parquetjs.ParquetWriter.openFile(
        schema,
        './sample.parquet',
        { useDataPageV2: false });
    // appendRow is async, so await each row instead of using forEach
    for (const d of data) {
        await writer.appendRow(d);
    }
    await writer.close();
})();
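
Since the original report writes to S3, here is a minimal sketch of the same workaround with the output streamed straight to S3. It assumes the AWS SDK v2 and a hypothetical bucket and key; parquetjs.ParquetWriter.openStream accepts any writable stream:

const stream = require('stream');
const AWS = require('aws-sdk');
const parquetjs = require('parquetjs');

(async () => {
    const schema = new parquetjs.ParquetSchema({
        'col1': { type: 'UTF8' },
        'col2': { type: 'UTF8' }
    });

    // Pipe the parquet bytes through a PassThrough stream into an S3 upload.
    const passThrough = new stream.PassThrough();
    const s3 = new AWS.S3();
    const upload = s3.upload({
        Bucket: 'my-bucket',    // hypothetical bucket
        Key: 'sample.parquet',  // hypothetical key
        Body: passThrough
    }).promise();

    const writer = await parquetjs.ParquetWriter.openStream(
        schema,
        passThrough,
        { useDataPageV2: false });
    await writer.appendRow({ 'col1': 'a1', 'col2': 'b1' });
    await writer.close(); // ends the stream so the upload can complete

    await upload;
})();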

kyanet commented 3 years ago

This issue is closely related to https://github.com/ironSource/parquetjs/issues/92.