fingltd / 4mc

4mc - splittable lz4 and zstd in hadoop/spark/flink

how to use 4mc/4mz when write to json or parquet #42

Open zjzzjz1979 opened 5 years ago

zjzzjz1979 commented 5 years ago

I am able to load JSON files compressed with 4mc/4mz into Spark, but writing them back out does not work. Is this expected?

In [53]: df.write.mode('overwrite').option("codec","com.hadoop.compression.fourmc.ZstdCodec").csv('foo')

IllegalArgumentException: 'Codec [com.hadoop.compression.fourmc.ZstdCodec] is not available. Known codecs are bzip2, deflate, uncompressed, lz4, gzip, snappy, none.'
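The error suggests Spark's DataFrame writer validates the codec option against its built-in short names only, so an arbitrary Hadoop `CompressionCodec` class like `com.hadoop.compression.fourmc.ZstdCodec` is rejected there. One possible workaround (a sketch, not verified against this project) is to drop to the RDD API, since `saveAsTextFile` accepts any Hadoop compression codec class name; the `write_json_with_4mz` helper below is hypothetical and assumes the 4mc jar is on the driver and executor classpath:

```python
# Codec class name taken from the error message above; the 4mc jar
# providing it must be on the Spark classpath for this to work.
FOURMC_ZSTD_CODEC = "com.hadoop.compression.fourmc.ZstdCodec"

def write_json_with_4mz(df, path, codec=FOURMC_ZSTD_CODEC):
    """Hypothetical helper: serialize each row to a JSON string and write
    via the RDD API, which (unlike the DataFrame writer's codec option)
    accepts an arbitrary Hadoop CompressionCodec class name."""
    # df.toJSON() returns an RDD of JSON strings, one per row.
    df.toJSON().saveAsTextFile(path, compressionCodecClass=codec)
```

This sidesteps the DataFrame writer's codec whitelist but only covers text-based outputs like JSON or CSV lines; whether Parquet can be paired with 4mc this way is a separate question.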
machielg commented 3 years ago

It would be nice to see some comments or documentation on this; the project seems very promising.