edwardcapriolo / filecrush

Remedy small files by combining them into larger ones.
192 stars 120 forks

does not compress output files #12

Open alrustamov opened 8 years ago

alrustamov commented 8 years ago

```
$ hadoop jar target/filecrush-2.2.2-SNAPSHOT.jar com.m6d.filecrush.crush.Crush --info --verbose --threshold=0.1 --compress=gzip /user/arustamov/crush{17,18} $(date +%Y%m%d%H%M%S)
outDir is: tmp/crush-a7ea6dac-c48a-483f-a652-1be09b8cfaff/out
Using temporary directory tmp/crush-a7ea6dac-c48a-483f-a652-1be09b8cfaff
16/07/01 09:56:51 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
16/07/01 09:56:51 INFO compress.CodecPool: Got brand-new compressor [.deflate]
16/07/01 09:56:51 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces

/user/arustamov/crush17 has no crushable files
Skipped 2 files
  /user/arustamov/crush17/part-00402
  /user/arustamov/crush17/part-00397

/user/arustamov/crush17/subdir has no crushable files
Skipped 1 files
  /user/arustamov/crush17/subdir/part-00401

Copying crush files to /user/arustamov/crush18

Moving skipped files to /user/arustamov/crush18
  /user/arustamov/crush17/subdir/part-00401 => /user/arustamov/crush18/subdir/part-00401
  /user/arustamov/crush17/part-00402 => /user/arustamov/crush18/part-00402
  /user/arustamov/crush17/part-00397 => /user/arustamov/crush18/part-00397

Deleting temporary directory
```

alrustamov commented 8 years ago

The output is uncompressed files of the same size as the input.
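One quick way to confirm the "uncompressed" claim is to inspect the first two bytes of an output file: gzip data always begins with the magic number `1f 8b`. Below is a minimal sketch of such a check; the file paths are hypothetical examples, not taken from the issue, and for files on HDFS you would stream the bytes through `hdfs dfs -cat` instead of reading locally.

```shell
# Sketch: detect gzip output by its magic number (1f 8b).
is_gzip() {
  # Read the first two bytes and render them as hex with no spacing.
  [ "$(head -c 2 "$1" | od -An -tx1 | tr -d ' ')" = "1f8b" ]
}

# Local demonstration with a known-compressed file (path is illustrative).
printf 'hello' | gzip > /tmp/sample.gz
if is_gzip /tmp/sample.gz; then echo "compressed"; else echo "uncompressed"; fi

# For an HDFS output file, something like:
#   hdfs dfs -cat /user/arustamov/crush18/part-00402 | head -c 2 | od -An -tx1
# should print "1f 8b" if --compress=gzip actually took effect.
```

If the check prints `uncompressed` (and the byte sizes match the inputs exactly), that is consistent with the files having been copied or moved without any codec being applied.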