ckolivas / lrzip

Long Range Zip
http://lrzip.kolivas.org
GNU General Public License v2.0

lrzip silently creating a 24-byte file #147

Closed: a3nm closed this issue 3 years ago

a3nm commented 4 years ago

Hi,

I'm using lrzip version 0.631+git180528-1+b1 from Debian testing. When running lrzip on a 9.8 GB mbox file, lrzip runs for around 3 minutes and returns an exit status suggesting it has succeeded, but the created .lrz file contains only 24 NUL bytes.

Here's what I did specifically:

$ ls -lh local_backup_1
-rw------- 1 a3nm browser_tor 9.8G Apr 13 12:38 local_backup_1
$ time nice ionice -c 3 lrzip -L9 -z -vvvvvvvvvvvvvvvvvv local_backup_1 && echo success
Warning, unable to set nice value
The following options are in effect for this COMPRESSION.
Threading is ENABLED. Number of CPUs detected: 4
Detected 8229158912 bytes ram
Compression level 9
Nice Value: 19
Show Progress
Max Verbose
Temporary Directory set as: ./
Compression mode is: ZPAQ. LZO Compressibility testing enabled
Heuristically Computed Compression Window: 52 = 5200MB
Storage time in seconds 1367075633
Output filename is: local_backup_1.lrz
File size: 10497691808
Enabling sliding mmap mode and using mmap of 2743050240 bytes with window of 5486104576 bytes
Succeeded in testing 2743050240 sized mmap for rzip pre-processing
Will take 2 passes
Chunk size: 5486104576
Byte width: 5
Succeeded in testing 1665127765 sized malloc for back end compression
Using up to 5 threads to compress up to 215585041 bytes each.
Beginning rzip pre-processing phase
hashsize = 4194304.  bits = 22. 64MB
Starting sweep for mask 1
Starting sweep for mask 3
Starting sweep for mask 7
Starting sweep for mask 15
Starting sweep for mask 31
Starting sweep for mask 63
Starting thread 0 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 97.89% of chunk, 1 Passes
Starting sweep for mask 127Q    1:0%  
Starting thread 1 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 92.22% of chunk, 1 Passes
Starting thread 2 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 52.04% of chunk, 1 Passes
Starting sweep for mask 255Q    1:10%       3:0%  
Starting thread 3 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 81.91% of chunk, 1 Passes
nice ionice -c 3 lrzip -L9 -z -vvvvvvvvvvvvvvvvvv local_backup_1  448.40s user 10.06s system 276% cpu 2:45.88 total
success
$ ls -lh local_backup_1.lrz 
-rw------- 1 a3nm browser_tor 24 Apr 13 14:55 local_backup_1.lrz

Any idea about why that might be happening? The fact that the return code is 0 is especially treacherous.

Thanks!

pete4abw commented 4 years ago

The git reference, git180528-1+b1, is Debian's, not lrzip's. The option -vvvvvvvvv etc. has no meaning beyond -vv. The version you are using is from late May 2018! I suggest a few things:

  1. Check your disk space. If $TMP or $TMPDIR is not set, the current directory is used. IIRC there was a bug where lrzip would fail silently if it ran out of disk space (see the sketch after this list).
  2. Pull the current version from git and try in a local directory (i.e. compile, but don't do make install).
  3. lrzip uses a nice scheduler internally. No need for extra preamble.
  4. Try a lower level. -L9 will take forever on a 10GB file.
  5. If you feel lucky, try my patch for zpaq 7.15.
  6. The 24-byte file is just a placeholder for the lrzip header, which stays unfilled until the run finishes.
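
To illustrate the kind of disk-space check point 1 refers to, here is a minimal C sketch. It is not lrzip's actual code; it just resolves a temporary directory (TMPDIR, then TMP, then the current directory, order illustrative) and reports the free space there via statvfs:

#include <stdio.h>
#include <stdlib.h>
#include <sys/statvfs.h>

int main(void)
{
    const char *dir = getenv("TMPDIR");
    struct statvfs vfs;
    unsigned long long avail;

    if (!dir)
        dir = getenv("TMP");
    if (!dir)
        dir = ".";    /* fall back to the current directory */

    if (statvfs(dir, &vfs) != 0) {
        perror("statvfs");
        return 1;
    }
    /* f_bavail * f_frsize = bytes available to unprivileged users */
    avail = (unsigned long long)vfs.f_bavail * vfs.f_frsize;
    printf("%s: %llu bytes free\n", dir, avail);
    return 0;
}

A check along these lines, run before the rzip pass, would let a run abort with a clear error instead of leaving a 24-byte stub behind.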

Good luck.

The Debian maintainer, @gcsideal, has been notified that the package is a year behind.

a3nm commented 4 years ago

Hi,

Thanks for your help. I have compiled the latest version from master and I am seeing exactly the same behavior.

To clarify, what I think is the main issue here is not that lrzip fails (be it because of the high compression level, available disk space, etc.), but that it fails silently, without an error message and with a return value indicating success.

$ time nice ionice -c 3 ~/apps/lrzip/lrzip -L9 -z -vvvvvvvvvvvvvvvvvv local_backup_1 && echo success      
Warning, unable to set nice value 19...Resetting to 10
The following options are in effect for this COMPRESSION.
Threading is ENABLED. Number of CPUs detected: 4
Detected 8229158912 bytes ram
Compression level 9
Nice Value: 10
Show Progress
Max Verbose
Temporary Directory set as: ./
Compression mode is: ZPAQ. LZO Compressibility testing enabled
Heuristically Computed Compression Window: 52 = 5200MB
Storage time in seconds 1367078429
Output filename is: local_backup_1.lrz
File size: 10497691808
Enabling sliding mmap mode and using mmap of 2743050240 bytes with window of 5486104576 bytes
Succeeded in testing 2743050240 sized mmap for rzip pre-processing
Will take 2 passes
Chunk size: 5486104576
Byte width: 5
Succeeded in testing 1665127765 sized malloc for back end compression
Using up to 5 threads to compress up to 215585041 bytes each.
Beginning rzip pre-processing phase
hashsize = 4194304.  bits = 22. 64MB
Starting sweep for mask 1
Starting sweep for mask 3
Starting sweep for mask 7
Starting sweep for mask 15
Starting sweep for mask 31
Starting sweep for mask 63
Starting thread 0 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 97.89% of chunk, 1 Passes
Starting sweep for mask 127Q    1:0%  
Starting thread 1 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 92.22% of chunk, 1 Passes
Starting thread 2 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 52.04% of chunk, 1 Passes
Starting sweep for mask 255Q            3:0%  
Starting thread 3 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 81.91% of chunk, 1 Passes
nice ionice -c 3 ~/apps/lrzip/lrzip -L9 -z -vvvvvvvvvvvvvvvvvv local_backup_1  1047.89s user 15.37s system 323% cpu 5:28.96 total
success
$ ls -lh local_backup_1.lrz
-rw------- 1 a3nm browser_tor 24 Apr 13 18:01 local_backup_1.lrz

I can try other ways to invoke lrzip following what you suggested, but I believe there is a bug here nevertheless.

Wrt your point 1, there is no $TMP or $TMPDIR set, but the current directory has 1.2 TB available.

pete4abw commented 4 years ago

As @ckolivas likes to say, "cowardly failing"! This is not good. Why do you use -vvvvvvv? Without seeing exactly where the failure occurs, it's hard to diagnose. Please provide your system info, including processor specs. I notice only 4 cores. Now, I have a few extra hoops for you to jump through.

  1. Make sure you pull the updated master here, and test.
  2. Apply this PR #146 and test.
  3. Try my local fork, which has many changes to lrzip, including the lzma SDK 19 and other enhancements, and test.
  4. Try lower levels, -L1,2,3, etc.
  5. Try lower thread values, -p1,2,3, etc.

It's entirely possible that the overhead memory requirement for zpaq is being miscalculated at this high level. Let's eliminate other programs: don't use nice and ionice. As explained, lrzip does this already, though it can be overridden with -N## (the default is 19). Use -vv, not -vvvvvvv.

a3nm commented 4 years ago

Hi,

Here is the relevant part of the output of lshw:

mu                          
    description: Desktop Computer
    product: H87M-HD3 (To be filled by O.E.M.)
    vendor: Gigabyte Technology Co., Ltd.
    version: To be filled by O.E.M.
    serial: To be filled by O.E.M.
    width: 64 bits
    capabilities: smbios-2.7 dmi-2.7 smp vsyscall32
    configuration: administrator_password=disabled boot=normal chassis=desktop family=To be filled by O.E.M. frontpanel_password=disabled keyboard_password=disabled power-on_password=disabled sku=To be filled by O.E.M. uuid=9402DE03-8004-BA05-7406-950700080009
  *-core
       description: Motherboard
       product: H87M-HD3
       vendor: Gigabyte Technology Co., Ltd.
       physical id: 0
       version: x.x
       serial: To be filled by O.E.M.
       slot: To be filled by O.E.M.
     *-firmware
          description: BIOS
          vendor: American Megatrends Inc.
          physical id: 0
          version: F3
          date: 05/09/2013
          size: 64KiB
          capacity: 8MiB
          capabilities: pci upgrade shadowing cdboot bootselect socketedrom edd int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int9keyboard int14serial int17printer acpi usb biosbootspecification uefi
     *-cache:0
          description: L1 cache
          physical id: 4
          slot: CPU Internal L1
          size: 256KiB
          capacity: 256KiB
          capabilities: internal write-back
          configuration: level=1
     *-cache:1
          description: L2 cache
          physical id: 5
          slot: CPU Internal L2
          size: 1MiB
          capacity: 1MiB
          capabilities: internal write-back unified
          configuration: level=2
     *-cache:2
          description: L3 cache
          physical id: 6
          slot: CPU Internal L3
          size: 6MiB
          capacity: 6MiB
          capabilities: internal write-back unified
          configuration: level=3
     *-memory
          description: System Memory
          physical id: 7
          slot: System board or motherboard
          size: 8GiB
        *-bank:0
             description: DIMM [empty]
             product: [Empty]
             vendor: [Empty]
             physical id: 0
             serial: [Empty]
             slot: ChannelA-DIMM0
        *-bank:1
             description: DIMM [empty]
             product: [Empty]
             vendor: [Empty]
             physical id: 1
             serial: [Empty]
             slot: ChannelA-DIMM1
        *-bank:2
             description: DIMM DDR3 Synchronous 1333 MHz (0.8 ns)
             product: BLT8G3D1608DT1TX0.
             vendor: Conexant (Rockwell)
             physical id: 2
             serial: 16011924
             slot: ChannelB-DIMM0
             size: 8GiB
             width: 64 bits
             clock: 1333MHz (0.8ns)
        *-bank:3
             description: DIMM [empty]
             product: [Empty]
             vendor: [Empty]
             physical id: 3
             serial: [Empty]
             slot: ChannelB-DIMM1
     *-cpu
          description: CPU
          product: Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz
          vendor: Intel Corp.
          physical id: 41
          bus info: cpu@0
          version: Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz
          slot: SOCKET 0
          size: 2290MHz
          capacity: 3600MHz
          width: 64 bits
          clock: 100MHz
          capabilities: lm fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp x86-64 constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm cpuid_fault invpcid_single pti tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm xsaveopt dtherm ida arat pln pts cpufreq
          configuration: cores=4 enabledcores=1

I am using the latest master. I have tried without nice or ionice and with -vv, same problem:

$ time ~/apps/lrzip/lrzip -L9 -z -vv local_backup_1 && echo SUCCESS       
The following options are in effect for this COMPRESSION.
Threading is ENABLED. Number of CPUs detected: 4
Detected 8229158912 bytes ram
Compression level 9
Nice Value: 19
Show Progress
Max Verbose
Temporary Directory set as: ./
Compression mode is: ZPAQ. LZO Compressibility testing enabled
Heuristically Computed Compression Window: 52 = 5200MB
Storage time in seconds 1367079251
Output filename is: local_backup_1.lrz
File size: 10497691808
Enabling sliding mmap mode and using mmap of 2743050240 bytes with window of 5486104576 bytes
Succeeded in testing 2743050240 sized mmap for rzip pre-processing
Will take 2 passes
Chunk size: 5486104576
Byte width: 5
Succeeded in testing 1665127765 sized malloc for back end compression
Using up to 5 threads to compress up to 215585041 bytes each.
Beginning rzip pre-processing phase
hashsize = 4194304.  bits = 22. 64MB
Starting sweep for mask 1
Starting sweep for mask 3
Starting sweep for mask 7
Starting sweep for mask 15
Starting sweep for mask 31
Starting sweep for mask 63
Starting thread 0 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 97.89% of chunk, 1 Passes
Starting sweep for mask 127Q    1:0%  
Starting thread 1 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 92.22% of chunk, 1 Passes
Starting thread 2 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 52.04% of chunk, 1 Passes
Starting sweep for mask 255Q            3:0%  
Starting thread 3 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 81.91% of chunk, 1 Passes
Starting thread 4 to compress 215585041 bytes from stream 10%  
lzo testing OK for chunk 215585041. Compressed size = 96.32% of chunk, 1 Passes
~/apps/lrzip/lrzip -L9 -z -vv local_backup_1  2474.24s user 40.25s system 335% cpu 12:29.53 total
SUCCESS
# but it creates a 24-byte output file

I have tried with -L1 instead, and this time it creates a file of a reasonable size:

$ time ~/apps/lrzip/lrzip -L1 -z -vv local_backup_1 && echo SUCCESS
The following options are in effect for this COMPRESSION.
Threading is ENABLED. Number of CPUs detected: 4
Detected 8229158912 bytes ram
Compression level 1
Nice Value: 19
Show Progress
Max Verbose
Temporary Directory set as: ./
Compression mode is: ZPAQ. LZO Compressibility testing enabled
Heuristically Computed Compression Window: 52 = 5200MB
Storage time in seconds 1367079490
Output filename is: local_backup_1.lrz
File size: 10497691808
Enabling sliding mmap mode and using mmap of 2743050240 bytes with window of 5486104576 bytes
Succeeded in testing 2743050240 sized mmap for rzip pre-processing
Will take 2 passes
Chunk size: 5486104576
Byte width: 5
Succeeded in testing 1665127765 sized malloc for back end compression
Using up to 5 threads to compress up to 215585041 bytes each.
Beginning rzip pre-processing phase
hashsize = 131072.  bits = 17. 2MB
Starting sweep for mask 15
Starting sweep for mask 31
Starting sweep for mask 63
Starting sweep for mask 127
Starting sweep for mask 255
Starting sweep for mask 511
Starting sweep for mask 1023
Starting sweep for mask 2047
Starting thread 0 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 94.51% of chunk, 1 Passes
Starting sweep for mask 4095    1:40%  
Starting thread 1 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 93.25% of chunk, 1 Passes
Starting thread 2 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 92.04% of chunk, 1 Passes
Writing initial chunk bytes value 5 at 24   3:0%  
Writing EOF flag as 0
Writing initial header at 31
Compthread 0 seeking to 27 to store length 5
Compthread 0 seeking to 32 to write header
Thread 0 writing 147630458 compressed bytes from stream 1
Compthread 0 writing data at 48
Starting thread 3 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 84.75% of chunk, 1 Passes
Starting sweep for mask 8191                4:0%  
Starting thread 4 to compress 215585041 bytes from stream 10%  
lzo testing OK for chunk 215585041. Compressed size = 73.63% of chunk, 1 Passes
Compthread 1 seeking to 43 to store length 50%      4:20%   5:0%  
Compthread 1 seeking to 147630506 to write header
Thread 1 writing 112533251 compressed bytes from stream 1
Compthread 1 writing data at 147630522      3:50%  
Starting thread 0 to compress 215585041 bytes from stream 1 5:10%  
lzo testing OK for chunk 215585041. Compressed size = 82.98% of chunk, 1 Passes
Compthread 2 seeking to 147630517 to store length 500%  4:70%   5:60%  
Compthread 2 seeking to 260163773 to write header
Thread 2 writing 107718151 compressed bytes from stream 1
Compthread 2 writing data at 260163789
Starting thread 1 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 86.82% of chunk, 1 Passes
Compthread 3 seeking to 260163784 to store length 5 4:100%  5:90%  
Compthread 3 seeking to 367881940 to write header
Thread 3 writing 123562290 compressed bytes from stream 1
Compthread 3 writing data at 367881956
Starting thread 2 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 88.07% of chunk, 1 Passes
Compthread 4 seeking to 367881951 to store length 5%        5:100%  
Compthread 4 seeking to 491444246 to write header
Thread 4 writing 105142061 compressed bytes from stream 1
Compthread 4 writing data at 491444262
Starting sweep for mask 16383
Starting thread 3 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 92.20% of chunk, 1 Passes
Compthread 0 seeking to 491444257 to store length 50%   4:0%  
Compthread 0 seeking to 596586323 to write header
Thread 0 writing 110189263 compressed bytes from stream 1
Compthread 0 writing data at 596586339
Starting thread 4 to compress 215585041 bytes from stream 10%  
lzo testing OK for chunk 215585041. Compressed size = 89.69% of chunk, 1 Passes
Sliding main buffer to offset 27430502402:80%   3:50%   4:40%   5:30%  
Compthread 1 seeking to 596586334 to store length 50%   4:50%   5:40%  
Compthread 1 seeking to 706775602 to write header
Thread 1 writing 119573838 compressed bytes from stream 1
Compthread 1 writing data at 706775618
Starting thread 0 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 66.31% of chunk, 1 Passes
Compthread 2 seeking to 706775613 to store length 500%  4:80%   5:70%  
Compthread 2 seeking to 826349456 to write header
Thread 2 writing 108745491 compressed bytes from stream 1
Compthread 2 writing data at 826349472
Starting thread 1 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 60.25% of chunk, 1 Passes
Compthread 3 seeking to 826349467 to store length 5 4:100%  5:80%  
Compthread 3 seeking to 935094963 to write header
Thread 3 writing 128615042 compressed bytes from stream 1
Compthread 3 writing data at 935094979
Starting thread 2 to compress 215585041 bytes from stream 1 5:90%  
lzo testing OK for chunk 215585041. Compressed size = 84.57% of chunk, 1 Passes
Compthread 4 seeking to 935094974 to store length 5%        5:100%  
Compthread 4 seeking to 1063710021 to write header
Thread 4 writing 118293647 compressed bytes from stream 1
Compthread 4 writing data at 1063710037
Starting thread 3 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 61.22% of chunk, 1 Passes
Compthread 0 seeking to 1063710032 to store length 5%   4:30%  
Compthread 0 seeking to 1182003684 to write header
Thread 0 writing 116529185 compressed bytes from stream 1
Compthread 0 writing data at 1182003700
Starting thread 4 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 94.60% of chunk, 1 Passes
Compthread 1 seeking to 1182003695 to store length 5%   4:60%   5:20%  
Compthread 1 seeking to 1298532885 to write header
Thread 1 writing 113234903 compressed bytes from stream 1
Compthread 1 writing data at 1298532901
Starting thread 0 to compress 215585041 bytes from stream 1 5:30%  
lzo testing OK for chunk 215585041. Compressed size = 69.12% of chunk, 1 Passes
Compthread 2 seeking to 1298532896 to store length 50%  4:80%   5:40%  
Compthread 2 seeking to 1411767804 to write header
Thread 2 writing 120401215 compressed bytes from stream 1
Compthread 2 writing data at 1411767820
Starting thread 1 to compress 215585041 bytes from stream 10%   5:50%  
lzo testing OK for chunk 215585041. Compressed size = 73.49% of chunk, 1 Passes
Compthread 3 seeking to 1411767815 to store length 5    4:100%  
Compthread 3 seeking to 1532169035 to write header
Thread 3 writing 106852761 compressed bytes from stream 1
Compthread 3 writing data at 1532169051
Starting sweep for mask 32767   1:30%               5:60%  
Starting thread 2 to compress 215585041 bytes from stream 1
lzo testing OK for chunk 215585041. Compressed size = 82.53% of chunk, 1 Passes
Starting thread 3 to compress 215585041 bytes from stream 1 5:90%  
lzo testing OK for chunk 215585041. Compressed size = 57.75% of chunk, 1 Passes
Compthread 4 seeking to 1532169046 to store length 5%   4:10%   5:100%  
Compthread 4 seeking to 1639021812 to write header
Thread 4 writing 118527355 compressed bytes from stream 1
Compthread 4 writing data at 1639021828
Starting thread 4 to compress 215585041 bytes from stream 10%  
lzo testing OK for chunk 215585041. Compressed size = 83.03% of chunk, 1 Passes
Sliding main buffer to offset 54861004802:60%           5:0%  
87380 total hashes -- 3 in primary bucket (0.003%)
Malloced 2743050240 for checksum ckbuf
~/apps/lrzip/lrzip -L1 -z -vv local_backup_1  1840.16s user 33.00s system 308% cpu 10:06.39 total
SUCCESS
$ ls -lh local_backup_1.lrz
-rw-------  1 a3nm browser_tor 1.7G Apr 13 19:22 local_backup_1.lrz

Trying with -L9 and -p1 causes the problem again, so it's not a threading issue:

$ time ~/apps/lrzip/lrzip -L9 -p1 -z -vv local_backup_1 && echo SUCCESS
The following options are in effect for this COMPRESSION.
Threading is DISABLED. Number of CPUs detected: 1
Detected 8229158912 bytes ram
Compression level 9
Nice Value: 19
Show Progress
Max Verbose
Temporary Directory set as: ./
Compression mode is: ZPAQ. LZO Compressibility testing enabled
Heuristically Computed Compression Window: 52 = 5200MB
Storage time in seconds 1367079681
Output filename is: local_backup_1.lrz
File size: 10497691808
Enabling sliding mmap mode and using mmap of 2743050240 bytes with window of 5486104576 bytes
Succeeded in testing 2743050240 sized mmap for rzip pre-processing
Will take 2 passes
Chunk size: 5486104576
Byte width: 5
Succeeded in testing 1430246741 sized malloc for back end compression
Using only 1 thread to compress up to 1312806229 bytes
Beginning rzip pre-processing phase
hashsize = 4194304.  bits = 22. 64MB
Starting sweep for mask 1
Starting sweep for mask 3
Starting sweep for mask 7
Starting sweep for mask 15
Starting sweep for mask 31
Starting sweep for mask 63
Starting sweep for mask 127
Starting sweep for mask 255
Starting thread 0 to compress 1312806229 bytes from stream 1
lzo testing OK for chunk 1312806229. Compressed size = 97.89% of chunk, 1 Passes
Starting sweep for mask 511Q    1:0%  
Sliding main buffer to offset 2743050240
~/apps/lrzip/lrzip -L9 -p1 -z -vv local_backup_1  2180.83s user 46.91s system 104% cpu 35:36.76 total
SUCCESS
# but it creates a 24-byte output file

Checking out PR #146, the problem is still here:

$ time ~/apps/lrzip_pr146/lrzip -L9 -p1 -z -vv local_backup_1 && echo SUCCESS
The following options are in effect for this COMPRESSION.
Threading is DISABLED. Number of CPUs detected: 1
Detected 8229158912 bytes ram
Compression level 9
Nice Value: 19
Show Progress
Max Verbose
Temporary Directory set as: ./
Compression mode is: ZPAQ. LZO Compressibility testing enabled
Heuristically Computed Compression Window: 52 = 5200MB
Storage time in seconds 1367080336
Output filename is: local_backup_1.lrz
File size: 10497691808
Enabling sliding mmap mode and using mmap of 2743050240 bytes with window of 5486104576 bytes
Succeeded in testing 2743050240 sized mmap for rzip pre-processing
Will take 2 passes
Chunk size: 5486104576
Byte width: 5
Succeeded in testing 1430246741 sized malloc for back end compression
Using only 1 thread to compress up to 1312806229 bytes
Beginning rzip pre-processing phase
hashsize = 4194304.  bits = 22. 64MB
Starting sweep for mask 1
Starting sweep for mask 3
Starting sweep for mask 7
Starting sweep for mask 15
Starting sweep for mask 31
Starting sweep for mask 63
Starting sweep for mask 127
Starting sweep for mask 255
Starting thread 0 to compress 1312806229 bytes from stream 1
lzo testing OK for chunk 1312806229. Compressed size = 97.89% of chunk, 1 Passes
ZPAQ: Method selected: 511,25,0: level=5, bs=11, easy=25, type=0
~/apps/lrzip_pr146/lrzip -L9 -p1 -z -vv local_backup_1  76.56s user 10.43s system 104% cpu 1:23.36 total
SUCCESS
# but it creates a 24-byte output file

As for your fork, I checked it out but couldn't compile it:

mu:~/apps$ git clone 'git@github.com:pete4abw/lrzip.git' lrzip_pete4abw
Cloning into 'lrzip_pete4abw'...
remote: Enumerating objects: 248, done.
remote: Counting objects: 100% (248/248), done.
remote: Compressing objects: 100% (180/180), done.
remote: Total 4081 (delta 133), reused 165 (delta 68), pack-reused 3833
Receiving objects: 100% (4081/4081), 2.65 MiB | 3.75 MiB/s, done.
Resolving deltas: 100% (2621/2621), done.
mu:~/apps$ cd lrzip_pete4abw 
mu:~/apps/lrzip_pete4abw$ ./autogen.sh 
Running autoreconf -if...
libtoolize: putting auxiliary files in '.'.
libtoolize: copying file './ltmain.sh'
libtoolize: putting macros in AC_CONFIG_MACRO_DIRS, 'm4'.
libtoolize: copying file 'm4/libtool.m4'
libtoolize: copying file 'm4/ltoptions.m4'
libtoolize: copying file 'm4/ltsugar.m4'
libtoolize: copying file 'm4/ltversion.m4'
libtoolize: copying file 'm4/lt~obsolete.m4'
configure.ac:25: installing './compile'
configure.ac:27: installing './config.guess'
configure.ac:27: installing './config.sub'
configure.ac:23: installing './install-sh'
configure.ac:23: installing './missing'
Makefile.am: installing './depcomp'
Configuring...
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking whether make supports nested variables... (cached) yes
checking whether make supports the include directive... yes (GNU style)
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking whether gcc understands -c and -o together... yes
checking dependency style of gcc... gcc3
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking minix/config.h usability... no
checking minix/config.h presence... no
checking for minix/config.h... no
checking whether it is safe to define __EXTENSIONS__... yes
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
checking how to print strings... printf
checking for a sed that does not truncate output... /bin/sed
checking for fgrep... /bin/grep -F
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 1572864
checking how to convert x86_64-pc-linux-gnu file names to x86_64-pc-linux-gnu format... func_convert_file_noop
checking how to convert x86_64-pc-linux-gnu file names to toolchain format... func_convert_file_noop
checking for /usr/bin/ld option to reload object files... -r
checking for objdump... objdump
checking how to recognize dependent libraries... pass_all
checking for dlltool... no
checking how to associate runtime and link libraries... printf %s\n
checking for ar... ar
checking for archiver @FILE support... @
checking for strip... strip
checking for ranlib... ranlib
checking command to parse /usr/bin/nm -B output from gcc object... ok
checking for sysroot... no
checking for a working dd... /bin/dd
checking how to truncate binary pipes... /bin/dd bs=4096 count=1
checking for mt... mt
checking if mt is a manifest tool... no
checking for dlfcn.h... yes
checking for objdir... .libs
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fPIC -DPIC
checking if gcc PIC flag -fPIC -DPIC works... yes
checking if gcc static flag -static works... yes
checking if gcc supports -c -o file.o... yes
checking if gcc supports -c -o file.o... (cached) yes
checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking whether -lc should be explicitly linked in... no
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... yes
checking for gcc... (cached) gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking for gcc option to accept ISO C89... (cached) none needed
checking whether gcc understands -c and -o together... (cached) yes
checking dependency style of gcc... (cached) gcc3
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking dependency style of g++... gcc3
checking how to run the C++ preprocessor... g++ -E
checking for ld used by g++... /usr/bin/ld -m elf_x86_64
checking if the linker (/usr/bin/ld -m elf_x86_64) is GNU ld... yes
checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking for g++ option to produce PIC... -fPIC -DPIC
checking if g++ PIC flag -fPIC -DPIC works... yes
checking if g++ static flag -static works... yes
checking if g++ supports -c -o file.o... yes
checking if g++ supports -c -o file.o... (cached) yes
checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking dynamic linker characteristics... (cached) GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking whether ln -s works... yes
checking for special C compiler options needed for large files... no
checking for _FILE_OFFSET_BITS value needed for large files... no
checking for _LARGEFILE_SOURCE value needed for large files... no
checking for size_t... yes
checking for working alloca.h... yes
checking for alloca... yes
checking for gcc option to accept ISO C99... none needed
checking for pod2man... yes
checking for nasm... no
checking for yasm... no
checking fcntl.h usability... yes
checking fcntl.h presence... yes
checking for fcntl.h... yes
checking sys/time.h usability... yes
checking sys/time.h presence... yes
checking for sys/time.h... yes
checking for unistd.h... (cached) yes
checking sys/mman.h usability... yes
checking sys/mman.h presence... yes
checking for sys/mman.h... yes
checking ctype.h usability... yes
checking ctype.h presence... yes
checking for ctype.h... yes
checking errno.h usability... yes
checking errno.h presence... yes
checking for errno.h... yes
checking sys/resource.h usability... yes
checking sys/resource.h presence... yes
checking for sys/resource.h... yes
checking endian.h usability... yes
checking endian.h presence... yes
checking for endian.h... yes
checking sys/endian.h usability... no
checking sys/endian.h presence... no
checking for sys/endian.h... no
checking arpa/inet.h usability... yes
checking arpa/inet.h presence... yes
checking for arpa/inet.h... yes
checking alloca.h usability... yes
checking alloca.h presence... yes
checking for alloca.h... yes
checking pthread.h usability... yes
checking pthread.h presence... yes
checking for pthread.h... yes
checking for off_t... yes
checking for size_t... (cached) yes
checking for __attribute__... yes
checking size of int... 4
checking size of long... 8
checking size of short... 2
checking for large file support... yes
checking for inline... inline
checking whether byte ordering is bigendian... no
checking for pthread_create in -lpthread... yes
checking for sqrt in -lm... yes
checking for compress2 in -lz... yes
checking for BZ2_bzBuffToBuffCompress in -lbz2... yes
checking for lzo1x_1_compress in -llzo2... yes
checking for mmap... yes
checking for strerror... yes
checking for getopt_long... yes
checking whether gcc is Clang... no
checking whether pthreads work with -pthread... yes
checking for joinable pthread attribute... PTHREAD_CREATE_JOINABLE
checking whether more special flags are required for pthreads... no
checking for PTHREAD_PRIO_INHERIT... yes
checking whether to build documentation... yes
checking for doxygen... no
WARNING:
The doxygen program was not found in your execute path.
You may have doxygen installed somewhere not covered by your path.

If this is the case make sure you have the packages installed, AND
that the doxygen program is in your execute path (see your
shell manual page on setting the $PATH environment variable), OR
alternatively, specify the program to use with --with-doxygen.
configure: WARNING: no doxygen detected. Documentation will not be built
checking that generated files are newer than configure... done
configure: creating ./config.status
config.status: creating Makefile
config.status: creating lrzip.pc
config.status: creating lzma/Makefile
config.status: creating lzma/C/Makefile
config.status: creating doc/Makefile
config.status: creating man/Makefile
config.status: creating config.h
config.status: executing depfiles commands
config.status: executing libtool commands

------------------------------------------------------------------------
lrzip 0.710beta
------------------------------------------------------------------------

Configuration Options Summary:

  ASM................: yes, using no  Assembler
  Static binary......: no
  Host system........: x86_64-pc-linux-gnu
  Build system.......: x86_64-pc-linux-gnu

Documentation..........: no

Compilation............: make (or gmake)
  CPPFLAGS.............: 
  CFLAGS...............: -g -O2 -pthread
  CXXFLAGS.............: -g -O2 -pthread
  LDFLAGS..............: 

Installation...........: make install (as root if needed, with 'su' or 'sudo')
  prefix...............: /usr/local

mu:~/apps/lrzip_pete4abw$ make
make  all-recursive
make[1]: Entering directory '/mnt/fah/apps/lrzip_pete4abw'
Making all in lzma
make[2]: Entering directory '/mnt/fah/apps/lrzip_pete4abw/lzma'
Making all in C
make[3]: Entering directory '/mnt/fah/apps/lrzip_pete4abw/lzma/C'
  CC       7zCrc.lo
  CC       Alloc.lo
  CC       Bra86.lo
  CC       Bra.lo
  CC       BraIA64.lo
  CC       Delta.lo
  CC       LzFind.lo
  CC       LzFindMt.lo
  CC       Lzma2Dec.lo
  CC       Lzma2DecMt.lo
  CC       Lzma2Enc.lo
  CC       LzmaDec.lo
  CC       LzmaEnc.lo
  CC       LzmaLib.lo
  CC       MtCoder.lo
  CC       MtDec.lo
  CC       Threads.lo
no  -I../ASM/x86/ -Dx64 -f elf64 -o 7zCrcOpt_asm.o ../ASM/x86/7zCrcOpt_asm.asm
/bin/bash: no: command not found
make[3]: *** [Makefile:744: 7zCrcOpt_asm.lo] Error 127
make[3]: Leaving directory '/mnt/fah/apps/lrzip_pete4abw/lzma/C'
make[2]: *** [Makefile:431: all-recursive] Error 1
make[2]: Leaving directory '/mnt/fah/apps/lrzip_pete4abw/lzma'
make[1]: *** [Makefile:960: all-recursive] Error 1
make[1]: Leaving directory '/mnt/fah/apps/lrzip_pete4abw'
make: *** [Makefile:558: all] Error 2

Any idea where this bug might be coming from, based on this? How is it possible that lrzip exits with return code 0 while having created an empty output file?

Thanks!

pete4abw commented 4 years ago

Thanks for your efforts. Obviously, configure.ac did not detect that your system does NOT have a nasm or yasm assembler (an assembler really speeds up lzma). My bad. Anyway, for now, just do

./configure --disable-asm
make

and try.

gcsideal commented 4 years ago

./configure --disable-asm

# apt install nasm would do as well.

gcsideal commented 4 years ago

Debian maintainer, @gcsideal has been notified that he's behind last year.

Thanks for the heads-up. It would help if from time to time you tag milestones that should be packaged.

gcsideal commented 4 years ago

Any idea about why that might be happening? The fact that the return code is 0 is especially treacherous.

I know it's personal, but if you could share that file with us (in private), we might be able to help with more detailed bug hunting.

pete4abw commented 4 years ago

@gcsideal, the test file is 10G. I am fairly certain the issue is a memory one, caused by an incorrect computation of the overhead required by zpaq. The logic was developed long ago. It needs updating because of variable block sizes, hash sizes, and the internal needs of the zpaq library.

From util.c

    } else if (ZPAQ_COMPRESS)
        control->overhead = 112 * 1024 * 1024;

This is only 112 MB per thread. However, as seen from above

lzo testing OK for chunk 1312806229. Compressed size = 97.89% of chunk, 1 Passes
ZPAQ: Method selected: 511,25,0: level=5, bs=11, easy=25, type=0

The block size requested will be 2^11 MB = 2 GB per thread. I'm just surprised it did not segfault. @a3nm, don't use echo SUCCESS.
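
To make that arithmetic concrete, here is a small illustrative C sketch. It is not lrzip's actual code, and zpaq_block_overhead is a hypothetical helper; it simply contrasts the fixed 112 MB overhead from util.c with the memory implied by the bs parameter reported in the log (bs=11 at level 9):

#include <stdint.h>
#include <stdio.h>

/* hypothetical helper: bytes needed for one zpaq block of 2^bs MiB */
static int64_t zpaq_block_overhead(int bs)
{
    return (int64_t)1 << (bs + 20);    /* 2^bs MiB expressed in bytes */
}

int main(void)
{
    int64_t assumed = 112LL * 1024 * 1024;      /* the fixed control->overhead */
    int64_t level9  = zpaq_block_overhead(11);  /* bs=11 as logged at -L9 -z */

    printf("assumed overhead : %lld bytes\n", (long long)assumed);
    printf("bs=11 block size : %lld bytes\n", (long long)level9);
    return 0;
}

This prints roughly 117 MB versus 2 GiB per thread, which is the gap described above.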

I'll also take a look at stream.c and see if zpaq_compress_buf has sufficient error checking.

This may take a little time. ITMT, try using lower levels. The default works just fine and it's about 3x faster than L9.

Be patient, please.

gcsideal commented 4 years ago

Be patient, please.

OK, please ping us if there's something to test and if that works it will be packaged soon.

a3nm commented 4 years ago

@gcsideal thanks! Doing apt install nasm and recompiling worked (though I guess this should be tweaked so that the compilation failure is less mysterious...), but I'm afraid the problem is the same with your version:

$ time ~/apps/lrzip_pete4abw/lrzip -L9 -p1 -z -vv local_backup_1; echo $?
The following options are in effect for this COMPRESSION.
Threading is DISABLED. Number of CPUs detected: 1
Detected 8229158912 bytes ram
Nice Value: 19
Show Progress
Max Verbose
Temporary Directory set as: ./
Compression mode is: Compression level 9 
ZPAQ. LZO Compressibility testing enabled
Heuristically Computed Compression Window: 52 = 5200MB
Storage time in seconds 1367082837
Output filename is: local_backup_1.lrz
File size: 10497691808
Enabling sliding mmap mode and using mmap of 2743050240 bytes with window of 5486104576 bytes
Succeeded in testing 2743050240 sized mmap for rzip pre-processing
Will take 2 passes
Chunk size: 5486104576
Byte width: 5
Succeeded in testing 1488966997 sized malloc for back end compression
Using only 1 thread to compress up to 1371526485 bytes
Beginning rzip pre-processing phase
hashsize = 4194304.  bits = 22. 64MB
Starting sweep for mask 1
Starting sweep for mask 3
Starting sweep for mask 7
Total:  0%  Chunk:  1%
Starting sweep for mask 15
Total:  1%  Chunk:  2%
Starting sweep for mask 31
Total:  2%  Chunk:  3%
Starting sweep for mask 63
Total:  4%  Chunk:  9%
Starting sweep for mask 127
Total:  9%  Chunk: 18%
Starting sweep for mask 255
Total: 17%  Chunk: 34%
Starting thread 0 to compress 1371526485 bytes from stream 1
lzo testing OK for chunk 1371526485. Compressed size = 97.89% of chunk, 1 Passes
ZPAQ: Method selected: 511,25,0: level=5, bs=11, easy=25, type=0
~/apps/lrzip_pete4abw/lrzip -L9 -p1 -z -vv local_backup_1  77.78s user 11.37s system 101% cpu 1:28.13 total
0
$ ls -lh local_backup_1.lrz 
-rw------- 1 a3nm browser_tor 24 Apr 13 22:55 local_backup_1.lrz

The file is 10 GB of personal email backups, so I'm afraid I can't share it. If you think there's some hope of reproducing the crash by synthesizing similar data that I could share, do let me know.

@pete4abw No worries, I'm not in a hurry at all. I just want to compress this eventually, but I have plenty of hard drive space for the time being. :)

Huge thanks to you for looking into this issue. If you need help from me (in particular to try out stuff that could help you reproduce it on your end), don't hesitate.

pete4abw commented 4 years ago

@a3nm @gcsideal The easy variable above implies the data is highly random, and the "lzo testing OK for chunk 1371526485. Compressed size = 97.89% of chunk" line confirms it. This will take zpaq a lot of time. I did some comparisons: LZMA is so much faster than ZPAQ that, combined with the rzip pre-processing, ZPAQ's benefit becomes questionable. As a final test, before I try to duplicate your error:

  1. Remove the ; or && echo SUCCESS. I'm wondering whether, as thread 0 starts, control passes to the next statement and away from lrzip.
  2. Try just plain lrzip -L9 -vv local_backup_1, which will be an LZMA compression. See if it completes!

On a test file of moderate complexity, here are some metrics.

Type  Level  Comp %  MB/s    Time
LZMA  7      5.086   10.143  00:00:41.42
ZPAQ  7      5.308    3.908  00:01:48.69
LZMA  8      5.101   10.650  00:00:40.70
ZPAQ  8      5.689    1.645  00:04:18.66
LZMA  9      5.114    9.467  00:00:45.10
ZPAQ  9      5.693    1.632  00:04:21.16

While ZPAQ will certainly perform better compression-wise, its speed and throughput are markedly worse. Is it worth an extra 10% compression savings? This is a choice you have to make. ITMT, I'll press on trying to determine what happens. I have a 10GB test file.

pete4abw commented 4 years ago

@a3nm. At high compression levels, zpaq eats away at memory and swap space rapidly, and it gets worse as additional threads kick in. How much swap space does your system have? Show the output of: cat /proc/meminfo | grep "Mem\|Swap". The overhead computation for zpaq definitely needs review. When lrzip uses swap space it slows down incredibly, and even after it completes your system will perform worse until the swap space is cleared.

IMHO, levels 8 and 9 and the -U (unlimited option) should be avoided.

a3nm commented 4 years ago

Hi @pete4abw ,

  1. I really have no idea why having && echo success could change anything about lrzip's operation, since that is handled by zsh, but sure:
$ ~/apps/lrzip_pete4abw/lrzip -L9 -z -vv local_backup_1          
The following options are in effect for this COMPRESSION.
Threading is ENABLED. Number of CPUs detected: 4
Detected 8229158912 bytes ram
Nice Value: 19
Show Progress
Max Verbose
Temporary Directory set as: ./
Compression mode is: Compression level 9 
ZPAQ. LZO Compressibility testing enabled
Heuristically Computed Compression Window: 52 = 5200MB
Storage time in seconds 1367099490
Output filename is: local_backup_1.lrz
File size: 10497691808
Enabling sliding mmap mode and using mmap of 2743050240 bytes with window of 5486104576 bytes
Succeeded in testing 2743050240 sized mmap for rzip pre-processing
Will take 2 passes
Chunk size: 5486104576
Byte width: 5
Succeeded in testing 1958729045 sized malloc for back end compression
Using up to 5 threads to compress up to 274305297 bytes each.
Beginning rzip pre-processing phase
hashsize = 4194304.  bits = 22. 64MB
Starting sweep for mask 1
Starting sweep for mask 3
Starting sweep for mask 7
Total:  0%  Chunk:  1%
Starting sweep for mask 15
Total:  1%  Chunk:  2%
Starting sweep for mask 31
Total:  2%  Chunk:  3%
Starting sweep for mask 63
Total:  3%  Chunk:  7%
Starting thread 0 to compress 274305297 bytes from stream 1
lzo testing OK for chunk 274305297. Compressed size = 97.89% of chunk, 1 Passes
ZPAQ: Method selected: 59,25,0: level=5, bs=9, easy=25, type=0
$ echo $?
0
$ ls -lh local_backup_1.lrz 
-rw------- 1 a3nm browser_tor 24 Apr 14 17:26 local_backup_1.lrz
  2. Without -z, same problem:
$ ~/apps/lrzip_pete4abw/lrzip -L9 -vv local_backup_1
The following options are in effect for this COMPRESSION.
Threading is ENABLED. Number of CPUs detected: 4
Detected 8229158912 bytes ram
Nice Value: 19
Show Progress
Max Verbose
Temporary Directory set as: ./
Compression mode is: LZMA. LZO Compressibility testing enabled
Compression level 9 
Dictionary Size: 134217728
Heuristically Computed Compression Window: 52 = 5200MB
Storage time in seconds 1367099554
Output filename is: local_backup_1.lrz
File size: 10497691808
Enabling sliding mmap mode and using mmap of 2743050240 bytes with window of 5486104576 bytes
Succeeded in testing 2743050240 sized mmap for rzip pre-processing
Will take 2 passes
Chunk size: 5486104576
Byte width: 5
Dictionary Size reduced to 110100480
Threads reduced to 1
Succeeded in testing 2643989845 sized malloc for back end compression
Using only 1 thread to compress up to 1371526485 bytes
Beginning rzip pre-processing phase
hashsize = 4194304.  bits = 22. 64MB
Starting sweep for mask 1
Starting sweep for mask 3
Starting sweep for mask 7
Total:  0%  Chunk:  1%
Starting sweep for mask 15
Total:  1%  Chunk:  2%
Starting sweep for mask 31
Total:  2%  Chunk:  3%
Starting sweep for mask 63
Total:  4%  Chunk:  9%
Starting sweep for mask 127
Total:  9%  Chunk: 18%
Starting sweep for mask 255
Total: 17%  Chunk: 34%
Starting thread 0 to compress 1371526485 bytes from stream 1
lzo testing OK for chunk 1371526485. Compressed size = 97.89% of chunk, 1 Passes
Starting lzma back end compression thread...
Total: 19%  Chunk: 36%
Starting sweep for mask 511
$ ls -lh local_backup_1.lrz 
-rw------- 1 a3nm browser_tor 24 Apr 14 17:30 local_backup_1.lrz

Here are the memory details:

$ cat /proc/meminfo |grep "Mem\|Swap" 
MemTotal:        8036288 kB
MemFree:          313360 kB
MemAvailable:    3274976 kB
SwapCached:            0 kB
SwapTotal:             0 kB
SwapFree:              0 kB

I understand the performance tradeoffs, but as far as I can tell lrzip shouldn't fail silently even with these options.

pete4abw commented 4 years ago

You're out of memory. You have no swap space. lrzip computes available memory based on MemAvailable. But you have only 313MB free and NO SWAP. You've exposed an important bug! Thank you.

lrzip should evaluate memory completely, not just max available ram! And it should anticipate potential problems beforehand.
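
For what it's worth, here is a rough C sketch of the kind of check being suggested. It is not lrzip code, only an illustration: it reads both MemAvailable and SwapTotal from /proc/meminfo before deciding how much memory to commit:

#include <stdio.h>
#include <string.h>

/* return the value of a /proc/meminfo field in kB, or -1 if not found */
static long long meminfo_kb(const char *key)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];
    long long kb = -1;
    size_t klen = strlen(key);

    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f)) {
        if (!strncmp(line, key, klen) && line[klen] == ':') {
            sscanf(line + klen + 1, "%lld", &kb);
            break;
        }
    }
    fclose(f);
    return kb;
}

int main(void)
{
    long long avail = meminfo_kb("MemAvailable");
    long long swap  = meminfo_kb("SwapTotal");

    printf("MemAvailable: %lld kB, SwapTotal: %lld kB\n", avail, swap);
    if (swap == 0 && avail >= 0 && avail < 1024 * 1024)
        fprintf(stderr, "warning: under 1 GB available and no swap\n");
    return 0;
}

On the system above this would report roughly 3 GB available and no swap at all, far less than several zpaq threads at level 9 can ask for.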

At the higher levels, lrzip will almost always use swap. With so little memory, your compressions will fail often!

a3nm commented 4 years ago

Alright, well, this looks quite bad :) and I guess it shouldn't be complicated to replicate? I hope this can be fixed. Thanks for your work on this and do let me know if you need additional input from me.

pete4abw commented 4 years ago

With only 313MB of free memory and no swap, I'm not sure lrzip will work for anything but the smallest files.

pete4abw commented 4 years ago

This error does not apply to the main git repo for lrzip. It applies to my fork which allows multiple assemblers and checks for them. It has been corrected.

AC_CHECK_PROGS macro is very fussy about a missing argument. Added an extra comma to correct.

no  -I../ASM/x86/ -Dx64 -f elf64 -o 7zCrcOpt_asm.o ../ASM/x86/7zCrcOpt_asm.asm
/bin/bash: no: command not found
make[3]: *** [Makefile:744: 7zCrcOpt_asm.lo] Error 127
make[3]: Leaving directory '/mnt/fah/apps/lrzip_pete4abw/lzma/C'
make[2]: *** [Makefile:431: all-recursive] Error 1
make[2]: Leaving directory '/mnt/fah/apps/lrzip_pete4abw/lzma'
make[1]: *** [Makefile:960: all-recursive] Error 1
make[1]: Leaving directory '/mnt/fah/apps/lrzip_pete4abw'
make: *** [Makefile:558: all] Error 2
a3nm commented 4 years ago

(To clarify, "This error" is the compilation error "/bin/bash: no: command not found" from one of my earlier messages. It's not about the bug I reported, which does apply to lrzip from master of the main git repo.)

pete4abw commented 4 years ago

(To clarify, "This error" is the compilation error "/bin/bash: no: command not found" from one of my earlier messages. It's not about the bug I reported, which does apply to lrzip from master of the main git repo.)

Yes, and the compilation error has been fixed in my fork. I don't know what the solution is for a system with no swap and so little free RAM. Maybe try to find out what is eating all your available RAM, and consider adding swap space, which is really a must for most Linux systems.

a3nm commented 4 years ago

Yes but no matter what, lrzip should fail with a visible error in that case, not silently, right?

pete4abw commented 4 years ago

I agree. It should. Tracing where the failure occurs will take time; it's not easy to duplicate your environment (8GB system, 300MB free, no swap). My guess is that zpaq is silently failing where it should throw an error. I still recommend you take a look at the programs and processes that are eating away at your free RAM; an 8GB system should have no limitations with lrzip. The zpaq code is complex, with JIT code inserted. I'm going to pass on this for right now. Maybe @ckolivas will take a look. Sorry.

a3nm commented 4 years ago

Sure, no worries! For my part, you don't need to suggest possible workarounds -- I won't be using lrzip at all if it has such basic corruption issues. Anyways, I hope the bug can be fixed!

pete4abw commented 4 years ago

Not to be defensive, but it's not corruption. It's a crash due to a system with very limited resources. But I'm glad we uncovered the issue. ITMT, why not try p7zip, zpaq, lzma, rar, or other archivers? Maybe they'll work for you. Good luck.

ckujau commented 4 years ago

FWIW, this can be reproduced pretty easily with ulimit, limiting virtual memory to 512000 kB (about 500 MB) while compressing a 2 MB file:

$ dd if=/dev/urandom of=test.img bs=1M count=2

$ ulimit -Sv 512000
$ ./lrzip -L9 -zvvf test.img
[...]
Output filename is: test.img.lrz
File size: 2097152
Succeeded in testing 2097152 sized mmap for rzip pre-processing
Will take 1 pass
Chunk size: 2097152
Byte width: 3

This may or may not complete (running it under strace shows lots of ENOMEM errors), and when interrupted with CTRL-C the notorious 24-byte file appears:

^C
$ ls -gotr | tail -2
-rw-r----- 1 2097152 May 11 20:03 test.img
-rw-r----- 1      24 May 11 20:15 test.img.lrz

Other tools may have safeguards implemented (?) and will exit before creating an unfinished output file:

$ xz -9efkv test.img 
test.img (1/1)
xz: test.img: Cannot allocate memory
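
(A safeguard of that sort can be as simple as deleting the partial output and exiting non-zero when an allocation fails. The following is a generic C sketch of the pattern, not xz's or lrzip's actual code, and the output name is made up.)

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const char *out = "test.img.lrz";           /* hypothetical output name */
    FILE *f = fopen(out, "w");
    void *buf;

    if (!f) {
        perror(out);
        return 1;
    }
    buf = malloc((size_t)2048 * 1024 * 1024);   /* deliberately large request */
    if (!buf) {
        fprintf(stderr, "%s: cannot allocate memory\n", out);
        fclose(f);
        unlink(out);      /* do not leave a stub behind */
        return 1;         /* and do not report success */
    }
    /* ... the real compression would happen here ... */
    free(buf);
    fclose(f);
    return 0;
}
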
ckolivas commented 3 years ago

Fixed in git master.