Intensity opened this issue 7 years ago
Hey, are you sure your drive is not full? It seems strange that fwrite would not complete writing. I put up a diff that may address EINTR, in case that is what's happening: https://github.com/mathieuchartier/mcm/commit/c5a86b1f54f90aa63953c5b683ba92c417e52f0c
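In case it helps the discussion, here is a minimal sketch of that kind of retry loop. It is not the actual patch from the linked commit, and the names are illustrative rather than mcm's own:

```cpp
// Illustrative sketch only (not mcm's code): fwrite can report fewer bytes
// than requested, so keep writing until everything is out, treating EINTR
// as a transient interruption and anything else (e.g. ENOSPC on a full
// drive) as a real failure.
#include <cerrno>
#include <cstddef>
#include <cstdio>

bool writeFully(std::FILE* f, const char* buf, std::size_t count) {
  std::size_t total = 0;
  while (total < count) {
    std::size_t n = std::fwrite(buf + total, 1, count - total, f);
    if (n == 0) {
      if (errno == EINTR) {
        std::clearerr(f);  // interrupted by a signal: clear the flag and retry
        continue;
      }
      return false;  // genuine write failure (e.g. disk full, I/O error)
    }
    total += n;
  }
  return true;
}
```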
I've come across that error fairly frequently, in a variety of cases. The inputs are large, roughly 20 GB to 30 GB. This is on Linux x86_64 (a static build, compiled in a Debian 7 environment), built from commit 02b4e7d50a39f94d17ab3bf3d49a597a10b1f4a1 (Sat Apr 4 22:02:17 2015 -0700).
I've mostly had success with mcm, but I'm wondering what might cause this and how I could provide more helpful information to debug it going forward. I'll recompile an updated mcm (including the latest git commits), but since segfaults or core dumps can lead to data loss under certain conditions, I wanted to point this out. In one case I believe a "-x11 -test" run succeeded, yet the subsequent attempt to decompress that archive hit this check failure.
Is the check a hard requirement (that is, if that assert fires, is something necessarily wrong)? Is there a way to mitigate it or reduce the likelihood that it happens? Is there a recovery path? I just want to avoid the situation where I've compressed a large amount of data and can't decompress it later.
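For what it's worth, the sketch below is the kind of extra diagnostic output I meant above when asking how to provide more helpful information; the names are hypothetical and not taken from mcm's code:

```cpp
// Hypothetical illustration (not mcm's actual check): report errno/strerror
// when a write comes up short, so it is obvious whether the cause is
// ENOSPC, EIO, etc., instead of a bare check failure.
#include <cerrno>
#include <cstddef>
#include <cstdio>
#include <cstring>

void checkedWrite(std::FILE* f, const char* buf, std::size_t count) {
  errno = 0;
  std::size_t written = std::fwrite(buf, 1, count, f);
  if (written != count) {
    std::fprintf(stderr, "fwrite wrote %zu of %zu bytes: %s\n",
                 written, count, std::strerror(errno));
    // The program could abort here with that context attached.
  }
}
```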