mikaku / Fiwix

A UNIX-like kernel for the i386 architecture
https://www.fiwix.org

Cannot persist large files to ext2 disk correctly #14

Closed: rick-masters closed this issue 1 year ago

rick-masters commented 1 year ago

There is a bug that prevents writing large files to an ext2 disk correctly.

This can be demonstrated with the following commands:

dd if=/dev/random of=bigfile bs=1024 count=150000
sha256sum bigfile
sync
reboot
# After login:
sha256sum bigfile

Result: you will get either an I/O error or a different checksum.

The problem is in the calculation of a block index for a triple-indirect block in fs/ext2/inode.c: https://github.com/mikaku/Fiwix/blob/6e036aa14f856b6310ab60dd7ff718b03b04c8a2/fs/ext2/inode.c#L326-L333

Here, tblock has not been adjusted to account for the number of blocks skipped by the last traversal.

I believe this is the appropriate code to adjust tblock before calculating the block index:

tindblock = (__blk_t *)buf3->data;
/* make tblock relative to the doubly-indirect block just traversed */
tblock -= BLOCKS_PER_DIND_BLOCK(i->sb) * block;
block = tindblock[tblock / BLOCKS_PER_IND_BLOCK(i->sb)];

Without this adjustment, tblock / BLOCKS_PER_IND_BLOCK(i->sb) will exceed the valid bounds (0..255) of tindblock and write into memory beyond the end of the disk block. The block numbers stored at these out-of-range indices may still be readable while the block stays in memory, but they cannot survive a reboot, because the on-disk block only holds 256 entries. So, after rebooting and reloading the block from disk, indexing beyond 255 produces an invalid block number.
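To make the arithmetic concrete, here is a small standalone sketch (my own illustration, not Fiwix code). It assumes a 1 KB block size with 4-byte block numbers, so each indirect block holds 256 entries, and it computes the doubly-indirect index for the last block of the 150000-block test file, with and without the adjustment:

#include <stdio.h>

#define DIRECT_BLOCKS          12
#define BLOCKS_PER_IND_BLOCK   256           /* 1024 / sizeof(__blk_t) */
#define BLOCKS_PER_DIND_BLOCK  (256 * 256)   /* 65536 */

int main(void)
{
        int block = 149999;  /* last 1 KB block of the 150000-block test file */

        /* blocks addressed before the triply indirect region begins */
        int tblock = block - (DIRECT_BLOCKS + BLOCKS_PER_IND_BLOCK + BLOCKS_PER_DIND_BLOCK);

        /* index into the triply indirect block itself */
        int tind_index = tblock / BLOCKS_PER_DIND_BLOCK;   /* 1 */

        /* buggy: indexing the doubly-indirect block without adjusting tblock */
        printf("unadjusted index: %d (out of range 0..255)\n",
               tblock / BLOCKS_PER_IND_BLOCK);             /* prints 328 */

        /* fixed: subtract the blocks covered by earlier doubly-indirect blocks */
        tblock -= BLOCKS_PER_DIND_BLOCK * tind_index;
        printf("adjusted index:   %d (in range)\n",
               tblock / BLOCKS_PER_IND_BLOCK);             /* prints 72 */

        return 0;
}

The unadjusted calculation lands at entry 328 of a table that only has 256 entries, which is exactly the out-of-bounds write described above.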

mikaku commented 1 year ago

I was unable to reproduce this bug with 256MB of memory, probably because the file was not created completely and the part of the file saved on disk was valid.

With 1024MB the bug appeared immediately.

mikaku commented 1 year ago

I think Minix v2 is also affected by this bug.

https://github.com/mikaku/Fiwix/blob/d281bcc7dc51e35578cd4680d9ea86fe3665a041/fs/minix/v2_inode.c#L356-L357
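If so, presumably the same one-line adjustment applies there. A sketch, assuming the Minix v2 bmap path mirrors the ext2 code above and uses the same BLOCKS_PER_IND_BLOCK/BLOCKS_PER_DIND_BLOCK macros:

tindblock = (__blk_t *)buf3->data;
/* assumed analogous fix: make tblock relative to the doubly-indirect block */
tblock -= BLOCKS_PER_DIND_BLOCK(i->sb) * block;
block = tindblock[tblock / BLOCKS_PER_IND_BLOCK(i->sb)];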

mikaku commented 1 year ago

Thank you.