Open ORESoftware opened 7 years ago
@ORESoftware this is the intended/desired behavior on UNIX. All hard links point to the same inode, and therefore to the same spot on the disk. See:

http://www.farhadsaberi.com/linux_freebsd/2010/12/hard-link-soft-symbolic-links.html
https://www.freebsd.org/cgi/man.cgi?query=ln

If your Linux distro is deviating from this, it is not following the UNIX standard.
Hard links and symlinks act differently. If you create a symlink (e.g. `ln -s $source $target`) and then `rm $target`, you will still have `$source`; the symlink is just a movable pointer. Often with symlinks, if you delete `$source` you will end up with "dead" `$target` symlinks lying around.

So yes, I can confirm that deleting a hard link deletes the inode, and thus the original data. This is the desired behavior though.
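The symlink behavior described above is easy to demonstrate in a throwaway directory (the file names `source.txt`/`target.txt` are illustrative, not from the thread):

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
cd "$dir"

echo "hello" > source.txt
ln -s source.txt target.txt   # target.txt is a symlink pointing at source.txt

rm target.txt                 # removing the symlink...
test -f source.txt            # ...leaves source.txt intact

ln -s source.txt target.txt   # recreate the symlink
rm source.txt                 # this time remove the source instead
test -L target.txt            # target.txt still exists as a symlink...
test ! -e target.txt          # ...but it is now "dead" (no longer resolves)

echo "ok"
```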
@BenjaminHCCarr @selkhateeb is there a way to 'remove/undo the hardlink' without deleting the original files?
Do you know if `hln` will work on Linux? Or just MacOS?
@BenjaminHCCarr: Unix standard? I don't think that's true. I don't know about FreeBSD, but on both my Linux box and my Macbook, deleting one of the references of a hard link (created with `ln`) leaves the others intact. Deleting the last reference deletes the inode. It uses reference counting.
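The reference-counting behavior can be verified directly. A minimal sketch (the `stat` invocation differs between GNU coreutils and BSD/macOS, hence the fallback; file names are illustrative):

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
cd "$dir"

printf 'data\n' > original.txt
ln original.txt hardlink.txt            # a second name for the same inode

# The link count is now 2 (GNU stat uses -c %h, BSD/macOS stat uses -f %l):
stat -c %h original.txt 2>/dev/null || stat -f %l original.txt

rm hardlink.txt                         # drop one reference
test -f original.txt                    # the inode and its data survive
cat original.txt                        # prints: data
```

Only when the last remaining name is removed does the link count reach zero and the inode get freed.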
I'd love to get that functionality here too.
This is a bit of a necropost, but I wanted to put this out there since this is coming up in Google searches.
`rm -rf` is working as expected. For links, you want `unlink`.
@ORESoftware The idiomatic way would be to use `unlink`; however, I'm not sure if that applies to how this repo achieves hardlinked directories, and there are protections in place that try to prevent you from unlinking directories. It is expected that `rm -rf` will delete the directory and its contents, by nature of how the command works. `-r` works by first recursively purging files until the directory is empty, and then deleting the directory itself from the filesystem.
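Those protections can be sketched as follows (exact error messages vary by platform, so only success/failure is checked; names are illustrative):

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
cd "$dir"

mkdir d
echo "x" > d/file

# unlink flatly refuses directories:
if unlink d 2>/dev/null; then echo "unexpected"; else echo "unlink refused"; fi

# plain rm refuses a directory without -r:
if rm d 2>/dev/null; then echo "unexpected"; else echo "rm refused"; fi

# rm -r first empties the directory, then removes the directory itself:
rm -r d
test ! -e d
echo "ok"
```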
@mhelvens Not for the contents of a directory, if the directory itself is unlinked rather than removed recursively. The behavior that @BenjaminHCCarr is describing is actually exactly correct in that case. And that can be disastrous on a larger scale, which is why hardlinked directories are generally discouraged.
When a directory is unlinked, the references inside of that directory aren't checked. The only requirement is that the directory is empty, because doing that kind of recursive checking would be far too performance-intensive. So instead, `unlink` simply refuses to unlink something if it's a directory, and `rm` refuses to remove a directory without `-r`.

This is explicitly to prevent orphaned inodes. They aren't completely avoidable, though, which is why we have `fsck`. But imagine having to run `fsck` on an in-use filesystem every time you deleted a directory.
That's actually the origin of `lost+found`. Dangling/orphaned inodes are put in `lost+found` if their hard link was destroyed but their inode and data weren't cleaned up. Instead of purging the inode and its data, the assumption is that whatever happened wasn't supposed to, so `fsck` creates a hard link to the inode and throws it in `lost+found`, so that it can either be recovered or permanently destroyed with `rm -rf`.
The only realistic path forward for this would be a custom wrapper for `unlink` that performs the white-glove checks to make sure that things are in order: for directories, `rm -rf` must be used instead; for everything else, it would just call `unlink` on it, so we don't change the way it works.
It appears that `rm -rf` on a hard link deletes the original files. This is dangerous; on Linux, if you `rm` the hard link, the original file is still intact.

Can you confirm / deny this behavior with your lib on MacOS?