ProjectInitiative closed this issue 4 months ago
Not sure how this happened. Normally, symlink resolution stops as soon as it reaches a non-symlink node. I tried creating two circular links and they behave as expected:
```
root@dev:/mnt/jfs/d# ls -al
total 9
drwxr-xr-x 2 root root 4096 May 28 16:33 .
drwxrwxrwx 4 root root 4096 May 28 16:33 ..
lrwxrwxrwx 1 root root 2 May 28 16:33 l1 -> l2
lrwxrwxrwx 1 root root 2 May 28 16:33 l2 -> l1
root@dev:/mnt/jfs/d# cat l1
cat: l1: Too many levels of symbolic links
root@dev:/mnt/jfs/d# cd l1
-bash: cd: l1: Too many levels of symbolic links
root@dev:/mnt/jfs/d# stat l1
  File: l1 -> l2
  Size: 2           Blocks: 1          IO Block: 65536  symbolic link
Device: 34h/52d     Inode: 22540       Links: 1
Access: (0777/lrwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2024-05-28 16:33:47.447860132 +0800
Modify: 2024-05-28 16:33:47.447860132 +0800
Change: 2024-05-28 16:33:47.447860132 +0800
 Birth: -
root@dev:/mnt/jfs/d# rm l1
root@dev:/mnt/jfs/d# ls -al
total 9
drwxr-xr-x 2 root root 4096 May 28 16:33 .
drwxrwxrwx 4 root root 4096 May 28 16:33 ..
lrwxrwxrwx 1 root root 2 May 28 16:33 l2 -> l1
```
We probably need more details to address this issue.
To clean up the broken files, if `unlink` and `rmdir` don't work, you have to do it by manipulating the Redis keys directly (refer to the code and this link), which is dangerous.
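For anyone who has to go this route, here is a minimal sketch of what that surgery might look like against a Redis metadata engine. The key layout (`d{inode}` hashes for directory entries, `i{inode}` for inode attributes, `s{inode}` for symlink targets) follows the JuiceFS 1.1 internals documentation and should be double-checked against `pkg/meta/redis.go` for your version; all inode numbers below are placeholders.

```sh
# DANGEROUS: stop all clients and take a metadata backup (juicefs dump) first.
# Assumes the JuiceFS 1.1 Redis layout: d{inode} is a hash of directory
# entries, i{inode} holds inode attributes, s{inode} holds the link target.

# 1. Inspect the entries of the parent directory (inode 22539 is a placeholder)
redis-cli -h meta-host HGETALL d22539

# 2. Remove the broken entry ("topdir") from its parent directory hash
redis-cli -h meta-host HDEL d22539 topdir

# 3. Delete the orphaned link's attribute and target keys (22540 is a placeholder)
redis-cli -h meta-host DEL i22540 s22540
```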
I tried to reproduce the problem but could not, and no tool or command could fix or break the link. In the end I restored the metadata and cleaned up the dangling objects. So far it is working. Thanks for the info.
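For anyone hitting the same thing, the restore-and-clean route maps roughly onto the stock tooling. The commands below are only a sketch; the metadata URLs and file names are placeholders, and it's worth running `gc` without `--delete` first to review what it finds.

```sh
# Take a safety copy of the current (broken) metadata
juicefs dump redis://meta-host:6379/1 meta-broken.json

# Load a known-good metadata backup into a fresh database
juicefs load redis://meta-host:6379/2 meta-backup.json

# Report objects no longer referenced by the metadata, then delete them
juicefs gc redis://meta-host:6379/2
juicefs gc --delete redis://meta-host:6379/2
```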
What happened:
Trying to set up a backup directory, I accidentally created a nested symlink loop:

```
/topdir/dir1/topdir
```

where `dir1/topdir` references `/topdir`. I can't `rm -rf` or `unlink` it, or change the link to point somewhere else to break the loop. I just get `Too many levels of symbolic links`.
What you expected to happen:
To be able to break a circular symlink, just as on a normal filesystem, where removing a symlink never dereferences it.
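For comparison, a quick sketch of the behavior I would expect, on a local filesystem (paths are illustrative):

```sh
# On a local filesystem (e.g. ext4), the same loop shape is harmless to remove,
# because unlink acts on the link itself and never follows it.
mkdir -p /tmp/topdir/dir1
ln -s /tmp/topdir /tmp/topdir/dir1/topdir   # same shape as the reported loop

ls /tmp/topdir/dir1/topdir/dir1/topdir      # traversal works (up to the kernel's ~40-link limit)
unlink /tmp/topdir/dir1/topdir              # succeeds: no ELOOP
rm -rf /tmp/topdir                          # clean up
```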
How to reproduce it (as minimally and precisely as possible):
Not sure how to reproduce, honestly, as creating these links manually does not reproduce the issue. At this point, I am trying to figure out how to clean up these broken files/directories.
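One possible first step for the cleanup, sketched below: map the broken path to its inode with `juicefs info`, which gives the inode numbers needed for any metadata-level surgery (paths are illustrative; if the link itself also fails with ELOOP, run it on the parent directory instead).

```sh
# Find the inode of the broken entry and of its parent directory
juicefs info /mnt/jfs/topdir/dir1/topdir
juicefs info /mnt/jfs/topdir/dir1
```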
Anything else we need to know?
Environment:
- JuiceFS version (use `juicefs --version`) or Hadoop Java SDK version: juicefs version 1.1.2+2024-02-04.8dbd89a
- OS (e.g. `cat /etc/os-release`): Proxmox and Pop-os
- Kernel (e.g. `uname -a`): Linux pve1 6.5.13-3-pve and Linux pop-os 6.8.0-76060800daily20240311-generic