sfjro / aufs-standalone


FMODE_CAN_ODIRECT causes hard lockup #39

Closed jamesbond3142 closed 2 months ago

jamesbond3142 commented 3 months ago

Dear Junjiro-san,

The latest fix for aufs 6.1 ( aufs: bugfix, copy FMODE_CAN_ODIRECT ) can cause a hard-lockup if the underlying read/write branch does not support DIO (e.g. tmpfs).

I can't submit a dmesg log because I can't get one; the system simply locks up when it happens. To make it happen I need to run a build process that takes about 30 minutes (and the final step is the one that causes the crash). I'm trying to find an easier way to reproduce the problem, but for now I'm sure the patch is the problem: I built two kernels with identical configuration, one with the FMODE_CAN_ODIRECT patch and one without, and the one with the patch consistently locks up the computer.

This possibly affects other kernel versions too.

I understand that the FMODE_CAN_ODIRECT patch is meant to fix certain problems, but is there a possible way to fix them without forcing DIO? Not all the underlying r/w filesystems support DIO.

Happy to provide you with more information and/or test patches as needed.

Thank you.

sfjro commented 3 months ago

jamesbond3142:

The latest fix for aufs 6.1 ( aufs: bugfix, copy FMODE_CAN_ODIRECT ) can cause a hard-lockup if the underlying read/write branch does not support DIO (e.g. tmpfs). ::: I understand that the FMODE_CAN_ODIRECT patch is meant to fix certain problems, but is there a possible way to fix them without forcing DIO? Not all the underlying r/w filesystems support DIO.

Thanks for the report. First, the latest fix does NOT force O_DIRECT. It makes open(O_DIRECT) succeed even if the topmost writable branch doesn't support it, and the succeeding write(2) will return an error. The previous behaviour (before this fix) was that open(O_DIRECT) would fail.

From the point of view of user space, the difference is which system call returns the error.

fd = open(argv[1], O_RDWR | O_APPEND | O_DIRECT);
assert(fd >= 0);
ssz = write(fd, "A", 1);
assert(ssz == 1);

Previous aufs6.1 behaves
- open(O_DIRECT) returns an error

but the latest aufs6.1 behaves
- open(O_DIRECT) succeeds
- write(2) returns an error
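
For completeness, a buildable version of that snippet (just a minimal sketch; argv[1] is assumed to be a file on an aufs mount whose topmost writable branch lacks O_DIRECT support):

/* minimal sketch: shows which system call reports the missing O_DIRECT support */
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int fd;
	ssize_t ssz;

	assert(argc == 2);
	fd = open(argv[1], O_RDWR | O_APPEND | O_DIRECT);
	assert(fd >= 0);        /* previous behaviour: open() is where the failure shows up */
	ssz = write(fd, "A", 1);
	assert(ssz == 1);       /* current behaviour: the error is reported here instead */
	return 0;
}

Build it with something like "gcc -o odirect-check odirect-check.c" (the file name is arbitrary) and run it against a file on the aufs mount.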

I did test aufs6.1 with an RW tmpfs branch and O_DIRECT, and it worked as expected (write(2) failed). I don't know how you mounted aufs and added branches, or which commands you ran, but do you think this change of behaviour caused the problem? Can you identify which command/system call stopped working?

J. R. Okajima

jamesbond3142 commented 3 months ago

Thanks for the report. First, the latest fix does NOT force O_DIRECT. It makes open(O_DIRECT) succeed even if the topmost writable branch doesn't support it, and the succeeding write(2) will return an error. The previous behaviour (before this fix) was that open(O_DIRECT) would fail. From the point of view of user space, the difference is which system call returns the error. fd = open(argv[1], O_RDWR | O_APPEND | O_DIRECT); assert(fd >= 0); ssz = write(fd, "A", 1); assert(ssz == 1); Previous aufs6.1 behaves - open(O_DIRECT) returns an error, but the latest aufs6.1 behaves - open(O_DIRECT) succeeds - write(2) returns an error. I did test aufs6.1 with an RW tmpfs branch and O_DIRECT, and it worked as expected (write(2) failed).

Ah, noted. I misunderstood the purpose of the patch. Thank you for the explanation.

I don't know how you mounted aufs and added branches, or which commands you ran, but do you think this change of behaviour caused the problem? Can you identify which command/system call stopped working?

This is how the mount stacking order looks:

aufs on /tmp/chroot-build-pkg.Ly12KcbL type aufs (rw,relatime,si=b8aa343dff3a31dd)
sysfs on /tmp/chroot-build-pkg.Ly12KcbL/sys type sysfs (rw,relatime)
proc on /tmp/chroot-build-pkg.Ly12KcbL/proc type proc (rw,relatime)
devtmpfs on /tmp/chroot-build-pkg.Ly12KcbL/dev type devtmpfs (rw,relatime,size=15808104k,nr_inodes=3952026,mode=755)
devpts on /tmp/chroot-build-pkg.Ly12KcbL/dev/pts type devpts (rw,relatime,gid=3,mode=620,ptmxmode=000)

and the aufs branches

# grep .  /sys/fs/aufs/si_b8aa343dff3a31dd/br*
/sys/fs/aufs/si_b8aa343dff3a31dd/br0:/mnt/sdd1/fd900/900/build-pkg.Ly12KcbL=rw
/sys/fs/aufs/si_b8aa343dff3a31dd/br1:/mnt/sdd1/fd900/900/chroot=ro
/sys/fs/aufs/si_b8aa343dff3a31dd/brid0:64
/sys/fs/aufs/si_b8aa343dff3a31dd/brid1:65

Actually, I reported this wrongly: in my case the topmost rw branch (/mnt/sdd1/fd900/900/build-pkg.Ly12KcbL in the example above) is an ext4 filesystem, not tmpfs.

I chrooted into the aufs mount to build packages. The problem happens when I'm building mariadb. The build process itself finishes successfully, but at the end of the build, I run the command

mysql_install_db --basedir=/usr --datadir=/srv/mysql --user=mysql

And this command completely locks up the computer (hard lockup - even sysrq failed to respond; I had to power cycle the machine).

This is repeatable with the FMODE_CAN_ODIRECT patch. I have two kernels, identically configured - one with the patch, one without the patch - and the one with the patch always locks up.

I'm trying to find a simpler way to do the test, as the mariadb build takes about 30 minutes to complete on my lousy laptop, and I understand mysql_install_db does a lot of things, so it's really difficult to figure out which particular system calls cause the problem.

jamesbond3142 commented 3 months ago

Ok, I can reproduce it using fio (https://fio.readthedocs.io/en/latest/fio_doc.html, https://git.kernel.dk/cgit/fio/); I used version 3.37 for testing.

Launching fio with the following options:

fio --name=test --ioengine=libaio --direct=1 --size=64m --rw=randwrite

definitely causes the crash.

If I use --direct=0 (no O_DIRECT) then no crash.

If I run strace fio --name=test --ioengine=libaio --direct=1 --size=64m --rw=randwrite, at the point of crash I see the following (I'm running this test on qemu, but it is the same kernel that causes the same crash when I run it on my laptop).

As usual, my topmost rw layer is an ext4 filesystem, while the bottom layers are squashfs marked as "rr".

xscreenshot-20240608T164316

Running the identical test on an unpatched kernel does not crash it, no matter what the --direct setting is. Hopefully this can help you pinpoint where the problem is.

I'm using 6.1.90 by the way.

Appreciate your help as always.

sfjro commented 3 months ago

jamesbond3142:

Ok, I can reproduce it using fio (https://fio.readthedocs.io/en/latest/fio_doc.html, https://git.kernel.dk/cgit/fio/); I used version 3.37 for testing.

I tried, but I couldn't reproduce the problem.

(mounts)
/dev/ram1 /dev/shm/ro ext2 ro,relatime,errors=continue,user_xattr,acl 0 0
/dev/ram0 /dev/shm/rw ext4 rw,relatime 0 0
none /dev/shm/u aufs rw,relatime,si=961f89b25e9d961a 0 0
/dev/shm/rw=rw /dev/shm/ro=ro

(with direct=1)
/dev/shm/u$ fio --name=test --ioengine=libaio --direct=1 --size=64m --rw=randwrite
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.37-51-gfbf9
Starting 1 process
test: Laying out IO file (1 file / 64MiB)

test: (groupid=0, jobs=1): err= 0: pid=5138: Sat Jun 8 17:58:49 2024
  write: IOPS=35.1k, BW=137MiB/s (144MB/s)(64.0MiB/467msec); 0 zone resets
    slat (usec): min=21, max=2698, avg=26.46, stdev=39.77
    clat (nsec): min=1282, max=2428.3k, avg=1564.61, stdev=18967.98
     lat (usec): min=22, max=2700, avg=28.02, stdev=44.15
    clat percentiles (nsec):
     |  1.00th=[ 1320],  5.00th=[ 1336], 10.00th=[ 1336], 20.00th=[ 1352],
     | 30.00th=[ 1368], 40.00th=[ 1384], 50.00th=[ 1384], 60.00th=[ 1400],
     | 70.00th=[ 1400], 80.00th=[ 1416], 90.00th=[ 1432], 95.00th=[ 1448],
     | 99.00th=[ 1528], 99.50th=[ 1624], 99.90th=[14400], 99.95th=[15936],
     | 99.99th=[19584]
   bw (  KiB/s): min=130810, max=130810, per=93.21%, avg=130810.00, stdev= 0.00, samples=1
   iops        : min=32702, max=32702, avg=32702.00, stdev= 0.00, samples=1
  lat (usec)   : 2=99.67%, 4=0.02%, 10=0.19%, 20=0.11%
  lat (msec)   : 4=0.01%
  cpu          : usr=3.22%, sys=96.78%, ctx=3, majf=0, minf=9
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,16384,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=64.0MiB (67.1MB), run=467-467msec

(with direct=0)
/dev/shm/u$ fio --name=test --ioengine=libaio --direct=0 --size=64m --rw=randwrite
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.37-51-gfbf9
Starting 1 process

test: (groupid=0, jobs=1): err= 0: pid=5151: Sat Jun 8 18:00:05 2024
  write: IOPS=74.5k, BW=291MiB/s (305MB/s)(64.0MiB/220msec); 0 zone resets
    slat (usec): min=10, max=643, avg=11.73, stdev= 6.12
    clat (nsec): min=1179, max=1838.8k, avg=1377.97, stdev=14364.58
     lat (usec): min=11, max=1849, avg=13.10, stdev=15.66
    clat percentiles (nsec):
     |  1.00th=[ 1192],  5.00th=[ 1208], 10.00th=[ 1208], 20.00th=[ 1224],
     | 30.00th=[ 1224], 40.00th=[ 1240], 50.00th=[ 1240], 60.00th=[ 1240],
     | 70.00th=[ 1256], 80.00th=[ 1256], 90.00th=[ 1272], 95.00th=[ 1304],
     | 99.00th=[ 1336], 99.50th=[ 1448], 99.90th=[ 7392], 99.95th=[15040],
     | 99.99th=[22400]
  lat (usec)   : 2=99.69%, 4=0.01%, 10=0.21%, 20=0.08%, 50=0.01%
  lat (msec)   : 2=0.01%
  cpu          : usr=3.20%, sys=96.80%, ctx=0, majf=0, minf=8
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,16384,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=291MiB/s (305MB/s), 291MiB/s-291MiB/s (305MB/s-305MB/s), io=64.0MiB (67.1MB), run=220-220msec

If I run strace fio --name=test --ioengine=libaio --direct=1 --size=64m --rw=randwrite, at the point of crash, I see the following (I'm running this test on qemu but this is the same kernel that would cause the same crash if I run it on my laptop).

Unfortunately this strace output doesn't seem helpful. The process accesses sysfs only. It might be better to add the "-f" option to your strace. Or could you try MagicSysRq - A?

The situation you describe strongly suggests that I should revert the commit on the kernels before v6.10-rc1. But I'd like to see the scenario clearly.

J. R. Okajima

jamesbond3142 commented 3 months ago

Thank you for looking into this.

Unfortunately this strace output doesn't seem helpful. The process accesses sysfs only. It might be better to add "-f" option to your strace. Or could you try MagicSysRq - A?

I couldn't do MagicSysRq - A, as the machine was fully locked up. I tried strace -f; the output is attached below, but it doesn't seem different from before.

xscreenshot-20240608T200258

I tried, but I couldn't reproduce the problem.

Hmm, that's odd. Your mounts look identical to mine, but mine always crashes.

What I can do is pass you the qemu image containing the tests I have (with instructions on how to reproduce the issue), as well as the kernel sources (the kernel is vanilla + aufs patch) and the .config. Would that help?

sfjro commented 3 months ago

jamesbond3142:

I couldn't do MagicSysRq - A, as the machine was fully locked up. I tried strace -f; the output is attached below, but it doesn't seem different from before.

This strace gave me a little more.

The only caller of do_usleep() is run_threads(), which issues clone(2) and starts a child process. I guess that is pid=18 in your strace. The child process calls thread_main() --> fio_idle_prof_init(), and fio_idle_prof_init() handles a pthread which probably issues futex(2) and waits for something.

What the child process is doing is just my guess, and I won't be surprised if I made a mistake. Anyway, I cannot see the processes using aufs files. Is it really an aufs issue? Hmm, probably yes, since you already found out the latest commit is the cause. Or your strace might not show us the trace fully; the very last part might be missing because the system died.

Anyway I'm afraid there is very little I can investigate. While I am not sure it will help, send me your .config.

J. R. Okajima

jamesbond3142 commented 3 months ago

Wow, that's quite a great insight! To the untrained eye (= me), the first and second look almost identical. I'm glad that you can see the difference.

Or your strace might not show us the trace fully. The very last part might be missing because the system died.

Most likely.

Is it really an aufs issue? Hmm probably yes since you already found out the latest commit is the cause.

I just want to say again that the key difference between crash and no crash (on the patched kernel) is whether --direct=0 or --direct=1. And it also happens in mysql_install_db, which I'm quite sure uses O_DIRECT behind the scenes. I can try to get an strace from mysql_install_db if it helps, but it is a much bigger code base than fio. Since you say it's in fio_backend.c, what I can also do is take a look at what it is doing. Hopefully I can come up with a simple C program that can trigger the bug.

On the other hand, I've looked at the patch, and I agree that such a small change should not cause something as drastic as a hard lockup, so this can't be due to the patch alone. It has to be the patch plus something else that makes the effect so bad, because even when I run "dmesg -w" in another window (when running under Xorg), I don't see anything - the whole thing goes dead before it has the chance to dump any error message. It's really weird.

Anyway I'm afraid there is very little I can investigate. While I am not sure it will help, send me your .config.

I'm really grateful that you're looking into this; I understand that problems are really difficult to solve if you cannot reproduce them on your end. I'm happy to help you in any way that I can.

The config is attached here. The kernel source is here: http://distro.ibiblio.org/fatdog/sfs/900/kernel-source-6.1.90-debtest3.sfs (this is a plain vanilla kernel + aufs + one small tmpfs patch, attached here too). My gcc is version 12.2.0 (hopefully it's not a known bad version ...).

And if you want to see it in action, I can upload the qemu image somewhere. It's a 1 GB disk image. All you need to do is run qemu-system-x86_64 -enable-kvm -m 1024 -hda disk.img, and once it has booted up, open a terminal and run indirect.sh (fio test with --direct=0) or direct.sh (fio test with --direct=1 ==> will crash).

Thanks again Junjiro-san.

config-6.1.90-debtest3.txt z-shmem-user-xattr.txt

sfjro commented 3 months ago

jamesbond3142:

I just want to say again that the key difference between crash and no crash (on the patched kernel) is whether --direct=0 or --direct=1. And it also happens in mysql_install_db, which I'm quite sure uses O_DIRECT behind the scenes. I can try to get an strace from mysql_install_db if it helps, but it is a much bigger code base than fio. Since you say it's in fio_backend.c, what I can also do is take a look at what it is doing. Hopefully I can come up with a simple C program that can trigger the bug.

If such a C program can reproduce the problem on my side, it will be a great help.

According to your config, I've found you are running v6.1.90 instead of v6.1.0. I tried v6.1.90, but failed to reproduce. Unfortunately qemu doesn't suit my test environment.

Anyway, I decided to revert the commit in aufs6.1..aufs6.9. aufs6.10 (the current aufs6.x-rcN) still keeps it. Someday, when you upgrade your system to v6.10, the problem MAY happen again. If so, I will have to dive much deeper.

J. R. Okajima

sfjro commented 3 months ago

------- Blind-Carbon-Copy

From: "J. R. Okajima" @.> To: @. Subject: aufs6 GIT release (v6.10-rc2) MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-ID: @.> Date: Mon, 10 Jun 2024 00:33:33 +0900 Message-ID: @.>

o Bugfix
  on github, jamesbond3142 reported that the commit "aufs: bugfix, copy FMODE_CAN_ODIRECT" seems to cause a problem. I tried investigating but failed to find the root cause. I decided to revert the commit in aufs6.1..aufs6.9.

J. R. Okajima


------- End of Blind-Carbon-Copy

jamesbond3142 commented 3 months ago

If such a C program can reproduce the problem on my side, it will be a great help.

I found the program! The original source is from https://github.com/littledan/linux-aio, but I have removed the Google cruft. I'm attaching it in case you are still interested in testing (github doesn't allow me to upload a .cpp file, so I appended .txt; please rename it back to .cpp for compilation).

You can compile it with g++ aiotest.cpp -laio -o aiotest. Run it as ./aiotest /path/to/file, where the file must reside on an aufs mount whose rw layer is ext4.

aiotest.cpp.txt
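
For orientation, the kind of sequence the program exercises looks roughly like this (a hedged sketch only, not the attached aiotest.cpp; the file name, buffer size and offset are arbitrary):

/* illustrative only: a single O_DIRECT write submitted through libaio */
#define _GNU_SOURCE
#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	io_context_t ctx = 0;
	struct iocb cb, *cbs[1] = { &cb };
	struct io_event ev;
	void *buf;
	int fd, ret;

	if (argc != 2) {
		fprintf(stderr, "usage: %s /path/on/aufs\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_WRONLY | O_CREAT | O_DIRECT, 0644);
	if (fd < 0) {
		perror("open(O_DIRECT)");
		return 1;
	}
	/* O_DIRECT wants a block-aligned buffer and length */
	ret = posix_memalign(&buf, 4096, 4096);
	if (ret) {
		fprintf(stderr, "posix_memalign: %s\n", strerror(ret));
		return 1;
	}
	memset(buf, 'A', 4096);

	ret = io_setup(1, &ctx);
	if (ret < 0) {
		fprintf(stderr, "io_setup: %s\n", strerror(-ret));
		return 1;
	}
	io_prep_pwrite(&cb, fd, buf, 4096, 0);
	ret = io_submit(ctx, 1, cbs);
	if (ret != 1) {
		fprintf(stderr, "io_submit returned %d\n", ret);
		return 1;
	}
	ret = io_getevents(ctx, 1, 1, &ev, NULL);
	if (ret != 1) {
		fprintf(stderr, "io_getevents returned %d\n", ret);
		return 1;
	}
	printf("aio write completed, res=%ld\n", (long)ev.res);

	io_destroy(ctx);
	close(fd);
	free(buf);
	return 0;
}

Compile it with gcc -o aio-sketch aio-sketch.c -laio (the names are arbitrary); the real test is still the attached program.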

Running it will, without fail, cause a kernel crash. I managed to capture some of the crashes by forcing a text-mode console (if I use a framebuffer console, the crash corrupts the kernel so badly that it doesn't even get the chance to write anything out).

Failure point (in this text, "/" is an aufs mountpoint): failure-points

Here are some traces of the crash that I managed to capture, but it seems all I can see is the crash happening in fbcon or vgacon, so I'm not sure how useful they are.

xscreenshot-20240610T025729

xscreenshot-20240610T024324

xscreenshot-20240610T023004

According to your config, I've found you are running v6.1.90 instead of v6.1.0. I tried v6.1.90, but failed reproducing.

Correct, I use 6.1.90.

Unfortunately qemu doesn't suit my test environment.

Noted.

Anyway I decided to revert the commit in aufs6.1..aufs6.9. aufs6.10 (current aufs6.x-rcN) still keep it. Someday when you upgrade your system to v6.10, the problem MAY happen again. If so, I will have to dive much deeper.

Thank you. I may try this on 6.10 and see if the problem still happens (I hope not).

cheers!

sfjro commented 3 months ago

jamesbond3142:

I found the program! The original source is from here https://github.com/littledan/linux-aio, but I have removed the google cruft. I'm attaching this in case you are still interested to test (github doesn't allow me to upload a .cpp file, so I appended .txt; please rename it back to .cpp for compilation).

Thanks for the test program. I tried, but failed to reproduce it again. Sigh... I can see this program issues open(O_DIRECT) and AIO system calls. All of these work fine in my test environment.

Running it will, without fail, cause a kernel crash. I managed to capture some of the crashes by forcing a text-mode console (if I use a framebuffer console, the crash corrupts the kernel so badly that it doesn't even get the chance to write anything out).

The call traces in those images show that a write(2) to the console (not to an aufs file) caused the panic, regardless of whether it is a framebuffer console or a VGA one. I'm afraid there might be more important log output produced before these images.

J. R. Okajima

jamesbond3142 commented 3 months ago

Thanks for the test program. I tried, but failed to reproduce it again. Sigh... I can see this program issues open(O_DIRECT) and AIO system calls. All of these work fine in my test environment.

I managed to reproduce the crash on the unpatched kernel as well, if I specify "dio" as part of the mount options. So you're right, it's not the patch, but something deeper. The patch just exposes it because it forces "dio", which I usually never enable.

I also managed to reproduce the scenario where it continues without error, just like you did. So yes, this is very confusing. I'm sorting out the scenarios where it crashes and where it doesn't. Because this seems to be a different bug altogether, do you want me to start a new ticket (so we can close this one, since you already reverted the patch), or do you want to continue here?

Here are some of the kernel logs that I finally managed to capture, in situations where the kernel doesn't immediately lock up (but calling "sync" will eventually lock it up).

[  170.394445] BUG: kernel NULL pointer dereference, address: 0000000000000102
[  170.394552] #PF: supervisor read access in kernel mode
[  170.394652] #PF: error_code(0x0000) - not-present page
[  170.394761] PGD 405a067 P4D 405a067 PUD 39ce7067 PMD 0 
[  170.394863] Oops: 0000 [#1] PREEMPT SMP NOPTI
[  170.394965] CPU: 0 PID: 38 Comm: kworker/u2:1 Not tainted 6.1.90-debtest3 #15
[  170.395071] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.1-0-g3208b098f51a-prebuilt.qemu.org 04/01/2014
[  170.395184] Workqueue: loop2 loop_rootcg_workfn
[  170.395296] RIP: 0010:__queue_work+0x16/0x3c0
[  170.395408] Code: 49 89 6d 00 48 89 68 08 49 8b 5c 24 20 e9 61 ff ff ff 66 90 41 57 41 56 41 89 fe 41 55 41 89 fd 41 54 49 89 f4 55 53 48 89 d3 <f6> 86 02 01 00 00 01 0f 85 7e 02 00 00 e8 e8 32 05 00 41 f6 84 24
[  170.395657] RSP: 0018:ffff888003dbfcc8 EFLAGS: 00010002
[  170.395793] RAX: ffff888008458000 RBX: ffff888039dfe9a0 RCX: ffff88803911ea98
[  170.395928] RDX: ffff888039dfe9a0 RSI: 0000000000000000 RDI: 0000000000000020
[  170.396058] RBP: ffff88803b917700 R08: 0000000000000c00 R09: ffff888003469000
[  170.396180] R10: 0000000000001000 R11: ffff88803cd77d98 R12: 0000000000000000
[  170.396303] R13: 0000000000000020 R14: 0000000000000020 R15: ffff88803cd77c80
[  170.396430] FS:  0000000000000000(0000) GS:ffff88803fc00000(0000) knlGS:0000000000000000
[  170.396559] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  170.396685] CR2: 0000000000000102 CR3: 000000000b1e0000 CR4: 00000000000006f0
[  170.396821] Call Trace:
[  170.396956]  <TASK>
[  170.397081]  ? __die+0x50/0x92
[  170.397207]  ? page_fault_oops+0x65/0x1b0
[  170.397337]  ? kernelmode_fixup_or_oops+0x7f/0x110
[  170.397467]  ? exc_page_fault+0x287/0x570
[  170.397594]  ? asm_exc_page_fault+0x22/0x30
[  170.397728]  ? __queue_work+0x16/0x3c0
[  170.397867]  queue_work_on+0x1f/0x30
[  170.397992]  iomap_dio_bio_end_io+0x87/0x140
[  170.398126]  blk_update_request+0x164/0x3d0
[  170.398253]  blk_mq_end_request+0x13/0x30
[  170.398377]  loop_process_work+0x139/0x990
[  170.398503]  process_one_work+0x1c6/0x310
[  170.398627]  worker_thread+0x45/0x3b0
[  170.398761]  ? process_one_work+0x310/0x310
[  170.398887]  kthread+0xd5/0x100
[  170.399011]  ? kthread_complete_and_exit+0x20/0x20
[  170.399135]  ret_from_fork+0x22/0x30
[  170.399260]  </TASK>
[  170.399381] Modules linked in: snd_pcm_oss snd_mixer_oss bnep bluetooth ecdh_generic ecc cfg80211 rfkill 8021q mrp ipv6 bochs drm_vram_helper drm_ttm_helper ttm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops snd_pcm drm snd_timer snd e1000 psmouse soundcore input_leds parport_pc pcspkr floppy qemu_fw_cfg i2c_piix4 parport [last unloaded: battery]
[  170.399830] CR2: 0000000000000102
[  170.399970] ---[ end trace 0000000000000000 ]---
[  170.400109] RIP: 0010:__queue_work+0x16/0x3c0
[  170.400250] Code: 49 89 6d 00 48 89 68 08 49 8b 5c 24 20 e9 61 ff ff ff 66 90 41 57 41 56 41 89 fe 41 55 41 89 fd 41 54 49 89 f4 55 53 48 89 d3 <f6> 86 02 01 00 00 01 0f 85 7e 02 00 00 e8 e8 32 05 00 41 f6 84 24
[  170.400558] RSP: 0018:ffff888003dbfcc8 EFLAGS: 00010002
[  170.400712] RAX: ffff888008458000 RBX: ffff888039dfe9a0 RCX: ffff88803911ea98
[  170.400887] RDX: ffff888039dfe9a0 RSI: 0000000000000000 RDI: 0000000000000020
[  170.401042] RBP: ffff88803b917700 R08: 0000000000000c00 R09: ffff888003469000
[  170.401199] R10: 0000000000001000 R11: ffff88803cd77d98 R12: 0000000000000000
[  170.401357] R13: 0000000000000020 R14: 0000000000000020 R15: ffff88803cd77c80
[  170.401518] FS:  0000000000000000(0000) GS:ffff88803fc00000(0000) knlGS:0000000000000000
[  170.401696] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  170.401920] CR2: 0000000000000102 CR3: 000000000b1e0000 CR4: 00000000000006f0
[  170.402104] note: kworker/u2:1[38] exited with irqs disabled

There are other messages flying by as well; I'm trying to capture them too, and when I can, I will post them to you.

sfjro commented 3 months ago

jamesbond3142:

I managed to reproduce the crash on the unpatched kernel as well, if I specify "dio" as part of the mount options. So you're right, it's not the patch, but something deeper. The patch just exposes it because it forces "dio", which I usually never enable.

What do you call "the unpatched kernel"? Do you mean "without aufs"? If so, what does "specify 'dio' as part of the mount options" mean?

The crash message you posted is a good step forward. In my test, I didn't use the loopback block device. Now I have started my local test using a loopback mount, which will take a few hours. (Currently it is running fine.)

The call trace shows

- an application writes something to a file, and the file is in the loopback mount
- the loopback device starts a workqueue, which is another thread in the kernel
- the kernel thread issues an actual I/O request or receives a notification telling that the I/O is done
- the kernel thread handles the notification, and requests another kthread to do something
- and accesses a NULL address (Bang!)

A kthread requests another kthread? I don't know whether that is normal or wrong behaviour on a loopback mount.

About the ticket, you don't have to close this.

J. R. Okajima

jamesbond3142 commented 3 months ago

What do you call "the unpatched kernel"? Do you mean "without aufs"?

Sorry, I was not clear. "Unpatched kernel" means the aufs kernel without the FMODE_CAN_ODIRECT patch - that is, without the patch that forces "dio".

If so, what is "specify "dio" as part of the mount option"?

I mean that the "aufs kernel without the FMODE_CAN_ODIRECT patch" can be forced to behave like the one with the FMODE_CAN_ODIRECT patch by passing the "dio" mount option, just as you said in your first reply - the patch only makes "dio" enabled by default and doesn't do anything else.

The crash message you posted is a good step forward. In my test, I didn't use the loopback block device. Now I started my local test using loopback mount which will take a few hours. (Currently it is running good.)

The crash message came from this setup:

truncate -s 128M /tmp/x.img
mkfs.ext4 /tmp/x.img
mount  /tmp/x.img /mnt/flash # -o loop is implied
mount -t aufs aufs -o br:/mnt/flash=rw,dio /mnt/data
aiotest /mnt/data/xxx 10000

The aiotest process stops at "Submitting a write to 3762" or thereabouts (when it should go up to 9999). I can log in on another VT, but if I issue a sync from that other VT, the kernel locks up hard.

My earlier setup (which crashes so hard that I don't even get the crash log) looks like this:

mount /dev/sda2 /aufs/devbase # /dev/sda2 is ext4
mount /aufs/devbase/fd64.sfs /aufs/pup_ro # -o loop is implied, sfs is "squashfs" filesystem
mount -o bind /aufs/devbase/xxx /mnt/sb/sandbox 
mount -t aufs aufs -o br:/mnt/sb/sandbox=rw:/aufs/pup_ro=ro,dio /mnt/sb/fakeroot
# and a few other mounts to attach proc, sysfs, devtmpfs etc to /mnt/sb/fakeroot/{proc,sys,dev}
chroot /mnt/sb/fakeroot
# now inside the chroot, run aiotest
aiotest xxx # CRASH

The call trace shows - (an application writes something to a file, and the file is in the loopback mount) - the loopback device starts a workqueue, which is another thread in the kernel - the kernel thread issues an actual I/O request or receives a notification telling that the I/O is done - the kernel thread handles the notification, and requests another kthread to do something - and accesses a NULL address (Bang!) A kthread requests another kthread? I don't know whether that is normal or wrong behaviour on a loopback mount.

Thanks for the analysis. To a kernel outsider like me, this error looks like either stack smashing or an array/memcpy overrun somewhere that destroys the kernel's internal data structures. It seems to mess up the kernel so quickly that any further filesystem work (e.g. "sync") kills it immediately. In my original case, where "/" was aufs, the kernel got corrupted to the point that it couldn't even print anything.

If you want me to re-compile the kernel with additional CONFIG parameters so you can see additional log messages, just let me know. I'm doing all these tests on qemu to make testing faster, but the same problem happens on real hardware too.

cheers, James

sfjro commented 3 months ago

jamesbond3142:

Sorry, I was not clear. "Unpatched kernel" means the aufs kernel without the FMODE_CAN_ODIRECT patch - that is, without the patch that forces "dio".

I see. So the problem can happen without the FMODE_CAN_ODIRECT commit. And that means the cause is either an old hidden aufs bug or a Linux bug. Right?

If you want me to re-compile the kernel and use additional CONFIG parameters so you can see additional log messages, just let me know. I'm doing all this test on qemu so make the test faster but the same problem happens on real hardware too.

I tried testing a loopback-mounted ext4 as the aufs RW branch, and couldn't reproduce it. But that was the case where the ext4 image file is on shmem (RAM). In a few days I will try the case where the ext4 image file is on an HDD.

Thinking that there is something more involved (other than aufs), and digging into drivers/block/loop.c, I see these changes after v6.1.

7c98f7cb8fda 2024-04-15 remove call_{read,write}_iter() functions
473516b36193 2024-02-13 loop: use the atomic queue limits update API
02aed4a1f2c3 2024-02-13 loop: pass queue_limits to blk_mq_alloc_disk
65bdd16f8c72 2024-02-13 loop: cleanup loop_config_discard
27e32cd23fed 2024-02-13 block: pass a queue_limits argument to blk_mq_alloc_disk
baa7d536077d 2024-01-18 loop: fix the the direct I/O support check when used on top of block devices
3d77976c3a85 2023-12-27 loop: don't abuse BLK_DEF_MAX_SECTORS
34c7db44b4ed 2023-12-27 loop: don't update discard limits from loop_set_status
269aed7014b3 2023-11-24 fs: move file_start_write() into vfs_iter_write()
ab6860f62bfe 2023-08-21 block: simplify the disk_force_media_change interface
bb5faa99f0ce 2023-07-21 loop: do not enforce max_loop hard limit by (new) default
23881aec85f3 2023-07-21 loop: deprecate autoloading callback loop_probe()
05bdb9965305 2023-06-12 block: replace fmode_t with a block-specific type for block open flags
ae220766d87c 2023-06-12 block: remove the unused mode argument to ->release
0718afd47f70 2023-06-05 block: introduce holder ops
bb430b694226 2023-03-27 loop: LOOP_CONFIGURE: send uevents for partitions
9b0cb770f5d7 2023-03-14 loop: Fix use-after-free issues
9f6ad5d533d1 2023-02-22 loop: loop_set_status_from_info() check before assignment
e152a05fa054 2023-02-01 loop: Improve the hw_queue_depth kernel module parameter implementation
292a089d78d3 2022-12-25 treewide: Convert del_timer() to timer_shutdown()
85c50197716c 2022-12-14 loop: Fix the max_loop commandline argument treatment when it is set to 0
de4eda9de2d9 2022-11-25 use less confusing names for iov_iter direction initializers

Most of them look unrelated to our problem. But I see the commit

baa7d536077d 2024-01-18 loop: fix the the direct I/O support check when used on top of block devices

is a little shining. And these commits too, but not so bright.

02aed4a1f2c3 2024-02-13 loop: pass queue_limits to blk_mq_alloc_disk
3d77976c3a85 2023-12-27 loop: don't abuse BLK_DEF_MAX_SECTORS
9b0cb770f5d7 2023-03-14 loop: Fix use-after-free issues

So I'd ask you to apply baa7d536077d and test. If the problem is not fixed, then try the three "not so bright" commits.

J. R. Okajima

jamesbond3142 commented 3 months ago

I see. So the problem can happen without the FMODE_CAN_ODIRECT commit. And it means the cause is either an old hidden aufs bug or linux bug. Right?

Correct.

I tried testing loopback mounted ext4 as aufs RW branch, and couldn't reproduce.

I forgot to say that the problem also seems to be related to a race condition. In this setup:

truncate -s 128M /tmp/x.img
mkfs.ext4 /tmp/x.img
mount  /tmp/x.img /mnt/flash # -o loop is implied
mount -t aufs aufs -o br:/mnt/flash=rw,dio /mnt/data
aiotest /mnt/data/xxx 10000

If, instead of running aiotest /mnt/data/xxx 10000 on the last line, I replace it with aiotest /mnt/data/xxx 10000 > run.txt 2>&1, then it does not crash. Weird.

Most of them look unrelated to our problem. But I see the commit baa7d536077d 2024-01-18 loop: fix the the direct I/O support check when used on top of block devices is a little shining. And these commits too, but not so bright.
02aed4a1f2c3 2024-02-13 loop: pass queue_limits to blk_mq_alloc_disk
3d77976c3a85 2023-12-27 loop: don't abuse BLK_DEF_MAX_SECTORS
9b0cb770f5d7 2023-03-14 loop: Fix use-after-free issues
So I'd ask you to apply baa7d536077d and test. If the problem is not fixed, then try the three "not so bright" commits. J. R. Okajima

Yes, I will test those commits and get back to you.

jamesbond3142 commented 3 months ago

I'm checking linux-stable: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git

But I see the commit baa7d536077d 2024-01-18 loop: fix the the direct I/O support check when used on top of block devices is a little shining.

This commit is already applied in 6.1.90. Do you want me to revert it?

And these commits too, but not so bright. 02aed4a1f2c3 2024-02-13 loop: pass queue_limits to blk_mq_alloc_disk

This commit does not exist in 6.1.90. Do you want me to apply it?

3d77976c3a85 2023-12-27 loop: don't abuse BLK_DEF_MAX_SECTORS

This commit does not exist in 6.1.90. Do you want me to apply it?

9b0cb770f5d7 2023-03-14 loop: Fix use-after-free issues

This commit is already applied in 6.1.90. Do you want me to revert it?

sfjro commented 3 months ago

jamesbond3142:

I'm checking linux-stable: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git

Ah, I didn't check v6.1.90. Of the four commits I listed, two are already applied to v6.1.90. So I'd ask you to apply the remaining two commits and test. Please note that I'm not sure this will fix the problem; it's just a quick and easy trial.

J. R. Okajima

jamesbond3142 commented 3 months ago

Ah, I didn't check v6.1.90.

So you have been testing on 6.1.0. I think I should do that too, to see if the problem shows up on pristine 6.1?

In the four commits I wrote, two commits are already applied to v6.1.90. So I'd ask you to apply the remaining two commits and test.

3d77976c3a85 2023-12-27 loop: don't abuse BLK_DEF_MAX_SECTORS

This commit is harmless, because it only changes BLK_DEF_MAX_SECTORS to its numeric equivalent (256u). It may have an effect in the future if BLK_DEF_MAX_SECTORS is changed, but for now it has no effect.

And these commits too, but not so bright. 02aed4a1f2c3 2024-02-13 loop: pass queue_limits to blk_mq_alloc_disk

I cannot apply this. The function signature has changed between 6.1 and the patch. In 6.1, the call is blk_mq_alloc_disk(&lo->tag_set, lo); (that is, two parameters), while in the patch it is blk_mq_alloc_disk(&lo->tag_set, &lim, lo);, that is, three parameters.

I think I will build plain 6.1 and see if I can reproduce the problem there. If not, then something must have gone wrong between 6.1 and 6.1.90. If even 6.1 has the problem, then I don't know where else to go ... we would need to find when this wasn't a problem, and then try to bisect it?

sfjro commented 3 months ago

jamesbond3142:

So you have been testing on 6.1.0. I think I should do that too, see if the problems shows up on pristine 6.1?

No. I meant I did "git log v6.1..master drivers/block.c" instead of "v6.1.90..master". It was my mistake. I'm testing kernel v6.1.90 + aufs6.1 + the FMODE_CAN_ODIRECT commit.

Thanks for trying the two commits I mentioned previously.

I don't think you need to move to v6.1.0, but bisection is a good approach I think.

J. R. Okajima

sfjro commented 3 months ago

"J. R. Okajima":

I tried testing a loopback-mounted ext4 as the aufs RW branch, and couldn't reproduce it. But that was the case where the ext4 image file is on shmem (RAM). In a few days I will try the case where the ext4 image file is on an HDD.

I tried

J. R. Okajima

jamesbond3142 commented 3 months ago

I don't know what else to say. I just tried this again:

truncate -s 128M /tmp/x.img
mkfs.ext4 /tmp/x.img
mount  /tmp/x.img /mnt/flash # -o loop is implied
mount -t aufs aufs -o br:/mnt/flash=rw,dio /mnt/data
aiotest /mnt/data/xxx 10000

on a real machine (a desktop, different from the laptop I used to test before), and it too locked up straight away (no chance even to get the dmesg output, unlike in qemu where I still had the chance to get it).

I tried a few variations: instead of mkfs.ext4, I tried ext2, ext3 and btrfs when creating the x.img above. ext2 and ext3 don't work (the test failed at the fallocate stage and refused to run further), but btrfs also locks up straight away.

I guess the only way to know is by bisecting, but for that I need to find out the "good" version first. Will need to compile a few kernels ...

sfjro commented 3 months ago

jamesbond3142:

on a real machine (a desktop, different from my laptop that I used to test before) and it too locked up straight away (not even the chance to get the dmesg output, unlike in qemu where I still had the chance to get it).

Then I'd suggest you try the same test without aufs, i.e. aiotest /mnt/flash/xxx 10000 instead of /mnt/data/xxx.

J. R. Okajima

jamesbond3142 commented 3 months ago

Then I'd suggest you try the same test without aufs, i.e. aiotest /mnt/flash/xxx 10000 instead of /mnt/data/xxx.

Yes, I did, and it didn't crash.

Testing with old kernels I built before:

  • 5.4.60 is good
  • 5.10.63 is good
  • 5.19.17 crash

To try some variety, I will try kernels built by other people (from the Puppy Linux community mostly) and see if I can reproduce the problem on those kernels too.

sfjro commented 3 months ago

jamesbond3142:

Then I'd suggest you try the same test without aufs, i.e. aiotest /mnt/flash/xxx 10000 instead of /mnt/data/xxx.

Yes, I did, and it didn't crash.

Testing with old kernels I built before:

  • 5.4.60 is good
  • 5.10.63 is good
  • 5.19.17 crash

Let me make sure: without aufs, running "aiotest /mnt/flash/xxx 10000" instead of /mnt/data/xxx, did linux-v5.19.17 crash?

J. R. Okajima

jamesbond3142 commented 3 months ago

Let me make sure: without aufs, running "aiotest /mnt/flash/xxx 10000" instead of /mnt/data/xxx, did linux-v5.19.17 crash?

No crash. I ran it twice to make sure.

I tested 6.6.32 from https://www.forum.puppylinux.com/viewtopic.php?t=11745 - it also crashes (when run on /mnt/data/xxx). I will try to find older kernels, perhaps from 5.15.

jamesbond3142 commented 3 months ago

More data points:

I tested both: aiotest on /mnt/flash/xxx did not crash, but on /mnt/data/xxx (on aufs) it crashed.

jamesbond3142 commented 3 months ago

I tested more kernels from here: https://archive.org/download/Puppy_Linux_Huge-Kernels

huge-5.10.55-slac64oz.tar.bz2 - crash
huge-5.10.208-slac64oz-ao.tar.bz2 - crash
huge-5.11.15-slac64oz.tar.bz2 - crash
huge-5.12.7-lxpup64.tar.bz2 - crash
huge-5.13.8-slac64oz.tar.bz2 - crash
huge-5.14.5-slac64oz.tar.bz2 - crash

But this is interesting: that 5.10.55 kernel crashes, while my 5.10.63 doesn't. What's the difference? I need to review the configuration of these two, but I don't have the kernel sources for these huge kernels ...

By the way, I run all the aiotest as root.

jamesbond3142 commented 3 months ago

I tried huge-5.10.61-64oz.tar.bz2, which also crashes. This is the closest to mine (5.10.63), which did not crash. Comparing the configs, after eliminating all the drivers, the difference is not much (from the good 5.10.63 to the broken 5.10.61):

CONFIG_CC_VERSION_TEXT="gcc (GCC) 9.3.0"
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_EFI_MIXED=y
CONFIG_GCC_VERSION=90300
CONFIG_HIBERNATION_SNAPSHOT_DEV=y
CONFIG_HZ=300
# CONFIG_HZ_1000 is not set
CONFIG_HZ_300=y
CONFIG_KVM_WERROR=y
CONFIG_LD_VERSION=233010000
# CONFIG_PREEMPT is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PRIME_NUMBERS is not set
CONFIG_PVPANIC=y
CONFIG_SND_HDA_PREALLOC_SIZE=0
CONFIG_SYMBOLIC_ERRNAME=y
# CONFIG_TASK_XACCT is not set
CONFIG_TIME_NS=y
# CONFIG_TMPFS_POSIX_ACL is not set
# CONFIG_VDPA is not set
CONFIG_VSOCKETS_LOOPBACK=m
CONFIG_X86_ACPI_CPUFREQ=m
CONFIG_X86_POWERNOW_K8=m
CONFIG_X86_UMIP=y
CONFIG_AS_TPAUSE=y
CONFIG_CC_HAS_ASM_INLINE=y
CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y
CONFIG_INLINE_READ_UNLOCK=y
CONFIG_INLINE_READ_UNLOCK_IRQ=y
CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
CONFIG_INLINE_WRITE_UNLOCK=y
CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
# CONFIG_KASAN is not set

Most of the difference seems to be that my 5.10.63 was compiled with gcc 7.3.0, while this 5.10.61-64oz was compiled with the newer gcc 9.3.0. Could gcc be the cause?

To test that theory, I'm now re-compiling the known good 5.10.63 with gcc 12.2.0, the same compiler I used for 6.1.90. We'll see ...

jamesbond3142 commented 3 months ago

Most of the difference seems to be that my 5.10.63 was compiled with gcc 7.3.0, while this 5.10.61-64oz was compiled with the newer gcc 9.3.0. Could gcc be the cause?

To test that theory, I'm now re-compiling the known good 5.10.63 with gcc 12.2.0, the same compiler I used for 6.1.90. We'll see ...

That's not it. The newly compiled 5.10.63 with gcc 12.2.0 does not crash. It has to be something else ... I will try compiling 5.10.63 with the configuration of 5.10.61 and see.

jamesbond3142 commented 3 months ago

Okay, we've got some progress here.

The same 5.10.63 kernel sources, but configured with the .config from huge-5.10.61-64oz.tar.bz2 (which crashes), also crash.

So the configuration does make a difference, and perhaps I failed to see the difference earlier when I did the comparison between the two configs. Need to investigate further.

jamesbond3142 commented 3 months ago

Ok, reporting back after 10 kernel compiles ... after bisecting the config between the working 5.10.63 and the non-working 5.10.61, it turns out the difference between the crash and no-crash configurations was CONFIG_PREEMPT. The working (no crash) configuration has CONFIG_PREEMPT=y, while the crashing one has CONFIG_PREEMPT_VOLUNTARY=y.

But this is not making sense, because my other kernels (including 6.1.90) all had CONFIG_PREEMPT=y and yet they still crashed :(

This looks more and more like a buffer overrun somewhere, and changing CONFIG_PREEMPT just masks the problem. That's probably also why you can't replicate it: if the config is different, the memory layout is different, so the crash may not happen.

I'm going to do one more test: compile 6.1.90 with PREEMPT_DYNAMIC turned off. That's one of the PREEMPT tunables that didn't exist in 5.10.x. We'll see if this helps (I'm thinking it probably won't, but I might get lucky).

I will actually also try to enable some of the INIT config parameters (init stack, init malloc, etc.) to see if things get corrupted, and perhaps I will see something. Perhaps. Fingers crossed.

jamesbond3142 commented 3 months ago

I'm going to do one more test: compile 6.1.90 with PREEMPT_DYNAMIC turned off. That's one of the PREEMPT tunables that didn't exist in 5.10.x. We'll see if this helps (I'm thinking it probably won't, but I might get lucky).

Nope, still crashed :(

Actually I will also try to enable some of the INIT config parameters (init stack, init malloc, etc) to see if things get corrupted, and perhaps I will see something. Perhaps. Fingers crossed.

Didn't help either.

This looks more and more like buffer overrun somewhere, and changing the CONFIG_PREEMPT just masked the problem. And that's probably why you can't replicate this problem, because if the config is different, the memory layout is different, so the crash may not happen.

But here is the interesting bit: if I run the test without loading the kernel modules (the 6.1.90 kernel has sufficient built-ins to boot inside qemu without requiring additional modules), then running the test does not crash. Once I load the modules, however, the crash is back.

sfjro commented 3 months ago

jamesbond3142:

I don't know what else to say. I just tried this again:


truncate -s 128M /tmp/x.img
mkfs.ext4 /tmp/x.img
mount  /tmp/x.img /mnt/flash # -o loop is implied

And your /tmp is ext4 too? We know the loopback mounted /mnt/flash is ext4 and it supports O_DIRECT, but how about the backend /tmp/x.img?

Currently I'm thinking about scheduling timing.

J. R. Okajima

jamesbond3142 commented 3 months ago

And your /tmp is ext4 too? We know the loopback mounted /mnt/flash is ext4 and it supports O_DIRECT, but how about the backend /tmp/x.img?

/tmp is tmpfs, but in my tests it doesn't matter where x.img is located. As long as /mnt/flash has support for O_DIRECT (I tested with ext4 and btrfs), it will crash. Surprisingly, aiotest refuses to run with ext2 and ext3 (so these filesystems don't support O_DIRECT, I suppose).
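
For what it's worth, a quick probe along these lines (a hypothetical sketch, not part of aiotest; the 4 KiB alignment is an assumption) would show whether a filesystem rejects O_DIRECT at open time or only fails the aligned write later:

/* rough O_DIRECT probe: does open() reject it, or does the aligned write fail? */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	void *buf;
	ssize_t n;
	int fd, rc;

	if (argc != 2) {
		fprintf(stderr, "usage: %s /branch/probe-file\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_WRONLY | O_CREAT | O_DIRECT, 0644);
	if (fd < 0) {
		printf("open(O_DIRECT) failed: %s\n", strerror(errno));
		return 1;
	}
	rc = posix_memalign(&buf, 4096, 4096);
	if (rc) {
		fprintf(stderr, "posix_memalign: %s\n", strerror(rc));
		return 1;
	}
	memset(buf, 0, 4096);
	n = write(fd, buf, 4096);	/* aligned, so a failure here points at missing DIO support */
	if (n < 0)
		printf("write(2) failed: %s\n", strerror(errno));
	else
		printf("write(2) wrote %zd bytes\n", n);
	close(fd);
	free(buf);
	return 0;
}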

Currently I'm thinking about scheduling timing.

Thanks for continuing to look into this. Right now I'm out of options and don't know what else I can do to help you pinpoint the problem. If you have anything else you want me to do / test, just let me know.

cheers, James

sfjro commented 3 months ago

jamesbond3142:

Thanks for continuing to look into this. Right now I'm out of options and don't know what else I can do to help you pinpoint the problem. If you have anything else you want me to do / test, just let me know.

You don't have to thank me, since the problem seems to be an aufs bug. Here is a rather dirty approach to try to fix it. Please test and report.

J. R. Okajima

diff --git a/fs/aufs/f_op.c b/fs/aufs/f_op.c
index 57b2e897f484..b0fd8d94d4da 100644
--- a/fs/aufs/f_op.c
+++ b/fs/aufs/f_op.c
@@ -278,11 +278,20 @@ static ssize_t au_do_iter(struct file *h_file, int rw, struct kiocb *kio,
 	lockdep_off();
 	err = iter(kio, iov_iter);
 	lockdep_on();

jamesbond3142 commented 3 months ago

You don't have to thank me, since the problem seems to be an aufs bug. Here is a rather

I still have to thank you for continuing to support aufs :)

Here is a rather dirty approach to try fixing the bug. Please test and report.

I will, but before that:

  1. This is to be applied to 6.1.90?

  2. I cannot extract the patch cleanly from github/email. Do you mind attaching the patch file instead (in either email or github).

cheers, James

sfjro commented 3 months ago

jamesbond3142:

I will, but before that:

  1. This is to be applied to 6.1.90?

Yes. But the aufs source files are identical to aufs6.1.

  2. I cannot extract the patch cleanly from github/email. Do you mind attaching the patch file instead (in either email or github).

Github eats/discards mail attachments. Anyway, I will try here. If you cannot get it, send your mail address to me at gmailto:hooanon05g.

jamesbond3142 commented 3 months ago

Email sent :)

sfjro commented 2 months ago

------- Blind-Carbon-Copy

From: "J. R. Okajima" @.> To: @. Subject: aufs6 GIT release (v6.10-rc4) MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-ID: @.> Date: Mon, 24 Jun 2024 09:31:56 +0900 Message-ID: @.>

o Bugfix

o aufs-util

J. R. Okajima


------- End of Blind-Carbon-Copy

jamesbond3142 commented 2 months ago

This fixes it. Thank you again Junjiro-san! :+1:

Closing the ticket now.