Closed: ANHDY closed this issue 1 month ago
Is this stable and reproducible?
The cause of this panic is that the cache directory was accidentally deleted, so it no longer existed when the lock file was read.
/data1/jfs/83f08d30-0069-4d18-b0d9-4650f31a8704/
Why doesn't it exist? Did you manually delete that?
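If it helps to narrow this down (a quick shell sketch, not an official diagnostic; the path is copied from this issue), check the directory right before the run and watch whether something removes it while the benchmark is executing:

# check that the cache directory exists and who owns it
ls -ld /data1/jfs/83f08d30-0069-4d18-b0d9-4650f31a8704/
# keep watching it while the dfsio job runs to see if it disappears mid-run
watch -n 1 'ls -ld /data1/jfs/83f08d30-0069-4d18-b0d9-4650f31a8704/'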
Yes, this problem occurs every time. I checked /data1/jfs/83f08d30-0069-4d18-b0d9-4650f31a8704/ with a command and the folder exists, so I don't know why this error is still reported every time.
Which user did you use to run the hadoop command? And does that user have permission to mkdir under /data1/jfs?
I used the hive user to run Hadoop commands and set 777 permissions for /data1/jfs
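One way to double-check (a shell sketch; hive and /data1/jfs are the user and path mentioned above, and the probe directory name is only illustrative) is to try creating a subdirectory as that user on each node where the benchmark tasks run, since the test is distributed:

# verify the hive user can create and write under the cache root
sudo -u hive mkdir -p /data1/jfs/permission-probe
sudo -u hive touch /data1/jfs/permission-probe/test-lock
ls -ld /data1/jfs /data1/jfs/permission-probe
# clean up the probe directory afterwards
sudo -u hive rm -rf /data1/jfs/permission-probe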
What happened:
This error occurred during distributed testing. Command executed:
hadoop jar ./juicefs-hadoop-1.2.0.jar dfsio -write -files 50 -size 1MB -bufferSize 1048576 -baseDir jfs://jfs/tmp/benchmarks/DFSIO
log:
View yarn logs:
yarn logs --applicationId=application_1727071698603_0010
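To pull the relevant failure out of the aggregated logs (a sketch; the grep pattern is only a guess at how the panic appears in the container logs), something like this can be used:

# fetch the aggregated container logs and show context around the panic
yarn logs -applicationId application_1727071698603_0010 | grep -i -A 20 'panic'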
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?
Environment:
JuiceFS version (use juicefs --version) or Hadoop Java SDK version: juicefs-hadoop-1.2.0.jar
OS (e.g. cat /etc/os-release):
Kernel (e.g. uname -a):