mwqgithub opened 1 week ago
Usually it's caused by heavy write workloads. JuiceFS will retry the transaction as expected.
Could you please explain what you mean by a 'heavy' workload? Are you referring to high bandwidth usage or a high level of concurrency? In our case, we are writing about 260GB of data into JFS using a single thread. Would this be considered a heavy workload? Could this issue be related to S3 throttling?
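For context, the retry-on-lock behavior described above ("JuiceFS will retry the transaction") can be sketched roughly like this. This is a hedged illustration in Python against SQLite directly, not JuiceFS's actual Go implementation; the function name and backoff parameters are hypothetical:

```python
import random
import sqlite3
import time

def with_retry(op, max_tries=50):
    """Retry a SQLite operation on 'database is locked' with jittered
    exponential backoff. A sketch of a retry loop, not JuiceFS code."""
    delay = 0.001
    for attempt in range(1, max_tries + 1):
        try:
            return op()
        except sqlite3.OperationalError as e:
            # Only retry lock contention; re-raise anything else,
            # and give up after the last attempt.
            if "database is locked" not in str(e) or attempt == max_tries:
                raise
            time.sleep(delay * random.uniform(0.5, 1.5))
            delay = min(delay * 2, 1.0)

# Usage: on an uncontended database the first try succeeds.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")
with_retry(lambda: conn.execute("INSERT INTO t VALUES (1)"))
print(conn.execute("SELECT count(*) FROM t").fetchone()[0])  # -> 1
```

Under a heavy write workload the loop simply spins more often, which is consistent with log lines reporting success "after N tries".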
What happened:
We were writing some files to the mount point, and a line saying `database is locked` appeared in the log:

```
May 18 01:51:16 ip-172-31-4-1 juicefs[6074]: juicefs[6074] : Read transaction succeeded after 4 tries (15.108270368s), last error: database is locked [sql.go:768]
```

Afterwards, access to the mount point became slow.

What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
It only happened once, so far we don't know how to reproduce it.
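For anyone trying to isolate the underlying error: `database is locked` is SQLite's own message when two connections contend for the write lock, and it can be provoked with two plain connections. This is a minimal sketch against SQLite directly, not through JuiceFS; the file path and table are hypothetical:

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file. timeout=0 means "fail
# immediately instead of waiting for the lock", so the error surfaces
# deterministically.
path = os.path.join(tempfile.mkdtemp(), "meta.db")
conn1 = sqlite3.connect(path, timeout=0)
conn2 = sqlite3.connect(path, timeout=0)

conn1.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v BLOB)")
conn1.commit()

conn1.execute("BEGIN IMMEDIATE")  # first writer takes the write lock
conn1.execute("INSERT INTO kv VALUES ('a', x'00')")

try:
    conn2.execute("BEGIN IMMEDIATE")  # second writer cannot get the lock
except sqlite3.OperationalError as e:
    print(e)  # -> database is locked

conn1.commit()
```

This only shows the generic SQLite contention mechanism; it does not explain why a single-threaded 260GB write would trigger it in JuiceFS.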
Anything else we need to know?
Environment:

- JuiceFS version (use `juicefs --version`) or Hadoop Java SDK version: juicefs version 1.1.0+2023-09-04.08c4ae6
- OS (e.g. `cat /etc/os-release`):
- Kernel (e.g. `uname -a`):