owncloud / core

:cloud: ownCloud web server core (Files, DAV, etc.)
https://owncloud.com
GNU Affero General Public License v3.0

Stray locks not being cleaned: server replied: Locked (ajax cron) #20380

Closed klausguenter closed 7 years ago

klausguenter commented 8 years ago

Hi there,

I have problems uploading some files via the Windows 7 client (version 2.0.2, build 5569) connected to an ownCloud 8.2 stable server.

The files exist on the client, not on the server. The log file on the client says:

06.11.2015 23:07:35 folder1/xxx.MDB   F:\Cloud1                     Error downloading http://xxx/owncloud/remote.php/webdav/folder1/xxx.MDB - server replied: Locked ("folder1/xxx.MDB" is locked)4,3 MB

1) I wonder why the client reports a download problem; it should be trying to upload.

2) At first I thought the file on the client might be in use by another program. But it is the server that says the file is locked, not the client.

Can anyone help me please?

Regards, klausguenter

MorrisJobke commented 8 years ago

cc @icewind1991 for the locking topic

PVince81 commented 8 years ago

I believe the error message in the client says "download" even when uploading; that's a separate issue.

The question here is why the file is locked in the first place. Are there other users accessing that folder?

I suspect a stray lock.

klausguenter commented 8 years ago

It's possible that the files to upload were in use by another program when the sync client tried to upload them for the first time. When the problem occurred I restarted the client PC to make sure these files were no longer in use by another program. But the files still could not be uploaded.

apramhaas commented 8 years ago

I have exactly the same problem. It suddenly occurred for one file, for the first time. I'm the only one syncing to this directory (3 PCs, 2 mobile devices). I cannot overwrite or delete it. I came here from https://forum.owncloud.org/viewtopic.php?t=31270&p=100790 and tried the procedure described there.

Server configuration

Operating system: Raspbian 8
Web server: Nginx
Database: MySQL
PHP version: 5.6.14
ownCloud version: 8.2.0.12
List of activated apps:

The content of config/config.php:

"system": {
        "instanceid": "oc788abd2781",
        "passwordsalt": "***REMOVED SENSITIVE VALUE***",
        "datadirectory": "\/var\/ocdata",
        "dbtype": "mysql",
        "version": "8.2.0.12",
        "installed": true,
        "config_is_read_only": false,
        "forcessl": true,
        "loglevel": 2,
        "theme": "",
        "maintenance": false,
        "trashbin_retention_obligation": "30, auto",
        "trusted_domains": [
            "***REMOVED SENSITIVE VALUE***"
        ],
        "mail_smtpmode": "php",
        "dbname": "owncloud",
        "dbhost": "localhost",
        "dbuser": "***REMOVED SENSITIVE VALUE***",
        "dbpassword": "***REMOVED SENSITIVE VALUE***",
        "secret": "***REMOVED SENSITIVE VALUE***",
        "forceSSLforSubdomains": true,
        "memcache.local": "\\OC\\Memcache\\APCu"
    }

Error message from logfile:

{"reqId":"5h4sJPhlw0mjlWNp5wdl","remoteAddr":"94.87.129.34","app":"webdav","message":"Exception: {\"Message\":\"HTTP\\\/1.1 423 \\\"safe.kdbx\\\" is locked\",\"Exception\":\"OC\\\\Connector\\\\Sabre\\\\Exception\\\\FileLocked\",\"Code\":0,\"Trace\":\"#0 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Tree.php(179): OC\\\\Connector\\\\Sabre\\\\File->delete()\\n#1 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(287): Sabre\\\\DAV\\\\Tree->delete('safe.kdbx')\\n#2 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpDelete(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#3 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#4 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(469): Sabre\\\\Event\\\\EventEmitter->emit('method:DELETE', Array)\\n#5 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(254): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#6 \\\/var\\\/www\\\/owncloud\\\/apps\\\/files\\\/appinfo\\\/remote.php(55): Sabre\\\\DAV\\\\Server->exec()\\n#7 \\\/var\\\/www\\\/owncloud\\\/remote.php(137): require_once('\\\/var\\\/www\\\/ownclo...')\\n#8 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/lib\\\/private\\\/connector\\\/sabre\\\/file.php\",\"Line\":300}","level":4,"time":"2015-11-09T22:34:35+00:00","method":"DELETE","url":"\/remote.php\/webdav\/safe.kdbx"}
klausguenter commented 8 years ago

Server configuration

Operating system: Debian 7 stable
Web server: Apache 2.2.22
Database: MySQL 5.5.46
PHP version: 5.4.45
ownCloud version: 8.2.0.12 (stable)
List of activated apps:

activity: 2.1.3
deleted files: 0.7.0
first run wizard 1.1
Gallery 14.2.0
Mail Template Editor 0.1
Notifications 0.1.0
Provisioning API 0.3.0
Share Files 0.7.0
Text Editor 2.0
Updater 0.6
Versions 1.1.0
Video Viewer 0.1.3

The content of config/config.php:

$CONFIG = array (
  'instanceid' => '_',
  'passwordsalt' => '',
  'secret' => '**',
  'trusted_domains' => array ( 0 => '**', 1 => '_', ),
  'datadirectory' => '',
  'overwrite.cli.url' => '**',
  'dbtype' => 'mysql',
  'version' => '8.2.0.12',
  'dbname' => 'owncloud1',
  'dbhost' => 'localhost',
  'dbtableprefix' => 'oc',
  'dbuser' => '**',
  'dbpassword' => '***',
  'logtimezone' => 'UTC',
  'installed' => true,
  'filelocking.enabled' => 'true',
  'memcache.locking' => '\OC\Memcache\Redis',
  'memcache.local' => '\OC\Memcache\Redis',
  'redis' => array ( 'host' => 'localhost', 'port' => 6379, 'timeout' => 0, ),
);

icewind1991 commented 8 years ago

Do you also get "file locked" errors when trying to upload through the web interface?

apramhaas commented 8 years ago

Yes

Siddius commented 8 years ago

I had the same problem. My workaround:

1. Enable maintenance mode
2. Delete every entry in the "oc_file_locks" table in the database
3. Disable maintenance mode

Dirty, but it solved the problem... for now.
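A minimal sketch of that manual unlock procedure (the ownCloud path, web user, and database name here are assumptions; by default it only prints the commands, set DO_RUN=1 to actually execute them):

```shell
# Dry-run helper: prints each command unless DO_RUN=1 is set.
maybe() { if [ "${DO_RUN:-0}" = "1" ]; then "$@"; else printf '+ %s\n' "$*"; fi; }

# 1. Enable maintenance mode so no new locks are taken while we clean up.
maybe sudo -u www-data php /var/www/owncloud/occ maintenance:mode --on

# 2. Delete every entry in the oc_file_locks table (DB-based locking).
maybe mysql owncloud -e "DELETE FROM oc_file_locks;"

# 3. Disable maintenance mode again.
maybe sudo -u www-data php /var/www/owncloud/occ maintenance:mode --off
```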

apramhaas commented 8 years ago

I've found some additional files which cannot be deleted because they are locked. If you need additional debug data, let me know.

icewind1991 commented 8 years ago

Are there any errors in the logs before the locking error shows up?

apramhaas commented 8 years ago

I see no other errors before the locking error. It occurs just at the moment I want to modify or delete a file. All these "problem" files were present before the update to ownCloud 8.2. Maybe the error came with this version.

Here is my owncloud.log https://gist.github.com/unclejamal3000/2aba05cd32cc53771256

jbouv55151 commented 8 years ago

I have the same problem with a fresh installation of 8.2. I did not have this problem with older versions on the same server.

DavidShepherdson commented 8 years ago

This is happening to me (on both 8.2 and 8.2.1, with MySQL), particularly (I think) since I added Dropbox external storage to one of my users (another user already had Dropbox set up previously with no problems).

Possibly of note: I just tried cleaning things up, by turning on maintenance mode, deleting everything from oc_file_locks, then running occ files:scan --all. After doing the latter, and with maintenance mode still turned on, there are now 10002 rows in oc_file_locks. Is that expected? I assumed there would only be locks if something was still using the files (which no clients would be, since it's in maintenance mode, and since the files:scan process finished, it wouldn't still be holding onto locks, would it?).

icewind1991 commented 8 years ago

After doing the latter, and with maintenance mode still turned on, there are now 10002 rows in oc_file_locks. Is that expected?

For performance reasons (since 8.2.1) rows are not cleaned up directly but re-used in further requests

DavidShepherdson commented 8 years ago

Fair enough, so that's probably not related to the issue, then. For what it's worth, I've removed the Dropbox external storage from this particular user, and haven't had any file locking problems so far since then. That may be coincidence, of course, or just that the particular files being synched with the Dropbox folder were the ones likely to cause the locking issue.

stevespaw commented 8 years ago

All of our S3 files are locked. We cannot delete or rename any files that were there prior to the 8.2 update. Ugh, is this fixable? We have thousands of files on S3.

bcutter commented 8 years ago

Same on OC v8.2.1 with TFL (transactional file locking) and memcaching via Redis, as recommended. Anyway, there are a few entries in oc_file_locks (although with Redis in use there shouldn't be any locks in that table?). No idea how to fix this. Only one specific file is affected, driving me and the never-ending, logfile-filling desktop clients crazy.

Thankful for every tip or workaround! No idea how to "unlock" the file...

PVince81 commented 8 years ago

@icewind1991 are you able to reproduce this issue ?

For DB-based locking it might be possible to remove the locks by cleaning the "oc_file_locks" table. If you're using redis exclusively for ownCloud, you might be able to clear it with the flushall command in redis-cli.
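Both cleanup paths, sketched as commands (hedged: the database name owncloud is an assumption, and flushall erases everything in the Redis instance, so only use it if Redis serves ownCloud exclusively; by default this only prints the commands, set DO_RUN=1 to execute them):

```shell
# Dry-run helper: prints each command unless DO_RUN=1 is set.
maybe() { if [ "${DO_RUN:-0}" = "1" ]; then "$@"; else printf '+ %s\n' "$*"; fi; }

# DB-based locking: clear the lock table.
maybe mysql owncloud -e "DELETE FROM oc_file_locks;"

# Redis-based locking: flush the whole instance (wipes ALL keys in it).
maybe redis-cli flushall
```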

PVince81 commented 8 years ago

Are you guys using php-fpm? I suspect that if the PHP process gets killed due to timeouts, the locks might not get cleared properly. However, I thought that locks now have a TTL, @icewind1991?

bcutter commented 8 years ago

Yes, php-fpm is in the game too. @PVince81 perfect! That was what I was looking for (at http://redis.io/commands). For the moment syncing works fine again.

Do you know the redis-cli command for listing all keys/locked files too?

And I still don't get why oc_file_locks has entries although I'm using redis...

AntonioGMuriana commented 8 years ago

I've been experiencing the same issue.

Operating system: Ubuntu 14.04.3 LTS
Web server: Apache 2.4.7
Database: MySQL 5.5.46
PHP version: 5.5.9 (running as Apache module)
ownCloud version: 8.2.1-1.1
Memcache: APCu 4.0.7

After entering maintenance mode, I saw that the oc_file_locks table has lots of entries with lock > 0 (even > 10) and about 150 entries with a future ttl value.

Solved by deleting all rows and leaving the maintenance mode.

pdesanex commented 8 years ago

Same issue here.

all-inkl.com shared hosting
PHP 5.6.13
MySQL 5.6.27
ownCloud 8.2.1 stable

Flushing oc_file_locks resolves all issues.

Cybertinus commented 8 years ago

I was hit by this bug too. My system:

PHP 5.6.14
MariaDB 10.0.21
Nginx 1.9.5 (thus using php-fpm)
FreeBSD 10.2-RELEASE-p8
ownCloud 8.2.1 stable

The flushing of oc_file_locks seems to fix this issue indeed. So I wrote a little script to remove all the stale locks from the file_locks table:

#!/usr/bin/env bash

##########
# CONFIG #
##########

# CentOS 6: /usr/bin/mysql
# FreeBSD: /usr/local/bin/mysql
mysqlbin='/usr/local/bin/mysql'

# The location where OwnCloud is installed
ownclouddir='/var/www/owncloud'

#################
# ACTUAL SCRIPT #
#################

dbhost=$(grep dbhost "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
dbuser=$(grep dbuser "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
dbpass=$(grep dbpassword "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
dbname=$(grep dbname "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
dbprefix=$(grep dbtableprefix "${ownclouddir}/config/config.php" | cut -d "'" -f 4)

"${mysqlbin}" --silent --host="${dbhost}" --user="${dbuser}" --password="${dbpass}" --execute="DELETE FROM ${dbprefix}file_locks WHERE ttl < UNIX_TIMESTAMP();" "${dbname}"

Just configure where the mysql command can be found (hint: which mysql will tell you) and where ownCloud itself is installed. The script needs this location in order to find the config.php inside your ownCloud install. It extracts the needed database information from it and uses that to connect to MySQL. This has the advantage that when you change the password of the ownCloud MySQL user, the script automatically uses the new information. And it saves you from having yet another file on your filesystem containing your password. You don't need to edit anything below the "ACTUAL SCRIPT" comment. Once connected to MySQL, it removes all the locks from the database that have already expired. It doesn't remove all locks as suggested elsewhere in this issue, because there can be valid locks in the database whose ttl is still in the future. This script leaves those alone, to prevent bad things from happening.
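The grep/cut extraction described above can be sanity-checked against a throwaway config.php with hypothetical values:

```shell
# Build a fake config.php and verify the extraction pulls the right fields.
tmpdir=$(mktemp -d)
cat > "$tmpdir/config.php" <<'EOF'
<?php
$CONFIG = array (
  'dbhost' => 'localhost',
  'dbuser' => 'oc_user',
  'dbpassword' => 's3cret',
  'dbname' => 'owncloud',
  'dbtableprefix' => 'oc_',
);
EOF

# Same pattern as the script: take the 4th single-quote-delimited field.
dbuser=$(grep dbuser "$tmpdir/config.php" | cut -d "'" -f 4)
dbname=$(grep dbname "$tmpdir/config.php" | cut -d "'" -f 4)
echo "$dbuser $dbname"   # oc_user owncloud

rm -rf "$tmpdir"
```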

And of course you can run this script as a cronjob every night, so you don't have to think about these stale locks anymore.

Hopefully this workaround script is useful for someone else besides me :)

Atanamo commented 8 years ago

Hi, recently I had the same problem (using the database as the locking backend). The file_locks table was full of stray locks (>10k). Most rows had the "lock" field set to 1, some hundreds to 2, and so on.

As I read in @PVince81's post here, the "ttl" was introduced for removing old or stray locks? But the "ttl" of most of the entries in my table was more than 12 hours in the past. So the locks should have expired, right?

Well, I tested the expire mechanism and it seems not to work as expected.

In the last case I would expect the file to be renamed successfully. But the file lock is respected although it is expired.

Looking into the code of the DBLockingProvider, I cannot find anything that checks the ttl of the locks, except the cleanEmptyLocks() method. But that method only removes expired entries having "lock" = 0.

So I wonder if this is the only purpose of the ttl: just to clean up valid, old, fully released locks? If so, this might be the cause of the bug.

In any case, it seems useful to introduce a timestamp like the ttl that is checked when a lock is about to be acquired. For example, let's call this timestamp "stray_timeout".

Well, hope these thoughts are not totally nonsense and may help ;-)


ownCloud version: 8.2.1 (stable)
Operating system: Raspbian 8
Web server: Nginx
Database: MySQL 5.5.44
PHP version: 5.6.14 (using PHP-FPM)
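To check whether a table is in this state, a diagnostic query along these lines should count locks whose ttl has passed but which are still held (the oc_ table prefix is an assumption; by default the command is only printed, set DO_RUN=1 to execute it):

```shell
# `lock` is backquoted because LOCK is a reserved word in MySQL.
sql='SELECT COUNT(*) FROM oc_file_locks WHERE ttl < UNIX_TIMESTAMP() AND `lock` > 0;'

if [ "${DO_RUN:-0}" = "1" ]; then
    mysql owncloud -e "$sql"
else
    printf 'would run: %s\n' "$sql"
fi
```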

PVince81 commented 8 years ago

The "ttl" of most of the entries in my table was more than 12 hours old. So the locks should have been expired, right?

@icewind1991 can you have a look at why the expiration is not working?

PVince81 commented 8 years ago

Setting to 8.2.2 because stray locks are nasty

PVince81 commented 8 years ago

CC @cmonteroluque

MiltonQuirino commented 8 years ago

I have the same problem: when trying to create a directory via WebDAV, I get error 423 File Locked.

P.S. I'm using external storage

icewind1991 commented 8 years ago

Partial fix is here https://github.com/owncloud/core/pull/21072 (only for the db locking backend)

icewind1991 commented 8 years ago

And here for redis based locking

PVince81 commented 8 years ago

Fix for DB is here https://github.com/owncloud/core/pull/21072 and redis here https://github.com/owncloud/core/pull/21073 Will be in 8.2.2 and 9.0.

AntonioGMuriana commented 8 years ago

I have noticed that this bug (generating stray locks) is caused by an interrupted occ files:scan command.

occ files:scan creates locks in the database, and they are only cleared after it finishes. If the scan is interrupted with Ctrl+C, the locks are left in the database.

Cybertinus commented 8 years ago

I don't know if this is the only way stray locks are generated. A user of mine reported the problem 2 months ago, which is when I noticed the stray locks, and I only learned about the occ files:scan command 2 weeks ago (in one of my attempts to "fix" this problem). In other words: if ownCloud itself doesn't start that command, then this issue is wider than just occ files:scan.

PVince81 commented 8 years ago

Stray locks could also happen if a PHP timeout happens or if the connection is closed/lost. I believe that some environments like php5-fpm will automatically kill the PHP process if the connection is lost, while others (mod_php?) will leave it running.

This is why the TTL is important: it seems it's not possible to catch a killed PHP process and run cleanup code at that moment.

shorinjin commented 8 years ago

I just had this happen, as well. Mine is a relatively new installation. Could I just delete the database and start over? If so, how would I do that?

Cybertinus commented 8 years ago

@shorinjin I would just run the query that is stated in my workaround script. This solves the problem, without having to start over again.

But: this is an issue tracker, not a support forum. For support related questions I would suggest you open a topic on the forums (https://forum.owncloud.org).

PVince81 commented 8 years ago

Stray locks should not happen any more in 8.2.2 (which was released recently).

Deleting the contents of the oc_file_locks table should be enough (do this in maintenance mode just to be sure).

pdesanex commented 8 years ago

Confirming - no more stray locks with 8.2.2. Thanks for the fix!!

stormsh commented 8 years ago

@pdesanex: I get the files locked problem now that I updated to 8.2.2. Is there a fix or work around or do I have to wait for 9.0?

PVince81 commented 8 years ago

@stormsh try clearing your oc_file_locks table. Maybe you had stray locks from before the update.

stormsh commented 8 years ago

@PVince81 Thanks for the quick reply. That worked. Although stray locks never happened to me before the update to 8.2.2.

bobbolous commented 8 years ago

I had the same problem with ownCloud 9, upgraded from 8.2.2. Only noticed it today. For now I solved it with TRUNCATE oc_file_locks.

PVince81 commented 8 years ago

@icewind1991 maybe we need a repair step to clear stray locks at update time, just in case?

JetUni commented 8 years ago

@PVince81 I think that would be good! I only just started using ownCloud; the first version I installed was 9.0.1, and after upgrading to 9.0.2 I'm having this problem.

PVince81 commented 8 years ago

Raised https://github.com/owncloud/core/issues/24494 for a repair step

simsala commented 8 years ago

Seeing this issue with 9.0.2 as well. Does not seem fixed.

PVince81 commented 8 years ago

Possibly related: https://github.com/owncloud/core/issues/24507

Would be good if you could add more info about your setup there because so far this is not reproducible. It could be a very specific use case (sharing/ext storage/other) that triggers a specific code path where the locks aren't cleared.

Could also be timeouts.

simsala commented 8 years ago

I got 21034 lock records during one night. Their ttl is past, but I do think the records should be deleted from the database, otherwise this thing will just blow up.

It's just a basic setup. Config is:

Ubuntu 16.04 LTS
PHP 7.0.4-7ubuntu2 (cli) (NTS)
No external storage

Apps: Enabled:

Disabled:

occonfig.txt

icewind1991 commented 8 years ago

Note that the lock cleanup is done in a background job, so cron needs to be configured.
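For reference, a typical system crontab entry for ownCloud's background jobs looks like the line below (the install path and web user are assumptions; the admin page's background-jobs mode must also be set to "Cron"):

```shell
# The crontab line an admin would install with: crontab -u www-data -e
# It runs ownCloud's cron.php every 15 minutes.
entry='*/15 * * * * php -f /var/www/owncloud/cron.php'
printf '%s\n' "$entry"
```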

simsala commented 8 years ago

cron is configured and runs every 15 minutes.