Open posita opened 2 years ago
I have verified the values for `upload_max_filesize`, `post_max_size`, and `memory_limit`:
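For anyone re-checking those limits, a quick sketch (the helper name and default ini path are assumptions; adjust for your install, e.g. php-fpm vs. mod_php):

```shell
# Hypothetical helper: print the three relevant limits from a php.ini.
# The default path is an assumption; pass the ini path your server uses.
show_php_limits() {
  local ini="${1:-/etc/php/7.4/php.ini}"
  grep -E '^(upload_max_filesize|post_max_size|memory_limit)[[:space:]]*=' "${ini}"
}
```

Note that the CLI and web SAPIs can load different ini files, so check the one the web server actually reads (e.g. via `phpinfo()`), not just the CLI's.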
For whatever it is worth: I have tested ownCloud server 10.11.0 with files_primary_s3 1.4.0 sitting on a scality/cloudserver backend:
UPDATE: I am still experiencing this issue after upgrading to 10.11.0 and can reproduce it with both Wasabi and DreamObjects as S3 back-ends. This is from my latest attempt to sync a 1GB file from the web interface:
```
{"reqId":"[redacted]","level":3,"time":"2022-11-14T22:47:56+00:00","remoteAddr":"[redacted]","user":"[user]","app":"PHP","method":"MOVE","url":"\/remote.php\/dav\/uploads\/[user]\/3300640080\/.file","message":"Error executing \"GetObject\" on \"https:\/\/objects-us-east-1.dream.io\/[redacted]\/urn%3Aoid%3A12345\"; AWS HTTP error: Client error: `GET https:\/\/objects-us-east-1.dream.io\/[redacted]\/urn%3Aoid%3A12345` resulted in a `404 Not Found` response NotFound (client): 404 Not Found (Request-ID: tx[redacted]-us-east-1-iad1) - <?xml version=\"1.0\" encoding=\"UTF-8\"?><Error><Code>NoSuchKey<\/Code><BucketName>[redacted]<\/BucketName><RequestId>tx[redacted]-us-east-1-iad1<\/RequestId><HostId>[redacted]-us-east-1-iad1-us-east-1<\/HostId><\/Error> at \/[path]\/apps-external\/files_primary_s3\/lib\/streamwrapper.php#721"}
{"reqId":"[redacted]","level":3,"time":"2022-11-14T22:47:56+00:00","remoteAddr":"[redacted]","user":"[user]","app":"PHP","method":"MOVE","url":"\/remote.php\/dav\/uploads\/[user]\/3300640080\/.file","message":"fopen(s3:\/\/[redacted]\/urn:oid:12345): failed to open stream: "OCA\\Files_Primary_S3\\StreamWrapper::stream_open" call failed at \/[path]\/apps-external\/files_primary_s3\/lib\/s3storage.php#267"}
{"reqId":"[redacted]","level":3,"time":"2022-11-14T22:47:56+00:00","remoteAddr":"[redacted]","user":"[user]","app":"PHP","method":"MOVE","url":"\/remote.php\/dav\/uploads\/[user]\/3300640080\/.file","message":"fread() expects parameter 1 to be resource, bool given at \/[path]\/lib\/private\/Files\/Storage\/Wrapper\/Encryption.php#932"}
{"reqId":"[redacted]","level":3,"time":"2022-11-14T22:47:56+00:00","remoteAddr":"[redacted]","user":"[user]","app":"PHP","method":"MOVE","url":"\/remote.php\/dav\/uploads\/[user]\/3300640080\/.file","message":"fclose() expects parameter 1 to be resource, bool given at \/[path]\/lib\/private\/Files\/Storage\/Wrapper\/Encryption.php#933"}
{"reqId":"[redacted]","level":3,"time":"2022-11-14T22:47:56+00:00","remoteAddr":"[redacted]","user":"[user]","app":"PHP","method":"MOVE","url":"\/remote.php\/dav\/uploads\/[user]\/3300640080\/.file","message":"Error executing \"GetObject\" on \"https:\/\/objects-us-east-1.dream.io\/[redacted]\/urn%3Aoid%3A12345\"; AWS HTTP error: Client error: `GET https:\/\/objects-us-east-1.dream.io\/[redacted]\/urn%3Aoid%3A12345` resulted in a `404 Not Found` response NotFound (client): 404 Not Found (Request-ID: tx[redacted]-us-east-1-iad1) - <?xml version=\"1.0\" encoding=\"UTF-8\"?><Error><Code>NoSuchKey<\/Code><BucketName>[redacted]<\/BucketName><RequestId>tx[redacted]-us-east-1-iad1<\/RequestId><HostId>[redacted]-us-east-1-iad1-us-east-1<\/HostId><\/Error> at \/[path]\/apps-external\/files_primary_s3\/lib\/streamwrapper.php#721"}
{"reqId":"[redacted]","level":3,"time":"2022-11-14T22:47:56+00:00","remoteAddr":"[redacted]","user":"[user]","app":"PHP","method":"MOVE","url":"\/remote.php\/dav\/uploads\/[user]\/3300640080\/.file","message":"fopen(s3:\/\/[redacted]\/urn:oid:12345): failed to open stream: "OCA\\Files_Primary_S3\\StreamWrapper::stream_open" call failed at \/[path]\/apps-external\/files_primary_s3\/lib\/s3storage.php#267"}
```
And after an attempt via sync:
```
Issues ,File ,Folder ,Size ,Account ,Time ,Status ,
Server replied "423 Locked" to "MOVE https://[redacted]/remote.php/dav/uploads/[user]/414185728/.file" ("[filename]" is locked) (skipped due to earlier error, trying again in 2 minute(s)),[filename] ,ownCloud , ,[user]@[redacted],2022-11-14T18:00:35.101,Blacklisted ,
```
Thanks @jnweiger, did you try with a single 21GB file? Is this the S3 backend you're using?
Okay, digging into this a bit more, I am able to upload all the blocks successfully, at least via the web interface. I believe this is also true via the Desktop client. Either way, I am able to confirm that the blocks are uploaded successfully to, e.g., `uploads/web-file-upload-c337...3e55-166...959`. In fact, I can retrieve them from there via `davs://<host>/remote.php/dav/uploads/<user>/web-file-upload-c337...3e55-166...959`, and reassemble them to get a byte-for-byte identical copy of the original file. I can also confirm that the blocks are making it to S3 by querying `oc_filecache` and looking for those blocks:
```sql
SELECT fc.fileid
FROM oc_filecache fc
JOIN oc_storages s ON s.numeric_id = fc.storage
WHERE fc.path LIKE 'uploads/web-file-upload-c337...3e55-166...959/%'
  AND s.id = 'object::user:<user>'
ORDER BY fileid ASC
```
```bash
#!/usr/bin/env bash
set -eux -o pipefail
for fileid in <results-from-above-sql-query> ; do
  aws s3 \
    --profile dreamobjects \
    --endpoint-url 'https://objects-us-east-1.dream.io/' \
    cp "s3://<redacted>/urn:oid:${fileid}" ".../s3-blocks/${fileid}"
done
( IFS=$'\n' ; for f in $( \ls .../s3-blocks | sort -n ) ; do cat ".../s3-blocks/${f}" ; done ) | sha256sum
```
:point_up: That will give me the same checksum as `( IFS=$'\n' ; for f in $( \ls .../web-file-upload-c337...3e55-166...959 | sort -n ) ; do cat ".../web-file-upload-c337...3e55-166...959/${f}" ; done ) | sha256sum`, which will give me the same checksum as `sha256sum <original-file>`.
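To illustrate why the reassembly above needs a numeric sort (the chunk names are plain integers, so a lexical sort would place 10 between 1 and 2), here is a self-contained sketch with throwaway files standing in for the real upload blocks (`reassemble` is a hypothetical helper name):

```shell
# Concatenate numerically named chunk files in numeric order.
# Stand-in for the real web-file-upload-* chunk directories.
reassemble() {
  local dir="$1"
  ( cd "${dir}" && ls | sort -n | while read -r f ; do cat "${f}" ; done )
}
```

With chunks named 1, 2, and 10, `sort -n` reproduces the original byte order, and the concatenation's checksum matches that of the original bytes.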
So whatever is failing is failing after the initial (chunked) upload.
Okay, here's the rub. I can create a one-byte placeholder file at the desired path (e.g., `folder/BIGFILE`). I can find the `fileid` as follows:
```sql
SELECT fc.fileid
FROM oc_filecache fc
JOIN oc_storages s ON s.numeric_id = fc.storage
WHERE fc.path = 'files/folder/BIGFILE'
  AND s.id = 'object::user:<user>'
ORDER BY fileid ASC
```
Let's say that gives me `fileid` 1234. Now I can go into maintenance mode and copy the actual big file to that S3 bucket:
```bash
aws s3 ... cp .../BIGFILE "s3://<redacted>/urn:oid:1234"
```
I can go back and hand-edit the `oc_filecache` entry to update the `size`, `mtime`, `storage_mtime`, and `checksum`.

`storage_mtime`:
```bash
date --date "$( aws s3 ... ls "s3://<redacted>/urn:oid:1234" | awk '{ print $1, $2; }' )" +%s
```
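That date conversion can be checked in isolation; this is a hypothetical wrapper around the same `date`/`awk` pipeline (GNU `date` assumed), fed a canned `aws s3 ls` output line instead of a live call:

```shell
# Convert the date and time fields of an `aws s3 ls` output line to a
# Unix timestamp, matching the storage_mtime computation (GNU date).
s3_ls_to_epoch() {
  date --date "$( printf '%s\n' "$1" | awk '{ print $1, $2; }' )" +%s
}
```

For example, with `TZ=UTC`, the line `2022-11-14 22:47:56 1073741824 urn:oid:1234` converts to `1668466076`.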
`checksum`:
```bash
o=.../BIGFILE
printf 'SHA1:%s MD5:%s ADLER32:%08x\n' \
  "$( sha1sum "${o}" | awk '{ print $1; }' )" \
  "$( md5sum "${o}" | awk '{ print $1; }' )" \
  "$( python3 -c 'import sys, zlib ; print(zlib.adler32(open(sys.argv[1], "rb").read()))' "${o}" )"
```
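For the record, the hand-edit itself is just an UPDATE against `oc_filecache`. This is a sketch, not a supported workflow; the values are placeholders (a hypothetical 1 GiB size, the timestamp from the `date` command, and the string from the `printf`), and it should only ever be run in maintenance mode:

```sql
-- All values are placeholders for illustration.
UPDATE oc_filecache
SET size = 1073741824,
    mtime = 1668466076,
    storage_mtime = 1668466076,
    checksum = 'SHA1:... MD5:... ADLER32:...'
WHERE fileid = 1234;
```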
I just don't know how to compute a new `etag` value, so I have to do something else (like a move or a delete/restore) to trigger a Desktop client refresh if I forgot to shut it down before it grabbed the one-byte placeholder file. But once all those operations are complete, I can turn maintenance mode off, and there's `folder/BIGFILE` in all its glory, downloadable via the web interface and the Desktop client, and verifiable as the same file that I started with.
Obviously that's not a viable workflow, but I'm hoping it helps narrow down where the problem lives.
Migrated from owncloud/core#40100.
Errant behaviour
All uploads of files >1GB fail. More specifically, the attempt creates a server-side entry, but the hash does not match, which results in several destructive behaviours:
This has been consistent behaviour for me ever since I first tried ownCloud almost two years ago. This report is from when I was running 10.9.1, but I have confirmed the behaviour remains after upgrading to 10.10.0.3.
Server configuration
Operating system:
Web server: Dreamhost
Database: MySQL 8
PHP version: 7.4
`.htaccess`:
PHP INI:
ownCloud version: (see ownCloud admin page)
Updated from an older ownCloud or fresh install: fresh install
Where did you install ownCloud from: tarball
Signing status (ownCloud 9.0 and above):
The content of config/config.php:
List of activated apps:
Are you using external storage, if yes which one: files_primary_s3
Are you using encryption: no
Are you using an external user-backend, if yes which one: no
Client configuration
Browser: Happens with both desktop client and web
Operating system: OS X Big Sur 11.6.5
Logs
Web server error log
ownCloud log (data/owncloud.log)
Attempt to download the inchoate server-side entry after a failed sync attempt:
Sync failure:
Client side error once errant server-side entry is deleted and re-sync attempt is made: