nextcloud / server

☁️ Nextcloud server, a safe home for all your data
https://nextcloud.com
GNU Affero General Public License v3.0

[Bug]: not show files s3 primary storage #34407

Closed edwinosky closed 1 year ago

edwinosky commented 2 years ago

⚠️ This issue respects the following points: ⚠️

Bug description

Hello guys, I have spent more than a week dealing with a serious problem with my Nextcloud installation, and I tried many things before writing here. I have had a Nextcloud installation for months that I use to share content with family, friends and clients, using Cloudflare's R2 as primary S3 storage. Everything worked great for months, until a week ago the installation broke completely and no longer shows the files. If I open a previously shared URL it does show the list of files, but when trying to play or load them an error appears. I already did a fresh installation, but as soon as I configure S3 as primary storage it breaks again and does not even load the CSS styles of the installation. I will leave some screenshots of how it looks.

Steps to reproduce

  1. Create a new installation of Nextcloud version 24.0.5
  2. Configure S3 as primary storage (see the config.php sketch below)
  3. The installation is completely broken
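
For context, S3 primary storage is configured with an 'objectstore' block in config/config.php. Below is a minimal sketch that mirrors the configuration report further down in this issue; the bucket, hostname and credentials are placeholders, not real values.

<?php
// Sketch of a config.php entry for S3 (Cloudflare R2) as primary object storage.
// Values mirror the configuration report below; credentials and the account
// hostname are placeholders.
$CONFIG = array (
  // ... other settings ...
  'objectstore' => array (
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => array (
      'bucket'         => 'nextcloud',
      'autocreate'     => false,
      'key'            => 'R2_ACCESS_KEY',
      'secret'         => 'R2_SECRET_KEY',
      'hostname'       => '<account-id>.r2.cloudflarestorage.com',
      'port'           => 443,
      'use_ssl'        => true,
      'region'         => 'auto',
      'use_path_style' => true,
    ),
  ),
);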

Expected behavior

That everything works correctly, as this great software does.

Installation method

Community Web installer on a VPS or web space

Operating system

Debian/Ubuntu

PHP engine version

PHP 8.0

Web server

Apache (supported)

Database engine version

MariaDB

Is this bug present after an update or on a fresh install?

Updated to a major version (ex. 22.2.3 to 23.0.1)

Are you using the Nextcloud Server Encryption module?

No response

What user-backends are you using?

Configuration report

{
    "system": {
        "instanceid": "***REMOVED SENSITIVE VALUE***",
        "passwordsalt": "***REMOVED SENSITIVE VALUE***",
        "secret": "***REMOVED SENSITIVE VALUE***",
        "trusted_domains": [
            "158.69.0.213"
        ],
        "datadirectory": "***REMOVED SENSITIVE VALUE***",
        "dbtype": "mysql",
        "version": "24.0.5.1",
        "overwrite.cli.url": "http:\/\/158.69.0.213",
        "dbname": "***REMOVED SENSITIVE VALUE***",
        "dbhost": "***REMOVED SENSITIVE VALUE***",
        "dbport": "",
        "dbtableprefix": "oc_",
        "mysql.utf8mb4": true,
        "dbuser": "***REMOVED SENSITIVE VALUE***",
        "dbpassword": "***REMOVED SENSITIVE VALUE***",
        "installed": true,
        "skeletondirectory": "",
        "memcache.locking": "\\OC\\Memcache\\Redis",
        "memcache.local": "\\OC\\Memcache\\APCu",
        "memcache.distributed": "\\OC\\Memcache\\Redis",
        "redis": {
            "host": "***REMOVED SENSITIVE VALUE***",
            "port": 6379,
            "dbindex": 5,
            "password": "***REMOVED SENSITIVE VALUE***",
            "timeout": 1.5
        },
        "theme": "",
        "loglevel": 0,
        "logfile": "\/var\/www\/vhosts\/files.log",
        "default_phone_region": "CO",
        "mail_smtpmode": "smtp",
        "mail_smtpsecure": "ssl",
        "mail_sendmailmode": "smtp",
        "mail_from_address": "***REMOVED SENSITIVE VALUE***",
        "mail_domain": "***REMOVED SENSITIVE VALUE***",
        "mail_smtpauthtype": "LOGIN",
        "mail_smtpauth": 1,
        "mail_smtphost": "***REMOVED SENSITIVE VALUE***",
        "mail_smtpport": "465",
        "mail_smtpname": "***REMOVED SENSITIVE VALUE***",
        "mail_smtppassword": "***REMOVED SENSITIVE VALUE***",
        "updater.release.channel": "stable",
        "objectstore": {
            "class": "\\OC\\Files\\ObjectStore\\S3",
            "arguments": {
                "bucket": "nextcloud",
                "autocreate": false,
                "key": "***REMOVED SENSITIVE VALUE***",
                "secret": "***REMOVED SENSITIVE VALUE***",
                "hostname": "7ea0e644ac8bc2e985b18c6d7953ef47.r2.cloudflarestorage.com",
                "port": 443,
                "use_ssl": true,
                "region": "auto",
                "use_path_style": true
            }
        },
        "filelocking.enabled": false,
        "htaccess.RewriteBase": "\/",
        "forwarded_for_headers": [
            "HTTP_X_FORWARDED_FOR"
        ],
        "enabledPreviewProviders": [
            "OC\\Preview\\MP3",
            "OC\\Preview\\TXT",
            "OC\\Preview\\MarkDown",
            "OC\\Preview\\OpenDocument",
            "OC\\Preview\\Krita",
            "OC\\Preview\\MP4",
            "OC\\Preview\\Movie",
            "OC\\Preview\\Imaginary"
        ],
        "preview_imaginary_url": "http:\/\/158.69.0.213:9000\/",
        "preview_max_x": 500,
        "preview_max_y": 500,
        "maintenance": false
    }
}

List of activated Apps

Enabled:
  - activity: 2.16.0
  - bruteforcesettings: 2.4.0
  - cloud_federation_api: 1.7.0
  - csp_editor: 1.1.0
  - dashboard: 7.4.0
  - dav: 1.22.0
  - federatedfilesharing: 1.14.0
  - federation: 1.14.0
  - files: 1.19.0
  - files_external: 1.16.1
  - files_rightclick: 1.3.0
  - files_sharing: 1.16.2
  - files_trashbin: 1.14.0
  - files_videoplayer: 1.13.0
  - logreader: 2.9.0
  - lookup_server_connector: 1.12.0
  - nextcloud_announcements: 1.13.0
  - notifications: 2.12.1
  - oauth2: 1.12.0
  - password_policy: 1.14.0
  - photos: 1.6.0
  - privacy: 1.8.0
  - provisioning_api: 1.14.0
  - serverinfo: 1.14.0
  - settings: 1.6.0
  - theming: 1.15.0
  - twofactor_backupcodes: 1.13.0
  - updatenotification: 1.14.0
  - user_status: 1.4.0
  - viewer: 1.8.0
  - workflowengine: 2.6.0
Disabled:
  - accessibility: 1.10.0
  - admin_audit
  - announcementcenter: 6.3.1
  - backup: 1.1.3
  - cfg_share_links: 2.0.0
  - circles: 24.0.1
  - comments: 1.14.0
  - contactsinteraction: 1.5.0
  - encryption
  - external: 4.0.0
  - files_pdfviewer: 2.5.0
  - files_versions: 1.17.0
  - firstrunwizard: 2.13.0
  - impersonate: 1.11.0
  - previewgenerator: 5.0.0
  - recommendations: 1.3.0
  - registration: 1.5.0
  - sharebymail: 1.14.0
  - support: 1.7.0
  - survey_client: 1.12.0
  - systemtags: 1.14.0
  - text: 3.5.1
  - user_ldap
  - user_migration: 1.1.0
  - weather_status: 1.4.0

Nextcloud Signing status

No errors have been found.

Nextcloud Logs

{"reqId":"Yzr07ZAl16LwBjxqKRxiSQAAAA8","level":0,"time":"2022-10-03T14:42:54+00:00","remoteAddr":"181.63.21.109","user":"admin","app":"scss_cacher","method":"GET","url":"/apps/dashboard/","message":"SCSSCacher::process ordinary check follows","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36","version":"24.0.5.1","data":{"app":"scss_cacher"}}
{"reqId":"Yzr07ZAl16LwBjxqKRxiSQAAAA8","level":0,"time":"2022-10-03T14:42:54+00:00","remoteAddr":"181.63.21.109","user":"admin","app":"scss_cacher","method":"GET","url":"/apps/dashboard/","message":"SCSSCacher::process ordinary check follows","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36","version":"24.0.5.1","data":{"app":"scss_cacher"}}
{"reqId":"Yzr07ZAl16LwBjxqKRxiSQAAAA8","level":0,"time":"2022-10-03T14:42:54+00:00","remoteAddr":"181.63.21.109","user":"admin","app":"scss_cacher","method":"GET","url":"/apps/dashboard/","message":"SCSSCacher::process ordinary check follows","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36","version":"24.0.5.1","data":{"app":"scss_cacher"}}
{"reqId":"Yzr07ZAl16LwBjxqKRxiSQAAAA8","level":0,"time":"2022-10-03T14:42:54+00:00","remoteAddr":"181.63.21.109","user":"admin","app":"scss_cacher","method":"GET","url":"/apps/dashboard/","message":"SCSSCacher::process ordinary check follows","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36","version":"24.0.5.1","data":{"app":"scss_cacher"}}
{"reqId":"Yzr07ZAl16LwBjxqKRxiSQAAAA8","level":0,"time":"2022-10-03T14:42:54+00:00","remoteAddr":"181.63.21.109","user":"admin","app":"scss_cacher","method":"GET","url":"/apps/dashboard/","message":"SCSSCacher::process ordinary check follows","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36","version":"24.0.5.1","data":{"app":"scss_cacher"}}
{"reqId":"Yzr07ZAl16LwBjxqKRxiSQAAAA8","level":0,"time":"2022-10-03T14:42:54+00:00","remoteAddr":"181.63.21.109","user":"admin","app":"scss_cacher","method":"GET","url":"/apps/dashboard/","message":"SCSSCacher::process ordinary check follows","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36","version":"24.0.5.1","data":{"app":"scss_cacher"}}
{"reqId":"Yzr08pAl16LwBjxqKRxiTAAAABc","level":0,"time":"2022-10-03T14:42:59+00:00","remoteAddr":"181.63.21.109","user":"admin","app":"scss_cacher","method":"GET","url":"/apps/files/","message":"SCSSCacher::process ordinary check follows","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36","version":"24.0.5.1","data":{"app":"scss_cacher"}}
{"reqId":"Yzr08pAl16LwBjxqKRxiTAAAABc","level":0,"time":"2022-10-03T14:42:59+00:00","remoteAddr":"181.63.21.109","user":"admin","app":"scss_cacher","method":"GET","url":"/apps/files/","message":"SCSSCacher::process ordinary check follows","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36","version":"24.0.5.1","data":{"app":"scss_cacher"}}
{"reqId":"Yzr08pAl16LwBjxqKRxiTAAAABc","level":0,"time":"2022-10-03T14:42:59+00:00","remoteAddr":"181.63.21.109","user":"admin","app":"scss_cacher","method":"GET","url":"/apps/files/","message":"SCSSCacher::process ordinary check follows","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36","version":"24.0.5.1","data":{"app":"scss_cacher"}}
{"reqId":"Yzr08pAl16LwBjxqKRxiTAAAABc","level":0,"time":"2022-10-03T14:42:59+00:00","remoteAddr":"181.63.21.109","user":"admin","app":"scss_cacher","method":"GET","url":"/apps/files/","message":"SCSSCacher::process ordinary check follows","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36","version":"24.0.5.1","data":{"app":"scss_cacher"}}
{"reqId":"Yzr08pAl16LwBjxqKRxiTAAAABc","level":0,"time":"2022-10-03T14:42:59+00:00","remoteAddr":"181.63.21.109","user":"admin","app":"scss_cacher","method":"GET","url":"/apps/files/","message":"SCSSCacher::isCached 9b08-0323-icons.css isCachedCache is expired or unset","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36","version":"24.0.5.1","data":{"app":"scss_cacher"}}
{"reqId":"Yzr08pAl16LwBjxqKRxiTAAAABc","level":0,"time":"2022-10-03T14:42:59+00:00","remoteAddr":"181.63.21.109","user":"admin","app":"scss_cacher","method":"GET","url":"/apps/files/","message":"SCSSCacher::isCached 9b08-0323-icons.css dependencies successfully cached for 5 minutes","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36","version":"24.0.5.1","data":{"app":"scss_cacher"}}
{"reqId":"Yzr08pAl16LwBjxqKRxiTAAAABc","level":0,"time":"2022-10-03T14:42:59+00:00","remoteAddr":"181.63.21.109","user":"admin","app":"scss_cacher","method":"GET","url":"/apps/files/","message":"SCSSCacher::process ordinary check follows","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36","version":"24.0.5.1","data":{"app":"scss_cacher"}}
{"reqId":"Yzr08pAl16LwBjxqKRxiTAAAABc","level":0,"time":"2022-10-03T14:42:59+00:00","remoteAddr":"181.63.21.109","user":"admin","app":"scss_cacher","method":"GET","url":"/apps/files/","message":"SCSSCacher::process ordinary check follows","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36","version":"24.0.5.1","data":{"app":"scss_cacher"}}
{"reqId":"Yzr08pAl16LwBjxqKRxiTAAAABc","level":0,"time":"2022-10-03T14:42:59+00:00","remoteAddr":"181.63.21.109","user":"admin","app":"scss_cacher","method":"GET","url":"/apps/files/","message":"SCSSCacher::process ordinary check follows","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36","version":"24.0.5.1","data":{"app":"scss_cacher"}}
{"reqId":"Yzr0800Mag8KBHrb-gRt2wAAAQ0","level":3,"time":"2022-10-03T14:42:59+00:00","remoteAddr":"181.63.21.109","user":"admin","app":"PHP","method":"GET","url":"/css/files_sharing/9b08-0323-icons.css?v=7b9e9931979bc025b39295473ae35fe7-9b088ce7-16","message":"fopen(httpseek://): Failed to open stream: "OC\\Files\\Stream\\SeekableHttpStream::stream_open" call failed at /var/www/vhosts/158.69.0.213/nextcloud/lib/private/Files/Stream/SeekableHttpStream.php#67","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36","version":"24.0.5.1","exception":{"Exception":"Error","Message":"fopen(httpseek://): Failed to open stream: "OC\\Files\\Stream\\SeekableHttpStream::stream_open" call failed at /var/www/vhosts/158.69.0.213/nextcloud/lib/private/Files/Stream/SeekableHttpStream.php#67","Code":0,"Trace":[{"function":"onError","class":"OC\\Log\\ErrorHandler","type":"::"},{"file":"/var/www/vhosts/158.69.0.213/nextcloud/lib/private/Files/Stream/SeekableHttpStream.php","line":67,"function":"fopen"},{"file":"/var/www/vhosts/158.69.0.213/nextcloud/lib/private/Files/ObjectStore/S3ObjectTrait.php","line":88,"function":"open","class":"OC\\Files\\Stream\\SeekableHttpStream","type":"::"},{"file":"/var/www/vhosts/158.69.0.213/nextcloud/lib/private/Files/ObjectStore/ObjectStoreStorage.php","line":312,"function":"readObject","class":"OC\\Files\\ObjectStore\\S3","type":"->"},{"file":"/var/www/vhosts/158.69.0.213/nextcloud/lib/private/Files/Storage/Common.php","line":197,"function":"fopen","class":"OC\\Files\\ObjectStore\\ObjectStoreStorage","type":"->"},{"file":"/var/www/vhosts/158.69.0.213/nextcloud/lib/private/Files/Storage/Wrapper/Wrapper.php","line":247,"function":"file_get_contents","class":"OC\\Files\\Storage\\Common","type":"->"},{"file":"/var/www/vhosts/158.69.0.213/nextcloud/lib/private/Files/Storage/Wrapper/Availability.php","line":264,"function":"file_get_contents","class":"OC\\Files\\Storage\\Wrapper\\Wrapper","type":"->"},{"file":"/var/www/vhosts/158.69.0.213/nextcloud/lib/private/AppFramework/App.php","line":172,"function":"dispatch","class":"OC\\AppFramework\\Http\\Dispatcher","type":"->"},{"file":"/var/www/vhosts/158.69.0.213/nextcloud/lib/private/Route/Router.php","line":298,"function":"main","class":"OC\\AppFramework\\App","type":"::"},{"file":"/var/www/vhosts/158.69.0.213/nextcloud/lib/base.php","line":1030,"function":"match","class":"OC\\Route\\Router","type":"->"},{"file":"/var/www/vhosts/158.69.0.213/nextcloud/index.php","line":36,"function":"handleRequest","class":"OC","type":"::"}],"File":"/var/www/vhosts/158.69.0.213/nextcloud/lib/private/Log/ErrorHandler.php","Line":92,"CustomMessage":"--"}}
{"reqId":"Yzr09ZAl16LwBjxqKRxiUQAAABM","level":1,"time":"2022-10-03T14:43:02+00:00","remoteAddr":"181.63.21.109","user":"admin","app":"theming","method":"GET","url":"/apps/theming/favicon/files?v=16","message":"The image was requested to be no SVG file, but converting it to PNG failed: Zero size image string passed","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36","version":"24.0.5.1","data":{"app":"theming"}}

Additional info

(screenshots attached)

mrAceT commented 2 years ago

Hi,

Do you see the file-listings in your instance? (via web-interface)
Is the data there in your S3 storage? (all 'numbered files')
Do you see the previews? (via web-interface)

Take a good hard look at your SQL table oc_storages. When you are working completely on S3, all entries should be 'object:...'. I think you may have some history with 'local:' folders.. DO NOT simply remove those entries!!

Could you post the oc_storages info? (do anonymize the data)

I have quite a bit of experience with NextCloud->S3, so I might just be able to help you out..
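
Not part of mrAceT's instructions, but for anyone following along, here is a read-only sketch of how that table could be dumped from PHP, in the same mysqli style as the rescue script later in this thread. The database credentials are placeholders and the default 'oc_' table prefix from the configuration report above is assumed.

<?php
// Sketch: list the storage backends Nextcloud knows about. With S3 as the only
// primary storage you would expect one 'object::store:...' row plus one
// 'object::user:<name>' row per user, and no 'home::' / 'local::' rows.
$mysqli = new mysqli('dbhost', 'dbuser', 'dbpassword', 'dbname'); // placeholders

if ($result = $mysqli->query("SELECT numeric_id, id FROM oc_storages ORDER BY numeric_id")) {
  while ($row = $result->fetch_assoc()) {
    echo $row['numeric_id'] . "\t" . $row['id'] . "\n";
  }
  $result->free_result();
}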

edwinosky commented 2 years ago

Hi,

Do you see the file-listings in your instance? (via web-interface)
Is the data there in your S3 storage? (all 'numbered files')
Do you see the previews? (via web-interface)

Take a good hard look at your SQL table oc_storages. When you are working completely on S3, all entries should be 'object:...'. I think you may have some history with 'local:' folders.. DO NOT simply remove those entries!!

Could you post the oc_storages info? (do anonymize the data)

I have quite a bit of experience with NextCloud->S3, so I might just be able to help you out..

Hi, I really appreciate your help. Here I show you my oc_storages table (screenshots attached).

mrAceT commented 2 years ago

please answer:

1) Do you see the file-listings in your instance? (via web-interface)
2) Is the data there in your S3 storage? (all 'numbered files')
3) Do you see the previews? (via web-interface)

edwinosky commented 2 years ago

The web interface does not show the files and no previews; it does not even try to upload them. In Cloudflare R2 I have more than 1 TB of data, but I cannot access it without Nextcloud.

mrAceT commented 2 years ago

You have:

28: object::user::sonya => GOOD
49: home::sonya => local storage! => NOT good

1: local::/var.... => NOT good!
29: object::store.. => GOOD!

Check your oc_filecache and take a good look at the storage IDs.

I have had trouble with migrating to S3 storage.. I have written a migration tool back to local (take a look at: https://github.com/lukasmu/nextcloud-s3-to-disk-migration/issues/6) and later built the reverse, back to S3..

What you will need to do first is make backups! Of your data.. of your SQL.

Did you have a separate storage config? (that was part of my trouble..)

Is there data in your LOCAL data folder? (besides the default stuff)

check your oc_filecache, are there storage id's attached to number 1? and 49?

This BRUTE FORCE approach might work (but do make backups..........):

This Q & D approach might work.. (but do make backups..........)

The problem is that you are now "in limbo": things are set to local AND S3.. Nextcloud seems to prefer local.. resulting in you not seeing S3..

edwinosky commented 2 years ago

Yes, I already have a backup of my database (I use Plesk, so it is very easy), and I also have the R2 bucket backed up in another bucket. I will try to carry out the steps that you indicate for oc_storages and oc_filecache; as soon as I finish I will come back here to tell you the results.

edwinosky commented 2 years ago

Hi @mrAceT, I already deleted the oc_storages entries but the error is still not fixed. Now I ask you: in oc_filecache I have more than half a million entries, will I be able to delete them all?

(screenshots attached)

mrAceT commented 2 years ago

In oc_filecache I have more than half a million entries, will I be able to delete them all?

NO!!

Contrary to local storage, S3 storage can NOT rebuild itself from the data in S3 (that should be possible, but for some reason Nextcloud has decided to use the cryptic numbers; I have asked about this in the forum, but to my knowledge no one has ever given a good answer).

check your oc_filecache, are there storage id's attached to number 1? and 49? remove all the local storage references in your oc_filecache

With that last sentence I meant the numbers of all those 'home & local' IDs.. If you remove those, all data in there should refer to S3 data.. So for Sonya: how many entries are there in oc_filecache with storage ID 28? Does that "feel right"?
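
To answer the "how many entries are there with storage ID 28" question, a grouped count over oc_filecache can help. Again only a read-only sketch with placeholder credentials and the 'oc_' prefix assumed; it is not a repair step by itself.

<?php
// Sketch: count oc_filecache rows per storage and show which backend each
// numeric id belongs to, so 'object::' entries can be told apart from the
// 'home::' / 'local::' leftovers discussed above.
$mysqli = new mysqli('dbhost', 'dbuser', 'dbpassword', 'dbname'); // placeholders

$sql = "SELECT st.numeric_id, st.id, COUNT(fc.fileid) AS entries".
       " FROM oc_storages st".
       " LEFT JOIN oc_filecache fc ON fc.storage = st.numeric_id".
       " GROUP BY st.numeric_id, st.id".
       " ORDER BY st.numeric_id";
if ($result = $mysqli->query($sql)) {
  while ($row = $result->fetch_assoc()) {
    echo $row['numeric_id'] . "\t" . $row['entries'] . "\t" . $row['id'] . "\n";
  }
  $result->free_result();
}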

edwinosky commented 2 years ago

I deleted the oc_filecache entries with ID 1, ID 28 and ID 49; it did not solve the problem either. I remember that this happened to me about three months ago and I managed to solve it with a command that I executed with the occ binary; that command was written by someone in the official forum but I could not find it. I have been thinking about using the script that you recommended above, but I think I will need enough storage space, and that implies changing VPS.

edwinosky commented 2 years ago

I'm pretty sure my problem lies with the database, because when I install a clean instance with S3 it works fine; it only breaks when I import the database.

mrAceT commented 2 years ago

I deleted the oc_filecache entries with ID 1, ID 28 and ID 49; it did not solve the problem either. I remember that this happened to me about three months ago and I managed to solve it with a command that I executed with the occ binary; that command was written by someone in the official forum but I could not find it. I have been thinking about using the script that you recommended above, but I think I will need enough storage space, and that implies changing VPS.

Oh dear.. number 28 are the entries for OBJECT->S3.. Do you understand the structure of the numbers? You need to remove only the NOT-object numbers.

edwinosky commented 2 years ago

The truth is that all the data that matters to me is in the admin account, and since I share it with other users, I can delete the other user accounts without problems.

Is there any way to do a kind of cleanup to the database without touching the admin data?

mrAceT commented 2 years ago

The truth is that all the data that matters to me is in the admin account, and since I share it with other users, I can delete the other user accounts without problems.

Is there any way to do a kind of cleanup to the database without touching the admin data?

Then I'd keep it simple.. and remove all the other accounts via the web interface.. Then you would have a nice clean oc_storages (the base S3 entry and that one user?). Then remove all but those two numbers from the oc_filecache.. If nothing else has been broken, you should see all the files you'd want to rescue in your filecache..

Also, if the other users are "simply shares".. then you'd not have any other files lingering in your S3 storage..

edwinosky commented 2 years ago

I am going to eliminate all the accounts and try to clean the database cache; I will write here later to report.

edwinosky commented 2 years ago

Hi, I removed all users (just left admin) and ran the command ./occ files:cleanup; the problem was not fixed. In my desperation I changed to the beta channel and tried to perform an update from the command line, and look at the error that I got, I had never seen it before:

root@plesk:/var/www/vhosts/158.69.0.213/nextcloud/# sudo -u www-data /opt/plesk/php/8.0/bin/php /var/www/vhosts/158.69.0.213/nextcloud/updater/updater.phar
Nextcloud Updater - version: v24.0.0beta3-1-g67bf13b dirty

Current version is 24.0.5.

Update to Nextcloud 24.0.6 RC1 available. (channel: "beta")
Following file will be downloaded automatically: https://download.nextcloud.com/server/prereleases/nextcloud-24.0.6rc1.zip

Steps that will be executed:
[ ] Check for expected files
[ ] Check for write permissions
[ ] Create backup
[ ] Downloading
[ ] Verify integrity
[ ] Extracting
[ ] Enable maintenance mode
[ ] Replace entry points
[ ] Delete old files
[ ] Move new files in place
[ ] Done

Start update? [y/N] y

Info: Pressing Ctrl-C will finish the currently running step and then stops the updater.

[✔] Check for expected files
[✔] Check for write permissions
[✔] Create backup
[✔] Downloading
[✔] Verify integrity
[✔] Extracting
[✔] Enable maintenance mode
[✔] Replace entry points
[✔] Delete old files
[✔] Move new files in place
[✔] Done

Update of code successful.

Should the "occ upgrade" command be executed? [Y/n]
Nextcloud or one of the apps require upgrade - only a limited number of commands are available
You may use your browser or the occ upgrade command to do the upgrade
Setting log level to debug
Repair step: Repair MySQL collation
Repair info: All tables already have the correct collation -> nothing to do
Repair step: Repair SQLite autoincrement
Repair step: Copy data from accounts table when migrating from ownCloud
Repair step: Drop account terms table when migrating from ownCloud
Updating database schema
Updated database
An unhandled exception has been thrown:
TypeError: Cannot assign bool to property OC\Security\CertificateManager::$bundlePath of type ?string in /var/www/vhosts/158.69.0.213/nextcloud/lib/private/Security/CertificateManager.php:250
Stack trace:
#0 /var/www/vhosts/158.69.0.213/nextcloud/lib/private/Http/Client/Client.php(127): OC\Security\CertificateManager->getAbsoluteBundlePath()
#1 /var/www/vhosts/158.69.0.213/nextcloud/lib/private/Http/Client/Client.php(74): OC\Http\Client\Client->getCertBundle()
#2 /var/www/vhosts/158.69.0.213/nextcloud/lib/private/Http/Client/Client.php(218): OC\Http\Client\Client->buildRequestOptions()
#3 /var/www/vhosts/158.69.0.213/nextcloud/lib/private/App/AppStore/Fetcher/Fetcher.php(120): OC\Http\Client\Client->get()
#4 /var/www/vhosts/158.69.0.213/nextcloud/lib/private/App/AppStore/Fetcher/AppFetcher.php(87): OC\App\AppStore\Fetcher\Fetcher->fetch()
#5 /var/www/vhosts/158.69.0.213/nextcloud/lib/private/App/AppStore/Fetcher/Fetcher.php(192): OC\App\AppStore\Fetcher\AppFetcher->fetch()
#6 /var/www/vhosts/158.69.0.213/nextcloud/lib/private/App/AppStore/Fetcher/AppFetcher.php(188): OC\App\AppStore\Fetcher\Fetcher->get()
#7 /var/www/vhosts/158.69.0.213/nextcloud/lib/private/Installer.php(422): OC\App\AppStore\Fetcher\AppFetcher->get()
#8 /var/www/vhosts/158.69.0.213/nextcloud/lib/private/Updater.php(413): OC\Installer->isUpdateAvailable()
#9 /var/www/vhosts/158.69.0.213/nextcloud/lib/private/Updater.php(274): OC\Updater->upgradeAppStoreApps()
#10 /var/www/vhosts/158.69.0.213/nextcloud/lib/private/Updater.php(133): OC\Updater->doUpgrade()
#11 /var/www/vhosts/158.69.0.213/nextcloud/core/Command/Upgrade.php(235): OC\Updater->upgrade()
#12 /var/www/vhosts/158.69.0.213/nextcloud/3rdparty/symfony/console/Command/Command.php(255): OC\Core\Command\Upgrade->execute()
#13 /var/www/vhosts/158.69.0.213/nextcloud/3rdparty/symfony/console/Application.php(1009): Symfony\Component\Console\Command\Command->run()
#14 /var/www/vhosts/158.69.0.213/nextcloud/3rdparty/symfony/console/Application.php(273): Symfony\Component\Console\Application->doRunCommand()
#15 /var/www/vhosts/158.69.0.213/nextcloud/3rdparty/symfony/console/Application.php(149): Symfony\Component\Console\Application->doRun()
#16 /var/www/vhosts/158.69.0.213/nextcloud/lib/private/Console/Application.php(211): Symfony\Component\Console\Application->run()
#17 /var/www/vhosts/158.69.0.213/nextcloud/console.php(100): OC\Console\Application->run()
#18 /var/www/vhosts/158.69.0.213/nextcloud/occ(11): require_once('...')
#19 {main}
Keep maintenance mode active? [y/N] N
Nextcloud or one of the apps require upgrade - only a limited number of commands are available
You may use your browser or the occ upgrade command to do the upgrade
Maintenance mode disabled

Maintenance mode is disabled

mrAceT commented 2 years ago

./occ files:cleanup won't do a thing with S3 to my knowledge..

This isn't a version problem, it's a database problem (that likely arose due to Nextcloud being migrated to S3 and then an update that got confused with /local/S3)

Questions:

1) You have now one user in your test setup?
2) You now have two entries in oc_storage? (more?)
3) You now have no entries in your oc_filecache linking to no-longer-existing oc_storage IDs?
4) When you create a new user and upload an image, does it work? (if not, your S3 config is broken!! you need to fix that first!!)
5) You can log in to your admin account, and do you see the data structure? (if not, then something else is horribly wrong..)
6) When you pick a single table entry that's an image linking to your admin account in oc_filecache, do the IDs of the account and storage match?
7) Download that id from your S3 storage, rename it to your image.jpg/png/etc: do you get the image you expected? (a sketch follows below this list)
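
For questions 6 and 7, a read-only lookup like the following sketch can be used to pick a few of the admin user's files and print the S3 object key Nextcloud should be using for each one. The credentials are placeholders and 'admin' is the account name used in this thread.

<?php
// Sketch: list a few files of the admin user from oc_filecache together with
// the object key they should have in the bucket ('urn:oid:<fileid>').
$mysqli = new mysqli('dbhost', 'dbuser', 'dbpassword', 'dbname'); // placeholders

$sql = "SELECT fc.fileid, fc.path".
       " FROM oc_filecache fc".
       " JOIN oc_storages st ON st.numeric_id = fc.storage".
       " WHERE st.id = 'object::user:admin'".
       " LIMIT 10";
if ($result = $mysqli->query($sql)) {
  while ($row = $result->fetch_assoc()) {
    echo 'urn:oid:' . $row['fileid'] . "\t" . $row['path'] . "\n";
  }
  $result->free_result();
}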

edwinosky commented 2 years ago

./occ files:cleanup won't do a thing with S3 to my knowledge..

This isn't a version problem, it's a database problem (that likely arose due to Nextcloud being migrated to S3 and then an update that got confused with /local/S3)

Questions:

  1. You have now one user in your test setup?
  2. You now have two entries in oc_storage? (more?)
  3. You now have no entries in your oc_filecache linking to no-longer-existing oc_storage IDs?
  4. When you create a new user and upload an image, does it work? (if not, your S3 config is broken!! you need to fix that first!!)
  5. You can log in to your admin account, and do you see the data structure? (if not, then something else is horribly wrong..)
  6. When you pick a single table entry that's an image linking to your admin account in oc_filecache, do the IDs of the account and storage match?
  7. Download that id from your S3 storage, rename it to your image.jpg/png/etc: do you get the image you expected?

Answers:

1. Yes.
2. ID 1: object::store:amazon::nextcloud, ID 2: object::user:admin.
3. Exactly: in oc_filecache there are only entries with ID 1 and ID 2.
4. I can't create new users, it gives me an error when registering; I did of course enable the registration app from the console because I had it disabled.
5. I can log in as admin, but when I enter everything looks broken, as if the CSS styles and other things did not load, and the files are not listed as they should be.
6. Correct, the IDs match.
7. I enter the Cloudflare R2 panel and I see all the files with names urn:oid***; when I download them I can get the original image.
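
Answer 7 can also be checked programmatically. Below is a sketch that uses the same Aws\S3\S3Client setup as the rescue script further down in this thread; the endpoint, credentials and the file id 12345 are placeholders.

<?php
// Sketch: fetch a single object ('urn:oid:<fileid>') from the bucket and save
// it locally, so it can be renamed and compared with the expected file.
use Aws\S3\S3Client;

require_once __DIR__ . '/vendor/autoload.php'; // aws/aws-sdk-php

$s3 = new S3Client([
    'version'         => 'latest',
    'endpoint'        => 'https://nextcloud.<account-id>.r2.cloudflarestorage.com', // bucket endpoint, placeholder
    'bucket_endpoint' => true,
    'region'          => 'auto',
    'credentials'     => [
        'key'    => 'R2_ACCESS_KEY', // placeholder
        'secret' => 'R2_SECRET_KEY', // placeholder
    ],
]);

$s3->getObject([
    'Bucket' => 'nextcloud',
    'Key'    => 'urn:oid:12345',          // hypothetical fileid taken from oc_filecache
    'SaveAs' => '/tmp/urn_oid_12345.bin', // rename afterwards to the original extension
]);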

mrAceT commented 2 years ago

5. I can log in as admin, but when I enter everything looks broken, as if the CSS styles and other things did not load, and the files are not listed as they should be.

I have had trouble with S3 (and a lot of it..) but it never broke the styling!? Are you still on that beta version? Something else broke down.. it must have..

another "weird idea":

get out of maintenance mode

let me know how it goes..

mrAceT commented 2 years ago

Did this solve it?

edwinosky commented 2 years ago

Did this solve it?

No, my friend, I couldn't fix it. Thank you very much for your help and the suggestions you have given me. I can't believe how I trusted over 1 TB of personal and family data to that S3 storage and now my data is unrecoverable. The worst thing is that I did it to reduce costs, but it backfired.

edwinosky commented 2 years ago

The last thing I tried was to create a clean install of Nextcloud, independent of the broken database, and to connect the broken install to it with rclone via WebDAV. I can list the file folders correctly, but when I try to send from my broken Nextcloud to my new Nextcloud, rclone just sits there thinking for hours and does not show an error or any other information.

mrAceT commented 2 years ago

In all honesty I do not understand why my last option did not work.. it should have..

Do you have 1 TB of disk space available? I fear the only option I can think of is this:
1) create the tables and set Nextcloud up as described in my last (not working) solution
2) have 1 TB of space..
3) use my S3-to-local conversion script in test mode (it uses your Nextcloud S3 credentials and those two tables)

Your data is in there.. You could create a "spin off" of my s3-to-local script that simply creates the folder structure with the correct file names.. In essence that is what that script does..

If that does not work.. I could fix it for you, I'm practically certain.. but I'd need shell access to your account with 1 TB of space and access to your S3 credentials and database.. (and in all honesty I'd need to charge for my time.. I've got a mortgage too, you know.. ;) )

mrAceT commented 2 years ago

OK.. it's past midnight and I seriously need to go to bed, but here is my "special standalone S3 > local" version for you.. I am hoping you know about 'vendor/autoload' and know how to install the Aws\S3\S3Client package..

NEEDED:

Set up the variables and let it rip.. My first run, of I believe about 100 GB, took hours.. so be patient.. There is a progress indication.. but do be patient..

All I'll ask for it is that you go to OpenStreetMap and look up Friesland (in The Netherlands) and remember that that's where your saviour lives ;) Oh, and if I/we ever go to Venezuela, you owe me a beer ;)

[update] $NR_OF_COPY_ERRORS_OK has no real use in this case => removed

<?php
# runuser -u [user] -- composer require aws/aws-sdk-php
use Aws\S3\S3Client;

echo "\n#########################################################################################";
echo "\n Migration tool for Nextcloud S3 back to local version 0.21 alpha\n";
echo "\n special rescue version for edwinosky...";

// Note: Preferably use absolute path without trailing directory separators
$PATH_BASE      = '/tmp'; // Path to the base of the main Nextcloud directory

$PATH_DATA      = $PATH_BASE.'/data'; // Path of the new Nextcloud data directory
$PATH_DATA_BKP  = $PATH_BASE.'/data.bkp'; // Path of a previous migration.. to speed things up.. (manually move a previous migration here!!)

$NON_EMPTY_TARGET_OK = 1;

$PATH_DATA_LOCAL_EXISTS_OK = 1; //default 0 !! Only set to 1 if you're sure..

$CONFIG = array (
// MySql login info
  'dbname'        => 'dbname',
  'dbhost'        => 'dbhost',
  'dbuser'        => 'dbuser',
  'dbpassword'    => 'dbpassword',
  'mysql.utf8mb4' => true,  
// S3 login info
  'objectstore' => array (
    'arguments' => array (
      'bucket'   => 'bucket',
      'key'      => 'key',
      'secret'   => 'secret',
      'hostname' => 'hostname',
      'region'   => 'region',
    ),
  ),
); 

echo "\n\n#########################################################################################";
echo "\nSetting up S3 migration to local...\n";

// Autoload
require_once(dirname(__FILE__).'/vendor/autoload.php');

echo "\nconnect to sql-database...";
// Database setup
$mysqli = new mysqli($CONFIG['dbhost'], $CONFIG['dbuser'], $CONFIG['dbpassword'], $CONFIG['dbname']);
if ($CONFIG['mysql.utf8mb4']) {
  $mysqli->set_charset('utf8mb4');
}

$OBJECT_STORE_ID = 0;
if ($result = $mysqli->query("SELECT * FROM `oc_storages` WHERE `id` LIKE 'object::store:%'")) {
  if ($result->num_rows>1) {
    echo "\nMultiple 'object::store:' clean this up, it's an accident waiting to happen!!\n";
    die;
  }
  else if ($result->num_rows == 0) {
    echo "\nNo 'object::store:' No S3 storage defined!?\n";
    die;
  }
  else {
    $row = $result->fetch_assoc();
    $OBJECT_STORE_ID = $row['numeric_id']; // for creative rename command..
  }
}

echo "\nconnect to S3...";
$s3 = new S3Client([
    'version' => 'latest',
    'endpoint' => 'https://'.$CONFIG['objectstore']['arguments']['bucket'].'.'.$CONFIG['objectstore']['arguments']['hostname'],
    'bucket_endpoint' => true,
    'region'  => $CONFIG['objectstore']['arguments']['region'],
    'credentials' => [
        'key' => $CONFIG['objectstore']['arguments']['key'],
        'secret' => $CONFIG['objectstore']['arguments']['secret'],
    ],
]);

// Check that new Nextcloud data directory is empty
if (count(scandir($PATH_DATA)) != 2) {
  echo "\nThe new Nextcloud data directory is not empty..";
  if (!$NON_EMPTY_TARGET_OK) {
    echo " nAborting script\n";
    die;
  } else {
    echo "WARNING: deleted files since previous copy are NOT removed! (take a look at the option '\$PATH_DATA_BKP')\n";
  }
}

echo "\n#########################################################################################";
echo "\nSetting everything up finished ##########################################################\n";

echo "\nCreating folder structure started... ";

if ($result = $mysqli->query("SELECT st.id, fc.fileid, fc.path, fc.storage_mtime FROM oc_filecache as fc, oc_storages as st, oc_mimetypes as mt WHERE st.numeric_id = fc.storage AND st.id LIKE 'object::%' AND fc.mimetype = mt.id AND mt.mimetype = 'httpd/unix-directory'")) {

  // Init progress
  $complete = $result->num_rows;
  $prev     = '';
  $current  = 0;

  while ($row = $result->fetch_assoc()) {
    $current++;
    try {
      // Determine correct path
      if (substr($row['id'], 0, 13) != 'object::user:') {
        $path = $PATH_DATA . DIRECTORY_SEPARATOR . $row['path'];
      } else {
        $path = $PATH_DATA . DIRECTORY_SEPARATOR . substr($row['id'], 13) . DIRECTORY_SEPARATOR . $row['path'];
      }
      // Create folder (if it doesn't already exist)
      if (!file_exists($path)) {
        mkdir($path, 0777, true);
      }
      #echo "\n".$path."\t";
      touch($path, $row['storage_mtime']);
    } catch (Exception $e) {
      echo "    Failed to create: ".$row['path']." (".$e->getMessage().")\n";
      $flag = false;
    }
    // Update progress
    $new = floor($current/$complete*100).'%';
    if ($prev != $new ) {
      echo str_repeat(chr(8) , strlen($prev) );
      $prev = $current+1 >= $complete ? ' DONE ' : $new;
      echo $prev;
    }
  }
  $result->free_result();
}

echo "\nCreating folder structure finished\n";

echo "Copying files started... ";

$error_copy = '';

if ($result = $mysqli->query("SELECT st.id, fc.fileid, fc.path, fc.storage_mtime FROM oc_filecache as fc,".
                             " oc_storages as st,".
                             " oc_mimetypes as mt".
                             " WHERE st.numeric_id = fc.storage AND st.id LIKE 'object::%' AND fc.mimetype = mt.id AND mt.mimetype != 'httpd/unix-directory'".
                             " ORDER BY st.id ASC")) {

  // Init progress
  $complete = $result->num_rows;
  $current  = 0;
  $prev     = '';

  while ($row = $result->fetch_assoc()) {
    $current++;
    try {
      // Determine correct path
      if (substr($row['id'], 0, 13) != 'object::user:') {
        $path = $PATH_DATA . DIRECTORY_SEPARATOR . $row['path'];
      } else {
        $path = $PATH_DATA . DIRECTORY_SEPARATOR . substr($row['id'], 13) . DIRECTORY_SEPARATOR . $row['path'];
      }
      $user = substr($path, strlen($PATH_DATA. DIRECTORY_SEPARATOR));
      $user = substr($user,0,strpos($user,DIRECTORY_SEPARATOR));

      #echo "\n".$path."\t".$row['storage_mtime'];
      $copy = 1;
      if(file_exists($path) && is_file($path)){
        if ($row['storage_mtime'] > filemtime($path) ) {
          unlink($path);
        }
        else {
          $copy = 0;
          #echo '.'; // uncomment to see progress
        }
      }
      if ($copy) {
        $path_bkp = str_replace($PATH_DATA,
                                $PATH_DATA_BKP,
                                $path);
        if (file_exists($path_bkp) && is_file($path_bkp)
         && $row['storage_mtime'] == filemtime($path_bkp) ) {
          if (rename($path_bkp,
                     $path) ) {
            $copy = 0;
          } else {
            echo "\nmove failed!?\n";
            exit;
          }
          #echo ':';
        }
      }
      if ($copy) {
        // Download file from S3
        $s3->getObject(array(
          'Bucket' => $CONFIG['objectstore']['arguments']['bucket'],
          'Key'    => 'urn:oid:'.$row['fileid'],
          'SaveAs' => $path,
        ));
        // Also set modification time
        touch($path, $row['storage_mtime']);
        #echo '!'; // uncomment to see progress
      }
      #echo ''.$copy."\n";if ($copy) { exit;} 
    } catch (Exception $e) {
      if(file_exists($path) && is_file($path) ){
        unlink($path);
      }
      echo "\n#########################################################################################";
      echo "\nFailed to transfer: $row[fileid] (".$e->getMessage().")\n";
      echo "\ntarget: ".$path."\n";
      echo "datadump of database record:\n";
      print_r($row);
      $error_copy.= $path."\n";
      $prev = '';
      #exit;
    }
    // Update progress
    $new = sprintf('%.2f',$current/$complete*100).'% (now at user '.$user.')';
    if ($prev != $new ) {
      echo str_repeat(chr(8) , strlen($prev) );
      $prev = $current+1 >= $complete ? ' DONE ' : $new;
      echo $prev;
    }
  }
  $result->free_result();
}
echo "\n";
#exit; ###################################################################################

if (!empty($error_copy)) {
  echo "\n#########################################################################################";
  $error_count = substr_count($error_copy,"\n");
  echo "\nCopying of ".$error_count." files failed:\n".$error_copy."\n\n";
}

echo "\nCopying files finished";

echo "\n\ndone..\n";

#########################################################################################
function recursive_copy($src,$dst) {
  $dir = opendir($src);
  @mkdir($dst);
  while( $file = readdir($dir) ) {
    if ( $file != '.'
     &&  $file != '..' ) {
      if ( is_dir($src . DIRECTORY_SEPARATOR . $file) ) {
        recursive_copy($src . DIRECTORY_SEPARATOR . $file,
                       $dst . DIRECTORY_SEPARATOR . $file);
      } else {

        $copy = 1;
        if(file_exists($dst . DIRECTORY_SEPARATOR . $file)){
          if (filemtime($src . DIRECTORY_SEPARATOR . $file) > filemtime($dst . DIRECTORY_SEPARATOR . $file) ) {
            unlink($dst . DIRECTORY_SEPARATOR . $file);
          }
          else { $copy = 0; }
        }
        if ($copy) {
          copy($src . DIRECTORY_SEPARATOR . $file,
               $dst . DIRECTORY_SEPARATOR . $file);
        }

      }
    }
  }
  closedir($dir);
}

Let me know how it went..

mrAceT commented 2 years ago

(no) success?

edwinosky commented 2 years ago

(no) success?

No, my friend. I'll tell you what I am trying to do today: I am using your script to see if I can get to the data. I created a new install using S3 as primary storage, mounted that new install to a folder on my server using WebDAV, and now I am seeing if with your script it is possible to recover the data and store it in that new installation. I thought I would write here if it worked for me.

edwinosky commented 2 years ago

I get this error when executing your script; it seems that it detects the wrong key to connect to S3 R2. The thing is that I already verified my keys; in fact I am using the same key and secret in the new installation and it works fine. (screenshot attached)

mrAceT commented 2 years ago

The script should continue.. it means that number 482060 does exist in your database, but not in your S3 storage.

It is very obvious something went horribly wrong.. what that was, I can not tell you..

But did you cancel the script or did you let it continue? If you canceled it, let it run.. it will restore all it can (combining everything that is possible from your S3 and your tables..). When you are (more or less) lucky you only have to manually figure out a few of the errors..

If it canceled by itself.. show me more..

edwinosky commented 2 years ago

The script is running; it has been running for more than an hour and the percentage is slowly increasing. I am also monitoring the storage and it does not increase, that is, no information is being downloaded from the S3 storage. I suppose that the script is currently going through the thousands of indices that I have in my database; I will calmly wait as many hours as necessary.

(screenshot attached)

mrAceT commented 2 years ago

Are you using S3 via WebDav to use S3 as a "mounted folder"?

If so, I have tried that.. with various ways of connecting.. I'd advise against it for "live usage".. I got it working, but it was (beyond) slow.. If it's for restoring your data because of a lack of disk space, I'd expect you will need to be extremely patient with that amount of data.

PS: maybe change sprintf('%.2f' into sprintf('%.3f' to be sure it didn't die off..

PS2: if you have set NON_EMPTY_TARGET_OK to '1' you can abort and restart it, the script will skip the files it already got!

edwinosky commented 2 years ago

I was using WebDAV mounted in a folder but it was extremely slow, so I canceled that process and I'm doing it with a normal system folder, which of course goes faster.

szaimen commented 1 year ago

Hi, please update to 24.0.9 or better 25.0.3 and report back if it fixes the issue. Thank you!

My goal is to add a label like e.g. 25-feedback to this ticket of an up-to-date major Nextcloud version where the bug could be reproduced. However this is not going to work without your help. So thanks for all your effort!

If you don't manage to reproduce the issue in time and the issue gets closed but you can reproduce the issue afterwards, feel free to create a new bug report with up-to-date information by following this link: https://github.com/nextcloud/server/issues/new?assignees=&labels=bug%2C0.+Needs+triage&template=BUG_REPORT.yml&title=%5BBug%5D%3A+

edwinosky commented 1 year ago

Hi, please update to 24.0.9 or better 25.0.3 and report back if it fixes the issue. Thank you!

My goal is to add a label like e.g. 25-feedback to this ticket of an up-to-date major Nextcloud version where the bug could be reproduced. However this is not going to work without your help. So thanks for all your effort!

If you don't manage to reproduce the issue in time and the issue gets closed but you can reproduce the issue afterwards, feel free to create a new bug report with up-to-date information by following this link: https://github.com/nextcloud/server/issues/new?assignees=&labels=bug%2C0.+Needs+triage&template=BUG_REPORT.yml&title=%5BBug%5D%3A+

No, my friend. In the end I ended up losing almost 1 TB of files that I had in Cloudflare R2 and I could not recover them in any way; unfortunately I did not have them backed up elsewhere. I tried everything to recover them, using clean installations, and nothing worked for me. In fact, I had to delete the bucket where I had that data because Cloudflare was charging me and my clients did not have access to the information. Now I am using an independent company to take care of storing my data and redirected my clients to their new links. I will close this topic; thanks for the help anyway.

tomcatcw1980 commented 8 months ago

Hello MrAceT,

first of all, thank you very much for your script. I would also like to reduce the complexity of my NC installation. I have therefore also tried to use your script to switch from S3 to Local. Unfortunately I have not (yet) been successful as the script always aborts. I get similar inconsistency messages as @edwinosky but let the script run until it aborts. However, I cannot see where it breaks off.

Your script copies all the data from S3 to local. But since the script then aborts, it probably does not update the DB entries to the end or run through all the other necessary steps, because none of the shares exist any more as soon as I log into Nextcloud.

Perhaps I have configured something incorrectly. I still have the following questions about your script that are not quite clear to me:

1) Who is the clouduser? It is probably not www-data, because with sudo www-data I get the message that www-data is (logically) not a sudoer. That's why I just left it at root.
2) What is meant by this variable: $PATH_BASE = ''; // Path to the base of the main Nextcloud directory. What has to go in there? I have left it empty, otherwise the path is not correct and the script will not start.
3) What is the difference to this variable: $PATH_NEXTCLOUD = $PATH_BASE.'/var/www/nextcloud'; // Path of the public Nextcloud directory
4) Is it intended in your script that it retains the shares etc.?

Thanks in advance for your answer.

Best regards Christian

PS: I just got this error message. It copied all data, then asked to continue:

[...]
/nc_data/appdata_ocoyq3be73jp/css/terms_of_service/6aaf-32d3-overlay.css.deps
/nc_data/appdata_ocoyq3be73jp/css/theming/d71e-32d3-settings-admin.css.gzip

Continue?Y

Copying files finished
#########################################################################################
Modifying database started...
PHP Fatal error:  Uncaught mysqli_sql_exception: MySQL server has gone away in /var/www/nextcloud-s3-to-disk-migration/s3tolocal.php:367
Stack trace:
#0 /var/www/nextcloud-s3-to-disk-migration/s3tolocal.php(367): mysqli->query()
#1 {main}
  thrown in /var/www/nextcloud-s3-to-disk-migration/s3tolocal.php on line 367
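
Regarding the "MySQL server has gone away" abort: the database connection is opened before the copy phase, which can run for hours, so it may simply have timed out before the "Modifying database" step starts. One possible, untested workaround (an assumption, not part of mrAceT's scripts) would be to re-establish the connection right before the database updates, for example:

<?php
// Hypothetical guard, not taken from s3tolocal.php: if the MySQL connection was
// dropped during the long copy phase (a server-side wait_timeout is a common
// cause), reconnect before running the database updates. The $CONFIG keys
// mirror the standalone script posted earlier in this thread.
try {
  $alive = $mysqli->ping();
} catch (mysqli_sql_exception $e) {
  $alive = false;
}
if (!$alive) {
  $mysqli = new mysqli($CONFIG['dbhost'], $CONFIG['dbuser'], $CONFIG['dbpassword'], $CONFIG['dbname']);
  if ($CONFIG['mysql.utf8mb4']) {
    $mysqli->set_charset('utf8mb4');
  }
}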

mrAceT commented 8 months ago

No, my friend. In the end I ended up losing almost 1 TB of files that I had in Cloudflare R2 and I could not recover them in any way; unfortunately I did not have them backed up elsewhere. I tried everything to recover them, using clean installations, and nothing worked for me. In fact, I had to delete the bucket where I had that data because Cloudflare was charging me and my clients did not have access to the information. Now I am using an independent company to take care of storing my data and redirected my clients to their new links. I will close this topic; thanks for the help anyway.

Yikes.. I am guessing that the lost data was regrettable but not a disaster? (I think I would have been able to rescue that data).