Altinity / clickhouse-backup

Tool for easy backup and restore for ClickHouse® using object storage for backup files.
https://altinity.com

Restoring local backup fails due to NewBackupDestination error: storage type 'none' is not supported #958

Closed: fschoell closed this issue 4 months ago

fschoell commented 4 months ago

I just tried clickhouse-backup to migrate some data from one of our old clusters to a newer one. For this I basically followed this example.

That is, I:

1. ran `clickhouse-backup create -t my_database.my_events migration` on the old node,
2. used rsync to copy the `/var/lib/clickhouse/backup/migration` directory to the new node, and
3. tried to restore the backup on the new node with `clickhouse-backup restore -m my_database:migrate migration`.

This fails with the error `NewBackupDestination error: storage type 'none' is not supported`, which is odd since I am not trying to create a new backup destination, only to restore.

The full output of clickhouse-backup restore is:

2024/07/24 20:01:59.111855  info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/07/24 20:01:59.114788  info clickhouse connection success: tcp://localhost:9000 logger=clickhouse
2024/07/24 20:01:59.114803  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/07/24 20:01:59.116145  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='object_storage_type') AS is_object_storage_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/07/24 20:01:59.119590  info SELECT d.path, any(d.name) AS name, any(lower(if(d.type='ObjectStorage',d.object_storage_type,d.type))) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  LEFT JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/07/24 20:01:59.123240  info CREATE DATABASE IF NOT EXISTS `migrate` ENGINE = Atomic with args [[]] logger=clickhouse
2024/07/24 20:01:59.123602  info clickhouse connection closed logger=clickhouse
2024/07/24 20:01:59.123618 error NewBackupDestination error: storage type 'none' is not supported

clickhouse-backup list also shows the backup as locally available:

2024/07/24 20:04:07.154183  info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/07/24 20:04:07.154935  info clickhouse connection success: tcp://localhost:9000 logger=clickhouse
2024/07/24 20:04:07.154955  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/07/24 20:04:07.156523  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='object_storage_type') AS is_object_storage_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/07/24 20:04:07.160645  info SELECT d.path, any(d.name) AS name, any(lower(if(d.type='ObjectStorage',d.object_storage_type,d.type))) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  LEFT JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
migration   404.22GiB   24/07/2024 19:18:19   local      regular
2024/07/24 20:04:07.164836  info clickhouse connection closed logger=clickhouse

I am running the up-to-date version of clickhouse-backup:

$ clickhouse-backup --version
Version:     2.5.20
Git Commit:  ab47d585e8418888a5169e6c0160c9859b5d6ed9
Build Date:  2024-07-04

My clickhouse-backup config is essentially empty and only contains the ClickHouse credentials:

clickhouse:
  username: "migrate"
  password: "<redacted>"
  host: "localhost"
  port: 9000
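(For context: when no remote storage section is configured, clickhouse-backup falls back to a default of `remote_storage: none` in its `general` section, which is where the `'none'` in the error message comes from. Roughly, the effective config behaves as if it contained this fragment; this is a sketch of the assumed default, not the actual file:)

```yaml
# assumed implicit default when no s3/gcs/azblob/etc. section is configured
general:
  remote_storage: none
```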
Slach commented 4 months ago

Could you share the output of `cat /var/lib/clickhouse/backup/migration/metadata.json`?

I need to check the `disk_types` section.

fschoell commented 4 months ago
{
    "backup_name": "migration",
    "disks": {
        "default": "/var/lib/clickhouse/",
        "s3_plain": "/var/lib/clickhouse/disks/s3_plain/"
    },
    "disk_types": {
        "default": "local",
        "s3_plain": "s3"
    },
    "version": "2.5.20",
    "creation_date": "2024-07-24T19:18:19.755980657Z",
    "tags": "regular",
    "clickhouse_version": "v24.3.2.23-stable",
    "data_size": 434033004011,
    "metadata_size": 1583,
    "rbac_size": 908,
    "databases": [
        {
            "name": "my_database",
            "engine": "Atomic",
            "query": "CREATE DATABASE my_database\nENGINE = Atomic"
        }
    ],
    "tables": [
        {
            "database": "my_database",
            "table": "my_events"
        }
    ],
    "functions": [],
    "data_format": ""
}
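The `disk_types` map above is what matters here: a non-local entry such as `"s3_plain": "s3"` causes clickhouse-backup to try to open a remote destination even for a purely local restore. A minimal sketch of that check (a hypothetical helper for illustration, not part of clickhouse-backup):

```python
import json

def non_local_disks(metadata_text: str) -> dict:
    """Return the disk_types entries whose type is not 'local'."""
    meta = json.loads(metadata_text)
    return {name: t for name, t in meta.get("disk_types", {}).items()
            if t != "local"}

# The disk_types section from this backup's metadata.json:
metadata = '{"disk_types": {"default": "local", "s3_plain": "s3"}}'
print(non_local_disks(metadata))  # {'s3_plain': 's3'}
```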
fschoell commented 4 months ago

I did another backup with clickhouse-backup, but removed my ClickHouse native backup configuration first, and that seems to have solved the issue.

Another workaround seems to be setting `remote_storage: "custom"` in the clickhouse-backup config.
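In config terms, that workaround would look roughly like the fragment below (a sketch added to the existing config, not a verified minimal example):

```yaml
# skip the built-in remote backup destinations
general:
  remote_storage: "custom"
```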

Slach commented 4 months ago
"disk_types": {
        "default": "local",
        "s3_plain": "s3"
    },

clickhouse-backup is trying to create a connection to the s3_plain disk via the S3 protocol.

Do you have any data related to s3_plain?

Could you check the data parts:

grep s3_plain -r /var/lib/clickhouse/backup/migration/metadata/

fschoell commented 4 months ago

Nothing in the metadata directory references s3_plain.

Slach commented 4 months ago

In this case, just change migration/metadata.json and set:

    "disk_types": {
        "default": "local"
    },

Your workaround with `remote_storage: "custom"` is also a working solution.
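That manual edit can also be scripted; a hedged sketch (the path and helper name are assumptions for illustration; back up metadata.json before rewriting it):

```python
import json

def drop_unused_disks(metadata_path: str, keep: set) -> None:
    """Rewrite metadata.json, keeping only the disks actually used by the backup."""
    with open(metadata_path) as f:
        meta = json.load(f)
    meta["disk_types"] = {k: v for k, v in meta.get("disk_types", {}).items()
                          if k in keep}
    meta["disks"] = {k: v for k, v in meta.get("disks", {}).items()
                     if k in keep}
    with open(metadata_path, "w") as f:
        json.dump(meta, f, indent=4)

# Usage (path from this issue, assumed):
# drop_unused_disks("/var/lib/clickhouse/backup/migration/metadata.json", {"default"})
```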