minio / docs

MinIO Object Storage Documentation
https://docs.min.io/minio/baremetal
Creative Commons Attribution 4.0 International

[BUG] Config migration from old version #739

Closed: ashkraba closed this issue 1 year ago

ashkraba commented 1 year ago

Hello team. Sorry that I am writing here, but googling doesn't help.

Describe the bug
I am trying to migrate from an old standalone single-drive MinIO deployment to a new one running the latest version. But when I import the config exported from the old one, I get the following error: mc: Unable to set server config: sub-system 'heal' cannot have empty keys. I couldn't work out which keys are empty.

To Reproduce Steps to reproduce the behavior:

  1. export config from old version with mc admin config export old > config.txt
  2. import it to new version with mc admin config import new < config.txt
  3. See error

Source MinIO version: 2021-08-05T22:01:19Z
Destination MinIO version: 2023-02-22T18:23:45Z
Config file:

region name=
# cache drives= exclude= expiry=90 quota=80 after=0 watermark_low=70 watermark_high=80 range=on commit=writethrough
# compression enable=off allow_encryption=off extensions=.txt,.log,.csv,.json,.tar,.xml,.bin mime_types=text/*,application/json,application/xml,binary/octet-stream
# etcd endpoints= path_prefix= coredns_path=/skydns client_cert= client_cert_key=
# identity_openid config_url= client_id= client_secret= claim_name=policy claim_prefix= redirect_uri= scopes= jwks_url=
# identity_ldap server_addr= username_format= user_dn_search_base_dn= user_dn_search_filter= group_search_filter= group_search_base_dn= sts_expiry=1h tls_skip_verify=off server_insecure=off server_starttls=off lookup_bind_dn= lookup_bind_password=
# policy_opa url= auth_token=
api requests_max=0 requests_deadline=10s cluster_deadline=10s cors_allow_origin=* remote_transport_deadline=2h list_quorum=optimal replication_workers=250 replication_failed_workers=8
heal bitrotscan=off max_sleep=1s max_io=10
scanner delay=10 max_wait=15s cycle=1m
# logger_webhook enable=off endpoint= auth_token=
# audit_webhook enable=off endpoint= auth_token= client_cert= client_key=
# audit_kafka enable=off topic= brokers= sasl_username= sasl_password= sasl_mechanism=plain client_tls_cert= client_tls_key= tls_client_auth=0 sasl=off tls=off tls_skip_verify=off version=
# notify_webhook enable=off endpoint= auth_token= queue_limit=0 queue_dir= client_cert= client_key=
# notify_amqp enable=off url= exchange= exchange_type= routing_key= mandatory=off durable=off no_wait=off internal=off auto_deleted=off delivery_mode=0 publisher_confirms=off queue_limit=0 queue_dir=
# notify_kafka enable=off topic= brokers= sasl_username= sasl_password= sasl_mechanism=plain client_tls_cert= client_tls_key= tls_client_auth=0 sasl=off tls=off tls_skip_verify=off queue_limit=0 queue_dir= version=
# notify_mqtt enable=off broker= topic= password= username= qos=0 keep_alive_interval=0s reconnect_interval=0s queue_dir= queue_limit=0
# notify_nats enable=off address= subject= username= password= token= tls=off tls_skip_verify=off cert_authority= client_cert= client_key= ping_interval=0 streaming=off streaming_async=off streaming_max_pub_acks_in_flight=0 streaming_cluster_id= queue_dir= queue_limit=0
# notify_nsq enable=off nsqd_address= topic= tls=off tls_skip_verify=off queue_dir= queue_limit=0
# notify_mysql enable=off format=namespace dsn_string= table= queue_dir= queue_limit=0 max_open_connections=2
# notify_postgres enable=off format=namespace connection_string= table= queue_dir= queue_limit=0 max_open_connections=2
# notify_elasticsearch enable=off url= format=namespace index= queue_dir= queue_limit=0 username= password=
# notify_redis enable=off format=namespace address= key= password= queue_dir= queue_limit=0
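
For anyone else hunting for the "empty keys", a short script (a sketch, not an official MinIO tool; the parsing rules are assumptions based on the space-separated key=value export format above) can list which uncommented sub-systems carry keys with empty values:

```python
# Sketch: list which sub-systems in an exported MinIO config carry
# keys with empty values. Commented lines are skipped because the
# import ignores them.

def empty_keys(config_text):
    """Return (sub_system, key) pairs for every 'key=' with no value
    on uncommented lines of an exported config."""
    found = []
    for line in config_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # commented-out sub-systems are ignored on import
        sub_system, *pairs = line.split()
        for pair in pairs:
            key, sep, value = pair.partition("=")
            if sep and value == "":
                found.append((sub_system, key))
    return found

# Example against a fragment of the config above:
sample = """\
region name=
# cache drives= exclude= expiry=90
heal bitrotscan=off max_sleep=1s max_io=10
"""
print(empty_keys(sample))  # → [('region', 'name')]
```

Note that in the config as posted, the uncommented heal line has no empty values, so the exact export that triggered the error may differ from what is shown here.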
harshavardhana commented 1 year ago

You should just comment out the relevant sub-systems, like this:

region name=
# cache drives= exclude= expiry=90 quota=80 after=0 watermark_low=70 watermark_high=80 range=on commit=writethrough
# compression enable=off allow_encryption=off extensions=.txt,.log,.csv,.json,.tar,.xml,.bin mime_types=text/*,application/json,application/xml,binary/octet-stream
# etcd endpoints= path_prefix= coredns_path=/skydns client_cert= client_cert_key=
# identity_openid config_url= client_id= client_secret= claim_name=policy claim_prefix= redirect_uri= scopes= jwks_url=
# identity_ldap server_addr= username_format= user_dn_search_base_dn= user_dn_search_filter= group_search_filter= group_search_base_dn= sts_expiry=1h tls_skip_verify=off server_insecure=off server_starttls=off lookup_bind_dn= lookup_bind_password=
# policy_opa url= auth_token=
# api requests_max=0 requests_deadline=10s cluster_deadline=10s cors_allow_origin=* remote_transport_deadline=2h list_quorum=optimal replication_workers=250 replication_failed_workers=8
# heal bitrotscan=off max_sleep=1s max_io=10
scanner delay=10 max_wait=15s cycle=1m
# logger_webhook enable=off endpoint= auth_token=
# audit_webhook enable=off endpoint= auth_token= client_cert= client_key=
# audit_kafka enable=off topic= brokers= sasl_username= sasl_password= sasl_mechanism=plain client_tls_cert= client_tls_key= tls_client_auth=0 sasl=off tls=off tls_skip_verify=off version=
# notify_webhook enable=off endpoint= auth_token= queue_limit=0 queue_dir= client_cert= client_key=
# notify_amqp enable=off url= exchange= exchange_type= routing_key= mandatory=off durable=off no_wait=off internal=off auto_deleted=off delivery_mode=0 publisher_confirms=off queue_limit=0 queue_dir=
# notify_kafka enable=off topic= brokers= sasl_username= sasl_password= sasl_mechanism=plain client_tls_cert= client_tls_key= tls_client_auth=0 sasl=off tls=off tls_skip_verify=off queue_limit=0 queue_dir= version=
# notify_mqtt enable=off broker= topic= password= username= qos=0 keep_alive_interval=0s reconnect_interval=0s queue_dir= queue_limit=0
# notify_nats enable=off address= subject= username= password= token= tls=off tls_skip_verify=off cert_authority= client_cert= client_key= ping_interval=0 streaming=off streaming_async=off streaming_max_pub_acks_in_flight=0 streaming_cluster_id= queue_dir= queue_limit=0
# notify_nsq enable=off nsqd_address= topic= tls=off tls_skip_verify=off queue_dir= queue_limit=0
# notify_mysql enable=off format=namespace dsn_string= table= queue_dir= queue_limit=0 max_open_connections=2
# notify_postgres enable=off format=namespace connection_string= table= queue_dir= queue_limit=0 max_open_connections=2
# notify_elasticsearch enable=off url= format=namespace index= queue_dir= queue_limit=0 username= password=
# notify_redis enable=off format=namespace address= key= password= queue_dir= queue_limit=0

That is, add a # in front of the api and heal lines.
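
The suggested fix can also be applied mechanically. A sketch (file names config.txt and config-fixed.txt are assumptions; config.txt is the export from the reproduction steps above) that comments out the api and heal sub-systems before re-importing:

```shell
# Prefix the uncommented 'api' and 'heal' lines with '# ' so the
# import skips them (harshavardhana's suggestion above).
sed -E 's/^(api|heal) /# \1 /' config.txt > config-fixed.txt

# Then re-run the import with the fixed file:
#   mc admin config import new < config-fixed.txt
```

The pattern anchors on the sub-system name at the start of the line, so already-commented lines and keys that merely contain "api" or "heal" are left untouched.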

arundeep78 commented 1 year ago

I had this issue while trying to move from an old MinIO version supporting gateway/filesystem mode to a new version, following the migration document.

Old MinIO server: minio/minio:RELEASE.2021-06-17T00-10-46Z.fips
New MinIO server: minio/minio:RELEASE.2022-12-02T19-19-22Z

I first upgraded MinIO to the latest version that still supports gateway/filesystem mode: minio/minio:RELEASE.2022-10-24T18-35-07Z.

Then I ran the export/import again and it worked. It also solved the issue with the bucket export.

Now I am trying to get mc mirror to work, which is taking hours to copy a 270 KB file! I will open a separate issue for that.

djwfyi commented 1 year ago

Docs note: add a suggestion to comment out the offending sub-systems to the migration doc.

ravindk89 commented 1 year ago

Captured by @feorlen's recent work here, under step 4 - Filesystem Mode.