minio / minio

MinIO is a high-performance, S3-compatible object store, open-sourced under the GNU AGPLv3 license.
https://min.io/download?license=agpl&platform=linux
GNU Affero General Public License v3.0

[Unresolved] Error: Drive /data, path (.minio.sys/buckets/.background-heal.json) returned an error (file is corrupted) (*fmt.wrapError) #16148

Closed voidus closed 1 year ago

voidus commented 1 year ago

I'm using minio as an object store for e2e tests using docker-compose, but I was able to minimize the repro case quite a bit.

It's important to note that my tests seem to run fine though, so I'm not sure what this is actually affecting.

When running MinIO with Docker on my local machine (Arch Linux with some Nix packages), I get the error in the summary after a while, even when I haven't interacted with the service.

This only seems to affect me, so I guess this is more of a debugging exercise. Since MinIO only logs errors, I can't see any way forward here. I still wanted to report it, though, because searching the internet turned up literally nothing for this error.

Expected Behavior

Nothing, basically.

Current Behavior

> docker run --rm -ti quay.io/minio/minio server /data --console-address :9001
Formatting 1st pool, 1 set(s), 1 drives per set.
WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
MinIO Object Storage Server
Copyright: 2015-2022 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2022-09-01T23-53-36Z (go1.18.5 linux/amd64)

Status:         1 Online, 0 Offline. 
API: http://172.17.0.2:9000  http://127.0.0.1:9000 
RootUser: minioadmin 
RootPass: minioadmin 
Console: http://172.17.0.2:9001 http://127.0.0.1:9001 
RootUser: minioadmin 
RootPass: minioadmin 

Command-line: https://docs.min.io/docs/minio-client-quickstart-guide
   $ mc alias set myminio http://172.17.0.2:9000 minioadmin minioadmin

Documentation: https://docs.min.io

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ You are running an older version of MinIO released 2 months ago ┃
┃ Update: Run `mc admin update`                                   ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

API: SYSTEM()
Time: 09:44:45 UTC 11/30/2022
DeploymentID: 2b5c050f-0b11-4346-8f86-ab43a28ca3c1
Error: Drive /data, path (.minio.sys/buckets/.background-heal.json) returned an error (file is corrupted) (*fmt.wrapError)
      11: internal/logger/logger.go:259:logger.LogIf()
      10: internal/logger/logonce.go:104:logger.(*logOnceType).logOnceIf()
       9: internal/logger/logonce.go:135:logger.LogOnceIf()
       8: cmd/erasure-metadata-utils.go:156:cmd.readAllFileInfo()
       7: cmd/erasure-single-drive.go:751:cmd.(*erasureSingle).getObjectFileInfo()
       6: cmd/erasure-single-drive.go:574:cmd.(*erasureSingle).GetObjectNInfo()
       5: cmd/config-common.go:33:cmd.readConfigWithMetadata()
       4: cmd/config-common.go:55:cmd.readConfig()
       3: cmd/data-scanner.go:122:cmd.readBackgroundHealInfo()
       2: cmd/data-scanner.go:200:cmd.runDataScanner()
       1: cmd/data-scanner.go:83:cmd.initDataScanner.func1()

API: SYSTEM()
Time: 09:44:45 UTC 11/30/2022
DeploymentID: 2b5c050f-0b11-4346-8f86-ab43a28ca3c1
Error: file is corrupted (cmd.StorageErr)
       4: internal/logger/logger.go:259:logger.LogIf()
       3: cmd/data-scanner.go:125:cmd.readBackgroundHealInfo()
       2: cmd/data-scanner.go:200:cmd.runDataScanner()
       1: cmd/data-scanner.go:83:cmd.initDataScanner.func1()

API: SYSTEM()
Time: 09:44:45 UTC 11/30/2022
DeploymentID: 2b5c050f-0b11-4346-8f86-ab43a28ca3c1
Error: readObjectStart: expect { or n, but found , error found in #1 byte of ...||..., bigger context ...||... (*errors.errorString)
       5: internal/logger/logger.go:259:logger.LogIf()
       4: cmd/xl-storage.go:2250:cmd.(*xlStorage).RenameData()
       3: cmd/xl-storage-disk-id-check.go:373:cmd.(*xlStorageDiskIDCheck).RenameData()
       2: cmd/erasure-object.go:721:cmd.renameData.func1()
       1: internal/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()

API: SYSTEM()
Time: 09:44:45 UTC 11/30/2022
DeploymentID: 2b5c050f-0b11-4346-8f86-ab43a28ca3c1
Error: readObjectStart: expect { or n, but found , error found in #1 byte of ...||..., bigger context ...||... (*errors.errorString)
       5: internal/logger/logger.go:259:logger.LogIf()
       4: cmd/xl-storage.go:2250:cmd.(*xlStorage).RenameData()
       3: cmd/xl-storage-disk-id-check.go:373:cmd.(*xlStorageDiskIDCheck).RenameData()
       2: cmd/erasure-object.go:721:cmd.renameData.func1()
       1: internal/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()

API: SYSTEM()
Time: 09:44:45 UTC 11/30/2022
DeploymentID: 2b5c050f-0b11-4346-8f86-ab43a28ca3c1
Error: readObjectStart: expect { or n, but found , error found in #1 byte of ...||..., bigger context ...||... (*errors.errorString)
       5: internal/logger/logger.go:259:logger.LogIf()
       4: cmd/xl-storage.go:2250:cmd.(*xlStorage).RenameData()
       3: cmd/xl-storage-disk-id-check.go:373:cmd.(*xlStorageDiskIDCheck).RenameData()
       2: cmd/erasure-object.go:721:cmd.renameData.func1()
       1: internal/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()

API: SYSTEM()
Time: 09:44:45 UTC 11/30/2022
DeploymentID: 2b5c050f-0b11-4346-8f86-ab43a28ca3c1
Error: Drive /data, path (.minio.sys/buckets/.usage.json) returned an error (file is corrupted) (*fmt.wrapError)
      10: internal/logger/logger.go:259:logger.LogIf()
       9: internal/logger/logonce.go:104:logger.(*logOnceType).logOnceIf()
       8: internal/logger/logonce.go:135:logger.LogOnceIf()
       7: cmd/erasure-metadata-utils.go:156:cmd.readAllFileInfo()
       6: cmd/erasure-single-drive.go:751:cmd.(*erasureSingle).getObjectFileInfo()
       5: cmd/erasure-single-drive.go:574:cmd.(*erasureSingle).GetObjectNInfo()
       4: cmd/config-common.go:33:cmd.readConfigWithMetadata()
       3: cmd/config-common.go:55:cmd.readConfig()
       2: cmd/data-usage.go:95:cmd.loadDataUsageFromBackend()
       1: cmd/bucket-replication-stats.go:193:cmd.(*ReplicationStats).loadInitialReplicationMetrics()
^CExiting on signal: INTERRUPT

Steps to Reproduce (for bugs)

docker run --rm -ti quay.io/minio/minio server /data --console-address :9001

Context

Corrupted /data/.minio.sys/buckets/.usage.json/xl.meta (only file in that directory) from another run:

> xxd corrupted
00000000: 584c 3220 0100 0300 c600 0001 6e02 0101  XL2 ........n...
00000010: c424 95c4 1000 0000 0000 0000 0000 0000  .$..............
00000020: 0000 0000 00d3 172c 2787 31df d4be c404  .......,'.1.....
00000030: b259 1e10 0106 c501 4283 a454 7970 6501  .Y......B..Type.
00000040: a556 324f 626a de00 11a2 4944 c410 0000  .V2Obj....ID....
00000050: 0000 0000 0000 0000 0000 0000 0000 a444  ...............D
00000060: 4469 72c4 1098 696f 5ab0 f144 aeba ea26  Dir...ioZ..D...&
00000070: dc55 92ff 24a6 4563 416c 676f 01a3 4563  .U..$.EcAlgo..Ec
00000080: 4d01 a345 634e 00a7 4563 4253 697a 65d2  M..EcN..EcBSize.
00000090: 0010 0000 a745 6349 6e64 6578 01a6 4563  .....EcIndex..Ec
000000a0: 4469 7374 9101 a843 5375 6d41 6c67 6f01  Dist...CSumAlgo.
000000b0: a850 6172 744e 756d 7391 01a9 5061 7274  .PartNums...Part
000000c0: 4554 6167 73c0 a950 6172 7453 697a 6573  ETags..PartSizes
000000d0: 91d1 02c3 aa50 6172 7441 5369 7a65 7391  .....PartASizes.
000000e0: d102 c3a4 5369 7a65 d102 c3a5 4d54 696d  ....Size....MTim
000000f0: 65d3 172c 2787 31df d4be a74d 6574 6153  e..,'.1....MetaS
00000100: 7973 81bc 782d 6d69 6e69 6f2d 696e 7465  ys..x-minio-inte
00000110: 726e 616c 2d69 6e6c 696e 652d 6461 7461  rnal-inline-data
00000120: c404 7472 7565 a74d 6574 6155 7372 82a4  ..true.MetaUsr..
00000130: 6574 6167 d920 3039 3639 6563 3063 3632  etag. 0969ec0c62
00000140: 3937 6336 3761 3362 3162 3263 3930 3161  97c67a3b1b2c901a
00000150: 6161 3163 3234 ac63 6f6e 7465 6e74 2d74  aa1c24.content-t
00000160: 7970 65b0 6170 706c 6963 6174 696f 6e2f  ype.application/
00000170: 6a73 6f6e a176 ce63 1146 00ce e7f3 0de3  json.v.c.F......
00000180: 0181 a46e 756c 6cc5 02e3 4593 77d1 08c5  ...null...E.w...
00000190: 7094 0b54 ea9b 9a54 bc4a ddab d278 6c9a  p..T...T.J...xl.
000001a0: 0fb3 b579 93b2 59bd de2e 7b22 6c61 7374  ...y..Y...{"last
000001b0: 5570 6461 7465 223a 2232 3032 322d 3131  Update":"2022-11
000001c0: 2d32 3954 3230 3a31 373a 3033 2e34 3531  -29T20:17:03.451
000001d0: 3336 3638 3334 5a22 2c22 6f62 6a65 6374  366834Z","object
000001e0: 7343 6f75 6e74 223a 312c 2276 6572 7369  sCount":1,"versi
000001f0: 6f6e 7343 6f75 6e74 223a 302c 226f 626a  onsCount":0,"obj
00000200: 6563 7473 546f 7461 6c53 697a 6522 3a37  ectsTotalSize":7
00000210: 3032 332c 226f 626a 6563 7473 5265 706c  023,"objectsRepl
00000220: 6963 6174 696f 6e49 6e66 6f22 3a6e 756c  icationInfo":nul
00000230: 6c2c 2262 7563 6b65 7473 436f 756e 7422  l,"bucketsCount"
00000240: 3a31 2c22 6275 636b 6574 7355 7361 6765  :1,"bucketsUsage
00000250: 496e 666f 223a 7b22 6261 636b 6f66 6669  Info":{"backoffi
00000260: 6365 2d66 696c 6573 223a 7b22 7369 7a65  ce-files":{"size
00000270: 223a 3730 3233 2c22 6f62 6a65 6374 7350  ":7023,"objectsP
00000280: 656e 6469 6e67 5265 706c 6963 6174 696f  endingReplicatio
00000290: 6e54 6f74 616c 5369 7a65 223a 302c 226f  nTotalSize":0,"o
000002a0: 626a 6563 7473 4661 696c 6564 5265 706c  bjectsFailedRepl
000002b0: 6963 6174 696f 6e54 6f74 616c 5369 7a65  icationTotalSize
000002c0: 223a 302c 226f 626a 6563 7473 5265 706c  ":0,"objectsRepl
000002d0: 6963 6174 6564 546f 7461 6c53 697a 6522  icatedTotalSize"
000002e0: 3a30 2c22 6f62 6a65 6374 7350 656e 6469  :0,"objectsPendi
000002f0: 6e67 5265 706c 6963 6174 696f 6e43 6f75  ngReplicationCou
00000300: 6e74 223a 302c 226f 626a 6563 7473 4661  nt":0,"objectsFa
00000310: 696c 6564 5265 706c 6963 6174 696f 6e43  iledReplicationC
00000320: 6f75 6e74 223a 302c 226f 626a 6563 7473  ount":0,"objects
00000330: 436f 756e 7422 3a31 2c22 6f62 6a65 6374  Count":1,"object
00000340: 7353 697a 6573 4869 7374 6f67 7261 6d22  sSizesHistogram"
00000350: 3a7b 2242 4554 5745 454e 5f31 3032 345f  :{"BETWEEN_1024_
00000360: 425f 414e 445f 315f 4d42 223a 312c 2242  B_AND_1_MB":1,"B
00000370: 4554 5745 454e 5f31 305f 4d42 5f41 4e44  ETWEEN_10_MB_AND
00000380: 5f36 345f 4d42 223a 302c 2242 4554 5745  _64_MB":0,"BETWE
00000390: 454e 5f31 3238 5f4d 425f 414e 445f 3531  EN_128_MB_AND_51
000003a0: 325f 4d42 223a 302c 2242 4554 5745 454e  2_MB":0,"BETWEEN
000003b0: 5f31 5f4d 425f 414e 445f 3130 5f4d 4222  _1_MB_AND_10_MB"
000003c0: 3a30 2c22 4245 5457 4545 4e5f 3634 5f4d  :0,"BETWEEN_64_M
000003d0: 425f 414e 445f 3132 385f 4d42 223a 302c  B_AND_128_MB":0,
000003e0: 2247 5245 4154 4552 5f54 4841 4e5f 3531  "GREATER_THAN_51
000003f0: 325f 4d42 223a 302c 224c 4553 535f 5448  2_MB":0,"LESS_TH
00000400: 414e 5f31 3032 345f 4222 3a30 7d2c 2276  AN_1024_B":0},"v
00000410: 6572 7369 6f6e 7343 6f75 6e74 223a 302c  ersionsCount":0,
00000420: 226f 626a 6563 7452 6570 6c69 6361 546f  "objectReplicaTo
00000430: 7461 6c53 697a 6522 3a30 2c22 6f62 6a65  talSize":0,"obje
00000440: 6374 7352 6570 6c69 6361 7469 6f6e 496e  ctsReplicationIn
00000450: 666f 223a 7b7d 7d7d 2c22 6275 636b 6574  fo":{}}},"bucket
00000460: 7353 697a 6573 223a 6e75 6c6c 7d         sSizes":null}
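As a side note for anyone inspecting their own dumps: an xl.meta file starts with the 4-byte magic `XL2 ` at offset 0, visible in the first line above. A minimal, unofficial sketch of checking that magic (note it only catches gross truncation or zeroing; the file dumped here still passes it, since the corruption is in the payload):

```shell
# Unofficial sanity check: a healthy xl.meta begins with the 4-byte
# magic "XL2 " (hex 58 4c 32 20). A missing magic suggests the file
# was truncated or zeroed; a present magic proves nothing beyond that.
f=$(mktemp)
printf 'XL2 \x01\x02\x03\x04' > "$f"    # simulate a file with a valid header
magic=$(head -c 4 "$f")
if [ "$magic" = "XL2 " ]; then
  echo "magic OK (payload may still be corrupt)"
else
  echo "magic missing: likely truncated or zeroed"
fi
rm -f "$f"
```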
> docker inspect ...
[
    {
        "Id": "c1bfa865993551ff3094ea275c08074375297ea52db7893bc1b2ec32db2f5275",
        "Created": "2022-11-30T09:46:34.196663263Z",
        "Path": "/usr/bin/docker-entrypoint.sh",
        "Args": [
            "server",
            "/data",
            "--console-address",
            ":9001"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 571033,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2022-11-30T09:46:34.621926093Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:60628873a9f4290730732aed195abfc823661883d68a9206052142919545ab98",
        "ResolvConfPath": "/var/lib/docker/containers/c1bfa865993551ff3094ea275c08074375297ea52db7893bc1b2ec32db2f5275/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/c1bfa865993551ff3094ea275c08074375297ea52db7893bc1b2ec32db2f5275/hostname",
        "HostsPath": "/var/lib/docker/containers/c1bfa865993551ff3094ea275c08074375297ea52db7893bc1b2ec32db2f5275/hosts",
        "LogPath": "/var/lib/docker/containers/c1bfa865993551ff3094ea275c08074375297ea52db7893bc1b2ec32db2f5275/c1bfa865993551ff3094ea275c08074375297ea52db7893bc1b2ec32db2f5275-json.log",
        "Name": "/loving_hofstadter",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": true,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "CgroupnsMode": "private",
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "private",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "KernelMemory": 0,
            "KernelMemoryTCP": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": null,
            "PidsLimit": null,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/asound",
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware"
            ],
            "ReadonlyPaths": [
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/24ecc1ea24b892ad87d3bbdd58351f235a272a936df7696ec20591293ee65c8f-init/diff:/var/lib/docker/overlay2/087443acb3b45742af615c06579778d73a8a7c8f25bfa791a7cb9a1e27d55877/diff:/var/lib/docker/overlay2/2c22f7190ce909bf03fd02651cb8e8752e083f1dd769ad3e3aa1d761ca8d11fa/diff:/var/lib/docker/overlay2/e625041ad36d0e5178fa79e8e06515fa9d352304211d7a9d159bc80f7cf84950/diff:/var/lib/docker/overlay2/053e9f13454daf2f356bda979b6b77911f8bb932d85b5a7c97792c26c313d933/diff:/var/lib/docker/overlay2/058a44c6d44c899874c7eb3a516deb065c3508c3b2cf3f1c4a6bbc3993e3f93f/diff:/var/lib/docker/overlay2/c4d6c8d429eecb114775855698d8919c8c74a866371071cf1cb777b1ea8eb311/diff",
                "MergedDir": "/var/lib/docker/overlay2/24ecc1ea24b892ad87d3bbdd58351f235a272a936df7696ec20591293ee65c8f/merged",
                "UpperDir": "/var/lib/docker/overlay2/24ecc1ea24b892ad87d3bbdd58351f235a272a936df7696ec20591293ee65c8f/diff",
                "WorkDir": "/var/lib/docker/overlay2/24ecc1ea24b892ad87d3bbdd58351f235a272a936df7696ec20591293ee65c8f/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "volume",
                "Name": "c00db497ab1871f687619f98ad4bec4f105531c74031b8e1324ca97d32fa2568",
                "Source": "/var/lib/docker/volumes/c00db497ab1871f687619f98ad4bec4f105531c74031b8e1324ca97d32fa2568/_data",
                "Destination": "/data",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "c1bfa8659935",
            "Domainname": "",
            "User": "",
            "AttachStdin": true,
            "AttachStdout": true,
            "AttachStderr": true,
            "ExposedPorts": {
                "9000/tcp": {}
            },
            "Tty": true,
            "OpenStdin": true,
            "StdinOnce": true,
            "Env": [
                "PATH=/opt/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "container=oci",
                "MINIO_ACCESS_KEY_FILE=access_key",
                "MINIO_SECRET_KEY_FILE=secret_key",
                "MINIO_ROOT_USER_FILE=access_key",
                "MINIO_ROOT_PASSWORD_FILE=secret_key",
                "MINIO_KMS_SECRET_KEY_FILE=kms_master_key",
                "MINIO_UPDATE_MINISIGN_PUBKEY=RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGav",
                "MINIO_CONFIG_ENV_FILE=config.env"
            ],
            "Cmd": [
                "server",
                "/data",
                "--console-address",
                ":9001"
            ],
            "Image": "quay.io/minio/minio",
            "Volumes": {
                "/data": {}
            },
            "WorkingDir": "",
            "Entrypoint": [
                "/usr/bin/docker-entrypoint.sh"
            ],
            "OnBuild": null,
            "Labels": {
                "architecture": "x86_64",
                "build-date": "2022-10-26T11:07:47",
                "com.redhat.component": "ubi8-minimal-container",
                "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
                "description": "MinIO object storage is fundamentally different. Designed for performance and the S3 API, it is 100% open-source. MinIO is ideal for large, private cloud environments with stringent security requirements and delivers mission-critical availability across a diverse range of workloads.",
                "distribution-scope": "public",
                "io.buildah.version": "1.26.2",
                "io.k8s.description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                "io.k8s.display-name": "Red Hat Universal Base Image 8 Minimal",
                "io.openshift.expose-services": "",
                "io.openshift.tags": "minimal rhel8",
                "maintainer": "MinIO Inc <dev@min.io>",
                "name": "MinIO",
                "release": "RELEASE.2022-11-26T22-43-32Z",
                "summary": "MinIO is a High Performance Object Storage, API compatible with Amazon S3 cloud storage service.",
                "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/ubi8-minimal/images/8.6-994",
                "vcs-ref": "50293e4ba66ca2194d8ee5a149a665cb9a375c71",
                "vcs-type": "git",
                "vendor": "MinIO Inc <dev@min.io>",
                "version": "RELEASE.2022-11-26T22-43-32Z"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "4bf7b605684efaa3d0e01cff9a9852c0c78345ff3abd60700a8447c9a5acf52c",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "9000/tcp": null
            },
            "SandboxKey": "/var/run/docker/netns/4bf7b605684e",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "4d9eb9ec8df7d4778b4cd89c27557dfc6a2122df86c7fb6e9aab36ced02c0518",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:02",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "d3516e9f9f589a61f4f8fef35690e0cfe46b388baebafedd3e968dbf4c88ca6a",
                    "EndpointID": "4d9eb9ec8df7d4778b4cd89c27557dfc6a2122df86c7fb6e9aab36ced02c0518",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02",
                    "DriverOpts": null
                }
            }
        }
    }
]

klauspost commented 1 year ago

@voidus Upgrade. Your logs show RELEASE.2022-09-01T23-53-36Z, not RELEASE.2022-11-26T22-43-32Z. This bug was fixed in more recent versions.

voidus commented 1 year ago

Still happening with RELEASE.2022-11-29T23-40-49Z in our test suite. I'll make sure it reproduces with just docker run and post the log here.

voidus commented 1 year ago

Did a docker system prune --volumes followed by docker system prune -a and ran again:

> docker run --rm -ti quay.io/minio/minio server /data --console-address :9001
Formatting 1st pool, 1 set(s), 1 drives per set.
WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
Warning: Default parity set to 0. This can lead to data loss.
WARNING: Detected default credentials 'minioadmin:minioadmin', we recommend that you change these values with 'MINIO_ROOT_USER' and 'MINIO_ROOT_PASSWORD' environment variables
MinIO Object Storage Server
Copyright: 2015-2022 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2022-11-29T23-40-49Z (go1.19.3 linux/amd64)

Status:         1 Online, 0 Offline. 
API: http://172.17.0.2:9000  http://127.0.0.1:9000 
RootUser: minioadmin 
RootPass: minioadmin 
Console: http://172.17.0.2:9001 http://127.0.0.1:9001 
RootUser: minioadmin 
RootPass: minioadmin 

Command-line: https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart
   $ mc alias set myminio http://172.17.0.2:9000 minioadmin minioadmin

Documentation: https://min.io/docs/minio/linux/index.html

API: SYSTEM()
Time: 11:50:22 UTC 12/02/2022
DeploymentID: 82cf9dc5-122c-49a6-801b-fc760ead88e6
Error: readObjectStart: expect { or n, but found , error found in #1 byte of ...||..., bigger context ...||... (*errors.errorString)
       5: internal/logger/logger.go:258:logger.LogIf()
       4: cmd/xl-storage.go:2262:cmd.(*xlStorage).RenameData()
       3: cmd/xl-storage-disk-id-check.go:374:cmd.(*xlStorageDiskIDCheck).RenameData()
       2: cmd/erasure-object.go:762:cmd.renameData.func1()
       1: internal/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()

API: SYSTEM()
Time: 11:50:22 UTC 12/02/2022
DeploymentID: 82cf9dc5-122c-49a6-801b-fc760ead88e6
Error: readObjectStart: expect { or n, but found , error found in #1 byte of ...||..., bigger context ...||... (*errors.errorString)
       5: internal/logger/logger.go:258:logger.LogIf()
       4: cmd/xl-storage.go:2262:cmd.(*xlStorage).RenameData()
       3: cmd/xl-storage-disk-id-check.go:374:cmd.(*xlStorageDiskIDCheck).RenameData()
       2: cmd/erasure-object.go:762:cmd.renameData.func1()
       1: internal/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()

API: SYSTEM()
Time: 11:51:22 UTC 12/02/2022
DeploymentID: 82cf9dc5-122c-49a6-801b-fc760ead88e6
Error: readObjectStart: expect { or n, but found , error found in #1 byte of ...||..., bigger context ...||... (*errors.errorString)
       5: internal/logger/logger.go:258:logger.LogIf()
       4: cmd/xl-storage.go:2262:cmd.(*xlStorage).RenameData()
       3: cmd/xl-storage-disk-id-check.go:374:cmd.(*xlStorageDiskIDCheck).RenameData()
       2: cmd/erasure-object.go:762:cmd.renameData.func1()
       1: internal/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()

API: SYSTEM()
Time: 11:51:22 UTC 12/02/2022
DeploymentID: 82cf9dc5-122c-49a6-801b-fc760ead88e6
Error: readObjectStart: expect { or n, but found , error found in #1 byte of ...||..., bigger context ...||... (*errors.errorString)
       5: internal/logger/logger.go:258:logger.LogIf()
       4: cmd/xl-storage.go:2262:cmd.(*xlStorage).RenameData()
       3: cmd/xl-storage-disk-id-check.go:374:cmd.(*xlStorageDiskIDCheck).RenameData()
       2: cmd/erasure-object.go:762:cmd.renameData.func1()
       1: internal/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()
^CExiting on signal: INTERRUPT
voidus commented 1 year ago

As I wrote, any pointers on how to debug this further would be appreciated. There are no requests hitting the server; this is an unedited log.

voidus commented 1 year ago

This log doesn't include it, but I'm seeing the line from the issue title in our test logs. I can run the plain container a bit longer if you think that might be insightful.

klauspost commented 1 year ago

Likely corrupted objects on disk. Should be fixable with healing.
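For anyone landing here: healing is normally triggered with the `mc` client. A hedged sketch (the alias name, endpoint, and credentials are the defaults from the logs above; adjust to your deployment, and the commands only run if `mc` is actually installed):

```shell
# Sketch: trigger MinIO's healer via the mc client.
# Assumes a server on :9000 with the default credentials shown earlier.
ALIAS="myminio"
HEAL_CMD="mc admin heal -r $ALIAS"      # -r heals buckets and objects recursively
if command -v mc >/dev/null 2>&1; then
  mc alias set "$ALIAS" http://127.0.0.1:9000 minioadmin minioadmin && $HEAL_CMD \
    || echo "mc commands failed; is a server listening on :9000?"
else
  echo "mc client not installed; would run: $HEAL_CMD"
fi
```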

voidus commented 1 year ago

I'm not exactly sure how this applies here. This is a fresh docker container without volume or port mappings, and it happens pretty reliably. Would there be a reason why the objects on my disk get corrupted after a few minutes with zero requests sent to the server?

harshavardhana commented 1 year ago

It is because your backend disk is faking writes and is not persistent. You need to provide real disks with a real filesystem for the data partition.

The overlay filesystem presumably in use here is losing data sequences.

harshavardhana commented 1 year ago

You should start the container via docker run -v /my-local-path:/data minio/minio server /data. If you are using some kind of overlay FS, it can fake POSIX semantics and lose data in between.

Nothing to do with MinIO here, just a setup issue on your end.

voidus commented 1 year ago

Could you please elaborate or drop a relevant link to "losing data sequences"? I'm not familiar with that term and searching didn't bring up anything.

I looked into journalctl -k and found occasional i2c i2c-2: sendbytes: NAK bailout. lines, but I don't think that is relevant; feel free to suggest otherwise. I'm using a SATA Samsung SSD 860 EVO 1TB formatted with f2fs.


I'm still having the issue with our docker-compose setup. I've been trying more things:

# no errors with this (yet)
> docker run --rm -ti -v "$(mktemp -d):/data" quay.io/minio/minio server /data --console-address :9001
# <output omitted>

# this produces an error, not sure if it's the same one
> docker run --rm -ti -v "foo:/data" quay.io/minio/minio server /data --console-address :9001
Formatting 1st pool, 1 set(s), 1 drives per set.
WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
Warning: Default parity set to 0. This can lead to data loss.
WARNING: Detected default credentials 'minioadmin:minioadmin', we recommend that you change these values with 'MINIO_ROOT_USER' and 'MINIO_ROOT_PASSWORD' environment variables
MinIO Object Storage Server
Copyright: 2015-2022 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2022-11-29T23-40-49Z (go1.19.3 linux/amd64)

Status:         1 Online, 0 Offline. 
API: http://172.17.0.3:9000  http://127.0.0.1:9000 
RootUser: minioadmin 
RootPass: minioadmin 
Console: http://172.17.0.3:9001 http://127.0.0.1:9001 
RootUser: minioadmin 
RootPass: minioadmin 

Command-line: https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart
   $ mc alias set myminio http://172.17.0.3:9000 minioadmin minioadmin

Documentation: https://min.io/docs/minio/linux/index.html

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ You are running an older version of MinIO released 2 days ago ┃
┃ Update: Run `mc admin update`                                 ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

API: SYSTEM()
Time: 16:29:30 UTC 12/06/2022
DeploymentID: 709f69d3-18fd-4d76-8ded-61c1b0fd1977
Error: readObjectStart: expect { or n, but found , error found in #1 byte of ...||..., bigger context ...||... (*errors.errorString)
       5: internal/logger/logger.go:258:logger.LogIf()
       4: cmd/xl-storage.go:2262:cmd.(*xlStorage).RenameData()
       3: cmd/xl-storage-disk-id-check.go:374:cmd.(*xlStorageDiskIDCheck).RenameData()
       2: cmd/erasure-object.go:762:cmd.renameData.func1()
       1: internal/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()

<repeats>

Here's the relevant docker inspect output

> docker inspect dazzling_mclean | jq '.[0].Mounts'
[
  {
    "Type": "bind",
    "Source": "/tmp/nix-shell.4lT9K6/tmp.AGR0vjPqIJ",
    "Destination": "/data",
    "Mode": "",
    "RW": true,
    "Propagation": "rprivate"
  }
]
> docker exec dazzling_mclean mount | grep 'on /data'
tmpfs on /data type tmpfs (rw,nosuid,nodev,nr_inodes=1048576,inode64)

> docker inspect youthful_napier | jq '.[0].Mounts'
[
  {
    "Type": "volume",
    "Name": "foo",
    "Source": "/var/lib/docker/volumes/foo/_data",
    "Destination": "/data",
    "Driver": "local",
    "Mode": "z",
    "RW": true,
    "Propagation": ""
  }
]
> docker exec youthful_napier mount | grep 'on /data' 
/dev/mapper/vg-root on /data type f2fs (rw,noatime,lazytime,background_gc=on,nodiscard,no_heap,user_xattr,inline_xattr,acl,inline_data,inline_dentry,flush_merge,extent_cache,mode=adaptive,active_logs=6,alloc_mode=default,checkpoint_merge,fsync_mode=posix,discard_unit=block,memory=normal)

This makes me think that it's an f2fs issue, since it's working on tmpfs. I'll try to find out more.

voidus commented 1 year ago

I looked at disk SMART data and ran a short self test, everything looks peachy. Running linux 6.0.8-arch1-1 here, basically stock arch linux.

voidus commented 1 year ago

Also, the Docker image page suggests running it without a persistent volume. Is that a documentation bug? (I'm assuming Podman and Docker do the same thing with overlays, which is probably wrong, but since the image is on Docker Hub, if running on Docker without -v is unsupported, that should be clearly stated.)

voidus commented 1 year ago

From the f2fs docs:

fsync_mode=%s          Control the policy of fsync. Currently supports "posix",
                       "strict", and "nobarrier". In "posix" mode, which is
                       default, fsync will follow POSIX semantics and does a
                       light operation to improve the filesystem performance.
                       In "strict" mode, fsync will be heavy and behaves in line
                       with xfs, ext4 and btrfs, where xfstest generic/342 will
                       pass, but the performance will regress. "nobarrier" is
                       based on "posix", but doesn't issue flush command for
                       non-atomic files likewise "nobarrier" mount option.

Note that I'm using posix. Please re-open; this is not the setup problem you hinted at.
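To check which fsync_mode a given mount is actually using, the option can be extracted from the mount(8) output; a sketch, using an abbreviated version of the mount line pasted earlier in this thread:

```shell
# Pull the f2fs fsync_mode option out of a mount(8) output line.
# The sample line is abbreviated from the real output shown above;
# on a live system you would pipe `mount` itself through the grep.
line='/dev/mapper/vg-root on /data type f2fs (rw,noatime,lazytime,fsync_mode=posix)'
mode=$(printf '%s\n' "$line" | grep -o 'fsync_mode=[a-z]*')
echo "$mode"    # prints: fsync_mode=posix
```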

voidus commented 1 year ago

@harshavardhana please reopen the issue, using a volume doesn't change anything.

spantaleev commented 1 year ago

I've hit the same problem but with the .minio.sys/pool.bin file being reported as "file corrupted".

I'm also on f2fs. Changing fsync_mode from the implicit default (posix) to strict and starting from scratch (empty MinIO data/configuration) fixed the problem.
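If strict mode fixes it for you too, it can be made persistent with an fstab entry along these lines (the device, mountpoint, and other options are illustrative, taken from the mount output earlier in the thread; the relevant change is `fsync_mode=strict`):

```
/dev/mapper/vg-root  /data  f2fs  rw,noatime,lazytime,fsync_mode=strict  0  2
```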

klauspost commented 1 year ago

We cannot stop you from shooting yourself in the foot. We recommend only xfs for a reason; anything else is at your own risk.
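For completeness, setting up a dedicated xfs-backed data directory for the container looks roughly like this (a sketch: the device and mountpoint are illustrative, and since mkfs.xfs is destructive, the commands are only printed here rather than executed):

```shell
# Sketch of the recommended setup: a real xfs filesystem bind-mounted
# into the container as /data. DEVICE and MOUNTPOINT are illustrative.
DEVICE="/dev/sdb1"
MOUNTPOINT="/mnt/minio-data"
echo "would run: mkfs.xfs $DEVICE"
echo "would run: mount -t xfs $DEVICE $MOUNTPOINT"
echo "would run: docker run -v $MOUNTPOINT:/data quay.io/minio/minio server /data"
```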