EnterpriseDB / barman

Barman - Backup and Recovery Manager for PostgreSQL
https://www.pgbarman.org/
GNU General Public License v3.0

Barman fails during backup with "can't extract backup id" #835

Closed: btxbtx closed this issue 11 months ago

btxbtx commented 1 year ago

I am running barman within the context of cloudnative-pg (pg-operator chart version 0.18.0). I haven't changed anything about the default barman configuration it uses. I noticed that my backups appear to run, at least partially, before eventually failing with `can't extract backup id`.

The logs:

my-cluster-2 postgres {
  "level": "info",
  "ts": "2023-07-26T09:12:07Z",
  "msg": "WAL archiving is working",
  "logging_pod": "my-cluster-2"
}
my-cluster-2 postgres {
  "level": "info",
  "ts": "2023-07-26T09:12:07Z",
  "msg": "Backup started",
  "backupName": "cnpg-backup",
  "backupNamespace": "cnpg-backup",
  "logging_pod": "my-cluster-2",
  "options": [
    "--user",
    "postgres",
    "--name",
    "backup-1690362727",
    "--endpoint-url",
    "<my s3 compatible storage>",
    "--cloud-provider",
    "aws-s3",
    "<my s3 destination>",
    "my-cluster"
  ]
}
my-cluster-2 postgres {
  "level": "info",
  "ts": "2023-07-26T09:12:29Z",
  "msg": "Backup completed",
  "backupName": "cnpg-backup",
  "backupNamespace": "cnpg-backup",
  "logging_pod": "my-cluster-2"
}
my-cluster-2 postgres {
  "level": "error",
  "ts": "2023-07-26T09:12:30Z",
  "logger": "barman",
  "msg": "Can't extract backup id",
  "logging_pod": "my-cluster-2",
  "command": "barman-cloud-backup-show",
  "options": [
    "--format",
    "json",
    "--endpoint-url",
    "<my s3 compatible storage>",
    "--cloud-provider",
    "aws-s3",
    "<my s3 destination>",
    "my-cluster",
    "backup-1690362727"
  ],
  "stdout": "",
  "stderr": "2023-07-26 09:12:30,354 [273] ERROR: Barman cloud backup show exception: Unknown backup 'backup-1690362727' for server 'my-cluster'\n",
  "error": "exit status 4",
  "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman.executeQueryCommand\n\tpkg/management/barman/backuplist.go:87\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman.GetBackupByName\n\tpkg/management/barman/backuplist.go:140\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres.(*BackupCommand).getExecutedBackupInfo\n\tpkg/management/postgres/backup.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres.(*BackupCommand).takeBackup\n\tpkg/management/postgres/backup.go:352\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres.(*BackupCommand).run\n\tpkg/management/postgres/backup.go:267"
}
my-cluster-2 postgres {
  "level": "error",
  "ts": "2023-07-26T09:12:30Z",
  "msg": "Backup failed",
  "backupName": "cnpg-backup",
  "backupNamespace": "cnpg-backup",
  "logging_pod": "my-cluster-2",
  "error": "exit status 4",
  "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres.(*BackupCommand).run\n\tpkg/management/postgres/backup.go:271"
}

Happy to provide more config if it is helpful, just let me know what is needed.
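
For context, putting the two `options` arrays above together, the operator appears to run the equivalent of the following two commands (placeholders kept exactly as in the logs; credentials are presumably supplied via environment variables). The first step logs "Backup completed" and the second is the one that exits with status 4:

```
# Take the base backup under the generated name (this step reports success)
barman-cloud-backup --user postgres --name backup-1690362727 \
  --endpoint-url <my s3 compatible storage> --cloud-provider aws-s3 \
  <my s3 destination> my-cluster

# Look the backup up again by the same name (this is the step that fails)
barman-cloud-backup-show --format json \
  --endpoint-url <my s3 compatible storage> --cloud-provider aws-s3 \
  <my s3 destination> my-cluster backup-1690362727
```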

btxbtx commented 1 year ago

This seems related to a fix that @mikewallace1979 implemented in 3.4.1: https://github.com/EnterpriseDB/barman/releases/tag/release%2F3.4.1

I didn't mention in the original post, but this is happening with barman 3.5.0.
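
(If it helps to confirm the version actually in use, something like the following should show what the pod has installed; the pod name is just the example from the logs above.)

```
kubectl exec my-cluster-2 -- barman-cloud-backup --version
```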

mikewallace1979 commented 1 year ago

Hi @btxbtx - I've created a local cnpg cluster using the 0.18.0 chart to reproduce this issue, but so far my backups to a local minio container are working as expected.

Are you able to get a shell on one of the PostgreSQL pods via kubectl exec? If so, could you please run the following commands and post the output here?

```
AWS_ACCESS_KEY_ID=<access key id> AWS_SECRET_ACCESS_KEY=<secret access key> barman-cloud-backup-list <my s3 destination> my-cluster --endpoint-url=<my s3 compatible storage>
AWS_ACCESS_KEY_ID=<access key id> AWS_SECRET_ACCESS_KEY=<secret access key> barman-cloud-backup-show <my s3 destination> my-cluster backup-1690362727 --endpoint-url=<my s3 compatible storage>
```

If either of the above commands fails, please retry it with the -vv option and post the output.
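
(For reference, a shell on an instance pod can usually be obtained with something along these lines; the namespace and pod name below are placeholders.)

```
kubectl exec -it -n <namespace> my-cluster-2 -- bash
```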

If you can't get a shell on the pods, could you list the backup IDs saved in your object store? They should be available under the my-cluster/base prefix.
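
If the aws CLI (or any other S3 client) is available somewhere, the prefixes under my-cluster/base can be listed with something like this; the bucket name and endpoint are placeholders:

```
AWS_ACCESS_KEY_ID=<access key id> AWS_SECRET_ACCESS_KEY=<secret access key> \
  aws s3 ls s3://<bucket>/my-cluster/base/ --endpoint-url=<my s3 compatible storage>
```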

Also, which s3-compatible object store are you using?

Thanks.

btxbtx commented 1 year ago

Hi @mikewallace1979, thank you for the instructions.

I am using Linode Object Storage.

I've tested using the 0.18.0 chart on KinD, and on a managed linode cluster.

Output of barman-cloud-backup-list -vv, with the region, endpoint URL, and credentials redacted:

backup-list output ``` 2023-08-23 03:44:27,799 [1447] DEBUG: Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane 2023-08-23 03:44:27,800 [1447] DEBUG: Changing event name from before-call.apigateway to before-call.api-gateway 2023-08-23 03:44:27,801 [1447] DEBUG: Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict 2023-08-23 03:44:27,801 [1447] DEBUG: Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration 2023-08-23 03:44:27,802 [1447] DEBUG: Changing event name from before-parameter-build.route53 to before-parameter-build.route-53 2023-08-23 03:44:27,802 [1447] DEBUG: Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search 2023-08-23 03:44:27,802 [1447] DEBUG: Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section 2023-08-23 03:44:27,803 [1447] DEBUG: Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask 2023-08-23 03:44:27,803 [1447] DEBUG: Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section 2023-08-23 03:44:27,803 [1447] DEBUG: Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search 2023-08-23 03:44:27,803 [1447] DEBUG: Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section 2023-08-23 03:44:27,841 [1447] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/boto3/data/s3/2006-03-01/resources-1.json 2023-08-23 03:44:27,845 [1447] DEBUG: IMDS ENDPOINT: http://169.254.169.254/ 2023-08-23 03:44:27,846 [1447] DEBUG: Looking for credentials via: env 2023-08-23 03:44:27,846 [1447] INFO: Found credentials in environment variables. 
2023-08-23 03:44:27,850 [1447] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/endpoints.json 2023-08-23 03:44:27,862 [1447] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/sdk-default-configuration.json 2023-08-23 03:44:27,862 [1447] DEBUG: Event choose-service-name: calling handler 2023-08-23 03:44:27,872 [1447] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/s3/2006-03-01/service-2.json 2023-08-23 03:44:27,892 [1447] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/s3/2006-03-01/endpoint-rule-set-1.json.gz 2023-08-23 03:44:27,897 [1447] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/partitions.json 2023-08-23 03:44:27,899 [1447] DEBUG: Event creating-client-class.s3: calling handler 2023-08-23 03:44:27,899 [1447] DEBUG: Event creating-client-class.s3: calling handler ._handler at 0xffff8e9969d0> 2023-08-23 03:44:27,907 [1447] DEBUG: Event creating-client-class.s3: calling handler 2023-08-23 03:44:27,907 [1447] DEBUG: Setting s3 timeout as (60, 60) 2023-08-23 03:44:27,909 [1447] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/_retry.json 2023-08-23 03:44:27,909 [1447] DEBUG: Registering retry handlers for service: s3 2023-08-23 03:44:27,909 [1447] DEBUG: Registering S3 region redirector handler 2023-08-23 03:44:27,909 [1447] DEBUG: Loading s3:s3 2023-08-23 03:44:27,910 [1447] DEBUG: Event before-endpoint-resolution.s3: calling handler 2023-08-23 03:44:27,910 [1447] DEBUG: Event before-endpoint-resolution.s3: calling handler > 2023-08-23 03:44:27,910 [1447] DEBUG: Calling endpoint provider with parameters: {'Bucket': 'barmantest', 'Region': '', 'UseFIPS': False, 'UseDualStack': False, 'Endpoint': 'https:/.linodeobjects.com', 'ForcePathStyle': True, 'Accelerate': False, 'UseGlobalEndpoint': True, 'DisableMultiRegionAccessPoints': False, 'UseArnRegion': True} 2023-08-23 03:44:27,911 [1447] DEBUG: Endpoint provider result: https:/.linodeobjects.com/barmantest 2023-08-23 03:44:27,911 [1447] DEBUG: Selecting from endpoint provider's list of auth schemes: "sigv4". 
User selected auth scheme is: "None" 2023-08-23 03:44:27,911 [1447] DEBUG: Selected auth type "v4" as "v4" with signing context params: {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True} 2023-08-23 03:44:27,911 [1447] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler 2023-08-23 03:44:27,911 [1447] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler 2023-08-23 03:44:27,911 [1447] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler > 2023-08-23 03:44:27,911 [1447] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler 2023-08-23 03:44:27,911 [1447] DEBUG: Event before-call.s3.HeadBucket: calling handler 2023-08-23 03:44:27,911 [1447] DEBUG: Event before-call.s3.HeadBucket: calling handler 2023-08-23 03:44:27,911 [1447] DEBUG: Event before-call.s3.HeadBucket: calling handler 2023-08-23 03:44:27,911 [1447] DEBUG: Making request for OperationModel(name=HeadBucket) with params: {'url_path': '', 'query_string': {}, 'method': 'HEAD', 'headers': {'User-Agent': 'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource'}, 'body': b'', 'auth_path': '/barmantest/', 'url': 'https:/.linodeobjects.com/barmantest', 'context': {'client_region': '', 'client_config': , 'has_streaming_input': False, 'auth_type': 'v4', 'signing': {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True}, 's3_redirect': {'redirected': False, 'bucket': 'barmantest', 'params': {'Bucket': 'barmantest'}}}} 2023-08-23 03:44:27,911 [1447] DEBUG: Event request-created.s3.HeadBucket: calling handler > 2023-08-23 03:44:27,911 [1447] DEBUG: Event choose-signer.s3.HeadBucket: calling handler > 2023-08-23 03:44:27,911 [1447] DEBUG: Event choose-signer.s3.HeadBucket: calling handler 2023-08-23 03:44:27,911 [1447] DEBUG: Event before-sign.s3.HeadBucket: calling handler 2023-08-23 03:44:27,911 [1447] DEBUG: Calculating signature using v4 auth. 
2023-08-23 03:44:27,911 [1447] DEBUG: CanonicalRequest: HEAD /barmantest host:cnpg..linodeobjects.com x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20230823T034427Z host;x-amz-content-sha256;x-amz-date e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2023-08-23 03:44:27,911 [1447] DEBUG: StringToSign: AWS4-HMAC-SHA256 20230823T034427Z 20230823//s3/aws4_request 50d7c9304957b754adfc0aee13de51ef776b72c477ad5eb8aa2473c38027df23 2023-08-23 03:44:27,911 [1447] DEBUG: Signature: 4cb9a1885c257dd344099a48bff2ccff11b7d296140a961faa72c8824bc263cb 2023-08-23 03:44:27,911 [1447] DEBUG: Event request-created.s3.HeadBucket: calling handler 2023-08-23 03:44:27,912 [1447] DEBUG: Sending http request: .linodeobjects.com/barmantest, headers={'User-Agent': b'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource', 'X-Amz-Date': b'20230823T034427Z', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=/20230823//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=4cb9a1885c257dd344099a48bff2ccff11b7d296140a961faa72c8824bc263cb', 'amz-sdk-invocation-id': b'bddcc9b5-1b57-4760-9790-9b6fee2da29f', 'amz-sdk-request': b'attempt=1'}> 2023-08-23 03:44:27,912 [1447] DEBUG: Certificate path: /usr/local/lib/python3.9/dist-packages/certifi/cacert.pem 2023-08-23 03:44:27,912 [1447] DEBUG: Starting new HTTPS connection (1): cnpg..linodeobjects.com:443 2023-08-23 03:44:28,783 [1447] DEBUG: https:/.linodeobjects.com:443 "HEAD /barmantest HTTP/1.1" 200 0 2023-08-23 03:44:28,784 [1447] DEBUG: Response headers: {'Date': 'Wed, 23 Aug 2023 03:44:28 GMT', 'Content-Type': 'binary/octet-stream', 'Content-Length': '0', 'Connection': 'keep-alive', 'Accept-Ranges': 'bytes', 'Last-Modified': 'Wed, 23 Aug 2023 03:21:23 GMT', 'x-rgw-object-type': 'Normal', 'ETag': '"d41d8cd98f00b204e9800998ecf8427e"', 'x-amz-request-id': 'tx00000c4d571ed1f028fe5-0064e5809c-47f4fba5-default'} 2023-08-23 03:44:28,786 [1447] DEBUG: Response body: b'' 2023-08-23 03:44:28,787 [1447] DEBUG: Event needs-retry.s3.HeadBucket: calling handler 2023-08-23 03:44:28,787 [1447] DEBUG: No retry needed. 2023-08-23 03:44:28,788 [1447] DEBUG: Event needs-retry.s3.HeadBucket: calling handler > 2023-08-23 03:44:28,788 [1447] DEBUG: Event before-endpoint-resolution.s3: calling handler 2023-08-23 03:44:28,788 [1447] DEBUG: Event before-endpoint-resolution.s3: calling handler > 2023-08-23 03:44:28,789 [1447] DEBUG: Calling endpoint provider with parameters: {'Bucket': 'barmantest', 'Region': '', 'UseFIPS': False, 'UseDualStack': False, 'Endpoint': 'https:/.linodeobjects.com', 'ForcePathStyle': True, 'Accelerate': False, 'UseGlobalEndpoint': True, 'DisableMultiRegionAccessPoints': False, 'UseArnRegion': True} 2023-08-23 03:44:28,789 [1447] DEBUG: Endpoint provider result: https:/.linodeobjects.com/barmantest 2023-08-23 03:44:28,790 [1447] DEBUG: Selecting from endpoint provider's list of auth schemes: "sigv4". 
User selected auth scheme is: "None" 2023-08-23 03:44:28,790 [1447] DEBUG: Selected auth type "v4" as "v4" with signing context params: {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True} 2023-08-23 03:44:28,790 [1447] DEBUG: Event before-parameter-build.s3.ListObjectsV2: calling handler 2023-08-23 03:44:28,790 [1447] DEBUG: Event before-parameter-build.s3.ListObjectsV2: calling handler 2023-08-23 03:44:28,790 [1447] DEBUG: Event before-parameter-build.s3.ListObjectsV2: calling handler 2023-08-23 03:44:28,791 [1447] DEBUG: Event before-parameter-build.s3.ListObjectsV2: calling handler > 2023-08-23 03:44:28,791 [1447] DEBUG: Event before-parameter-build.s3.ListObjectsV2: calling handler 2023-08-23 03:44:28,792 [1447] DEBUG: Event before-call.s3.ListObjectsV2: calling handler 2023-08-23 03:44:28,792 [1447] DEBUG: Event before-call.s3.ListObjectsV2: calling handler 2023-08-23 03:44:28,792 [1447] DEBUG: Event before-call.s3.ListObjectsV2: calling handler 2023-08-23 03:44:28,792 [1447] DEBUG: Making request for OperationModel(name=ListObjectsV2) with params: {'url_path': '?list-type=2', 'query_string': {'prefix': 'cnpg-cluster-1/base/', 'delimiter': '/', 'encoding-type': 'url'}, 'method': 'GET', 'headers': {'User-Agent': 'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource'}, 'body': b'', 'auth_path': '/barmantest?list-type=2', 'url': 'https:/.linodeobjects.com/barmantest?list-type=2&prefix=cnpg-cluster-1%2Fbase%2F&delimiter=%2F&encoding-type=url', 'context': {'client_region': '', 'client_config': , 'has_streaming_input': False, 'auth_type': 'v4', 'signing': {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True}, 'encoding_type_auto_set': True, 's3_redirect': {'redirected': False, 'bucket': 'barmantest', 'params': {'Bucket': 'barmantest', 'Prefix': 'cnpg-cluster-1/base/', 'Delimiter': '/', 'EncodingType': 'url'}}}} 2023-08-23 03:44:28,792 [1447] DEBUG: Event request-created.s3.ListObjectsV2: calling handler > 2023-08-23 03:44:28,792 [1447] DEBUG: Event choose-signer.s3.ListObjectsV2: calling handler > 2023-08-23 03:44:28,792 [1447] DEBUG: Event choose-signer.s3.ListObjectsV2: calling handler 2023-08-23 03:44:28,793 [1447] DEBUG: Event before-sign.s3.ListObjectsV2: calling handler 2023-08-23 03:44:28,793 [1447] DEBUG: Calculating signature using v4 auth. 
2023-08-23 03:44:28,793 [1447] DEBUG: CanonicalRequest: GET /barmantest delimiter=%2F&encoding-type=url&list-type=2&prefix=cnpg-cluster-1%2Fbase%2F host:cnpg..linodeobjects.com x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20230823T034428Z host;x-amz-content-sha256;x-amz-date e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2023-08-23 03:44:28,793 [1447] DEBUG: StringToSign: AWS4-HMAC-SHA256 20230823T034428Z 20230823//s3/aws4_request 3677b32aaad4d75465e15e2ed1cc117af574dcbd12719a4a1e070bf5e3ca5120 2023-08-23 03:44:28,794 [1447] DEBUG: Signature: 5c6c29601e7c103cbb95da62249fd3ad7b6e5ddcce9ec1b088879cccc31aeb82 2023-08-23 03:44:28,794 [1447] DEBUG: Event request-created.s3.ListObjectsV2: calling handler 2023-08-23 03:44:28,794 [1447] DEBUG: Sending http request: .linodeobjects.com/barmantest?list-type=2&prefix=cnpg-cluster-1%2Fbase%2F&delimiter=%2F&encoding-type=url, headers={'User-Agent': b'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource', 'X-Amz-Date': b'20230823T034428Z', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=/20230823//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=5c6c29601e7c103cbb95da62249fd3ad7b6e5ddcce9ec1b088879cccc31aeb82', 'amz-sdk-invocation-id': b'4851efad-0a92-46b3-a8b3-d6500a49eecc', 'amz-sdk-request': b'attempt=1'}> 2023-08-23 03:44:28,795 [1447] DEBUG: Certificate path: /usr/local/lib/python3.9/dist-packages/certifi/cacert.pem 2023-08-23 03:44:29,005 [1447] DEBUG: https:/.linodeobjects.com:443 "GET /barmantest?list-type=2&prefix=cnpg-cluster-1%2Fbase%2F&delimiter=%2F&encoding-type=url HTTP/1.1" 200 0 2023-08-23 03:44:29,005 [1447] DEBUG: Response headers: {'Date': 'Wed, 23 Aug 2023 03:44:29 GMT', 'Content-Type': 'binary/octet-stream', 'Content-Length': '0', 'Connection': 'keep-alive', 'Accept-Ranges': 'bytes', 'Last-Modified': 'Wed, 23 Aug 2023 03:21:23 GMT', 'x-rgw-object-type': 'Normal', 'ETag': '"d41d8cd98f00b204e9800998ecf8427e"', 'x-amz-request-id': 'tx00000dbf59d3f447a64f7-0064e5809d-480a84b3-default'} 2023-08-23 03:44:29,005 [1447] DEBUG: Response body: b'' 2023-08-23 03:44:29,006 [1447] DEBUG: Event needs-retry.s3.ListObjectsV2: calling handler 2023-08-23 03:44:29,006 [1447] DEBUG: No retry needed. 2023-08-23 03:44:29,015 [1447] DEBUG: Event needs-retry.s3.ListObjectsV2: calling handler > 2023-08-23 03:44:29,015 [1447] DEBUG: Event after-call.s3.ListObjectsV2: calling handler Backup ID End Time Begin Wal Archival Status Name ```

Output of barman-cloud-backup-show -vv, with the region, endpoint URL, and credentials redacted:

backup-show output ``` 2023-08-23 03:38:13,805 [1317] DEBUG: Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane 2023-08-23 03:38:13,806 [1317] DEBUG: Changing event name from before-call.apigateway to before-call.api-gateway 2023-08-23 03:38:13,806 [1317] DEBUG: Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict 2023-08-23 03:38:13,807 [1317] DEBUG: Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration 2023-08-23 03:38:13,807 [1317] DEBUG: Changing event name from before-parameter-build.route53 to before-parameter-build.route-53 2023-08-23 03:38:13,807 [1317] DEBUG: Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search 2023-08-23 03:38:13,808 [1317] DEBUG: Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section 2023-08-23 03:38:13,809 [1317] DEBUG: Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask 2023-08-23 03:38:13,809 [1317] DEBUG: Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section 2023-08-23 03:38:13,809 [1317] DEBUG: Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search 2023-08-23 03:38:13,809 [1317] DEBUG: Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section 2023-08-23 03:38:13,832 [1317] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/boto3/data/s3/2006-03-01/resources-1.json 2023-08-23 03:38:13,834 [1317] DEBUG: IMDS ENDPOINT: http://169.254.169.254/ 2023-08-23 03:38:13,835 [1317] DEBUG: Looking for credentials via: env 2023-08-23 03:38:13,835 [1317] INFO: Found credentials in environment variables. 
2023-08-23 03:38:13,839 [1317] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/endpoints.json 2023-08-23 03:38:13,852 [1317] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/sdk-default-configuration.json 2023-08-23 03:38:13,852 [1317] DEBUG: Event choose-service-name: calling handler 2023-08-23 03:38:13,871 [1317] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/s3/2006-03-01/service-2.json 2023-08-23 03:38:13,897 [1317] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/s3/2006-03-01/endpoint-rule-set-1.json.gz 2023-08-23 03:38:13,901 [1317] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/partitions.json 2023-08-23 03:38:13,903 [1317] DEBUG: Event creating-client-class.s3: calling handler 2023-08-23 03:38:13,904 [1317] DEBUG: Event creating-client-class.s3: calling handler ._handler at 0xffffa5a08940> 2023-08-23 03:38:13,911 [1317] DEBUG: Event creating-client-class.s3: calling handler 2023-08-23 03:38:13,912 [1317] DEBUG: Setting s3 timeout as (60, 60) 2023-08-23 03:38:13,913 [1317] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/_retry.json 2023-08-23 03:38:13,913 [1317] DEBUG: Registering retry handlers for service: s3 2023-08-23 03:38:13,913 [1317] DEBUG: Registering S3 region redirector handler 2023-08-23 03:38:13,913 [1317] DEBUG: Loading s3:s3 2023-08-23 03:38:13,914 [1317] DEBUG: Event before-endpoint-resolution.s3: calling handler 2023-08-23 03:38:13,914 [1317] DEBUG: Event before-endpoint-resolution.s3: calling handler > 2023-08-23 03:38:13,914 [1317] DEBUG: Calling endpoint provider with parameters: {'Bucket': 'barmantest', 'Region': '', 'UseFIPS': False, 'UseDualStack': False, 'Endpoint': 'https://.linodeobjects.com', 'ForcePathStyle': True, 'Accelerate': False, 'UseGlobalEndpoint': True, 'DisableMultiRegionAccessPoints': False, 'UseArnRegion': True} 2023-08-23 03:38:13,915 [1317] DEBUG: Endpoint provider result: https://.linodeobjects.com/barmantest 2023-08-23 03:38:13,915 [1317] DEBUG: Selecting from endpoint provider's list of auth schemes: "sigv4". 
User selected auth scheme is: "None" 2023-08-23 03:38:13,915 [1317] DEBUG: Selected auth type "v4" as "v4" with signing context params: {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True} 2023-08-23 03:38:13,915 [1317] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler 2023-08-23 03:38:13,915 [1317] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler 2023-08-23 03:38:13,915 [1317] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler > 2023-08-23 03:38:13,915 [1317] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler 2023-08-23 03:38:13,915 [1317] DEBUG: Event before-call.s3.HeadBucket: calling handler 2023-08-23 03:38:13,915 [1317] DEBUG: Event before-call.s3.HeadBucket: calling handler 2023-08-23 03:38:13,915 [1317] DEBUG: Event before-call.s3.HeadBucket: calling handler 2023-08-23 03:38:13,915 [1317] DEBUG: Making request for OperationModel(name=HeadBucket) with params: {'url_path': '', 'query_string': {}, 'method': 'HEAD', 'headers': {'User-Agent': 'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource'}, 'body': b'', 'auth_path': '/barmantest/', 'url': 'https://.linodeobjects.com/barmantest', 'context': {'client_region': '', 'client_config': , 'has_streaming_input': False, 'auth_type': 'v4', 'signing': {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True}, 's3_redirect': {'redirected': False, 'bucket': 'barmantest', 'params': {'Bucket': 'barmantest'}}}} 2023-08-23 03:38:13,916 [1317] DEBUG: Event request-created.s3.HeadBucket: calling handler > 2023-08-23 03:38:13,916 [1317] DEBUG: Event choose-signer.s3.HeadBucket: calling handler > 2023-08-23 03:38:13,916 [1317] DEBUG: Event choose-signer.s3.HeadBucket: calling handler 2023-08-23 03:38:13,916 [1317] DEBUG: Event before-sign.s3.HeadBucket: calling handler 2023-08-23 03:38:13,916 [1317] DEBUG: Calculating signature using v4 auth. 
2023-08-23 03:38:13,916 [1317] DEBUG: CanonicalRequest: HEAD /barmantest host:.linodeobjects.com x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20230823T033813Z host;x-amz-content-sha256;x-amz-date e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2023-08-23 03:38:13,916 [1317] DEBUG: StringToSign: AWS4-HMAC-SHA256 20230823T033813Z 20230823//s3/aws4_request 41d7c041abe7995e1fdb6de6b7ccd930bfacf5a1793888f2b9642a7aa67746a2 2023-08-23 03:38:13,916 [1317] DEBUG: Signature: 35048cdfb5bf66024d5c66989462852992eabd68a2deebce07fdecc90c59ff9c 2023-08-23 03:38:13,916 [1317] DEBUG: Event request-created.s3.HeadBucket: calling handler 2023-08-23 03:38:13,916 [1317] DEBUG: Sending http request: .linodeobjects.com/barmantest, headers={'User-Agent': b'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource', 'X-Amz-Date': b'20230823T033813Z', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=35048cdfb5bf66024d5c66989462852992eabd68a2deebce07fdecc90c59ff9c', 'amz-sdk-invocation-id': b'369875c5-a945-4593-a973-b3b360236e65', 'amz-sdk-request': b'attempt=1'}> 2023-08-23 03:38:13,916 [1317] DEBUG: Certificate path: /usr/local/lib/python3.9/dist-packages/certifi/cacert.pem 2023-08-23 03:38:13,917 [1317] DEBUG: Starting new HTTPS connection (1): .linodeobjects.com:443 2023-08-23 03:38:14,759 [1317] DEBUG: https://.linodeobjects.com:443 "HEAD /barmantest HTTP/1.1" 200 0 2023-08-23 03:38:14,760 [1317] DEBUG: Response headers: {'Date': 'Wed, 23 Aug 2023 03:38:14 GMT', 'Content-Type': 'binary/octet-stream', 'Content-Length': '0', 'Connection': 'keep-alive', 'Accept-Ranges': 'bytes', 'Last-Modified': 'Wed, 23 Aug 2023 03:21:23 GMT', 'x-rgw-object-type': 'Normal', 'ETag': '"d41d8cd98f00b204e9800998ecf8427e"', 'x-amz-request-id': 'tx00000fec3245480039561-0064e57f26-47a3684c-default'} 2023-08-23 03:38:14,760 [1317] DEBUG: Response body: b'' 2023-08-23 03:38:14,761 [1317] DEBUG: Event needs-retry.s3.HeadBucket: calling handler 2023-08-23 03:38:14,762 [1317] DEBUG: No retry needed. 2023-08-23 03:38:14,763 [1317] DEBUG: Event needs-retry.s3.HeadBucket: calling handler > 2023-08-23 03:38:14,763 [1317] DEBUG: Event before-endpoint-resolution.s3: calling handler 2023-08-23 03:38:14,764 [1317] DEBUG: Event before-endpoint-resolution.s3: calling handler > 2023-08-23 03:38:14,765 [1317] DEBUG: Calling endpoint provider with parameters: {'Bucket': 'barmantest', 'Region': '', 'UseFIPS': False, 'UseDualStack': False, 'Endpoint': 'https://.linodeobjects.com', 'ForcePathStyle': True, 'Accelerate': False, 'UseGlobalEndpoint': True, 'DisableMultiRegionAccessPoints': False, 'UseArnRegion': True} 2023-08-23 03:38:14,766 [1317] DEBUG: Endpoint provider result: https://.linodeobjects.com/barmantest 2023-08-23 03:38:14,766 [1317] DEBUG: Selecting from endpoint provider's list of auth schemes: "sigv4". 
User selected auth scheme is: "None" 2023-08-23 03:38:14,766 [1317] DEBUG: Selected auth type "v4" as "v4" with signing context params: {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True} 2023-08-23 03:38:14,767 [1317] DEBUG: Event before-parameter-build.s3.ListObjectsV2: calling handler 2023-08-23 03:38:14,767 [1317] DEBUG: Event before-parameter-build.s3.ListObjectsV2: calling handler 2023-08-23 03:38:14,767 [1317] DEBUG: Event before-parameter-build.s3.ListObjectsV2: calling handler 2023-08-23 03:38:14,767 [1317] DEBUG: Event before-parameter-build.s3.ListObjectsV2: calling handler > 2023-08-23 03:38:14,767 [1317] DEBUG: Event before-parameter-build.s3.ListObjectsV2: calling handler 2023-08-23 03:38:14,768 [1317] DEBUG: Event before-call.s3.ListObjectsV2: calling handler 2023-08-23 03:38:14,769 [1317] DEBUG: Event before-call.s3.ListObjectsV2: calling handler 2023-08-23 03:38:14,769 [1317] DEBUG: Event before-call.s3.ListObjectsV2: calling handler 2023-08-23 03:38:14,769 [1317] DEBUG: Making request for OperationModel(name=ListObjectsV2) with params: {'url_path': '?list-type=2', 'query_string': {'prefix': 'cnpg-cluster-1/base/', 'delimiter': '/', 'encoding-type': 'url'}, 'method': 'GET', 'headers': {'User-Agent': 'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource'}, 'body': b'', 'auth_path': '/barmantest?list-type=2', 'url': 'https://.linodeobjects.com/barmantest?list-type=2&prefix=cnpg-cluster-1%2Fbase%2F&delimiter=%2F&encoding-type=url', 'context': {'client_region': '', 'client_config': , 'has_streaming_input': False, 'auth_type': 'v4', 'signing': {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True}, 'encoding_type_auto_set': True, 's3_redirect': {'redirected': False, 'bucket': 'barmantest', 'params': {'Bucket': 'barmantest', 'Prefix': 'cnpg-cluster-1/base/', 'Delimiter': '/', 'EncodingType': 'url'}}}} 2023-08-23 03:38:14,770 [1317] DEBUG: Event request-created.s3.ListObjectsV2: calling handler > 2023-08-23 03:38:14,770 [1317] DEBUG: Event choose-signer.s3.ListObjectsV2: calling handler > 2023-08-23 03:38:14,770 [1317] DEBUG: Event choose-signer.s3.ListObjectsV2: calling handler 2023-08-23 03:38:14,772 [1317] DEBUG: Event before-sign.s3.ListObjectsV2: calling handler 2023-08-23 03:38:14,772 [1317] DEBUG: Calculating signature using v4 auth. 
2023-08-23 03:38:14,773 [1317] DEBUG: CanonicalRequest: GET /barmantest delimiter=%2F&encoding-type=url&list-type=2&prefix=cnpg-cluster-1%2Fbase%2F host:.linodeobjects.com x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20230823T033814Z host;x-amz-content-sha256;x-amz-date e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2023-08-23 03:38:14,773 [1317] DEBUG: StringToSign: AWS4-HMAC-SHA256 20230823T033814Z 20230823//s3/aws4_request 910699c3e5f32a782f98652d5d62a5b06fa545e788a624b5020daefecc9ffefe 2023-08-23 03:38:14,773 [1317] DEBUG: Signature: 65d876a16708343b37816e89b108ebdf5dc9afb0b7d6b3e6828d73e5a35044e5 2023-08-23 03:38:14,773 [1317] DEBUG: Event request-created.s3.ListObjectsV2: calling handler 2023-08-23 03:38:14,774 [1317] DEBUG: Sending http request: .linodeobjects.com/barmantest?list-type=2&prefix=cnpg-cluster-1%2Fbase%2F&delimiter=%2F&encoding-type=url, headers={'User-Agent': b'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource', 'X-Amz-Date': b'20230823T033814Z', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=65d876a16708343b37816e89b108ebdf5dc9afb0b7d6b3e6828d73e5a35044e5', 'amz-sdk-invocation-id': b'8ac99512-e4d9-4dbd-a272-4d0766786556', 'amz-sdk-request': b'attempt=1'}> 2023-08-23 03:38:14,774 [1317] DEBUG: Certificate path: /usr/local/lib/python3.9/dist-packages/certifi/cacert.pem 2023-08-23 03:38:14,984 [1317] DEBUG: https://.linodeobjects.com:443 "GET /barmantest?list-type=2&prefix=cnpg-cluster-1%2Fbase%2F&delimiter=%2F&encoding-type=url HTTP/1.1" 200 0 2023-08-23 03:38:14,984 [1317] DEBUG: Response headers: {'Date': 'Wed, 23 Aug 2023 03:38:15 GMT', 'Content-Type': 'binary/octet-stream', 'Content-Length': '0', 'Connection': 'keep-alive', 'Accept-Ranges': 'bytes', 'Last-Modified': 'Wed, 23 Aug 2023 03:21:23 GMT', 'x-rgw-object-type': 'Normal', 'ETag': '"d41d8cd98f00b204e9800998ecf8427e"', 'x-amz-request-id': 'tx000006ee43e32d9cfcdbb-0064e57f27-47ebc39c-default'} 2023-08-23 03:38:14,985 [1317] DEBUG: Response body: b'' 2023-08-23 03:38:14,985 [1317] DEBUG: Event needs-retry.s3.ListObjectsV2: calling handler 2023-08-23 03:38:14,997 [1317] DEBUG: No retry needed. 2023-08-23 03:38:14,997 [1317] DEBUG: Event needs-retry.s3.ListObjectsV2: calling handler > 2023-08-23 03:38:14,997 [1317] DEBUG: Event after-call.s3.ListObjectsV2: calling handler 2023-08-23 03:38:14,998 [1317] ERROR: Barman cloud backup show exception: Unknown backup 'cnpg-backup-1692761431' for server 'cnpg-cluster-1' 2023-08-23 03:38:14,998 [1317] DEBUG: Exception details: Traceback (most recent call last): File "/usr/local/lib/python3.9/dist-packages/barman/clients/cloud_backup_show.py", line 65, in main backup_id = catalog.parse_backup_id(config.backup_id) File "/usr/local/lib/python3.9/dist-packages/barman/cloud.py", line 2114, in parse_backup_id raise ValueError( ValueError: Unknown backup 'cnpg-backup-1692761431' for server 'cnpg-cluster-1' ```

Here are the logs from that run, if they are relevant. I have only redacted the endpoint URL in this case.

cnpg cluster logs ```json cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-23T03:30:31Z", "msg": "WAL archiving is working", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-23T03:30:31Z", "msg": "Backup started", "backupName": "cnpg-backup-1692761431", "backupNamespace": "cnpg-backup-1692761431", "logging_pod": "cnpg-cluster-1", "options": [ "--user", "postgres", "--name", "backup-1692761431", "--endpoint-url", "https://.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmantest", "cnpg-cluster" ] } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-23T03:30:32Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-23 03:30:32.283 UTC", "process_id": "26", "session_id": "64e5798e.1a", "session_line_num": "5", "session_start_time": "2023-08-23 03:14:22 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint starting: force wait", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-23T03:30:32Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-23 03:30:32.324 UTC", "process_id": "26", "session_id": "64e5798e.1a", "session_line_num": "6", "session_start_time": "2023-08-23 03:14:22 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.042 s; sync files=0, longest=0.000 s, average=0.000 s; distance=32442 kB, estimate=32442 kB", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-23T03:30:44Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/000000010000000000000006", "startTime": "2023-08-23T03:30:33Z", "endTime": "2023-08-23T03:30:44Z", "elapsedWalTime": 11.160098297 } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-23T03:30:45Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/000000010000000000000006.00000028.backup", "startTime": "2023-08-23T03:30:44Z", "endTime": "2023-08-23T03:30:45Z", "elapsedWalTime": 1.436170792 } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-23T03:30:46Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-23 03:30:46.279 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "1084", "connection_from": "[local]", "session_id": "64e57d58.43c", "session_line_num": "1", "command_tag": "SELECT", "session_start_time": "2023-08-23 03:30:32 UTC", "virtual_transaction_id": "3/640", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "restore point \"barman_20230823T033032\" created at 0/7000090", "query": "SELECT pg_create_restore_point('barman_20230823T033032')", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-23T03:30:52Z", "msg": "Backup completed", "backupName": "cnpg-backup-1692761431", "backupNamespace": "cnpg-backup-1692761431", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-23T03:30:53Z", "logger": "barman", "msg": "Can't extract backup id", "logging_pod": "cnpg-cluster-1", 
"command": "barman-cloud-backup-show", "options": [ "--format", "json", "--endpoint-url", "https://.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmantest", "cnpg-cluster", "backup-1692761431" ], "stdout": "", "stderr": "2023-08-23 03:30:53,367 [1127] ERROR: Barman cloud backup show exception: Unknown backup 'backup-1692761431' for server 'cnpg-cluster'\n", "error": "exit status 4", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman.executeQueryCommand\n\tpkg/management/barman/backuplist.go:87\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman.GetBackupByName\n\tpkg/management/barman/backuplist.go:140\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres.(*BackupCommand).getExecutedBackupInfo\n\tpkg/management/postgres/backup.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres.(*BackupCommand).takeBackup\n\tpkg/management/postgres/backup.go:352\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres.(*BackupCommand).run\n\tpkg/management/postgres/backup.go:267" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-23T03:30:53Z", "msg": "Backup failed", "backupName": "cnpg-backup-1692761431", "backupNamespace": "cnpg-backup-1692761431", "logging_pod": "cnpg-cluster-1", "error": "exit status 4", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres.(*BackupCommand).run\n\tpkg/management/postgres/backup.go:271" } ```
mikewallace1979 commented 1 year ago

Thanks for providing the additional information. The fact that the operator logs `Backup completed` while the `barman-cloud-backup-list` command shows no backups is particularly concerning.

I tried reproducing this issue using linode object storage this morning but have still not had any success.

Can you try taking a manual backup in a shell on one of the pods? You will need to export TMPDIR so that a writable filesystem is used for temporary files - something like this should work:

```
TMPDIR=/controller AWS_ACCESS_KEY_ID=<access key id> AWS_SECRET_ACCESS_KEY=<secret access key> barman-cloud-backup --user=postgres --name=backup-test s3://barmantest --endpoint-url=https://<my-endpoint>.linodeobjects.com --cloud-provider=aws-s3 cnpg-cluster -vv
```

Redacting the same things as last time should be fine.

It would definitely be helpful at this point to see a list of the objects present in the Linode object store after the backup has completed. Can you include that in some form, preferably with the full object keys? Even if the backups aren't being written, there should still be archived WALs present, and it would be good to verify that things are ending up where they should be.
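
For example, if the aws CLI is available, a full recursive key listing can be produced with something like this (the bucket name and endpoint mirror the earlier commands and may need adjusting):

```
AWS_ACCESS_KEY_ID=<access key id> AWS_SECRET_ACCESS_KEY=<secret access key> \
  aws s3 ls s3://barmantest/ --recursive --endpoint-url=https://<my-endpoint>.linodeobjects.com
```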

btxbtx commented 1 year ago

@mikewallace1979 would my cnpg manifests be at all helpful, or are they out of scope? There's nothing special about my network, so I am not sure what else could be different between your test on linode and mine.
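
If they would be useful, I can pull the backup-related section straight out of the cluster resource with something like the following (assuming the usual spec.backup.barmanObjectStore layout and substituting the actual cluster name):

```
kubectl get clusters.postgresql.cnpg.io cnpg-cluster -o jsonpath='{.spec.backup.barmanObjectStore}'
```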

barman-cloud-backup output ``` 2023-08-24 03:39:53,446 [14182] DEBUG: Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane 2023-08-24 03:39:53,448 [14182] DEBUG: Changing event name from before-call.apigateway to before-call.api-gateway 2023-08-24 03:39:53,448 [14182] DEBUG: Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict 2023-08-24 03:39:53,449 [14182] DEBUG: Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration 2023-08-24 03:39:53,449 [14182] DEBUG: Changing event name from before-parameter-build.route53 to before-parameter-build.route-53 2023-08-24 03:39:53,450 [14182] DEBUG: Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search 2023-08-24 03:39:53,450 [14182] DEBUG: Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section 2023-08-24 03:39:53,451 [14182] DEBUG: Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask 2023-08-24 03:39:53,451 [14182] DEBUG: Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section 2023-08-24 03:39:53,452 [14182] DEBUG: Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search 2023-08-24 03:39:53,452 [14182] DEBUG: Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section 2023-08-24 03:39:53,473 [14182] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/boto3/data/s3/2006-03-01/resources-1.json 2023-08-24 03:39:53,475 [14182] DEBUG: IMDS ENDPOINT: http://169.254.169.254/ 2023-08-24 03:39:53,476 [14182] DEBUG: Looking for credentials via: env 2023-08-24 03:39:53,477 [14182] INFO: Found credentials in environment variables. 
2023-08-24 03:39:53,478 [14182] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/endpoints.json 2023-08-24 03:39:53,488 [14182] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/sdk-default-configuration.json 2023-08-24 03:39:53,488 [14182] DEBUG: Event choose-service-name: calling handler 2023-08-24 03:39:53,498 [14182] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/s3/2006-03-01/service-2.json 2023-08-24 03:39:53,516 [14182] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/s3/2006-03-01/endpoint-rule-set-1.json.gz 2023-08-24 03:39:53,522 [14182] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/partitions.json 2023-08-24 03:39:53,523 [14182] DEBUG: Event creating-client-class.s3: calling handler 2023-08-24 03:39:53,523 [14182] DEBUG: Event creating-client-class.s3: calling handler ._handler at 0xffff820c74c0> 2023-08-24 03:39:53,532 [14182] DEBUG: Event creating-client-class.s3: calling handler 2023-08-24 03:39:53,541 [14182] DEBUG: Setting s3 timeout as (60, 60) 2023-08-24 03:39:53,543 [14182] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/_retry.json 2023-08-24 03:39:53,543 [14182] DEBUG: Registering retry handlers for service: s3 2023-08-24 03:39:53,543 [14182] DEBUG: Registering S3 region redirector handler 2023-08-24 03:39:53,543 [14182] DEBUG: Loading s3:s3 2023-08-24 03:39:53,544 [14182] DEBUG: Event before-endpoint-resolution.s3: calling handler 2023-08-24 03:39:53,544 [14182] DEBUG: Event before-endpoint-resolution.s3: calling handler > 2023-08-24 03:39:53,544 [14182] DEBUG: Calling endpoint provider with parameters: {'Bucket': 'barmantest', 'Region': '', 'UseFIPS': False, 'UseDualStack': False, 'Endpoint': 'https://.linodeobjects.com', 'ForcePathStyle': True, 'Accelerate': False, 'UseGlobalEndpoint': True, 'DisableMultiRegionAccessPoints': False, 'UseArnRegion': True} 2023-08-24 03:39:53,545 [14182] DEBUG: Endpoint provider result: https://.linodeobjects.com/barmantest 2023-08-24 03:39:53,545 [14182] DEBUG: Selecting from endpoint provider's list of auth schemes: "sigv4". 
User selected auth scheme is: "None" 2023-08-24 03:39:53,545 [14182] DEBUG: Selected auth type "v4" as "v4" with signing context params: {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True} 2023-08-24 03:39:53,546 [14182] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler 2023-08-24 03:39:53,546 [14182] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler 2023-08-24 03:39:53,546 [14182] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler > 2023-08-24 03:39:53,546 [14182] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler 2023-08-24 03:39:53,546 [14182] DEBUG: Event before-call.s3.HeadBucket: calling handler 2023-08-24 03:39:53,546 [14182] DEBUG: Event before-call.s3.HeadBucket: calling handler 2023-08-24 03:39:53,546 [14182] DEBUG: Event before-call.s3.HeadBucket: calling handler 2023-08-24 03:39:53,546 [14182] DEBUG: Making request for OperationModel(name=HeadBucket) with params: {'url_path': '', 'query_string': {}, 'method': 'HEAD', 'headers': {'User-Agent': 'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource'}, 'body': b'', 'auth_path': '/barmantest/', 'url': 'https://.linodeobjects.com/barmantest', 'context': {'client_region': '', 'client_config': , 'has_streaming_input': False, 'auth_type': 'v4', 'signing': {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True}, 's3_redirect': {'redirected': False, 'bucket': 'barmantest', 'params': {'Bucket': 'barmantest'}}}} 2023-08-24 03:39:53,547 [14182] DEBUG: Event request-created.s3.HeadBucket: calling handler > 2023-08-24 03:39:53,547 [14182] DEBUG: Event choose-signer.s3.HeadBucket: calling handler > 2023-08-24 03:39:53,547 [14182] DEBUG: Event choose-signer.s3.HeadBucket: calling handler 2023-08-24 03:39:53,547 [14182] DEBUG: Event before-sign.s3.HeadBucket: calling handler 2023-08-24 03:39:53,547 [14182] DEBUG: Calculating signature using v4 auth. 
2023-08-24 03:39:53,547 [14182] DEBUG: CanonicalRequest: HEAD /barmantest host:.linodeobjects.com x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20230824T033953Z host;x-amz-content-sha256;x-amz-date e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2023-08-24 03:39:53,547 [14182] DEBUG: StringToSign: AWS4-HMAC-SHA256 20230824T033953Z 20230824//s3/aws4_request a41921673b5c6170bb28471d88c32ea32fd660bc11db7cc6e4a1d9a92918ff50 2023-08-24 03:39:53,548 [14182] DEBUG: Signature: ec25784ea407fe974b2fde2b01d9b7ed95a9b0fda1690775ee897979a4423ca3 2023-08-24 03:39:53,548 [14182] DEBUG: Event request-created.s3.HeadBucket: calling handler 2023-08-24 03:39:53,548 [14182] DEBUG: Sending http request: .linodeobjects.com/barmantest, headers={'User-Agent': b'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource', 'X-Amz-Date': b'20230824T033953Z', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=/20230824//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=ec25784ea407fe974b2fde2b01d9b7ed95a9b0fda1690775ee897979a4423ca3', 'amz-sdk-invocation-id': b'f961504b-274f-4d3d-a30e-cebb136e7c5f', 'amz-sdk-request': b'attempt=1'}> 2023-08-24 03:39:53,549 [14182] DEBUG: Certificate path: /usr/local/lib/python3.9/dist-packages/certifi/cacert.pem 2023-08-24 03:39:53,549 [14182] DEBUG: Starting new HTTPS connection (1): .linodeobjects.com:443 2023-08-24 03:39:54,496 [14182] DEBUG: https://.linodeobjects.com:443 "HEAD /barmantest HTTP/1.1" 200 0 2023-08-24 03:39:54,496 [14182] DEBUG: Response headers: {'Date': 'Thu, 24 Aug 2023 03:39:54 GMT', 'Content-Type': 'binary/octet-stream', 'Content-Length': '0', 'Connection': 'keep-alive', 'Accept-Ranges': 'bytes', 'Last-Modified': 'Wed, 23 Aug 2023 03:21:23 GMT', 'x-rgw-object-type': 'Normal', 'ETag': '"d41d8cd98f00b204e9800998ecf8427e"', 'x-amz-request-id': 'tx0000096927e77fd125cad-0064e6d10a-47e16cb8-default'} 2023-08-24 03:39:54,496 [14182] DEBUG: Response body: b'' 2023-08-24 03:39:54,497 [14182] DEBUG: Event needs-retry.s3.HeadBucket: calling handler 2023-08-24 03:39:54,497 [14182] DEBUG: No retry needed. 
2023-08-24 03:39:54,497 [14182] DEBUG: Event needs-retry.s3.HeadBucket: calling handler > 2023-08-24 03:39:54,503 [14182] INFO: Starting backup '20230824T033954' 2023-08-24 03:39:54,515 [14182] DEBUG: detecting data directory 2023-08-24 03:39:54,521 [14182] DEBUG: detecting tablespaces 2023-08-24 03:39:54,522 [14182] DEBUG: issuing start backup command 2023-08-24 03:39:54,523 [14182] DEBUG: Start of native concurrent backup 2023-08-24 03:39:54,584 [14182] INFO: Uploading 'pgdata' directory '/var/lib/postgresql/data/pgdata' as 'data.tar' 2023-08-24 03:39:54,587 [14182] DEBUG: Uploading ./pg_ident.conf 2023-08-24 03:39:54,587 [14182] DEBUG: Uploading ./postgresql.auto.conf 2023-08-24 03:39:54,588 [14182] DEBUG: Uploading ./postgresql.conf 2023-08-24 03:39:54,588 [14182] DEBUG: Uploading ./pg_hba.conf 2023-08-24 03:39:54,589 [14182] DEBUG: Uploading ./custom.conf 2023-08-24 03:39:54,589 [14182] DEBUG: Uploading ./current_logfiles 2023-08-24 03:39:54,590 [14182] DEBUG: Uploading ./PG_VERSION 2023-08-24 03:39:54,592 [14182] DEBUG: Uploading base/1/4160 2023-08-24 03:39:54,592 [14182] DEBUG: Uploading base/1/3256 2023-08-24 03:39:54,592 [14182] DEBUG: Uploading base/1/2607_fsm 2023-08-24 03:39:54,593 [14182] DEBUG: Uploading base/1/2603_vm 2023-08-24 03:39:54,594 [14182] DEBUG: Uploading base/1/3439 2023-08-24 03:39:54,594 [14182] DEBUG: Uploading base/1/2687 2023-08-24 03:39:54,595 [14182] DEBUG: Uploading base/1/3350 2023-08-24 03:39:54,595 [14182] DEBUG: Uploading base/1/2835 2023-08-24 03:39:54,595 [14182] DEBUG: Uploading base/1/2682 2023-08-24 03:39:54,596 [14182] DEBUG: Uploading base/1/2679 2023-08-24 03:39:54,597 [14182] DEBUG: Uploading base/1/4146 2023-08-24 03:39:54,597 [14182] DEBUG: Uploading base/1/548 2023-08-24 03:39:54,597 [14182] DEBUG: Uploading base/1/175 2023-08-24 03:39:54,598 [14182] DEBUG: Uploading base/1/2337 2023-08-24 03:39:54,599 [14182] DEBUG: Uploading base/1/1249_vm 2023-08-24 03:39:54,599 [14182] DEBUG: Uploading base/1/3576 2023-08-24 03:39:54,599 [14182] DEBUG: Uploading base/1/3455 2023-08-24 03:39:54,600 [14182] DEBUG: Uploading base/1/6102 2023-08-24 03:39:54,600 [14182] DEBUG: Uploading base/1/2612_fsm 2023-08-24 03:39:54,601 [14182] DEBUG: Uploading base/1/3381 2023-08-24 03:39:54,601 [14182] DEBUG: Uploading base/1/4166 2023-08-24 03:39:54,601 [14182] DEBUG: Uploading base/1/6229 2023-08-24 03:39:54,602 [14182] DEBUG: Uploading base/1/3380 2023-08-24 03:39:54,602 [14182] DEBUG: Uploading base/1/2579 2023-08-24 03:39:54,603 [14182] DEBUG: Uploading base/1/2683 2023-08-24 03:39:54,603 [14182] DEBUG: Uploading base/1/2600_fsm 2023-08-24 03:39:54,603 [14182] DEBUG: Uploading base/1/13388 2023-08-24 03:39:54,604 [14182] DEBUG: Uploading base/1/3541_fsm 2023-08-24 03:39:54,604 [14182] DEBUG: Uploading base/1/4165 2023-08-24 03:39:54,605 [14182] DEBUG: Uploading base/1/2618 2023-08-24 03:39:54,605 [14182] DEBUG: Uploading base/1/2608_fsm 2023-08-24 03:39:54,606 [14182] DEBUG: Uploading base/1/3602_fsm 2023-08-24 03:39:54,606 [14182] DEBUG: Uploading base/1/2667 2023-08-24 03:39:54,606 [14182] DEBUG: Uploading base/1/4159 2023-08-24 03:39:54,607 [14182] DEBUG: Uploading base/1/13380_fsm 2023-08-24 03:39:54,607 [14182] DEBUG: Uploading base/1/3601 2023-08-24 03:39:54,608 [14182] DEBUG: Uploading base/1/6110 2023-08-24 03:39:54,608 [14182] DEBUG: Uploading base/1/3118 2023-08-24 03:39:54,609 [14182] DEBUG: Uploading base/1/2609 2023-08-24 03:39:54,610 [14182] DEBUG: Uploading base/1/1247_fsm 2023-08-24 03:39:54,610 [14182] DEBUG: Uploading base/1/4170 2023-08-24 
03:39:54,611 [14182] DEBUG: Uploading base/1/3468 2023-08-24 03:39:54,611 [14182] DEBUG: Uploading base/1/3541_vm 2023-08-24 03:39:54,611 [14182] DEBUG: Uploading base/1/4169 2023-08-24 03:39:54,612 [14182] DEBUG: Uploading base/1/4150 2023-08-24 03:39:54,612 [14182] DEBUG: Uploading base/1/2612 2023-08-24 03:39:54,613 [14182] DEBUG: Uploading base/1/2691 2023-08-24 03:39:54,614 [14182] DEBUG: Uploading base/1/2702 2023-08-24 03:39:54,614 [14182] DEBUG: Uploading base/1/2839 2023-08-24 03:39:54,614 [14182] DEBUG: Uploading base/1/1247_vm 2023-08-24 03:39:54,615 [14182] DEBUG: Uploading base/1/3602 2023-08-24 03:39:54,615 [14182] DEBUG: Uploading base/1/2836 2023-08-24 03:39:54,615 [14182] DEBUG: Uploading base/1/4171 2023-08-24 03:39:54,616 [14182] DEBUG: Uploading base/1/3430 2023-08-24 03:39:54,616 [14182] DEBUG: Uploading base/1/2618_vm 2023-08-24 03:39:54,617 [14182] DEBUG: Uploading base/1/2689 2023-08-24 03:39:54,617 [14182] DEBUG: Uploading base/1/3119 2023-08-24 03:39:54,618 [14182] DEBUG: Uploading base/1/3603_vm 2023-08-24 03:39:54,618 [14182] DEBUG: Uploading base/1/4151 2023-08-24 03:39:54,618 [14182] DEBUG: Uploading base/1/13385_fsm 2023-08-24 03:39:54,619 [14182] DEBUG: Uploading base/1/1255 2023-08-24 03:39:54,621 [14182] DEBUG: Uploading base/1/2618_fsm 2023-08-24 03:39:54,621 [14182] DEBUG: Uploading base/1/2336 2023-08-24 03:39:54,621 [14182] DEBUG: Uploading base/1/4153 2023-08-24 03:39:54,622 [14182] DEBUG: Uploading base/1/3600 2023-08-24 03:39:54,622 [14182] DEBUG: Uploading base/1/2753_vm 2023-08-24 03:39:54,623 [14182] DEBUG: Uploading base/1/2602 2023-08-24 03:39:54,623 [14182] DEBUG: Uploading base/1/2675 2023-08-24 03:39:54,624 [14182] DEBUG: Uploading base/1/2619 2023-08-24 03:39:54,624 [14182] DEBUG: Uploading base/1/6111 2023-08-24 03:39:54,625 [14182] DEBUG: Uploading base/1/2678 2023-08-24 03:39:54,626 [14182] DEBUG: Uploading base/1/6117 2023-08-24 03:39:54,626 [14182] DEBUG: Uploading base/1/2684 2023-08-24 03:39:54,626 [14182] DEBUG: Uploading base/1/2673 2023-08-24 03:39:54,627 [14182] DEBUG: Uploading base/1/2613 2023-08-24 03:39:54,627 [14182] DEBUG: Uploading base/1/2228 2023-08-24 03:39:54,628 [14182] DEBUG: Uploading base/1/1255_vm 2023-08-24 03:39:54,628 [14182] DEBUG: Uploading base/1/2836_fsm 2023-08-24 03:39:54,629 [14182] DEBUG: Uploading base/1/3501 2023-08-24 03:39:54,629 [14182] DEBUG: Uploading base/1/2753_fsm 2023-08-24 03:39:54,630 [14182] DEBUG: Uploading base/1/2600 2023-08-24 03:39:54,630 [14182] DEBUG: Uploading base/1/113 2023-08-24 03:39:54,630 [14182] DEBUG: Uploading base/1/2688 2023-08-24 03:39:54,631 [14182] DEBUG: Uploading base/1/1259_fsm 2023-08-24 03:39:54,631 [14182] DEBUG: Uploading base/1/2692 2023-08-24 03:39:54,632 [14182] DEBUG: Uploading base/1/6228 2023-08-24 03:39:54,632 [14182] DEBUG: Uploading base/1/2328 2023-08-24 03:39:54,633 [14182] DEBUG: Uploading base/1/2602_fsm 2023-08-24 03:39:54,633 [14182] DEBUG: Uploading base/1/3456_vm 2023-08-24 03:39:54,634 [14182] DEBUG: Uploading base/1/3600_vm 2023-08-24 03:39:54,634 [14182] DEBUG: Uploading base/1/2755 2023-08-24 03:39:54,634 [14182] DEBUG: Uploading base/1/3604 2023-08-24 03:39:54,635 [14182] DEBUG: Uploading base/1/2620 2023-08-24 03:39:54,635 [14182] DEBUG: Uploading base/1/2651 2023-08-24 03:39:54,635 [14182] DEBUG: Uploading base/1/1417 2023-08-24 03:39:54,636 [14182] DEBUG: Uploading base/1/2836_vm 2023-08-24 03:39:54,636 [14182] DEBUG: Uploading base/1/2699 2023-08-24 03:39:54,637 [14182] DEBUG: Uploading base/1/4145 2023-08-24 03:39:54,637 [14182] 
DEBUG: Uploading base/1/3429 2023-08-24 03:39:54,638 [14182] DEBUG: Uploading base/1/826 2023-08-24 03:39:54,638 [14182] DEBUG: Uploading base/1/3502 2023-08-24 03:39:54,639 [14182] DEBUG: Uploading base/1/4156 2023-08-24 03:39:54,639 [14182] DEBUG: Uploading base/1/2606 2023-08-24 03:39:54,640 [14182] DEBUG: Uploading base/1/3601_vm 2023-08-24 03:39:54,640 [14182] DEBUG: Uploading base/1/2611 2023-08-24 03:39:54,641 [14182] DEBUG: Uploading base/1/4172 2023-08-24 03:39:54,641 [14182] DEBUG: Uploading base/1/13385 2023-08-24 03:39:54,641 [14182] DEBUG: Uploading base/1/13380 2023-08-24 03:39:54,642 [14182] DEBUG: Uploading base/1/3600_fsm 2023-08-24 03:39:54,642 [14182] DEBUG: Uploading base/1/2605 2023-08-24 03:39:54,642 [14182] DEBUG: Uploading base/1/2603_fsm 2023-08-24 03:39:54,643 [14182] DEBUG: Uploading base/1/2616_fsm 2023-08-24 03:39:54,643 [14182] DEBUG: Uploading base/1/3766 2023-08-24 03:39:54,644 [14182] DEBUG: Uploading base/1/2602_vm 2023-08-24 03:39:54,644 [14182] DEBUG: Uploading base/1/2838_fsm 2023-08-24 03:39:54,644 [14182] DEBUG: Uploading base/1/4144 2023-08-24 03:39:54,645 [14182] DEBUG: Uploading base/1/3608 2023-08-24 03:39:54,645 [14182] DEBUG: Uploading base/1/2600_vm 2023-08-24 03:39:54,645 [14182] DEBUG: Uploading base/1/2616 2023-08-24 03:39:54,646 [14182] DEBUG: Uploading base/1/3440 2023-08-24 03:39:54,646 [14182] DEBUG: Uploading base/1/2612_vm 2023-08-24 03:39:54,647 [14182] DEBUG: Uploading base/1/13374 2023-08-24 03:39:54,647 [14182] DEBUG: Uploading base/1/2831 2023-08-24 03:39:54,648 [14182] DEBUG: Uploading base/1/3456_fsm 2023-08-24 03:39:54,648 [14182] DEBUG: Uploading base/1/2608_vm 2023-08-24 03:39:54,649 [14182] DEBUG: Uploading base/1/2605_vm 2023-08-24 03:39:54,649 [14182] DEBUG: Uploading base/1/549 2023-08-24 03:39:54,650 [14182] DEBUG: Uploading base/1/2840 2023-08-24 03:39:54,650 [14182] DEBUG: Uploading base/1/4152 2023-08-24 03:39:54,651 [14182] DEBUG: Uploading base/1/2658 2023-08-24 03:39:54,651 [14182] DEBUG: Uploading base/1/13384 2023-08-24 03:39:54,652 [14182] DEBUG: Uploading base/1/174 2023-08-24 03:39:54,652 [14182] DEBUG: Uploading base/1/2686 2023-08-24 03:39:54,653 [14182] DEBUG: Uploading base/1/2660 2023-08-24 03:39:54,653 [14182] DEBUG: Uploading base/1/3601_fsm 2023-08-24 03:39:54,653 [14182] DEBUG: Uploading base/1/2617_fsm 2023-08-24 03:39:54,654 [14182] DEBUG: Uploading base/1/4173 2023-08-24 03:39:54,654 [14182] DEBUG: Uploading base/1/6239 2023-08-24 03:39:54,655 [14182] DEBUG: Uploading base/1/2837 2023-08-24 03:39:54,655 [14182] DEBUG: Uploading base/1/2690 2023-08-24 03:39:54,656 [14182] DEBUG: Uploading base/1/3534 2023-08-24 03:39:54,656 [14182] DEBUG: Uploading base/1/2665 2023-08-24 03:39:54,656 [14182] DEBUG: Uploading base/1/2619_fsm 2023-08-24 03:39:54,657 [14182] DEBUG: Uploading base/1/2601 2023-08-24 03:39:54,657 [14182] DEBUG: Uploading base/1/3080 2023-08-24 03:39:54,658 [14182] DEBUG: Uploading base/1/2995 2023-08-24 03:39:54,658 [14182] DEBUG: Uploading base/1/13375_fsm 2023-08-24 03:39:54,659 [14182] DEBUG: Uploading base/1/13379 2023-08-24 03:39:54,659 [14182] DEBUG: Uploading base/1/2669 2023-08-24 03:39:54,660 [14182] DEBUG: Uploading base/1/2615 2023-08-24 03:39:54,661 [14182] DEBUG: Uploading base/1/3258 2023-08-24 03:39:54,661 [14182] DEBUG: Uploading base/1/6237 2023-08-24 03:39:54,661 [14182] DEBUG: Uploading base/1/2996 2023-08-24 03:39:54,662 [14182] DEBUG: Uploading base/1/4157 2023-08-24 03:39:54,662 [14182] DEBUG: Uploading base/1/3257 2023-08-24 03:39:54,662 [14182] DEBUG: Uploading 
base/1/1249 2023-08-24 03:39:54,664 [14182] DEBUG: Uploading base/1/4148 2023-08-24 03:39:54,664 [14182] DEBUG: Uploading base/1/3466 2023-08-24 03:39:54,665 [14182] DEBUG: Uploading base/1/2601_vm 2023-08-24 03:39:54,665 [14182] DEBUG: Uploading base/1/4158 2023-08-24 03:39:54,666 [14182] DEBUG: Uploading base/1/4168 2023-08-24 03:39:54,666 [14182] DEBUG: Uploading base/1/4174 2023-08-24 03:39:54,666 [14182] DEBUG: Uploading base/1/2663 2023-08-24 03:39:54,667 [14182] DEBUG: Uploading base/1/3712 2023-08-24 03:39:54,667 [14182] DEBUG: Uploading base/1/2604 2023-08-24 03:39:54,668 [14182] DEBUG: Uploading base/1/3606 2023-08-24 03:39:54,669 [14182] DEBUG: Uploading base/1/2615_fsm 2023-08-24 03:39:54,669 [14182] DEBUG: Uploading base/1/pg_filenode.map 2023-08-24 03:39:54,669 [14182] DEBUG: Uploading base/1/2832 2023-08-24 03:39:54,670 [14182] DEBUG: Uploading base/1/3597 2023-08-24 03:39:54,670 [14182] DEBUG: Uploading base/1/2656 2023-08-24 03:39:54,671 [14182] DEBUG: Uploading base/1/2607 2023-08-24 03:39:54,671 [14182] DEBUG: Uploading base/1/13375 2023-08-24 03:39:54,671 [14182] DEBUG: Uploading base/1/3596 2023-08-24 03:39:54,672 [14182] DEBUG: Uploading base/1/1259 2023-08-24 03:39:54,672 [14182] DEBUG: Uploading base/1/2754 2023-08-24 03:39:54,672 [14182] DEBUG: Uploading base/1/2756 2023-08-24 03:39:54,673 [14182] DEBUG: Uploading base/1/3081 2023-08-24 03:39:54,673 [14182] DEBUG: Uploading base/1/3598 2023-08-24 03:39:54,674 [14182] DEBUG: Uploading base/1/3574 2023-08-24 03:39:54,674 [14182] DEBUG: Uploading base/1/3607 2023-08-24 03:39:54,674 [14182] DEBUG: Uploading base/1/2670 2023-08-24 03:39:54,675 [14182] DEBUG: Uploading base/1/13378 2023-08-24 03:39:54,675 [14182] DEBUG: Uploading base/1/13370_fsm 2023-08-24 03:39:54,675 [14182] DEBUG: Uploading base/1/3431 2023-08-24 03:39:54,676 [14182] DEBUG: Uploading base/1/2609_fsm 2023-08-24 03:39:54,676 [14182] DEBUG: Uploading base/1/2606_vm 2023-08-24 03:39:54,677 [14182] DEBUG: Uploading base/1/4143 2023-08-24 03:39:54,677 [14182] DEBUG: Uploading base/1/2610_fsm 2023-08-24 03:39:54,677 [14182] DEBUG: Uploading base/1/2617_vm 2023-08-24 03:39:54,678 [14182] DEBUG: Uploading base/1/2606_fsm 2023-08-24 03:39:54,679 [14182] DEBUG: Uploading base/1/827 2023-08-24 03:39:54,679 [14182] DEBUG: Uploading base/1/3602_vm 2023-08-24 03:39:54,679 [14182] DEBUG: Uploading base/1/2601_fsm 2023-08-24 03:39:54,680 [14182] DEBUG: Uploading base/1/3764 2023-08-24 03:39:54,680 [14182] DEBUG: Uploading base/1/3603_fsm 2023-08-24 03:39:54,681 [14182] DEBUG: Uploading base/1/3767 2023-08-24 03:39:54,681 [14182] DEBUG: Uploading base/1/2830 2023-08-24 03:39:54,682 [14182] DEBUG: Uploading base/1/2838 2023-08-24 03:39:54,683 [14182] DEBUG: Uploading base/1/3503 2023-08-24 03:39:54,684 [14182] DEBUG: Uploading base/1/2187 2023-08-24 03:39:54,684 [14182] DEBUG: Uploading base/1/2680 2023-08-24 03:39:54,684 [14182] DEBUG: Uploading base/1/13370_vm 2023-08-24 03:39:54,685 [14182] DEBUG: Uploading base/1/3079_fsm 2023-08-24 03:39:54,685 [14182] DEBUG: Uploading base/1/2701 2023-08-24 03:39:54,686 [14182] DEBUG: Uploading base/1/13370 2023-08-24 03:39:54,686 [14182] DEBUG: Uploading base/1/2666 2023-08-24 03:39:54,687 [14182] DEBUG: Uploading base/1/2704 2023-08-24 03:39:54,687 [14182] DEBUG: Uploading base/1/3764_vm 2023-08-24 03:39:54,688 [14182] DEBUG: Uploading base/1/2841 2023-08-24 03:39:54,688 [14182] DEBUG: Uploading base/1/3379 2023-08-24 03:39:54,688 [14182] DEBUG: Uploading base/1/3605 2023-08-24 03:39:54,689 [14182] DEBUG: Uploading base/1/1247 
2023-08-24 03:39:54,689 [14182] DEBUG: Uploading base/1/3351 2023-08-24 03:39:54,690 [14182] DEBUG: Uploading base/1/6238 2023-08-24 03:39:54,690 [14182] DEBUG: Uploading base/1/2685 2023-08-24 03:39:54,691 [14182] DEBUG: Uploading base/1/3575 2023-08-24 03:39:54,691 [14182] DEBUG: Uploading base/1/2616_vm 2023-08-24 03:39:54,692 [14182] DEBUG: Uploading base/1/3542 2023-08-24 03:39:54,692 [14182] DEBUG: Uploading base/1/2653 2023-08-24 03:39:54,692 [14182] DEBUG: Uploading base/1/3394_fsm 2023-08-24 03:39:54,693 [14182] DEBUG: Uploading base/1/2657 2023-08-24 03:39:54,694 [14182] DEBUG: Uploading base/1/3395 2023-08-24 03:39:54,694 [14182] DEBUG: Uploading base/1/2605_fsm 2023-08-24 03:39:54,694 [14182] DEBUG: Uploading base/1/112 2023-08-24 03:39:54,695 [14182] DEBUG: Uploading base/1/4155 2023-08-24 03:39:54,695 [14182] DEBUG: Uploading base/1/2224 2023-08-24 03:39:54,696 [14182] DEBUG: Uploading base/1/4167 2023-08-24 03:39:54,696 [14182] DEBUG: Uploading base/1/2834 2023-08-24 03:39:54,696 [14182] DEBUG: Uploading base/1/2615_vm 2023-08-24 03:39:54,697 [14182] DEBUG: Uploading base/1/2608 2023-08-24 03:39:54,697 [14182] DEBUG: Uploading base/1/2838_vm 2023-08-24 03:39:54,698 [14182] DEBUG: Uploading base/1/2654 2023-08-24 03:39:54,698 [14182] DEBUG: Uploading base/1/4149 2023-08-24 03:39:54,699 [14182] DEBUG: Uploading base/1/3599 2023-08-24 03:39:54,699 [14182] DEBUG: Uploading base/1/2757 2023-08-24 03:39:54,699 [14182] DEBUG: Uploading base/1/6176 2023-08-24 03:39:54,700 [14182] DEBUG: Uploading base/1/5002 2023-08-24 03:39:54,700 [14182] DEBUG: Uploading base/1/2664 2023-08-24 03:39:54,701 [14182] DEBUG: Uploading base/1/2650 2023-08-24 03:39:54,701 [14182] DEBUG: Uploading base/1/3456 2023-08-24 03:39:54,702 [14182] DEBUG: Uploading base/1/2610_vm 2023-08-24 03:39:54,702 [14182] DEBUG: Uploading base/1/13373 2023-08-24 03:39:54,702 [14182] DEBUG: Uploading base/1/2840_vm 2023-08-24 03:39:54,703 [14182] DEBUG: Uploading base/1/2609_vm 2023-08-24 03:39:54,703 [14182] DEBUG: Uploading base/1/4164 2023-08-24 03:39:54,704 [14182] DEBUG: Uploading base/1/2668 2023-08-24 03:39:54,704 [14182] DEBUG: Uploading base/1/4163 2023-08-24 03:39:54,704 [14182] DEBUG: Uploading base/1/2696 2023-08-24 03:39:54,705 [14182] DEBUG: Uploading base/1/2674 2023-08-24 03:39:54,705 [14182] DEBUG: Uploading base/1/6113 2023-08-24 03:39:54,706 [14182] DEBUG: Uploading base/1/4147 2023-08-24 03:39:54,706 [14182] DEBUG: Uploading base/1/1249_fsm 2023-08-24 03:39:54,706 [14182] DEBUG: Uploading base/1/3764_fsm 2023-08-24 03:39:54,707 [14182] DEBUG: Uploading base/1/1259_vm 2023-08-24 03:39:54,707 [14182] DEBUG: Uploading base/1/3997 2023-08-24 03:39:54,707 [14182] DEBUG: Uploading base/1/3079 2023-08-24 03:39:54,708 [14182] DEBUG: Uploading base/1/13375_vm 2023-08-24 03:39:54,708 [14182] DEBUG: Uploading base/1/6175 2023-08-24 03:39:54,709 [14182] DEBUG: Uploading base/1/13380_vm 2023-08-24 03:39:54,709 [14182] DEBUG: Uploading base/1/PG_VERSION 2023-08-24 03:39:54,710 [14182] DEBUG: Uploading base/1/3609 2023-08-24 03:39:54,710 [14182] DEBUG: Uploading base/1/3603 2023-08-24 03:39:54,711 [14182] DEBUG: Uploading base/1/3394 2023-08-24 03:39:54,711 [14182] DEBUG: Uploading base/1/2703 2023-08-24 03:39:54,711 [14182] DEBUG: Uploading base/1/2607_vm 2023-08-24 03:39:54,712 [14182] DEBUG: Uploading base/1/2652 2023-08-24 03:39:54,712 [14182] DEBUG: Uploading base/1/828 2023-08-24 03:39:54,713 [14182] DEBUG: Uploading base/1/3541 2023-08-24 03:39:54,713 [14182] DEBUG: Uploading base/1/6116 2023-08-24 03:39:54,714 
[14182] DEBUG: Uploading base/1/2693 2023-08-24 03:39:54,714 [14182] DEBUG: Uploading base/1/2659 2023-08-24 03:39:54,715 [14182] DEBUG: Uploading base/1/3467 2023-08-24 03:39:54,715 [14182] DEBUG: Uploading base/1/2840_fsm 2023-08-24 03:39:54,715 [14182] DEBUG: Uploading base/1/13383 2023-08-24 03:39:54,716 [14182] DEBUG: Uploading base/1/2661 2023-08-24 03:39:54,716 [14182] DEBUG: Uploading base/1/4154 2023-08-24 03:39:54,717 [14182] DEBUG: Uploading base/1/2603 2023-08-24 03:39:54,717 [14182] DEBUG: Uploading base/1/1418 2023-08-24 03:39:54,717 [14182] DEBUG: Uploading base/1/3085 2023-08-24 03:39:54,718 [14182] DEBUG: Uploading base/1/2617 2023-08-24 03:39:54,719 [14182] DEBUG: Uploading base/1/2753 2023-08-24 03:39:54,719 [14182] DEBUG: Uploading base/1/3079_vm 2023-08-24 03:39:54,719 [14182] DEBUG: Uploading base/1/1255_fsm 2023-08-24 03:39:54,720 [14182] DEBUG: Uploading base/1/6104 2023-08-24 03:39:54,720 [14182] DEBUG: Uploading base/1/2619_vm 2023-08-24 03:39:54,721 [14182] DEBUG: Uploading base/1/13389 2023-08-24 03:39:54,721 [14182] DEBUG: Uploading base/1/3433 2023-08-24 03:39:54,722 [14182] DEBUG: Uploading base/1/2655 2023-08-24 03:39:54,722 [14182] DEBUG: Uploading base/1/2833 2023-08-24 03:39:54,723 [14182] DEBUG: Uploading base/1/6112 2023-08-24 03:39:54,723 [14182] DEBUG: Uploading base/1/2681 2023-08-24 03:39:54,724 [14182] DEBUG: Uploading base/1/2662 2023-08-24 03:39:54,724 [14182] DEBUG: Uploading base/1/13385_vm 2023-08-24 03:39:54,725 [14182] DEBUG: Uploading base/1/3394_vm 2023-08-24 03:39:54,725 [14182] DEBUG: Uploading base/1/3164 2023-08-24 03:39:54,726 [14182] DEBUG: Uploading base/1/6106 2023-08-24 03:39:54,726 [14182] DEBUG: Uploading base/1/2610 2023-08-24 03:39:54,727 [14182] DEBUG: Uploading base/4/4160 2023-08-24 03:39:54,727 [14182] DEBUG: Uploading base/4/3256 2023-08-24 03:39:54,728 [14182] DEBUG: Uploading base/4/2607_fsm 2023-08-24 03:39:54,728 [14182] DEBUG: Uploading base/4/2603_vm 2023-08-24 03:39:54,729 [14182] DEBUG: Uploading base/4/3439 2023-08-24 03:39:54,729 [14182] DEBUG: Uploading base/4/2687 2023-08-24 03:39:54,730 [14182] DEBUG: Uploading base/4/3350 2023-08-24 03:39:54,730 [14182] DEBUG: Uploading base/4/2835 2023-08-24 03:39:54,730 [14182] DEBUG: Uploading base/4/2682 2023-08-24 03:39:54,731 [14182] DEBUG: Uploading base/4/2679 2023-08-24 03:39:54,731 [14182] DEBUG: Uploading base/4/4146 2023-08-24 03:39:54,732 [14182] DEBUG: Uploading base/4/548 2023-08-24 03:39:54,732 [14182] DEBUG: Uploading base/4/175 2023-08-24 03:39:54,733 [14182] DEBUG: Uploading base/4/2337 2023-08-24 03:39:54,734 [14182] DEBUG: Uploading base/4/1249_vm 2023-08-24 03:39:54,734 [14182] DEBUG: Uploading base/4/3576 2023-08-24 03:39:54,734 [14182] DEBUG: Uploading base/4/3455 2023-08-24 03:39:54,735 [14182] DEBUG: Uploading base/4/6102 2023-08-24 03:39:54,735 [14182] DEBUG: Uploading base/4/2612_fsm 2023-08-24 03:39:54,735 [14182] DEBUG: Uploading base/4/3381 2023-08-24 03:39:54,736 [14182] DEBUG: Uploading base/4/4166 2023-08-24 03:39:54,736 [14182] DEBUG: Uploading base/4/6229 2023-08-24 03:39:54,737 [14182] DEBUG: Uploading base/4/3380 2023-08-24 03:39:54,737 [14182] DEBUG: Uploading base/4/2579 2023-08-24 03:39:54,738 [14182] DEBUG: Uploading base/4/2683 2023-08-24 03:39:54,738 [14182] DEBUG: Uploading base/4/2600_fsm 2023-08-24 03:39:54,739 [14182] DEBUG: Uploading base/4/13388 2023-08-24 03:39:54,739 [14182] DEBUG: Uploading base/4/3541_fsm 2023-08-24 03:39:54,740 [14182] DEBUG: Uploading base/4/4165 2023-08-24 03:39:54,740 [14182] DEBUG: Uploading 
base/4/2618 2023-08-24 03:39:54,741 [14182] DEBUG: Uploading base/4/2608_fsm 2023-08-24 03:39:54,742 [14182] DEBUG: Uploading base/4/3602_fsm 2023-08-24 03:39:54,742 [14182] DEBUG: Uploading base/4/2667 2023-08-24 03:39:54,742 [14182] DEBUG: Uploading base/4/4159 2023-08-24 03:39:54,743 [14182] DEBUG: Uploading base/4/13380_fsm 2023-08-24 03:39:54,743 [14182] DEBUG: Uploading base/4/3601 2023-08-24 03:39:54,744 [14182] DEBUG: Uploading base/4/6110 2023-08-24 03:39:54,744 [14182] DEBUG: Uploading base/4/3118 2023-08-24 03:39:54,744 [14182] DEBUG: Uploading base/4/2609 2023-08-24 03:39:54,746 [14182] DEBUG: Uploading base/4/1247_fsm 2023-08-24 03:39:54,746 [14182] DEBUG: Uploading base/4/4170 2023-08-24 03:39:54,746 [14182] DEBUG: Uploading base/4/3468 2023-08-24 03:39:54,747 [14182] DEBUG: Uploading base/4/3541_vm 2023-08-24 03:39:54,747 [14182] DEBUG: Uploading base/4/4169 2023-08-24 03:39:54,747 [14182] DEBUG: Uploading base/4/4150 2023-08-24 03:39:54,748 [14182] DEBUG: Uploading base/4/2612 2023-08-24 03:39:54,748 [14182] DEBUG: Uploading base/4/2691 2023-08-24 03:39:54,749 [14182] DEBUG: Uploading base/4/2702 2023-08-24 03:39:54,750 [14182] DEBUG: Uploading base/4/2839 2023-08-24 03:39:54,750 [14182] DEBUG: Uploading base/4/1247_vm 2023-08-24 03:39:54,750 [14182] DEBUG: Uploading base/4/3602 2023-08-24 03:39:54,751 [14182] DEBUG: Uploading base/4/2836 2023-08-24 03:39:54,751 [14182] DEBUG: Uploading base/4/4171 2023-08-24 03:39:54,751 [14182] DEBUG: Uploading base/4/3430 2023-08-24 03:39:54,752 [14182] DEBUG: Uploading base/4/2618_vm 2023-08-24 03:39:54,752 [14182] DEBUG: Uploading base/4/2689 2023-08-24 03:39:54,753 [14182] DEBUG: Uploading base/4/3119 2023-08-24 03:39:54,753 [14182] DEBUG: Uploading base/4/3603_vm 2023-08-24 03:39:54,754 [14182] DEBUG: Uploading base/4/4151 2023-08-24 03:39:54,754 [14182] DEBUG: Uploading base/4/13385_fsm 2023-08-24 03:39:54,754 [14182] DEBUG: Uploading base/4/1255 2023-08-24 03:39:54,756 [14182] DEBUG: Uploading base/4/2618_fsm 2023-08-24 03:39:54,757 [14182] DEBUG: Uploading base/4/2336 2023-08-24 03:39:54,757 [14182] DEBUG: Uploading base/4/4153 2023-08-24 03:39:54,758 [14182] DEBUG: Uploading base/4/3600 2023-08-24 03:39:54,758 [14182] DEBUG: Uploading base/4/2753_vm 2023-08-24 03:39:54,759 [14182] DEBUG: Uploading base/4/2602 2023-08-24 03:39:54,759 [14182] DEBUG: Uploading base/4/2675 2023-08-24 03:39:54,760 [14182] DEBUG: Uploading base/4/2619 2023-08-24 03:39:54,761 [14182] DEBUG: Uploading base/4/6111 2023-08-24 03:39:54,761 [14182] DEBUG: Uploading base/4/2678 2023-08-24 03:39:54,762 [14182] DEBUG: Uploading base/4/6117 2023-08-24 03:39:54,762 [14182] DEBUG: Uploading base/4/2684 2023-08-24 03:39:54,762 [14182] DEBUG: Uploading base/4/2673 2023-08-24 03:39:54,763 [14182] DEBUG: Uploading base/4/2613 2023-08-24 03:39:54,763 [14182] DEBUG: Uploading base/4/2228 2023-08-24 03:39:54,764 [14182] DEBUG: Uploading base/4/1255_vm 2023-08-24 03:39:54,764 [14182] DEBUG: Uploading base/4/2836_fsm 2023-08-24 03:39:54,765 [14182] DEBUG: Uploading base/4/3501 2023-08-24 03:39:54,765 [14182] DEBUG: Uploading base/4/2753_fsm 2023-08-24 03:39:54,765 [14182] DEBUG: Uploading base/4/2600 2023-08-24 03:39:54,766 [14182] DEBUG: Uploading base/4/113 2023-08-24 03:39:54,766 [14182] DEBUG: Uploading base/4/2688 2023-08-24 03:39:54,767 [14182] DEBUG: Uploading base/4/1259_fsm 2023-08-24 03:39:54,767 [14182] DEBUG: Uploading base/4/2692 2023-08-24 03:39:54,768 [14182] DEBUG: Uploading base/4/6228 2023-08-24 03:39:54,768 [14182] DEBUG: Uploading base/4/2328 2023-08-24 
03:39:54,768 [14182] DEBUG: Uploading base/4/2602_fsm 2023-08-24 03:39:54,769 [14182] DEBUG: Uploading base/4/3456_vm 2023-08-24 03:39:54,769 [14182] DEBUG: Uploading base/4/3600_vm 2023-08-24 03:39:54,770 [14182] DEBUG: Uploading base/4/2755 2023-08-24 03:39:54,770 [14182] DEBUG: Uploading base/4/3604 2023-08-24 03:39:54,771 [14182] DEBUG: Uploading base/4/2620 2023-08-24 03:39:54,772 [14182] DEBUG: Uploading base/4/2651 2023-08-24 03:39:54,772 [14182] DEBUG: Uploading base/4/1417 2023-08-24 03:39:54,772 [14182] DEBUG: Uploading base/4/2836_vm 2023-08-24 03:39:54,773 [14182] DEBUG: Uploading base/4/2699 2023-08-24 03:39:54,774 [14182] DEBUG: Uploading base/4/4145 2023-08-24 03:39:54,774 [14182] DEBUG: Uploading base/4/3429 2023-08-24 03:39:54,775 [14182] DEBUG: Uploading base/4/826 2023-08-24 03:39:54,776 [14182] DEBUG: Uploading base/4/3502 2023-08-24 03:39:54,776 [14182] DEBUG: Uploading base/4/4156 2023-08-24 03:39:54,777 [14182] DEBUG: Uploading base/4/2606 2023-08-24 03:39:54,777 [14182] DEBUG: Uploading base/4/3601_vm 2023-08-24 03:39:54,778 [14182] DEBUG: Uploading base/4/2611 2023-08-24 03:39:54,778 [14182] DEBUG: Uploading base/4/4172 2023-08-24 03:39:54,778 [14182] DEBUG: Uploading base/4/13385 2023-08-24 03:39:54,779 [14182] DEBUG: Uploading base/4/13380 2023-08-24 03:39:54,779 [14182] DEBUG: Uploading base/4/3600_fsm 2023-08-24 03:39:54,780 [14182] DEBUG: Uploading base/4/2605 2023-08-24 03:39:54,780 [14182] DEBUG: Uploading base/4/2603_fsm 2023-08-24 03:39:54,781 [14182] DEBUG: Uploading base/4/2616_fsm 2023-08-24 03:39:54,781 [14182] DEBUG: Uploading base/4/3766 2023-08-24 03:39:54,782 [14182] DEBUG: Uploading base/4/2602_vm 2023-08-24 03:39:54,782 [14182] DEBUG: Uploading base/4/2838_fsm 2023-08-24 03:39:54,782 [14182] DEBUG: Uploading base/4/4144 2023-08-24 03:39:54,783 [14182] DEBUG: Uploading base/4/3608 2023-08-24 03:39:54,783 [14182] DEBUG: Uploading base/4/2600_vm 2023-08-24 03:39:54,784 [14182] DEBUG: Uploading base/4/2616 2023-08-24 03:39:54,784 [14182] DEBUG: Uploading base/4/3440 2023-08-24 03:39:54,785 [14182] DEBUG: Uploading base/4/2612_vm 2023-08-24 03:39:54,785 [14182] DEBUG: Uploading base/4/13374 2023-08-24 03:39:54,786 [14182] DEBUG: Uploading base/4/2831 2023-08-24 03:39:54,786 [14182] DEBUG: Uploading base/4/3456_fsm 2023-08-24 03:39:54,787 [14182] DEBUG: Uploading base/4/2608_vm 2023-08-24 03:39:54,787 [14182] DEBUG: Uploading base/4/2605_vm 2023-08-24 03:39:54,788 [14182] DEBUG: Uploading base/4/549 2023-08-24 03:39:54,788 [14182] DEBUG: Uploading base/4/2840 2023-08-24 03:39:54,789 [14182] DEBUG: Uploading base/4/4152 2023-08-24 03:39:54,789 [14182] DEBUG: Uploading base/4/2658 2023-08-24 03:39:54,790 [14182] DEBUG: Uploading base/4/13384 2023-08-24 03:39:54,790 [14182] DEBUG: Uploading base/4/174 2023-08-24 03:39:54,791 [14182] DEBUG: Uploading base/4/2686 2023-08-24 03:39:54,791 [14182] DEBUG: Uploading base/4/2660 2023-08-24 03:39:54,792 [14182] DEBUG: Uploading base/4/3601_fsm 2023-08-24 03:39:54,792 [14182] DEBUG: Uploading base/4/2617_fsm 2023-08-24 03:39:54,793 [14182] DEBUG: Uploading base/4/4173 2023-08-24 03:39:54,793 [14182] DEBUG: Uploading base/4/6239 2023-08-24 03:39:54,793 [14182] DEBUG: Uploading base/4/2837 2023-08-24 03:39:54,794 [14182] DEBUG: Uploading base/4/2690 2023-08-24 03:39:54,795 [14182] DEBUG: Uploading base/4/3534 2023-08-24 03:39:54,795 [14182] DEBUG: Uploading base/4/2665 2023-08-24 03:39:54,795 [14182] DEBUG: Uploading base/4/2619_fsm 2023-08-24 03:39:54,796 [14182] DEBUG: Uploading base/4/2601 2023-08-24 03:39:54,796 
[14182] DEBUG: Uploading base/4/3080 2023-08-24 03:39:54,797 [14182] DEBUG: Uploading base/4/2995 2023-08-24 03:39:54,797 [14182] DEBUG: Uploading base/4/13375_fsm 2023-08-24 03:39:54,797 [14182] DEBUG: Uploading base/4/13379 2023-08-24 03:39:54,798 [14182] DEBUG: Uploading base/4/2669 2023-08-24 03:39:54,798 [14182] DEBUG: Uploading base/4/2615 2023-08-24 03:39:54,799 [14182] DEBUG: Uploading base/4/3258 2023-08-24 03:39:54,799 [14182] DEBUG: Uploading base/4/6237 2023-08-24 03:39:54,799 [14182] DEBUG: Uploading base/4/2996 2023-08-24 03:39:54,800 [14182] DEBUG: Uploading base/4/4157 2023-08-24 03:39:54,800 [14182] DEBUG: Uploading base/4/3257 2023-08-24 03:39:54,801 [14182] DEBUG: Uploading base/4/1249 2023-08-24 03:39:54,802 [14182] DEBUG: Uploading base/4/4148 2023-08-24 03:39:54,803 [14182] DEBUG: Uploading base/4/3466 2023-08-24 03:39:54,803 [14182] DEBUG: Uploading base/4/2601_vm 2023-08-24 03:39:54,804 [14182] DEBUG: Uploading base/4/4158 2023-08-24 03:39:54,804 [14182] DEBUG: Uploading base/4/4168 2023-08-24 03:39:54,805 [14182] DEBUG: Uploading base/4/4174 2023-08-24 03:39:54,805 [14182] DEBUG: Uploading base/4/2663 2023-08-24 03:39:54,805 [14182] DEBUG: Uploading base/4/3712 2023-08-24 03:39:54,806 [14182] DEBUG: Uploading base/4/2604 2023-08-24 03:39:54,807 [14182] DEBUG: Uploading base/4/3606 2023-08-24 03:39:54,807 [14182] DEBUG: Uploading base/4/2615_fsm 2023-08-24 03:39:54,808 [14182] DEBUG: Uploading base/4/pg_filenode.map 2023-08-24 03:39:54,808 [14182] DEBUG: Uploading base/4/2832 2023-08-24 03:39:54,809 [14182] DEBUG: Uploading base/4/3597 2023-08-24 03:39:54,809 [14182] DEBUG: Uploading base/4/2656 2023-08-24 03:39:54,810 [14182] DEBUG: Uploading base/4/2607 2023-08-24 03:39:54,810 [14182] DEBUG: Uploading base/4/13375 2023-08-24 03:39:54,811 [14182] DEBUG: Uploading base/4/3596 2023-08-24 03:39:54,811 [14182] DEBUG: Uploading base/4/1259 2023-08-24 03:39:54,812 [14182] DEBUG: Uploading base/4/2754 2023-08-24 03:39:54,813 [14182] DEBUG: Uploading base/4/2756 2023-08-24 03:39:54,813 [14182] DEBUG: Uploading base/4/3081 2023-08-24 03:39:54,814 [14182] DEBUG: Uploading base/4/3598 2023-08-24 03:39:54,814 [14182] DEBUG: Uploading base/4/3574 2023-08-24 03:39:54,815 [14182] DEBUG: Uploading base/4/3607 2023-08-24 03:39:54,815 [14182] DEBUG: Uploading base/4/2670 2023-08-24 03:39:54,816 [14182] DEBUG: Uploading base/4/13378 2023-08-24 03:39:54,816 [14182] DEBUG: Uploading base/4/13370_fsm 2023-08-24 03:39:54,817 [14182] DEBUG: Uploading base/4/3431 2023-08-24 03:39:54,817 [14182] DEBUG: Uploading base/4/2609_fsm 2023-08-24 03:39:54,818 [14182] DEBUG: Uploading base/4/2606_vm 2023-08-24 03:39:54,818 [14182] DEBUG: Uploading base/4/4143 2023-08-24 03:39:54,818 [14182] DEBUG: Uploading base/4/2610_fsm 2023-08-24 03:39:54,819 [14182] DEBUG: Uploading base/4/2617_vm 2023-08-24 03:39:54,819 [14182] DEBUG: Uploading base/4/2606_fsm 2023-08-24 03:39:54,820 [14182] DEBUG: Uploading base/4/827 2023-08-24 03:39:54,820 [14182] DEBUG: Uploading base/4/3602_vm 2023-08-24 03:39:54,821 [14182] DEBUG: Uploading base/4/2601_fsm 2023-08-24 03:39:54,821 [14182] DEBUG: Uploading base/4/3764 2023-08-24 03:39:54,822 [14182] DEBUG: Uploading base/4/3603_fsm 2023-08-24 03:39:54,822 [14182] DEBUG: Uploading base/4/3767 2023-08-24 03:39:54,823 [14182] DEBUG: Uploading base/4/2830 2023-08-24 03:39:54,823 [14182] DEBUG: Uploading base/4/2838 2023-08-24 03:39:54,826 [14182] DEBUG: Uploading base/4/3503 2023-08-24 03:39:54,827 [14182] DEBUG: Uploading base/4/2187 2023-08-24 03:39:54,827 [14182] DEBUG: 
Uploading base/4/2680 2023-08-24 03:39:54,828 [14182] DEBUG: Uploading base/4/13370_vm 2023-08-24 03:39:54,830 [14182] DEBUG: Uploading base/4/3079_fsm 2023-08-24 03:39:54,830 [14182] DEBUG: Uploading base/4/2701 2023-08-24 03:39:54,830 [14182] DEBUG: Uploading base/4/13370 2023-08-24 03:39:54,832 [14182] DEBUG: Uploading base/4/2666 2023-08-24 03:39:54,832 [14182] DEBUG: Uploading base/4/2704 2023-08-24 03:39:54,833 [14182] DEBUG: Uploading base/4/3764_vm 2023-08-24 03:39:54,833 [14182] DEBUG: Uploading base/4/2841 2023-08-24 03:39:54,834 [14182] DEBUG: Uploading base/4/3379 2023-08-24 03:39:54,835 [14182] DEBUG: Uploading base/4/3605 2023-08-24 03:39:54,835 [14182] DEBUG: Uploading base/4/1247 2023-08-24 03:39:54,837 [14182] DEBUG: Uploading base/4/3351 2023-08-24 03:39:54,837 [14182] DEBUG: Uploading base/4/6238 2023-08-24 03:39:54,837 [14182] DEBUG: Uploading base/4/2685 2023-08-24 03:39:54,838 [14182] DEBUG: Uploading base/4/3575 2023-08-24 03:39:54,838 [14182] DEBUG: Uploading base/4/2616_vm 2023-08-24 03:39:54,839 [14182] DEBUG: Uploading base/4/3542 2023-08-24 03:39:54,840 [14182] DEBUG: Uploading base/4/2653 2023-08-24 03:39:54,840 [14182] DEBUG: Uploading base/4/3394_fsm 2023-08-24 03:39:54,841 [14182] DEBUG: Uploading base/4/2657 2023-08-24 03:39:54,841 [14182] DEBUG: Uploading base/4/3395 2023-08-24 03:39:54,842 [14182] DEBUG: Uploading base/4/2605_fsm 2023-08-24 03:39:54,843 [14182] DEBUG: Uploading base/4/112 2023-08-24 03:39:54,843 [14182] DEBUG: Uploading base/4/4155 2023-08-24 03:39:54,844 [14182] DEBUG: Uploading base/4/2224 2023-08-24 03:39:54,844 [14182] DEBUG: Uploading base/4/4167 2023-08-24 03:39:54,845 [14182] DEBUG: Uploading base/4/2834 2023-08-24 03:39:54,845 [14182] DEBUG: Uploading base/4/2615_vm 2023-08-24 03:39:54,846 [14182] DEBUG: Uploading base/4/2608 2023-08-24 03:39:54,848 [14182] DEBUG: Uploading base/4/2838_vm 2023-08-24 03:39:54,848 [14182] DEBUG: Uploading base/4/2654 2023-08-24 03:39:54,849 [14182] DEBUG: Uploading base/4/4149 2023-08-24 03:39:54,849 [14182] DEBUG: Uploading base/4/3599 2023-08-24 03:39:54,850 [14182] DEBUG: Uploading base/4/2757 2023-08-24 03:39:54,850 [14182] DEBUG: Uploading base/4/6176 2023-08-24 03:39:54,851 [14182] DEBUG: Uploading base/4/5002 2023-08-24 03:39:54,851 [14182] DEBUG: Uploading base/4/2664 2023-08-24 03:39:54,851 [14182] DEBUG: Uploading base/4/2650 2023-08-24 03:39:54,852 [14182] DEBUG: Uploading base/4/3456 2023-08-24 03:39:54,853 [14182] DEBUG: Uploading base/4/2610_vm 2023-08-24 03:39:54,853 [14182] DEBUG: Uploading base/4/13373 2023-08-24 03:39:54,854 [14182] DEBUG: Uploading base/4/2840_vm 2023-08-24 03:39:54,854 [14182] DEBUG: Uploading base/4/2609_vm 2023-08-24 03:39:54,855 [14182] DEBUG: Uploading base/4/4164 2023-08-24 03:39:54,855 [14182] DEBUG: Uploading base/4/2668 2023-08-24 03:39:54,856 [14182] DEBUG: Uploading base/4/4163 2023-08-24 03:39:54,857 [14182] DEBUG: Uploading base/4/2696 2023-08-24 03:39:54,857 [14182] DEBUG: Uploading base/4/2674 2023-08-24 03:39:54,858 [14182] DEBUG: Uploading base/4/6113 2023-08-24 03:39:54,858 [14182] DEBUG: Uploading base/4/4147 2023-08-24 03:39:54,859 [14182] DEBUG: Uploading base/4/1249_fsm 2023-08-24 03:39:54,859 [14182] DEBUG: Uploading base/4/3764_fsm 2023-08-24 03:39:54,860 [14182] DEBUG: Uploading base/4/1259_vm 2023-08-24 03:39:54,861 [14182] DEBUG: Uploading base/4/3997 2023-08-24 03:39:54,861 [14182] DEBUG: Uploading base/4/3079 2023-08-24 03:39:54,861 [14182] DEBUG: Uploading base/4/13375_vm 2023-08-24 03:39:54,862 [14182] DEBUG: Uploading base/4/6175 
2023-08-24 03:39:54,863 [14182] DEBUG: Uploading base/4/13380_vm 2023-08-24 03:39:54,863 [14182] DEBUG: Uploading base/4/PG_VERSION 2023-08-24 03:39:54,864 [14182] DEBUG: Uploading base/4/3609 2023-08-24 03:39:54,864 [14182] DEBUG: Uploading base/4/3603 2023-08-24 03:39:54,865 [14182] DEBUG: Uploading base/4/3394 2023-08-24 03:39:54,866 [14182] DEBUG: Uploading base/4/2703 2023-08-24 03:39:54,866 [14182] DEBUG: Uploading base/4/2607_vm 2023-08-24 03:39:54,867 [14182] DEBUG: Uploading base/4/2652 2023-08-24 03:39:54,867 [14182] DEBUG: Uploading base/4/828 2023-08-24 03:39:54,868 [14182] DEBUG: Uploading base/4/3541 2023-08-24 03:39:54,868 [14182] DEBUG: Uploading base/4/6116 2023-08-24 03:39:54,869 [14182] DEBUG: Uploading base/4/2693 2023-08-24 03:39:54,870 [14182] DEBUG: Uploading base/4/2659 2023-08-24 03:39:54,871 [14182] DEBUG: Uploading base/4/3467 2023-08-24 03:39:54,871 [14182] DEBUG: Uploading base/4/2840_fsm 2023-08-24 03:39:54,871 [14182] DEBUG: Uploading base/4/13383 2023-08-24 03:39:54,872 [14182] DEBUG: Uploading base/4/2661 2023-08-24 03:39:54,873 [14182] DEBUG: Uploading base/4/4154 2023-08-24 03:39:54,873 [14182] DEBUG: Uploading base/4/2603 2023-08-24 03:39:54,874 [14182] DEBUG: Uploading base/4/1418 2023-08-24 03:39:54,874 [14182] DEBUG: Uploading base/4/3085 2023-08-24 03:39:54,875 [14182] DEBUG: Uploading base/4/2617 2023-08-24 03:39:54,876 [14182] DEBUG: Uploading base/4/2753 2023-08-24 03:39:54,876 [14182] DEBUG: Uploading base/4/3079_vm 2023-08-24 03:39:54,877 [14182] DEBUG: Uploading base/4/1255_fsm 2023-08-24 03:39:54,878 [14182] DEBUG: Uploading base/4/6104 2023-08-24 03:39:54,878 [14182] DEBUG: Uploading base/4/2619_vm 2023-08-24 03:39:54,878 [14182] DEBUG: Uploading base/4/13389 2023-08-24 03:39:54,879 [14182] DEBUG: Uploading base/4/3433 2023-08-24 03:39:54,880 [14182] DEBUG: Uploading base/4/2655 2023-08-24 03:39:54,881 [14182] DEBUG: Uploading base/4/2833 2023-08-24 03:39:54,881 [14182] DEBUG: Uploading base/4/6112 2023-08-24 03:39:54,882 [14182] DEBUG: Uploading base/4/2681 2023-08-24 03:39:54,882 [14182] DEBUG: Uploading base/4/2662 2023-08-24 03:39:54,883 [14182] DEBUG: Uploading base/4/13385_vm 2023-08-24 03:39:54,883 [14182] DEBUG: Uploading base/4/3394_vm 2023-08-24 03:39:54,884 [14182] DEBUG: Uploading base/4/3164 2023-08-24 03:39:54,884 [14182] DEBUG: Uploading base/4/6106 2023-08-24 03:39:54,884 [14182] DEBUG: Uploading base/4/2610 2023-08-24 03:39:54,885 [14182] DEBUG: Uploading base/16385/4160 2023-08-24 03:39:54,886 [14182] DEBUG: Uploading base/16385/3256 2023-08-24 03:39:54,886 [14182] DEBUG: Uploading base/16385/2607_fsm 2023-08-24 03:39:54,887 [14182] DEBUG: Uploading base/16385/2603_vm 2023-08-24 03:39:54,887 [14182] DEBUG: Uploading base/16385/3439 2023-08-24 03:39:54,888 [14182] DEBUG: Uploading base/16385/2687 2023-08-24 03:39:54,888 [14182] DEBUG: Uploading base/16385/3350 2023-08-24 03:39:54,888 [14182] DEBUG: Uploading base/16385/2835 2023-08-24 03:39:54,889 [14182] DEBUG: Uploading base/16385/2682 2023-08-24 03:39:54,889 [14182] DEBUG: Uploading base/16385/2679 2023-08-24 03:39:54,890 [14182] DEBUG: Uploading base/16385/4146 2023-08-24 03:39:54,890 [14182] DEBUG: Uploading base/16385/548 2023-08-24 03:39:54,891 [14182] DEBUG: Uploading base/16385/175 2023-08-24 03:39:54,891 [14182] DEBUG: Uploading base/16385/2337 2023-08-24 03:39:54,892 [14182] DEBUG: Uploading base/16385/1249_vm 2023-08-24 03:39:54,892 [14182] DEBUG: Uploading base/16385/3576 2023-08-24 03:39:54,892 [14182] DEBUG: Uploading base/16385/3455 2023-08-24 03:39:54,893 
[14182] DEBUG: Uploading base/16385/6102 2023-08-24 03:39:54,893 [14182] DEBUG: Uploading base/16385/2612_fsm 2023-08-24 03:39:54,894 [14182] DEBUG: Uploading base/16385/3381 2023-08-24 03:39:54,894 [14182] DEBUG: Uploading base/16385/4166 2023-08-24 03:39:54,895 [14182] DEBUG: Uploading base/16385/6229 2023-08-24 03:39:54,895 [14182] DEBUG: Uploading base/16385/3380 2023-08-24 03:39:54,896 [14182] DEBUG: Uploading base/16385/2579 2023-08-24 03:39:54,896 [14182] DEBUG: Uploading base/16385/2683 2023-08-24 03:39:54,896 [14182] DEBUG: Uploading base/16385/2600_fsm 2023-08-24 03:39:54,897 [14182] DEBUG: Uploading base/16385/13388 2023-08-24 03:39:54,897 [14182] DEBUG: Uploading base/16385/3541_fsm 2023-08-24 03:39:54,898 [14182] DEBUG: Uploading base/16385/4165 2023-08-24 03:39:54,898 [14182] DEBUG: Uploading base/16385/2618 2023-08-24 03:39:54,899 [14182] DEBUG: Uploading base/16385/2608_fsm 2023-08-24 03:39:54,900 [14182] DEBUG: Uploading base/16385/3602_fsm 2023-08-24 03:39:54,900 [14182] DEBUG: Uploading base/16385/2667 2023-08-24 03:39:54,900 [14182] DEBUG: Uploading base/16385/4159 2023-08-24 03:39:54,901 [14182] DEBUG: Uploading base/16385/13380_fsm 2023-08-24 03:39:54,901 [14182] DEBUG: Uploading base/16385/3601 2023-08-24 03:39:54,902 [14182] DEBUG: Uploading base/16385/6110 2023-08-24 03:39:54,902 [14182] DEBUG: Uploading base/16385/3118 2023-08-24 03:39:54,902 [14182] DEBUG: Uploading base/16385/2609 2023-08-24 03:39:54,903 [14182] DEBUG: Uploading base/16385/1247_fsm 2023-08-24 03:39:54,904 [14182] DEBUG: Uploading base/16385/4170 2023-08-24 03:39:54,904 [14182] DEBUG: Uploading base/16385/3468 2023-08-24 03:39:54,904 [14182] DEBUG: Uploading base/16385/3541_vm 2023-08-24 03:39:54,905 [14182] DEBUG: Uploading base/16385/4169 2023-08-24 03:39:54,905 [14182] DEBUG: Uploading base/16385/4150 2023-08-24 03:39:54,906 [14182] DEBUG: Uploading base/16385/2612 2023-08-24 03:39:54,906 [14182] DEBUG: Uploading base/16385/2691 2023-08-24 03:39:54,907 [14182] DEBUG: Uploading base/16385/2702 2023-08-24 03:39:54,907 [14182] DEBUG: Uploading base/16385/2839 2023-08-24 03:39:54,908 [14182] DEBUG: Uploading base/16385/1247_vm 2023-08-24 03:39:54,908 [14182] DEBUG: Uploading base/16385/3602 2023-08-24 03:39:54,908 [14182] DEBUG: Uploading base/16385/2836 2023-08-24 03:39:54,909 [14182] DEBUG: Uploading base/16385/4171 2023-08-24 03:39:54,909 [14182] DEBUG: Uploading base/16385/3430 2023-08-24 03:39:54,910 [14182] DEBUG: Uploading base/16385/2618_vm 2023-08-24 03:39:54,910 [14182] DEBUG: Uploading base/16385/2689 2023-08-24 03:39:54,911 [14182] DEBUG: Uploading base/16385/3119 2023-08-24 03:39:54,911 [14182] DEBUG: Uploading base/16385/3603_vm 2023-08-24 03:39:54,912 [14182] DEBUG: Uploading base/16385/4151 2023-08-24 03:39:54,912 [14182] DEBUG: Uploading base/16385/13385_fsm 2023-08-24 03:39:54,913 [14182] DEBUG: Uploading base/16385/1255 2023-08-24 03:39:54,915 [14182] DEBUG: Uploading base/16385/2618_fsm 2023-08-24 03:39:54,916 [14182] DEBUG: Uploading base/16385/2336 2023-08-24 03:39:54,916 [14182] DEBUG: Uploading base/16385/4153 2023-08-24 03:39:54,917 [14182] DEBUG: Uploading base/16385/3600 2023-08-24 03:39:54,917 [14182] DEBUG: Uploading base/16385/2753_vm 2023-08-24 03:39:54,917 [14182] DEBUG: Uploading base/16385/2602 2023-08-24 03:39:54,918 [14182] DEBUG: Uploading base/16385/2675 2023-08-24 03:39:54,919 [14182] DEBUG: Uploading base/16385/2619 2023-08-24 03:39:54,920 [14182] DEBUG: Uploading base/16385/6111 2023-08-24 03:39:54,921 [14182] DEBUG: Uploading base/16385/2678 2023-08-24 
03:39:54,921 [14182] DEBUG: Uploading base/16385/6117 2023-08-24 03:39:54,922 [14182] DEBUG: Uploading base/16385/2684 2023-08-24 03:39:54,923 [14182] DEBUG: Uploading base/16385/2673 2023-08-24 03:39:54,923 [14182] DEBUG: Uploading base/16385/2613 2023-08-24 03:39:54,924 [14182] DEBUG: Uploading base/16385/2228 2023-08-24 03:39:54,924 [14182] DEBUG: Uploading base/16385/1255_vm 2023-08-24 03:39:54,924 [14182] DEBUG: Uploading base/16385/2836_fsm 2023-08-24 03:39:54,925 [14182] DEBUG: Uploading base/16385/3501 2023-08-24 03:39:54,926 [14182] DEBUG: Uploading base/16385/2753_fsm 2023-08-24 03:39:54,926 [14182] DEBUG: Uploading base/16385/2600 2023-08-24 03:39:54,926 [14182] DEBUG: Uploading base/16385/113 2023-08-24 03:39:54,927 [14182] DEBUG: Uploading base/16385/2688 2023-08-24 03:39:54,927 [14182] DEBUG: Uploading base/16385/1259_fsm 2023-08-24 03:39:54,928 [14182] DEBUG: Uploading base/16385/2692 2023-08-24 03:39:54,929 [14182] DEBUG: Uploading base/16385/6228 2023-08-24 03:39:54,929 [14182] DEBUG: Uploading base/16385/2328 2023-08-24 03:39:54,930 [14182] DEBUG: Uploading base/16385/2602_fsm 2023-08-24 03:39:54,930 [14182] DEBUG: Uploading base/16385/3456_vm 2023-08-24 03:39:54,930 [14182] DEBUG: Uploading base/16385/3600_vm 2023-08-24 03:39:54,931 [14182] DEBUG: Uploading base/16385/2755 2023-08-24 03:39:54,932 [14182] DEBUG: Uploading base/16385/3604 2023-08-24 03:39:54,932 [14182] DEBUG: Uploading base/16385/2620 2023-08-24 03:39:54,932 [14182] DEBUG: Uploading base/16385/2651 2023-08-24 03:39:54,933 [14182] DEBUG: Uploading base/16385/1417 2023-08-24 03:39:54,933 [14182] DEBUG: Uploading base/16385/2836_vm 2023-08-24 03:39:54,934 [14182] DEBUG: Uploading base/16385/2699 2023-08-24 03:39:54,934 [14182] DEBUG: Uploading base/16385/4145 2023-08-24 03:39:54,934 [14182] DEBUG: Uploading base/16385/3429 2023-08-24 03:39:54,935 [14182] DEBUG: Uploading base/16385/826 2023-08-24 03:39:54,935 [14182] DEBUG: Uploading base/16385/3502 2023-08-24 03:39:54,935 [14182] DEBUG: Uploading base/16385/4156 2023-08-24 03:39:54,936 [14182] DEBUG: Uploading base/16385/2606 2023-08-24 03:39:54,936 [14182] DEBUG: Uploading base/16385/3601_vm 2023-08-24 03:39:54,937 [14182] DEBUG: Uploading base/16385/2611 2023-08-24 03:39:54,937 [14182] DEBUG: Uploading base/16385/4172 2023-08-24 03:39:54,937 [14182] DEBUG: Uploading base/16385/13385 2023-08-24 03:39:54,938 [14182] DEBUG: Uploading base/16385/13380 2023-08-24 03:39:54,939 [14182] DEBUG: Uploading base/16385/3600_fsm 2023-08-24 03:39:54,939 [14182] DEBUG: Uploading base/16385/2605 2023-08-24 03:39:54,940 [14182] DEBUG: Uploading base/16385/2603_fsm 2023-08-24 03:39:54,940 [14182] DEBUG: Uploading base/16385/2616_fsm 2023-08-24 03:39:54,940 [14182] DEBUG: Uploading base/16385/3766 2023-08-24 03:39:54,941 [14182] DEBUG: Uploading base/16385/2602_vm 2023-08-24 03:39:54,941 [14182] DEBUG: Uploading base/16385/2838_fsm 2023-08-24 03:39:54,942 [14182] DEBUG: Uploading base/16385/4144 2023-08-24 03:39:54,942 [14182] DEBUG: Uploading base/16385/3608 2023-08-24 03:39:54,943 [14182] DEBUG: Uploading base/16385/2600_vm 2023-08-24 03:39:54,943 [14182] DEBUG: Uploading base/16385/2616 2023-08-24 03:39:54,944 [14182] DEBUG: Uploading base/16385/3440 2023-08-24 03:39:54,944 [14182] DEBUG: Uploading base/16385/2612_vm 2023-08-24 03:39:54,945 [14182] DEBUG: Uploading base/16385/13374 2023-08-24 03:39:54,945 [14182] DEBUG: Uploading base/16385/2831 2023-08-24 03:39:54,946 [14182] DEBUG: Uploading base/16385/3456_fsm 2023-08-24 03:39:54,946 [14182] DEBUG: Uploading 
base/16385/2608_vm 2023-08-24 03:39:54,947 [14182] DEBUG: Uploading base/16385/2605_vm 2023-08-24 03:39:54,947 [14182] DEBUG: Uploading base/16385/549 2023-08-24 03:39:54,947 [14182] DEBUG: Uploading base/16385/2840 2023-08-24 03:39:54,948 [14182] DEBUG: Uploading base/16385/4152 2023-08-24 03:39:54,948 [14182] DEBUG: Uploading base/16385/2658 2023-08-24 03:39:54,949 [14182] DEBUG: Uploading base/16385/13384 2023-08-24 03:39:54,949 [14182] DEBUG: Uploading base/16385/174 2023-08-24 03:39:54,950 [14182] DEBUG: Uploading base/16385/2686 2023-08-24 03:39:54,950 [14182] DEBUG: Uploading base/16385/2660 2023-08-24 03:39:54,951 [14182] DEBUG: Uploading base/16385/3601_fsm 2023-08-24 03:39:54,951 [14182] DEBUG: Uploading base/16385/2617_fsm 2023-08-24 03:39:54,951 [14182] DEBUG: Uploading base/16385/4173 2023-08-24 03:39:54,952 [14182] DEBUG: Uploading base/16385/6239 2023-08-24 03:39:54,952 [14182] DEBUG: Uploading base/16385/2837 2023-08-24 03:39:54,952 [14182] DEBUG: Uploading base/16385/2690 2023-08-24 03:39:54,953 [14182] DEBUG: Uploading base/16385/3534 2023-08-24 03:39:54,953 [14182] DEBUG: Uploading base/16385/2665 2023-08-24 03:39:54,954 [14182] DEBUG: Uploading base/16385/2619_fsm 2023-08-24 03:39:54,954 [14182] DEBUG: Uploading base/16385/2601 2023-08-24 03:39:54,954 [14182] DEBUG: Uploading base/16385/3080 2023-08-24 03:39:54,955 [14182] DEBUG: Uploading base/16385/2995 2023-08-24 03:39:54,955 [14182] DEBUG: Uploading base/16385/13375_fsm 2023-08-24 03:39:54,956 [14182] DEBUG: Uploading base/16385/13379 2023-08-24 03:39:54,956 [14182] DEBUG: Uploading base/16385/2669 2023-08-24 03:39:54,956 [14182] DEBUG: Uploading base/16385/2615 2023-08-24 03:39:54,957 [14182] DEBUG: Uploading base/16385/3258 2023-08-24 03:39:54,958 [14182] DEBUG: Uploading base/16385/6237 2023-08-24 03:39:54,958 [14182] DEBUG: Uploading base/16385/2996 2023-08-24 03:39:54,958 [14182] DEBUG: Uploading base/16385/4157 2023-08-24 03:39:54,959 [14182] DEBUG: Uploading base/16385/3257 2023-08-24 03:39:54,959 [14182] DEBUG: Uploading base/16385/1249 2023-08-24 03:39:54,960 [14182] DEBUG: Uploading base/16385/4148 2023-08-24 03:39:54,961 [14182] DEBUG: Uploading base/16385/3466 2023-08-24 03:39:54,961 [14182] DEBUG: Uploading base/16385/2601_vm 2023-08-24 03:39:54,961 [14182] DEBUG: Uploading base/16385/4158 2023-08-24 03:39:54,962 [14182] DEBUG: Uploading base/16385/4168 2023-08-24 03:39:54,963 [14182] DEBUG: Uploading base/16385/4174 2023-08-24 03:39:54,963 [14182] DEBUG: Uploading base/16385/2663 2023-08-24 03:39:54,963 [14182] DEBUG: Uploading base/16385/3712 2023-08-24 03:39:54,964 [14182] DEBUG: Uploading base/16385/2604 2023-08-24 03:39:54,964 [14182] DEBUG: Uploading base/16385/3606 2023-08-24 03:39:54,964 [14182] DEBUG: Uploading base/16385/2615_fsm 2023-08-24 03:39:54,965 [14182] DEBUG: Uploading base/16385/pg_filenode.map 2023-08-24 03:39:54,965 [14182] DEBUG: Uploading base/16385/2832 2023-08-24 03:39:54,965 [14182] DEBUG: Uploading base/16385/3597 2023-08-24 03:39:54,966 [14182] DEBUG: Uploading base/16385/2656 2023-08-24 03:39:54,966 [14182] DEBUG: Uploading base/16385/2607 2023-08-24 03:39:54,967 [14182] DEBUG: Uploading base/16385/13375 2023-08-24 03:39:54,967 [14182] DEBUG: Uploading base/16385/3596 2023-08-24 03:39:54,967 [14182] DEBUG: Uploading base/16385/1259 2023-08-24 03:39:54,968 [14182] DEBUG: Uploading base/16385/2754 2023-08-24 03:39:54,968 [14182] DEBUG: Uploading base/16385/2756 2023-08-24 03:39:54,969 [14182] DEBUG: Uploading base/16385/3081 2023-08-24 03:39:54,969 [14182] DEBUG: Uploading 
base/16385/3598 2023-08-24 03:39:54,969 [14182] DEBUG: Uploading base/16385/3574 2023-08-24 03:39:54,970 [14182] DEBUG: Uploading base/16385/3607 2023-08-24 03:39:54,971 [14182] DEBUG: Uploading base/16385/2670 2023-08-24 03:39:54,971 [14182] DEBUG: Uploading base/16385/13378 2023-08-24 03:39:54,971 [14182] DEBUG: Uploading base/16385/13370_fsm 2023-08-24 03:39:54,972 [14182] DEBUG: Uploading base/16385/3431 2023-08-24 03:39:54,973 [14182] DEBUG: Uploading base/16385/2609_fsm 2023-08-24 03:39:54,973 [14182] DEBUG: Uploading base/16385/2606_vm 2023-08-24 03:39:54,973 [14182] DEBUG: Uploading base/16385/4143 2023-08-24 03:39:54,974 [14182] DEBUG: Uploading base/16385/2610_fsm 2023-08-24 03:39:54,974 [14182] DEBUG: Uploading base/16385/2617_vm 2023-08-24 03:39:54,974 [14182] DEBUG: Uploading base/16385/2606_fsm 2023-08-24 03:39:54,975 [14182] DEBUG: Uploading base/16385/827 2023-08-24 03:39:54,976 [14182] DEBUG: Uploading base/16385/3602_vm 2023-08-24 03:39:54,976 [14182] DEBUG: Uploading base/16385/2601_fsm 2023-08-24 03:39:54,977 [14182] DEBUG: Uploading base/16385/3764 2023-08-24 03:39:54,977 [14182] DEBUG: Uploading base/16385/3603_fsm 2023-08-24 03:39:54,978 [14182] DEBUG: Uploading base/16385/3767 2023-08-24 03:39:54,979 [14182] DEBUG: Uploading base/16385/2830 2023-08-24 03:39:54,979 [14182] DEBUG: Uploading base/16385/2838 2023-08-24 03:39:54,980 [14182] DEBUG: Event before-endpoint-resolution.s3: calling handler 2023-08-24 03:39:54,980 [14182] DEBUG: Event before-endpoint-resolution.s3: calling handler > 2023-08-24 03:39:54,981 [14182] DEBUG: Calling endpoint provider with parameters: {'Bucket': 'barmantest', 'Region': '', 'UseFIPS': False, 'UseDualStack': False, 'Endpoint': 'https://.linodeobjects.com', 'ForcePathStyle': True, 'Accelerate': False, 'UseGlobalEndpoint': True, 'DisableMultiRegionAccessPoints': False, 'UseArnRegion': True} 2023-08-24 03:39:54,981 [14182] DEBUG: Endpoint provider result: https://.linodeobjects.com/barmantest 2023-08-24 03:39:54,981 [14182] DEBUG: Selecting from endpoint provider's list of auth schemes: "sigv4". 
User selected auth scheme is: "None"
2023-08-24 03:39:54,981 [14182] DEBUG: Selected auth type "v4" as "v4" with signing context params: {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True}
2023-08-24 03:39:54,981 [14182] DEBUG: Event before-parameter-build.s3.CreateMultipartUpload: calling handler
2023-08-24 03:39:54,981 [14182] DEBUG: Event before-parameter-build.s3.CreateMultipartUpload: calling handler
2023-08-24 03:39:54,981 [14182] DEBUG: Event before-parameter-build.s3.CreateMultipartUpload: calling handler
2023-08-24 03:39:54,981 [14182] DEBUG: Event before-parameter-build.s3.CreateMultipartUpload: calling handler
2023-08-24 03:39:54,981 [14182] DEBUG: Event before-parameter-build.s3.CreateMultipartUpload: calling handler >
2023-08-24 03:39:54,982 [14182] DEBUG: Event before-parameter-build.s3.CreateMultipartUpload: calling handler
2023-08-24 03:39:54,982 [14182] DEBUG: Event before-call.s3.CreateMultipartUpload: calling handler
2023-08-24 03:39:54,982 [14182] DEBUG: Event before-call.s3.CreateMultipartUpload: calling handler
2023-08-24 03:39:54,982 [14182] DEBUG: Event before-call.s3.CreateMultipartUpload: calling handler
2023-08-24 03:39:54,982 [14182] DEBUG: Making request for OperationModel(name=CreateMultipartUpload) with params: {'url_path': '/cnpg-cluster/base/20230824T033954/data.tar?uploads', 'query_string': {}, 'method': 'POST', 'headers': {'User-Agent': 'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource'}, 'body': b'', 'auth_path': '/barmantest/cnpg-cluster/base/20230824T033954/data.tar?uploads', 'url': 'https://.linodeobjects.com/barmantest/cnpg-cluster/base/20230824T033954/data.tar?uploads', 'context': {'client_region': '', 'client_config': , 'has_streaming_input': False, 'auth_type': 'v4', 'signing': {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True}, 's3_redirect': {'redirected': False, 'bucket': 'barmantest', 'params': {'Bucket': 'barmantest', 'Key': 'cnpg-cluster/base/20230824T033954/data.tar'}}}}
2023-08-24 03:39:54,982 [14182] DEBUG: Event request-created.s3.CreateMultipartUpload: calling handler >
2023-08-24 03:39:54,982 [14182] DEBUG: Event choose-signer.s3.CreateMultipartUpload: calling handler >
2023-08-24 03:39:54,982 [14182] DEBUG: Event choose-signer.s3.CreateMultipartUpload: calling handler
2023-08-24 03:39:54,982 [14182] DEBUG: Event before-sign.s3.CreateMultipartUpload: calling handler
2023-08-24 03:39:54,982 [14182] DEBUG: Calculating signature using v4 auth.
2023-08-24 03:39:54,982 [14182] DEBUG: CanonicalRequest:
POST
/barmantest/cnpg-cluster/base/20230824T033954/data.tar
uploads=
host:.linodeobjects.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20230824T033954Z

host;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2023-08-24 03:39:54,982 [14182] DEBUG: StringToSign:
AWS4-HMAC-SHA256
20230824T033954Z
20230824//s3/aws4_request
31cdeac4a90353333eb00c9241cbd9ee47799426d763722421f0754ba0739f26
2023-08-24 03:39:54,983 [14182] DEBUG: Signature: 4d01c807f1d50afee278f73c130a02cc395f62caa3e8a0e1d52814e860c7c092
2023-08-24 03:39:54,983 [14182] DEBUG: Event request-created.s3.CreateMultipartUpload: calling handler
2023-08-24 03:39:54,983 [14182] DEBUG: Sending http request: .linodeobjects.com/barmantest/cnpg-cluster/base/20230824T033954/data.tar?uploads, headers={'User-Agent': b'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource', 'X-Amz-Date': b'20230824T033954Z', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=/20230824//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=4d01c807f1d50afee278f73c130a02cc395f62caa3e8a0e1d52814e860c7c092', 'amz-sdk-invocation-id': b'469b41aa-87d8-4f25-be90-05e7cf9653ff', 'amz-sdk-request': b'attempt=1', 'Content-Length': '0'}>
2023-08-24 03:39:54,983 [14182] DEBUG: Certificate path: /usr/local/lib/python3.9/dist-packages/certifi/cacert.pem
2023-08-24 03:39:55,194 [14182] DEBUG: https://.linodeobjects.com:443 "POST /barmantest/cnpg-cluster/base/20230824T033954/data.tar?uploads HTTP/1.1" 200 288
2023-08-24 03:39:55,195 [14182] DEBUG: Response headers: {'Date': 'Thu, 24 Aug 2023 03:39:55 GMT', 'Content-Type': 'application/xml', 'Content-Length': '288', 'Connection': 'keep-alive', 'x-amz-request-id': 'tx00000e7c24e62ec4b117a-0064e6d10b-47e16cb8-default'}
2023-08-24 03:39:55,195 [14182] DEBUG: Response body: b'cnpgbarmantest/cnpg-cluster/base/20230824T033954/data.tar2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon'
2023-08-24 03:39:55,195 [14182] DEBUG: Event needs-retry.s3.CreateMultipartUpload: calling handler
2023-08-24 03:39:55,195 [14182] DEBUG: No retry needed.
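For reference, the CreateMultipartUpload exchange above corresponds roughly to the boto3 call below. This is only a sketch: the endpoint, bucket and key are the (partially redacted) placeholders from the log, not barman-cloud's actual code.

import boto3
from botocore.config import Config

# Path-style addressing against a custom endpoint, as shown by ForcePathStyle=True
# in the endpoint-provider parameters above; credentials are read from
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY in the environment.
s3 = boto3.client(
    "s3",
    endpoint_url="https://<region>.linodeobjects.com",  # redacted/empty in the log above
    config=Config(s3={"addressing_style": "path"}),
)

# Start the multipart upload of the base backup tarball
mpu = s3.create_multipart_upload(
    Bucket="barmantest",
    Key="cnpg-cluster/base/20230824T033954/data.tar",
)
print(mpu["UploadId"])  # the "2~..." token visible in the response body above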
2023-08-24 03:39:55,195 [14182] DEBUG: Event needs-retry.s3.CreateMultipartUpload: calling handler > 2023-08-24 03:39:55,201 [14193] INFO: Upload process started (worker 0) 2023-08-24 03:39:55,202 [14194] INFO: Upload process started (worker 1) 2023-08-24 03:39:55,203 [14193] DEBUG: Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane 2023-08-24 03:39:55,203 [14182] DEBUG: Uploading base/16385/3503 2023-08-24 03:39:55,204 [14194] DEBUG: Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane 2023-08-24 03:39:55,204 [14182] DEBUG: Uploading base/16385/2187 2023-08-24 03:39:55,204 [14182] DEBUG: Uploading base/16385/2680 2023-08-24 03:39:55,205 [14194] DEBUG: Changing event name from before-call.apigateway to before-call.api-gateway 2023-08-24 03:39:55,205 [14193] DEBUG: Changing event name from before-call.apigateway to before-call.api-gateway 2023-08-24 03:39:55,205 [14182] DEBUG: Uploading base/16385/13370_vm 2023-08-24 03:39:55,205 [14194] DEBUG: Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict 2023-08-24 03:39:55,205 [14193] DEBUG: Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict 2023-08-24 03:39:55,205 [14182] DEBUG: Uploading base/16385/3079_fsm 2023-08-24 03:39:55,206 [14194] DEBUG: Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration 2023-08-24 03:39:55,206 [14182] DEBUG: Uploading base/16385/2701 2023-08-24 03:39:55,206 [14194] DEBUG: Changing event name from before-parameter-build.route53 to before-parameter-build.route-53 2023-08-24 03:39:55,206 [14194] DEBUG: Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search 2023-08-24 03:39:55,206 [14182] DEBUG: Uploading base/16385/13370 2023-08-24 03:39:55,207 [14194] DEBUG: Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section 2023-08-24 03:39:55,207 [14182] DEBUG: Uploading base/16385/2666 2023-08-24 03:39:55,207 [14193] DEBUG: Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration 2023-08-24 03:39:55,207 [14193] DEBUG: Changing event name from before-parameter-build.route53 to before-parameter-build.route-53 2023-08-24 03:39:55,207 [14182] DEBUG: Uploading base/16385/2704 2023-08-24 03:39:55,207 [14193] DEBUG: Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search 2023-08-24 03:39:55,208 [14182] DEBUG: Uploading base/16385/3764_vm 2023-08-24 03:39:55,208 [14194] DEBUG: Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask 2023-08-24 03:39:55,208 [14194] DEBUG: Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section 2023-08-24 03:39:55,208 [14194] DEBUG: Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search 2023-08-24 03:39:55,208 [14194] DEBUG: Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section 2023-08-24 03:39:55,208 [14193] DEBUG: Changing event 
name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section 2023-08-24 03:39:55,208 [14182] DEBUG: Uploading base/16385/2841 2023-08-24 03:39:55,209 [14182] DEBUG: Uploading base/16385/3379 2023-08-24 03:39:55,209 [14182] DEBUG: Uploading base/16385/3605 2023-08-24 03:39:55,210 [14182] DEBUG: Uploading base/16385/1247 2023-08-24 03:39:55,210 [14193] DEBUG: Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask 2023-08-24 03:39:55,210 [14193] DEBUG: Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section 2023-08-24 03:39:55,210 [14193] DEBUG: Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search 2023-08-24 03:39:55,210 [14193] DEBUG: Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section 2023-08-24 03:39:55,210 [14182] DEBUG: Uploading base/16385/3351 2023-08-24 03:39:55,211 [14182] DEBUG: Uploading base/16385/6238 2023-08-24 03:39:55,211 [14182] DEBUG: Uploading base/16385/2685 2023-08-24 03:39:55,212 [14182] DEBUG: Uploading base/16385/3575 2023-08-24 03:39:55,212 [14182] DEBUG: Uploading base/16385/2616_vm 2023-08-24 03:39:55,213 [14182] DEBUG: Uploading base/16385/3542 2023-08-24 03:39:55,213 [14182] DEBUG: Uploading base/16385/2653 2023-08-24 03:39:55,214 [14182] DEBUG: Uploading base/16385/3394_fsm 2023-08-24 03:39:55,214 [14182] DEBUG: Uploading base/16385/2657 2023-08-24 03:39:55,214 [14182] DEBUG: Uploading base/16385/3395 2023-08-24 03:39:55,215 [14182] DEBUG: Uploading base/16385/2605_fsm 2023-08-24 03:39:55,215 [14182] DEBUG: Uploading base/16385/112 2023-08-24 03:39:55,215 [14182] DEBUG: Uploading base/16385/4155 2023-08-24 03:39:55,216 [14182] DEBUG: Uploading base/16385/2224 2023-08-24 03:39:55,216 [14182] DEBUG: Uploading base/16385/4167 2023-08-24 03:39:55,217 [14182] DEBUG: Uploading base/16385/2834 2023-08-24 03:39:55,217 [14182] DEBUG: Uploading base/16385/2615_vm 2023-08-24 03:39:55,218 [14182] DEBUG: Uploading base/16385/2608 2023-08-24 03:39:55,218 [14182] DEBUG: Uploading base/16385/2838_vm 2023-08-24 03:39:55,218 [14182] DEBUG: Uploading base/16385/2654 2023-08-24 03:39:55,219 [14182] DEBUG: Uploading base/16385/4149 2023-08-24 03:39:55,219 [14182] DEBUG: Uploading base/16385/3599 2023-08-24 03:39:55,220 [14182] DEBUG: Uploading base/16385/2757 2023-08-24 03:39:55,220 [14182] DEBUG: Uploading base/16385/6176 2023-08-24 03:39:55,220 [14182] DEBUG: Uploading base/16385/5002 2023-08-24 03:39:55,221 [14182] DEBUG: Uploading base/16385/2664 2023-08-24 03:39:55,221 [14182] DEBUG: Uploading base/16385/2650 2023-08-24 03:39:55,222 [14194] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/boto3/data/s3/2006-03-01/resources-1.json 2023-08-24 03:39:55,222 [14182] DEBUG: Uploading base/16385/3456 2023-08-24 03:39:55,222 [14182] DEBUG: Uploading base/16385/2610_vm 2023-08-24 03:39:55,223 [14182] DEBUG: Uploading base/16385/13373 2023-08-24 03:39:55,223 [14182] DEBUG: Uploading base/16385/2840_vm 2023-08-24 03:39:55,224 [14182] DEBUG: Uploading base/16385/2609_vm 2023-08-24 03:39:55,224 [14182] DEBUG: Uploading base/16385/4164 2023-08-24 03:39:55,224 [14182] DEBUG: Uploading base/16385/2668 2023-08-24 03:39:55,225 [14182] DEBUG: Uploading base/16385/4163 2023-08-24 03:39:55,225 [14182] DEBUG: Uploading 
base/16385/2696 2023-08-24 03:39:55,225 [14194] DEBUG: IMDS ENDPOINT: http://169.254.169.254/ 2023-08-24 03:39:55,226 [14182] DEBUG: Uploading base/16385/2674 2023-08-24 03:39:55,226 [14182] DEBUG: Uploading base/16385/6113 2023-08-24 03:39:55,226 [14194] DEBUG: Looking for credentials via: env 2023-08-24 03:39:55,226 [14194] INFO: Found credentials in environment variables. 2023-08-24 03:39:55,227 [14182] DEBUG: Uploading base/16385/4147 2023-08-24 03:39:55,227 [14182] DEBUG: Uploading base/16385/1249_fsm 2023-08-24 03:39:55,227 [14182] DEBUG: Uploading base/16385/3764_fsm 2023-08-24 03:39:55,227 [14193] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/boto3/data/s3/2006-03-01/resources-1.json 2023-08-24 03:39:55,228 [14182] DEBUG: Uploading base/16385/1259_vm 2023-08-24 03:39:55,228 [14194] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/endpoints.json 2023-08-24 03:39:55,228 [14182] DEBUG: Uploading base/16385/3997 2023-08-24 03:39:55,229 [14182] DEBUG: Uploading base/16385/3079 2023-08-24 03:39:55,229 [14182] DEBUG: Uploading base/16385/13375_vm 2023-08-24 03:39:55,230 [14182] DEBUG: Uploading base/16385/6175 2023-08-24 03:39:55,230 [14182] DEBUG: Uploading base/16385/13380_vm 2023-08-24 03:39:55,231 [14182] DEBUG: Uploading base/16385/PG_VERSION 2023-08-24 03:39:55,231 [14182] DEBUG: Uploading base/16385/3609 2023-08-24 03:39:55,232 [14182] DEBUG: Uploading base/16385/3603 2023-08-24 03:39:55,232 [14182] DEBUG: Uploading base/16385/3394 2023-08-24 03:39:55,232 [14182] DEBUG: Uploading base/16385/2703 2023-08-24 03:39:55,233 [14182] DEBUG: Uploading base/16385/2607_vm 2023-08-24 03:39:55,233 [14193] DEBUG: IMDS ENDPOINT: http://169.254.169.254/ 2023-08-24 03:39:55,233 [14182] DEBUG: Uploading base/16385/2652 2023-08-24 03:39:55,234 [14182] DEBUG: Uploading base/16385/828 2023-08-24 03:39:55,234 [14182] DEBUG: Uploading base/16385/3541 2023-08-24 03:39:55,234 [14193] DEBUG: Looking for credentials via: env 2023-08-24 03:39:55,235 [14193] INFO: Found credentials in environment variables. 
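The "Upload process started (worker 0/1)" lines and the per-worker "Found credentials in environment variables" messages show each upload worker building its own boto3 session and client. A minimal sketch of that per-worker setup, again with a placeholder endpoint; the real barman-cloud code will differ:

import boto3
from botocore.config import Config

def make_worker_client():
    # Each worker process creates its own session; botocore's env provider picks up
    # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY, which is what the
    # "Found credentials in environment variables" message refers to.
    session = boto3.session.Session()
    return session.client(
        "s3",
        endpoint_url="https://<region>.linodeobjects.com",  # placeholder, redacted in the log
        config=Config(
            s3={"addressing_style": "path"},
            connect_timeout=60,  # mirrors the "Setting s3 timeout as (60, 60)" entries further down
            read_timeout=60,
        ),
    )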
2023-08-24 03:39:55,235 [14182] DEBUG: Uploading base/16385/6116 2023-08-24 03:39:55,235 [14182] DEBUG: Uploading base/16385/2693 2023-08-24 03:39:55,235 [14182] DEBUG: Uploading base/16385/2659 2023-08-24 03:39:55,236 [14193] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/endpoints.json 2023-08-24 03:39:55,236 [14182] DEBUG: Uploading base/16385/3467 2023-08-24 03:39:55,237 [14182] DEBUG: Uploading base/16385/2840_fsm 2023-08-24 03:39:55,237 [14182] DEBUG: Uploading base/16385/13383 2023-08-24 03:39:55,237 [14182] DEBUG: Uploading base/16385/2661 2023-08-24 03:39:55,238 [14182] DEBUG: Uploading base/16385/4154 2023-08-24 03:39:55,239 [14182] DEBUG: Uploading base/16385/2603 2023-08-24 03:39:55,239 [14182] DEBUG: Uploading base/16385/1418 2023-08-24 03:39:55,239 [14182] DEBUG: Uploading base/16385/3085 2023-08-24 03:39:55,240 [14182] DEBUG: Uploading base/16385/2617 2023-08-24 03:39:55,241 [14182] DEBUG: Uploading base/16385/2753 2023-08-24 03:39:55,241 [14194] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/sdk-default-configuration.json 2023-08-24 03:39:55,241 [14182] DEBUG: Uploading base/16385/3079_vm 2023-08-24 03:39:55,241 [14194] DEBUG: Event choose-service-name: calling handler 2023-08-24 03:39:55,242 [14182] DEBUG: Uploading base/16385/1255_fsm 2023-08-24 03:39:55,242 [14182] DEBUG: Uploading base/16385/6104 2023-08-24 03:39:55,243 [14182] DEBUG: Uploading base/16385/2619_vm 2023-08-24 03:39:55,243 [14182] DEBUG: Uploading base/16385/13389 2023-08-24 03:39:55,244 [14182] DEBUG: Uploading base/16385/3433 2023-08-24 03:39:55,244 [14182] DEBUG: Uploading base/16385/2655 2023-08-24 03:39:55,245 [14182] DEBUG: Uploading base/16385/2833 2023-08-24 03:39:55,245 [14182] DEBUG: Uploading base/16385/6112 2023-08-24 03:39:55,245 [14182] DEBUG: Uploading base/16385/2681 2023-08-24 03:39:55,246 [14182] DEBUG: Uploading base/16385/2662 2023-08-24 03:39:55,246 [14182] DEBUG: Uploading base/16385/13385_vm 2023-08-24 03:39:55,247 [14182] DEBUG: Uploading base/16385/3394_vm 2023-08-24 03:39:55,248 [14182] DEBUG: Uploading base/16385/3164 2023-08-24 03:39:55,248 [14182] DEBUG: Uploading base/16385/6106 2023-08-24 03:39:55,249 [14182] DEBUG: Uploading base/16385/2610 2023-08-24 03:39:55,250 [14182] DEBUG: Uploading base/5/4160 2023-08-24 03:39:55,250 [14193] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/sdk-default-configuration.json 2023-08-24 03:39:55,250 [14193] DEBUG: Event choose-service-name: calling handler 2023-08-24 03:39:55,251 [14194] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/s3/2006-03-01/service-2.json 2023-08-24 03:39:55,251 [14182] DEBUG: Uploading base/5/3256 2023-08-24 03:39:55,252 [14182] DEBUG: Uploading base/5/2607_fsm 2023-08-24 03:39:55,252 [14182] DEBUG: Uploading base/5/2603_vm 2023-08-24 03:39:55,253 [14182] DEBUG: Uploading base/5/3439 2023-08-24 03:39:55,253 [14182] DEBUG: Uploading base/5/2687 2023-08-24 03:39:55,254 [14182] DEBUG: Uploading base/5/3350 2023-08-24 03:39:55,254 [14182] DEBUG: Uploading base/5/2835 2023-08-24 03:39:55,255 [14182] DEBUG: Uploading base/5/2682 2023-08-24 03:39:55,255 [14182] DEBUG: Uploading base/5/2679 2023-08-24 03:39:55,256 [14182] DEBUG: Uploading base/5/4146 2023-08-24 03:39:55,256 [14182] DEBUG: Uploading base/5/548 2023-08-24 03:39:55,256 [14182] DEBUG: Uploading base/5/175 2023-08-24 03:39:55,257 [14182] DEBUG: Uploading base/5/2337 2023-08-24 03:39:55,257 [14182] DEBUG: Uploading base/5/1249_vm 
2023-08-24 03:39:55,258 [14182] DEBUG: Uploading base/5/3576 2023-08-24 03:39:55,258 [14182] DEBUG: Uploading base/5/3455 2023-08-24 03:39:55,259 [14182] DEBUG: Uploading base/5/6102 2023-08-24 03:39:55,259 [14182] DEBUG: Uploading base/5/2612_fsm 2023-08-24 03:39:55,260 [14182] DEBUG: Uploading base/5/3381 2023-08-24 03:39:55,260 [14182] DEBUG: Uploading base/5/4166 2023-08-24 03:39:55,261 [14182] DEBUG: Uploading base/5/6229 2023-08-24 03:39:55,262 [14182] DEBUG: Uploading base/5/3380 2023-08-24 03:39:55,262 [14182] DEBUG: Uploading base/5/2579 2023-08-24 03:39:55,263 [14182] DEBUG: Uploading base/5/2683 2023-08-24 03:39:55,263 [14182] DEBUG: Uploading base/5/2600_fsm 2023-08-24 03:39:55,264 [14182] DEBUG: Uploading base/5/13388 2023-08-24 03:39:55,264 [14182] DEBUG: Uploading base/5/3541_fsm 2023-08-24 03:39:55,265 [14182] DEBUG: Uploading base/5/4165 2023-08-24 03:39:55,265 [14182] DEBUG: Uploading base/5/2618 2023-08-24 03:39:55,266 [14182] DEBUG: Uploading base/5/2608_fsm 2023-08-24 03:39:55,266 [14182] DEBUG: Uploading base/5/3602_fsm 2023-08-24 03:39:55,267 [14182] DEBUG: Uploading base/5/2667 2023-08-24 03:39:55,267 [14182] DEBUG: Uploading base/5/4159 2023-08-24 03:39:55,268 [14182] DEBUG: Uploading base/5/13380_fsm 2023-08-24 03:39:55,269 [14182] DEBUG: Uploading base/5/3601 2023-08-24 03:39:55,269 [14182] DEBUG: Uploading base/5/6110 2023-08-24 03:39:55,269 [14193] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/s3/2006-03-01/service-2.json 2023-08-24 03:39:55,269 [14182] DEBUG: Uploading base/5/3118 2023-08-24 03:39:55,270 [14182] DEBUG: Uploading base/5/2609 2023-08-24 03:39:55,271 [14182] DEBUG: Uploading base/5/1247_fsm 2023-08-24 03:39:55,271 [14182] DEBUG: Uploading base/5/4170 2023-08-24 03:39:55,272 [14182] DEBUG: Uploading base/5/3468 2023-08-24 03:39:55,272 [14182] DEBUG: Uploading base/5/3541_vm 2023-08-24 03:39:55,272 [14194] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/s3/2006-03-01/endpoint-rule-set-1.json.gz 2023-08-24 03:39:55,273 [14182] DEBUG: Uploading base/5/4169 2023-08-24 03:39:55,273 [14182] DEBUG: Uploading base/5/4150 2023-08-24 03:39:55,273 [14182] DEBUG: Uploading base/5/2612 2023-08-24 03:39:55,274 [14182] DEBUG: Uploading base/5/2691 2023-08-24 03:39:55,275 [14182] DEBUG: Uploading base/5/2702 2023-08-24 03:39:55,275 [14182] DEBUG: Uploading base/5/2839 2023-08-24 03:39:55,275 [14182] DEBUG: Uploading base/5/1247_vm 2023-08-24 03:39:55,276 [14182] DEBUG: Uploading base/5/3602 2023-08-24 03:39:55,276 [14182] DEBUG: Uploading base/5/2836 2023-08-24 03:39:55,277 [14182] DEBUG: Uploading base/5/4171 2023-08-24 03:39:55,277 [14182] DEBUG: Uploading base/5/3430 2023-08-24 03:39:55,277 [14182] DEBUG: Uploading base/5/2618_vm 2023-08-24 03:39:55,277 [14194] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/partitions.json 2023-08-24 03:39:55,278 [14182] DEBUG: Uploading base/5/2689 2023-08-24 03:39:55,278 [14182] DEBUG: Uploading base/5/3119 2023-08-24 03:39:55,278 [14194] DEBUG: Event creating-client-class.s3: calling handler 2023-08-24 03:39:55,278 [14194] DEBUG: Event creating-client-class.s3: calling handler ._handler at 0xffff80d15940> 2023-08-24 03:39:55,278 [14194] DEBUG: Event creating-client-class.s3: calling handler 2023-08-24 03:39:55,279 [14182] DEBUG: Uploading base/5/3603_vm 2023-08-24 03:39:55,279 [14182] DEBUG: Uploading base/5/4151 2023-08-24 03:39:55,279 [14182] DEBUG: Uploading base/5/13385_fsm 2023-08-24 03:39:55,280 [14194] DEBUG: Setting s3 
timeout as (60, 60) 2023-08-24 03:39:55,280 [14182] DEBUG: Uploading base/5/1255 2023-08-24 03:39:55,281 [14182] DEBUG: Uploading base/5/2618_fsm 2023-08-24 03:39:55,281 [14194] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/_retry.json 2023-08-24 03:39:55,281 [14194] DEBUG: Registering retry handlers for service: s3 2023-08-24 03:39:55,281 [14182] DEBUG: Uploading base/5/2336 2023-08-24 03:39:55,282 [14194] DEBUG: Registering S3 region redirector handler 2023-08-24 03:39:55,282 [14194] DEBUG: Loading s3:s3 2023-08-24 03:39:55,282 [14182] DEBUG: Uploading base/5/4153 2023-08-24 03:39:55,282 [14182] DEBUG: Uploading base/5/3600 2023-08-24 03:39:55,283 [14182] DEBUG: Uploading base/5/2753_vm 2023-08-24 03:39:55,283 [14182] DEBUG: Uploading base/5/2602 2023-08-24 03:39:55,283 [14182] DEBUG: Uploading base/5/2675 2023-08-24 03:39:55,284 [14182] DEBUG: Uploading base/5/2619 2023-08-24 03:39:55,285 [14182] DEBUG: Uploading base/5/6111 2023-08-24 03:39:55,285 [14182] DEBUG: Uploading base/5/2678 2023-08-24 03:39:55,285 [14182] DEBUG: Uploading base/5/6117 2023-08-24 03:39:55,286 [14182] DEBUG: Uploading base/5/2684 2023-08-24 03:39:55,287 [14182] DEBUG: Uploading base/5/2673 2023-08-24 03:39:55,288 [14182] DEBUG: Uploading base/5/2613 2023-08-24 03:39:55,288 [14182] DEBUG: Uploading base/5/2228 2023-08-24 03:39:55,288 [14194] INFO: Uploading 'cnpg-cluster/base/20230824T033954/data.tar', part '1' (worker 1) 2023-08-24 03:39:55,288 [14182] DEBUG: Uploading base/5/1255_vm 2023-08-24 03:39:55,289 [14194] DEBUG: Event before-endpoint-resolution.s3: calling handler 2023-08-24 03:39:55,289 [14194] DEBUG: Event before-endpoint-resolution.s3: calling handler > 2023-08-24 03:39:55,289 [14182] DEBUG: Uploading base/5/2836_fsm 2023-08-24 03:39:55,289 [14194] DEBUG: Calling endpoint provider with parameters: {'Bucket': 'barmantest', 'Region': '', 'UseFIPS': False, 'UseDualStack': False, 'Endpoint': 'https://.linodeobjects.com', 'ForcePathStyle': True, 'Accelerate': False, 'UseGlobalEndpoint': True, 'DisableMultiRegionAccessPoints': False, 'UseArnRegion': True} 2023-08-24 03:39:55,289 [14182] DEBUG: Uploading base/5/3501 2023-08-24 03:39:55,290 [14182] DEBUG: Uploading base/5/2753_fsm 2023-08-24 03:39:55,290 [14194] DEBUG: Endpoint provider result: https://.linodeobjects.com/barmantest 2023-08-24 03:39:55,290 [14194] DEBUG: Selecting from endpoint provider's list of auth schemes: "sigv4". 
User selected auth scheme is: "None" 2023-08-24 03:39:55,290 [14194] DEBUG: Selected auth type "v4" as "v4" with signing context params: {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True} 2023-08-24 03:39:55,291 [14182] DEBUG: Uploading base/5/2600 2023-08-24 03:39:55,291 [14194] DEBUG: Event before-parameter-build.s3.UploadPart: calling handler 2023-08-24 03:39:55,291 [14194] DEBUG: Event before-parameter-build.s3.UploadPart: calling handler 2023-08-24 03:39:55,291 [14194] DEBUG: Event before-parameter-build.s3.UploadPart: calling handler 2023-08-24 03:39:55,291 [14194] DEBUG: Event before-parameter-build.s3.UploadPart: calling handler 2023-08-24 03:39:55,291 [14194] DEBUG: Event before-parameter-build.s3.UploadPart: calling handler > 2023-08-24 03:39:55,291 [14194] DEBUG: Event before-parameter-build.s3.UploadPart: calling handler 2023-08-24 03:39:55,291 [14182] DEBUG: Uploading base/5/113 2023-08-24 03:39:55,292 [14194] DEBUG: Event before-call.s3.UploadPart: calling handler 2023-08-24 03:39:55,292 [14182] DEBUG: Uploading base/5/2688 2023-08-24 03:39:55,293 [14182] DEBUG: Uploading base/5/1259_fsm 2023-08-24 03:39:55,294 [14182] DEBUG: Uploading base/5/2692 2023-08-24 03:39:55,294 [14182] DEBUG: Uploading base/5/6228 2023-08-24 03:39:55,295 [14182] DEBUG: Uploading base/5/2328 2023-08-24 03:39:55,296 [14182] DEBUG: Uploading base/5/2602_fsm 2023-08-24 03:39:55,296 [14182] DEBUG: Uploading base/5/3456_vm 2023-08-24 03:39:55,296 [14182] DEBUG: Uploading base/5/3600_vm 2023-08-24 03:39:55,297 [14182] DEBUG: Uploading base/5/2755 2023-08-24 03:39:55,297 [14182] DEBUG: Uploading base/5/3604 2023-08-24 03:39:55,298 [14182] DEBUG: Uploading base/5/2620 2023-08-24 03:39:55,298 [14182] DEBUG: Uploading base/5/2651 2023-08-24 03:39:55,299 [14182] DEBUG: Uploading base/5/1417 2023-08-24 03:39:55,299 [14182] DEBUG: Uploading base/5/2836_vm 2023-08-24 03:39:55,299 [14182] DEBUG: Uploading base/5/2699 2023-08-24 03:39:55,300 [14182] DEBUG: Uploading base/5/4145 2023-08-24 03:39:55,300 [14182] DEBUG: Uploading base/5/3429 2023-08-24 03:39:55,301 [14182] DEBUG: Uploading base/5/826 2023-08-24 03:39:55,301 [14182] DEBUG: Uploading base/5/3502 2023-08-24 03:39:55,301 [14182] DEBUG: Uploading base/5/4156 2023-08-24 03:39:55,302 [14182] DEBUG: Uploading base/5/2606 2023-08-24 03:39:55,302 [14182] DEBUG: Uploading base/5/3601_vm 2023-08-24 03:39:55,303 [14182] DEBUG: Uploading base/5/2611 2023-08-24 03:39:55,303 [14182] DEBUG: Uploading base/5/4172 2023-08-24 03:39:55,304 [14182] DEBUG: Uploading base/5/13385 2023-08-24 03:39:55,304 [14182] DEBUG: Uploading base/5/13380 2023-08-24 03:39:55,305 [14182] DEBUG: Uploading base/5/3600_fsm 2023-08-24 03:39:55,305 [14182] DEBUG: Uploading base/5/2605 2023-08-24 03:39:55,305 [14182] DEBUG: Uploading base/5/2603_fsm 2023-08-24 03:39:55,306 [14182] DEBUG: Uploading base/5/2616_fsm 2023-08-24 03:39:55,306 [14182] DEBUG: Uploading base/5/3766 2023-08-24 03:39:55,307 [14182] DEBUG: Uploading base/5/2602_vm 2023-08-24 03:39:55,307 [14182] DEBUG: Uploading base/5/2838_fsm 2023-08-24 03:39:55,308 [14182] DEBUG: Uploading base/5/4144 2023-08-24 03:39:55,308 [14182] DEBUG: Uploading base/5/3608 2023-08-24 03:39:55,308 [14182] DEBUG: Uploading base/5/2600_vm 2023-08-24 03:39:55,309 [14182] DEBUG: Uploading base/5/2616 2023-08-24 03:39:55,309 [14182] DEBUG: Uploading base/5/3440 2023-08-24 03:39:55,310 [14182] DEBUG: Uploading base/5/2612_vm 2023-08-24 03:39:55,310 [14182] DEBUG: Uploading base/5/13374 2023-08-24 03:39:55,310 [14182] DEBUG: Uploading 
base/5/2831 2023-08-24 03:39:55,311 [14182] DEBUG: Uploading base/5/3456_fsm 2023-08-24 03:39:55,311 [14182] DEBUG: Uploading base/5/2608_vm 2023-08-24 03:39:55,312 [14182] DEBUG: Uploading base/5/2605_vm 2023-08-24 03:39:55,312 [14182] DEBUG: Uploading base/5/549 2023-08-24 03:39:55,312 [14193] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/s3/2006-03-01/endpoint-rule-set-1.json.gz 2023-08-24 03:39:55,312 [14182] DEBUG: Uploading base/5/2840 2023-08-24 03:39:55,313 [14182] DEBUG: Uploading base/5/4152 2023-08-24 03:39:55,314 [14182] DEBUG: Uploading base/5/2658 2023-08-24 03:39:55,315 [14182] DEBUG: Uploading base/5/13384 2023-08-24 03:39:55,315 [14182] DEBUG: Uploading base/5/174 2023-08-24 03:39:55,316 [14182] DEBUG: Uploading base/5/2686 2023-08-24 03:39:55,316 [14182] DEBUG: Uploading base/5/2660 2023-08-24 03:39:55,316 [14182] DEBUG: Uploading base/5/3601_fsm 2023-08-24 03:39:55,317 [14182] DEBUG: Uploading base/5/2617_fsm 2023-08-24 03:39:55,318 [14182] DEBUG: Uploading base/5/4173 2023-08-24 03:39:55,318 [14182] DEBUG: Uploading base/5/6239 2023-08-24 03:39:55,318 [14182] DEBUG: Uploading base/5/2837 2023-08-24 03:39:55,319 [14182] DEBUG: Uploading base/5/2690 2023-08-24 03:39:55,320 [14182] DEBUG: Uploading base/5/3534 2023-08-24 03:39:55,320 [14182] DEBUG: Uploading base/5/2665 2023-08-24 03:39:55,321 [14182] DEBUG: Uploading base/5/2619_fsm 2023-08-24 03:39:55,321 [14193] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/partitions.json 2023-08-24 03:39:55,321 [14182] DEBUG: Uploading base/5/2601 2023-08-24 03:39:55,322 [14182] DEBUG: Uploading base/5/3080 2023-08-24 03:39:55,322 [14182] DEBUG: Uploading base/5/2995 2023-08-24 03:39:55,323 [14182] DEBUG: Uploading base/5/13375_fsm 2023-08-24 03:39:55,323 [14193] DEBUG: Event creating-client-class.s3: calling handler 2023-08-24 03:39:55,323 [14182] DEBUG: Uploading base/5/13379 2023-08-24 03:39:55,323 [14193] DEBUG: Event creating-client-class.s3: calling handler ._handler at 0xffff80d15940> 2023-08-24 03:39:55,323 [14193] DEBUG: Event creating-client-class.s3: calling handler 2023-08-24 03:39:55,324 [14182] DEBUG: Uploading base/5/2669 2023-08-24 03:39:55,324 [14182] DEBUG: Uploading base/5/2615 2023-08-24 03:39:55,325 [14193] DEBUG: Setting s3 timeout as (60, 60) 2023-08-24 03:39:55,325 [14182] DEBUG: Uploading base/5/3258 2023-08-24 03:39:55,325 [14182] DEBUG: Uploading base/5/6237 2023-08-24 03:39:55,326 [14182] DEBUG: Uploading base/5/2996 2023-08-24 03:39:55,326 [14182] DEBUG: Uploading base/5/4157 2023-08-24 03:39:55,326 [14193] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/_retry.json 2023-08-24 03:39:55,326 [14182] DEBUG: Uploading base/5/3257 2023-08-24 03:39:55,326 [14193] DEBUG: Registering retry handlers for service: s3 2023-08-24 03:39:55,326 [14193] DEBUG: Registering S3 region redirector handler 2023-08-24 03:39:55,327 [14182] DEBUG: Uploading base/5/1249 2023-08-24 03:39:55,327 [14193] DEBUG: Loading s3:s3 2023-08-24 03:39:55,328 [14182] DEBUG: Uploading base/5/4148 2023-08-24 03:39:55,329 [14182] DEBUG: Uploading base/5/3466 2023-08-24 03:39:55,329 [14182] DEBUG: Uploading base/5/2601_vm 2023-08-24 03:39:55,329 [14182] DEBUG: Uploading base/5/4158 2023-08-24 03:39:55,330 [14182] DEBUG: Uploading base/5/4168 2023-08-24 03:39:55,330 [14182] DEBUG: Uploading base/5/4174 2023-08-24 03:39:55,331 [14182] DEBUG: Uploading base/5/2663 2023-08-24 03:39:55,332 [14182] DEBUG: Uploading base/5/3712 2023-08-24 03:39:55,332 [14182] 
DEBUG: Uploading base/5/2604 2023-08-24 03:39:55,333 [14182] DEBUG: Uploading base/5/3606 2023-08-24 03:39:55,333 [14182] DEBUG: Uploading base/5/2615_fsm 2023-08-24 03:39:55,333 [14182] DEBUG: Uploading base/5/pg_filenode.map 2023-08-24 03:39:55,334 [14182] DEBUG: Uploading base/5/2832 2023-08-24 03:39:55,334 [14182] DEBUG: Uploading base/5/3597 2023-08-24 03:39:55,334 [14182] DEBUG: Uploading base/5/2656 2023-08-24 03:39:55,334 [14182] DEBUG: Uploading base/5/2607 2023-08-24 03:39:55,335 [14182] DEBUG: Uploading base/5/13375 2023-08-24 03:39:55,335 [14182] DEBUG: Uploading base/5/3596 2023-08-24 03:39:55,336 [14182] DEBUG: Uploading base/5/1259 2023-08-24 03:39:55,336 [14182] DEBUG: Uploading base/5/2754 2023-08-24 03:39:55,337 [14182] DEBUG: Uploading base/5/2756 2023-08-24 03:39:55,337 [14182] DEBUG: Uploading base/5/3081 2023-08-24 03:39:55,338 [14182] DEBUG: Uploading base/5/3598 2023-08-24 03:39:55,338 [14182] DEBUG: Uploading base/5/3574 2023-08-24 03:39:55,338 [14182] DEBUG: Uploading base/5/3607 2023-08-24 03:39:55,339 [14182] DEBUG: Uploading base/5/2670 2023-08-24 03:39:55,339 [14182] DEBUG: Uploading base/5/13378 2023-08-24 03:39:55,339 [14182] DEBUG: Uploading base/5/13370_fsm 2023-08-24 03:39:55,340 [14182] DEBUG: Uploading base/5/3431 2023-08-24 03:39:55,341 [14182] DEBUG: Uploading base/5/2609_fsm 2023-08-24 03:39:55,341 [14182] DEBUG: Uploading base/5/2606_vm 2023-08-24 03:39:55,342 [14182] DEBUG: Uploading base/5/4143 2023-08-24 03:39:55,342 [14182] DEBUG: Uploading base/5/2610_fsm 2023-08-24 03:39:55,342 [14182] DEBUG: Uploading base/5/2617_vm 2023-08-24 03:39:55,343 [14182] DEBUG: Uploading base/5/2606_fsm 2023-08-24 03:39:55,343 [14182] DEBUG: Uploading base/5/827 2023-08-24 03:39:55,344 [14182] DEBUG: Uploading base/5/3602_vm 2023-08-24 03:39:55,344 [14182] DEBUG: Uploading base/5/2601_fsm 2023-08-24 03:39:55,344 [14182] DEBUG: Uploading base/5/3764 2023-08-24 03:39:55,345 [14182] DEBUG: Uploading base/5/3603_fsm 2023-08-24 03:39:55,345 [14182] DEBUG: Uploading base/5/3767 2023-08-24 03:39:55,346 [14182] DEBUG: Uploading base/5/2830 2023-08-24 03:39:55,346 [14182] DEBUG: Uploading base/5/2838 2023-08-24 03:39:55,347 [14182] DEBUG: Uploading base/5/3503 2023-08-24 03:39:55,348 [14182] DEBUG: Uploading base/5/2187 2023-08-24 03:39:55,348 [14182] DEBUG: Uploading base/5/2680 2023-08-24 03:39:55,348 [14182] DEBUG: Uploading base/5/13370_vm 2023-08-24 03:39:55,349 [14182] DEBUG: Uploading base/5/3079_fsm 2023-08-24 03:39:55,349 [14182] DEBUG: Uploading base/5/2701 2023-08-24 03:39:55,349 [14194] DEBUG: Event before-call.s3.UploadPart: calling handler 2023-08-24 03:39:55,349 [14194] DEBUG: Adding expect 100 continue header to request. 
2023-08-24 03:39:55,349 [14194] DEBUG: Event before-call.s3.UploadPart: calling handler 2023-08-24 03:39:55,349 [14194] DEBUG: Event before-call.s3.UploadPart: calling handler 2023-08-24 03:39:55,349 [14182] DEBUG: Uploading base/5/13370 2023-08-24 03:39:55,349 [14194] DEBUG: Making request for OperationModel(name=UploadPart) with params: {'url_path': '/cnpg-cluster/base/20230824T033954/data.tar', 'query_string': {'uploadId': '2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon', 'partNumber': 1}, 'method': 'PUT', 'headers': {'User-Agent': 'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource', 'Content-MD5': '0rUTRemnf/lNxebBEKXexw==', 'Expect': '100-continue'}, 'body': <_io.BufferedReader name='/controller/barman-cloud-backup-ti2up2kp/barman-upload-b8h5wu5v.part'>, 'auth_path': '/barmantest/cnpg-cluster/base/20230824T033954/data.tar', 'url': 'https://.linodeobjects.com/barmantest/cnpg-cluster/base/20230824T033954/data.tar?uploadId=2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon&partNumber=1', 'context': {'client_region': '', 'client_config': , 'has_streaming_input': True, 'auth_type': 'v4', 'signing': {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True}, 's3_redirect': {'redirected': False, 'bucket': 'barmantest', 'params': {'Body': <_io.BufferedReader name='/controller/barman-cloud-backup-ti2up2kp/barman-upload-b8h5wu5v.part'>, 'Bucket': 'barmantest', 'Key': 'cnpg-cluster/base/20230824T033954/data.tar', 'UploadId': '2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon', 'PartNumber': 1}}}} 2023-08-24 03:39:55,350 [14194] DEBUG: Event request-created.s3.UploadPart: calling handler > 2023-08-24 03:39:55,350 [14182] DEBUG: Uploading base/5/2666 2023-08-24 03:39:55,350 [14194] DEBUG: Event choose-signer.s3.UploadPart: calling handler > 2023-08-24 03:39:55,350 [14194] DEBUG: Event choose-signer.s3.UploadPart: calling handler 2023-08-24 03:39:55,350 [14194] DEBUG: Event before-sign.s3.UploadPart: calling handler 2023-08-24 03:39:55,351 [14194] DEBUG: Calculating signature using v4 auth. 
2023-08-24 03:39:55,351 [14194] DEBUG: CanonicalRequest: PUT /barmantest/cnpg-cluster/base/20230824T033954/data.tar partNumber=1&uploadId=2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon content-md5:0rUTRemnf/lNxebBEKXexw== host:.linodeobjects.com x-amz-content-sha256:UNSIGNED-PAYLOAD x-amz-date:20230824T033955Z content-md5;host;x-amz-content-sha256;x-amz-date UNSIGNED-PAYLOAD 2023-08-24 03:39:55,351 [14194] DEBUG: StringToSign: AWS4-HMAC-SHA256 20230824T033955Z 20230824//s3/aws4_request 71908245c2e8211f6818c1a4fe82da9716a285d1393ba62d6982f28501978450 2023-08-24 03:39:55,351 [14194] DEBUG: Signature: a08029b76a8607e2e5e84ce3431012d946ae1a16aa5888e5b0db90889e14982f 2023-08-24 03:39:55,351 [14194] DEBUG: Event request-created.s3.UploadPart: calling handler 2023-08-24 03:39:55,351 [14194] DEBUG: Sending http request: .linodeobjects.com/barmantest/cnpg-cluster/base/20230824T033954/data.tar?uploadId=2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon&partNumber=1, headers={'User-Agent': b'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource', 'Content-MD5': b'0rUTRemnf/lNxebBEKXexw==', 'Expect': b'100-continue', 'X-Amz-Date': b'20230824T033955Z', 'X-Amz-Content-SHA256': b'UNSIGNED-PAYLOAD', 'Authorization': b'AWS4-HMAC-SHA256 Credential=/20230824//s3/aws4_request, SignedHeaders=content-md5;host;x-amz-content-sha256;x-amz-date, Signature=a08029b76a8607e2e5e84ce3431012d946ae1a16aa5888e5b0db90889e14982f', 'amz-sdk-invocation-id': b'9404442f-a071-4fe3-86b1-5d31e5c89959', 'amz-sdk-request': b'attempt=1', 'Content-Length': '21495808'}> 2023-08-24 03:39:55,352 [14194] DEBUG: Certificate path: /usr/local/lib/python3.9/dist-packages/certifi/cacert.pem 2023-08-24 03:39:55,352 [14194] DEBUG: Starting new HTTPS connection (1): .linodeobjects.com:443 2023-08-24 03:39:55,351 [14182] DEBUG: Uploading base/5/2704 2023-08-24 03:39:55,353 [14182] DEBUG: Uploading base/5/3764_vm 2023-08-24 03:39:55,353 [14182] DEBUG: Uploading base/5/2841 2023-08-24 03:39:55,354 [14182] DEBUG: Uploading base/5/3379 2023-08-24 03:39:55,354 [14182] DEBUG: Uploading base/5/3605 2023-08-24 03:39:55,354 [14182] DEBUG: Uploading base/5/1247 2023-08-24 03:39:55,355 [14182] DEBUG: Uploading base/5/3351 2023-08-24 03:39:55,356 [14182] DEBUG: Uploading base/5/6238 2023-08-24 03:39:55,356 [14182] DEBUG: Uploading base/5/2685 2023-08-24 03:39:55,357 [14182] DEBUG: Uploading base/5/3575 2023-08-24 03:39:55,357 [14182] DEBUG: Uploading base/5/2616_vm 2023-08-24 03:39:55,358 [14182] DEBUG: Uploading base/5/3542 2023-08-24 03:39:55,358 [14182] DEBUG: Uploading base/5/2653 2023-08-24 03:39:55,359 [14182] DEBUG: Uploading base/5/3394_fsm 2023-08-24 03:39:55,359 [14182] DEBUG: Uploading base/5/2657 2023-08-24 03:39:55,360 [14182] DEBUG: Uploading base/5/3395 2023-08-24 03:39:55,360 [14182] DEBUG: Uploading base/5/2605_fsm 2023-08-24 03:39:55,361 [14182] DEBUG: Uploading base/5/112 2023-08-24 03:39:55,362 [14182] DEBUG: Uploading base/5/4155 2023-08-24 03:39:55,363 [14182] DEBUG: Uploading base/5/2224 2023-08-24 03:39:55,364 [14182] DEBUG: Uploading base/5/4167 2023-08-24 03:39:55,364 [14182] DEBUG: Uploading base/5/2834 2023-08-24 03:39:55,365 [14182] DEBUG: Uploading base/5/2615_vm 2023-08-24 03:39:55,366 [14182] DEBUG: Uploading base/5/2608 2023-08-24 03:39:55,366 [14182] DEBUG: Uploading base/5/2838_vm 2023-08-24 03:39:55,367 [14182] DEBUG: Uploading base/5/2654 2023-08-24 03:39:55,367 [14182] DEBUG: Uploading base/5/4149 2023-08-24 03:39:55,368 [14182] DEBUG: Uploading base/5/3599 2023-08-24 03:39:55,368 [14182] DEBUG: Uploading base/5/2757 2023-08-24 
03:39:55,369 [14182] DEBUG: Uploading base/5/6176 2023-08-24 03:39:55,369 [14182] DEBUG: Uploading base/5/5002 2023-08-24 03:39:55,370 [14182] DEBUG: Uploading base/5/2664 2023-08-24 03:39:55,370 [14182] DEBUG: Uploading base/5/2650 2023-08-24 03:39:55,370 [14182] DEBUG: Uploading base/5/3456 2023-08-24 03:39:55,371 [14182] DEBUG: Uploading base/5/2610_vm 2023-08-24 03:39:55,372 [14182] DEBUG: Uploading base/5/13373 2023-08-24 03:39:55,373 [14182] DEBUG: Uploading base/5/2840_vm 2023-08-24 03:39:55,373 [14182] DEBUG: Uploading base/5/2609_vm 2023-08-24 03:39:55,374 [14182] DEBUG: Uploading base/5/4164 2023-08-24 03:39:55,374 [14182] DEBUG: Uploading base/5/2668 2023-08-24 03:39:55,375 [14182] DEBUG: Uploading base/5/4163 2023-08-24 03:39:55,376 [14182] DEBUG: Uploading base/5/2696 2023-08-24 03:39:55,376 [14182] DEBUG: Uploading base/5/2674 2023-08-24 03:39:55,376 [14182] DEBUG: Uploading base/5/6113 2023-08-24 03:39:55,377 [14182] DEBUG: Uploading base/5/4147 2023-08-24 03:39:55,377 [14182] DEBUG: Uploading base/5/1249_fsm 2023-08-24 03:39:55,377 [14182] DEBUG: Uploading base/5/3764_fsm 2023-08-24 03:39:55,378 [14182] DEBUG: Uploading base/5/1259_vm 2023-08-24 03:39:55,378 [14182] DEBUG: Uploading base/5/3997 2023-08-24 03:39:55,379 [14182] DEBUG: Uploading base/5/3079 2023-08-24 03:39:55,379 [14182] DEBUG: Uploading base/5/13375_vm 2023-08-24 03:39:55,379 [14182] DEBUG: Uploading base/5/6175 2023-08-24 03:39:55,380 [14182] DEBUG: Uploading base/5/13380_vm 2023-08-24 03:39:55,380 [14182] DEBUG: Uploading base/5/PG_VERSION 2023-08-24 03:39:55,380 [14182] DEBUG: Uploading base/5/3609 2023-08-24 03:39:55,381 [14182] DEBUG: Uploading base/5/3603 2023-08-24 03:39:55,381 [14182] DEBUG: Uploading base/5/3394 2023-08-24 03:39:55,382 [14182] DEBUG: Uploading base/5/2703 2023-08-24 03:39:55,382 [14182] DEBUG: Uploading base/5/2607_vm 2023-08-24 03:39:55,382 [14182] DEBUG: Uploading base/5/2652 2023-08-24 03:39:55,383 [14182] DEBUG: Uploading base/5/828 2023-08-24 03:39:55,383 [14182] DEBUG: Uploading base/5/3541 2023-08-24 03:39:55,384 [14182] DEBUG: Uploading base/5/6116 2023-08-24 03:39:55,384 [14182] DEBUG: Uploading base/5/2693 2023-08-24 03:39:55,384 [14182] DEBUG: Uploading base/5/2659 2023-08-24 03:39:55,385 [14182] DEBUG: Uploading base/5/3467 2023-08-24 03:39:55,385 [14182] DEBUG: Uploading base/5/2840_fsm 2023-08-24 03:39:55,386 [14182] DEBUG: Uploading base/5/13383 2023-08-24 03:39:55,386 [14182] DEBUG: Uploading base/5/2661 2023-08-24 03:39:55,387 [14182] DEBUG: Uploading base/5/4154 2023-08-24 03:39:55,387 [14182] DEBUG: Uploading base/5/2603 2023-08-24 03:39:55,387 [14182] DEBUG: Uploading base/5/1418 2023-08-24 03:39:55,388 [14182] DEBUG: Uploading base/5/3085 2023-08-24 03:39:55,388 [14182] DEBUG: Uploading base/5/2617 2023-08-24 03:39:55,389 [14182] DEBUG: Uploading base/5/2753 2023-08-24 03:39:55,389 [14182] DEBUG: Uploading base/5/3079_vm 2023-08-24 03:39:55,389 [14182] DEBUG: Uploading base/5/1255_fsm 2023-08-24 03:39:55,390 [14182] DEBUG: Uploading base/5/6104 2023-08-24 03:39:55,390 [14182] DEBUG: Uploading base/5/2619_vm 2023-08-24 03:39:55,391 [14182] DEBUG: Uploading base/5/13389 2023-08-24 03:39:55,391 [14182] DEBUG: Uploading base/5/3433 2023-08-24 03:39:55,391 [14182] DEBUG: Uploading base/5/2655 2023-08-24 03:39:55,392 [14182] DEBUG: Uploading base/5/2833 2023-08-24 03:39:55,392 [14182] DEBUG: Uploading base/5/6112 2023-08-24 03:39:55,392 [14182] DEBUG: Uploading base/5/2681 2023-08-24 03:39:55,393 [14182] DEBUG: Uploading base/5/2662 2023-08-24 03:39:55,393 [14182] 
DEBUG: Uploading base/5/13385_vm 2023-08-24 03:39:55,394 [14182] DEBUG: Uploading base/5/3394_vm 2023-08-24 03:39:55,394 [14182] DEBUG: Uploading base/5/3164 2023-08-24 03:39:55,395 [14182] DEBUG: Uploading base/5/6106 2023-08-24 03:39:55,395 [14182] DEBUG: Uploading base/5/2610 2023-08-24 03:39:55,397 [14182] DEBUG: Uploading global/2396_vm 2023-08-24 03:39:55,397 [14182] DEBUG: Uploading global/4186 2023-08-24 03:39:55,398 [14182] DEBUG: Uploading global/2966 2023-08-24 03:39:55,398 [14182] DEBUG: Uploading global/1213_vm 2023-08-24 03:39:55,399 [14182] DEBUG: Uploading global/1261_fsm 2023-08-24 03:39:55,399 [14182] DEBUG: Uploading global/4178 2023-08-24 03:39:55,400 [14182] DEBUG: Uploading global/2671 2023-08-24 03:39:55,400 [14182] DEBUG: Uploading global/6114 2023-08-24 03:39:55,400 [14182] DEBUG: Uploading global/4061 2023-08-24 03:39:55,401 [14182] DEBUG: Uploading global/4184 2023-08-24 03:39:55,401 [14182] DEBUG: Uploading global/6000 2023-08-24 03:39:55,401 [14182] DEBUG: Uploading global/1261 2023-08-24 03:39:55,402 [14182] DEBUG: Uploading global/1262 2023-08-24 03:39:55,402 [14182] DEBUG: Uploading global/4060 2023-08-24 03:39:55,403 [14182] DEBUG: Uploading global/2695 2023-08-24 03:39:55,403 [14182] DEBUG: Uploading global/4177 2023-08-24 03:39:55,403 [14182] DEBUG: Uploading global/2967 2023-08-24 03:39:55,404 [14182] DEBUG: Uploading global/1261_vm 2023-08-24 03:39:55,404 [14182] DEBUG: Uploading global/4183 2023-08-24 03:39:55,404 [14182] DEBUG: Uploading global/4175 2023-08-24 03:39:55,405 [14182] DEBUG: Uploading global/2697 2023-08-24 03:39:55,406 [14182] DEBUG: Uploading global/1262_vm 2023-08-24 03:39:55,406 [14182] DEBUG: Uploading global/1262_fsm 2023-08-24 03:39:55,407 [14182] DEBUG: Uploading global/1213_fsm 2023-08-24 03:39:55,407 [14182] DEBUG: Uploading global/6243 2023-08-24 03:39:55,407 [14182] DEBUG: Uploading global/6100 2023-08-24 03:39:55,408 [14182] DEBUG: Uploading global/2396_fsm 2023-08-24 03:39:55,408 [14182] DEBUG: Uploading global/1260_vm 2023-08-24 03:39:55,408 [14182] DEBUG: Uploading global/4182 2023-08-24 03:39:55,409 [14182] DEBUG: Uploading global/2397 2023-08-24 03:39:55,409 [14182] DEBUG: Uploading global/pg_filenode.map 2023-08-24 03:39:55,410 [14182] DEBUG: Uploading global/6244 2023-08-24 03:39:55,410 [14182] DEBUG: Uploading global/2396 2023-08-24 03:39:55,410 [14182] DEBUG: Uploading global/1260 2023-08-24 03:39:55,411 [14182] DEBUG: Uploading global/2676 2023-08-24 03:39:55,411 [14182] DEBUG: Uploading global/3593 2023-08-24 03:39:55,411 [14182] DEBUG: Uploading global/6245 2023-08-24 03:39:55,411 [14182] DEBUG: Uploading global/2677 2023-08-24 03:39:55,412 [14182] DEBUG: Uploading global/1213 2023-08-24 03:39:55,412 [14182] DEBUG: Uploading global/2965 2023-08-24 03:39:55,413 [14182] DEBUG: Uploading global/6247 2023-08-24 03:39:55,413 [14182] DEBUG: Uploading global/2698 2023-08-24 03:39:55,414 [14182] DEBUG: Uploading global/4181 2023-08-24 03:39:55,414 [14182] DEBUG: Uploading global/1260_fsm 2023-08-24 03:39:55,415 [14182] DEBUG: Uploading global/2672 2023-08-24 03:39:55,415 [14182] DEBUG: Uploading global/1214 2023-08-24 03:39:55,415 [14182] DEBUG: Uploading global/2964 2023-08-24 03:39:55,416 [14182] DEBUG: Uploading global/3592 2023-08-24 03:39:55,416 [14182] DEBUG: Uploading global/6002 2023-08-24 03:39:55,416 [14182] DEBUG: Uploading global/6246 2023-08-24 03:39:55,417 [14182] DEBUG: Uploading global/2694 2023-08-24 03:39:55,417 [14182] DEBUG: Uploading global/4185 2023-08-24 03:39:55,418 [14182] DEBUG: Uploading 
global/6001 2023-08-24 03:39:55,418 [14182] DEBUG: Uploading global/1233 2023-08-24 03:39:55,418 [14182] DEBUG: Uploading global/1232 2023-08-24 03:39:55,419 [14182] DEBUG: Uploading global/6115 2023-08-24 03:39:55,419 [14182] DEBUG: Uploading global/2846 2023-08-24 03:39:55,419 [14182] DEBUG: Uploading global/2847 2023-08-24 03:39:55,420 [14182] DEBUG: Uploading global/4176 2023-08-24 03:39:55,423 [14182] DEBUG: Uploading pg_multixact/members/0000 2023-08-24 03:39:55,423 [14182] DEBUG: Uploading pg_multixact/offsets/0000 2023-08-24 03:39:55,425 [14182] DEBUG: Uploading pg_xact/0000 2023-08-24 03:39:55,426 [14182] DEBUG: Uploading pg_logical/replorigin_checkpoint 2023-08-24 03:39:55,427 [14182] INFO: Uploading 'pg_control' file from '/var/lib/postgresql/data/pgdata/global/pg_control' to 'data.tar' with path 'global/pg_control' 2023-08-24 03:39:55,428 [14182] DEBUG: Config file 'postgresql.conf' already in PGDATA 2023-08-24 03:39:55,428 [14182] DEBUG: Config file 'pg_hba.conf' already in PGDATA 2023-08-24 03:39:55,428 [14182] DEBUG: Config file 'pg_ident.conf' already in PGDATA 2023-08-24 03:39:55,428 [14182] INFO: Stopping backup '20230824T033954' 2023-08-24 03:39:55,428 [14182] DEBUG: Stop of native concurrent backup 2023-08-24 03:39:55,984 [14194] DEBUG: Waiting for 100 Continue response. 2023-08-24 03:39:56,188 [14194] DEBUG: 100 Continue response seen, now sending request body. 2023-08-24 03:39:56,453 [14182] INFO: Restore point 'barman_20230824T033954' successfully created 2023-08-24 03:39:56,453 [14182] INFO: Uploading 'backup_label' file to 'data.tar' with path 'backup_label' 2023-08-24 03:39:56,454 [14182] INFO: Marking all the uploaded archives as 'completed' 2023-08-24 03:39:56,455 [14193] INFO: Uploading 'cnpg-cluster/base/20230824T033954/data.tar', part '2' (worker 0) 2023-08-24 03:39:56,456 [14193] DEBUG: Event before-endpoint-resolution.s3: calling handler 2023-08-24 03:39:56,456 [14193] DEBUG: Event before-endpoint-resolution.s3: calling handler > 2023-08-24 03:39:56,457 [14193] DEBUG: Calling endpoint provider with parameters: {'Bucket': 'barmantest', 'Region': '', 'UseFIPS': False, 'UseDualStack': False, 'Endpoint': 'https://.linodeobjects.com', 'ForcePathStyle': True, 'Accelerate': False, 'UseGlobalEndpoint': True, 'DisableMultiRegionAccessPoints': False, 'UseArnRegion': True} 2023-08-24 03:39:56,459 [14193] DEBUG: Endpoint provider result: https://.linodeobjects.com/barmantest 2023-08-24 03:39:56,459 [14193] DEBUG: Selecting from endpoint provider's list of auth schemes: "sigv4". 
User selected auth scheme is: "None" 2023-08-24 03:39:56,459 [14193] DEBUG: Selected auth type "v4" as "v4" with signing context params: {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True} 2023-08-24 03:39:56,459 [14193] DEBUG: Event before-parameter-build.s3.UploadPart: calling handler 2023-08-24 03:39:56,459 [14193] DEBUG: Event before-parameter-build.s3.UploadPart: calling handler 2023-08-24 03:39:56,460 [14193] DEBUG: Event before-parameter-build.s3.UploadPart: calling handler 2023-08-24 03:39:56,460 [14193] DEBUG: Event before-parameter-build.s3.UploadPart: calling handler 2023-08-24 03:39:56,460 [14193] DEBUG: Event before-parameter-build.s3.UploadPart: calling handler > 2023-08-24 03:39:56,460 [14193] DEBUG: Event before-parameter-build.s3.UploadPart: calling handler 2023-08-24 03:39:56,460 [14193] DEBUG: Event before-call.s3.UploadPart: calling handler 2023-08-24 03:39:56,498 [14193] DEBUG: Event before-call.s3.UploadPart: calling handler 2023-08-24 03:39:56,499 [14193] DEBUG: Adding expect 100 continue header to request. 2023-08-24 03:39:56,499 [14193] DEBUG: Event before-call.s3.UploadPart: calling handler 2023-08-24 03:39:56,499 [14193] DEBUG: Event before-call.s3.UploadPart: calling handler 2023-08-24 03:39:56,499 [14193] DEBUG: Making request for OperationModel(name=UploadPart) with params: {'url_path': '/cnpg-cluster/base/20230824T033954/data.tar', 'query_string': {'uploadId': '2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon', 'partNumber': 2}, 'method': 'PUT', 'headers': {'User-Agent': 'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource', 'Content-MD5': 'XmpljQ1qhzZcEy7CMx32Rw==', 'Expect': '100-continue'}, 'body': <_io.BufferedReader name='/controller/barman-cloud-backup-ti2up2kp/barman-upload-npyx6g0s.part'>, 'auth_path': '/barmantest/cnpg-cluster/base/20230824T033954/data.tar', 'url': 'https://.linodeobjects.com/barmantest/cnpg-cluster/base/20230824T033954/data.tar?uploadId=2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon&partNumber=2', 'context': {'client_region': '', 'client_config': , 'has_streaming_input': True, 'auth_type': 'v4', 'signing': {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True}, 's3_redirect': {'redirected': False, 'bucket': 'barmantest', 'params': {'Body': <_io.BufferedReader name='/controller/barman-cloud-backup-ti2up2kp/barman-upload-npyx6g0s.part'>, 'Bucket': 'barmantest', 'Key': 'cnpg-cluster/base/20230824T033954/data.tar', 'UploadId': '2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon', 'PartNumber': 2}}}} 2023-08-24 03:39:56,499 [14193] DEBUG: Event request-created.s3.UploadPart: calling handler > 2023-08-24 03:39:56,499 [14193] DEBUG: Event choose-signer.s3.UploadPart: calling handler > 2023-08-24 03:39:56,499 [14193] DEBUG: Event choose-signer.s3.UploadPart: calling handler 2023-08-24 03:39:56,499 [14193] DEBUG: Event before-sign.s3.UploadPart: calling handler 2023-08-24 03:39:56,500 [14193] DEBUG: Calculating signature using v4 auth. 
2023-08-24 03:39:56,500 [14193] DEBUG: CanonicalRequest: PUT /barmantest/cnpg-cluster/base/20230824T033954/data.tar partNumber=2&uploadId=2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon content-md5:XmpljQ1qhzZcEy7CMx32Rw== host:.linodeobjects.com x-amz-content-sha256:UNSIGNED-PAYLOAD x-amz-date:20230824T033956Z content-md5;host;x-amz-content-sha256;x-amz-date UNSIGNED-PAYLOAD 2023-08-24 03:39:56,500 [14193] DEBUG: StringToSign: AWS4-HMAC-SHA256 20230824T033956Z 20230824//s3/aws4_request e7db0506bbd16a09657093c60f86a9872b47dff0c7b75b02bd7fa4655cb9a1a6 2023-08-24 03:39:56,500 [14193] DEBUG: Signature: a543ad99c3d6f6c676c360d85da1e6ce41d01f60335da6e7615532ccc6e072b3 2023-08-24 03:39:56,500 [14193] DEBUG: Event request-created.s3.UploadPart: calling handler 2023-08-24 03:39:56,500 [14193] DEBUG: Sending http request: .linodeobjects.com/barmantest/cnpg-cluster/base/20230824T033954/data.tar?uploadId=2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon&partNumber=2, headers={'User-Agent': b'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource', 'Content-MD5': b'XmpljQ1qhzZcEy7CMx32Rw==', 'Expect': b'100-continue', 'X-Amz-Date': b'20230824T033956Z', 'X-Amz-Content-SHA256': b'UNSIGNED-PAYLOAD', 'Authorization': b'AWS4-HMAC-SHA256 Credential=/20230824//s3/aws4_request, SignedHeaders=content-md5;host;x-amz-content-sha256;x-amz-date, Signature=a543ad99c3d6f6c676c360d85da1e6ce41d01f60335da6e7615532ccc6e072b3', 'amz-sdk-invocation-id': b'9336cd46-d45b-46d3-adb0-d294554b0a15', 'amz-sdk-request': b'attempt=1', 'Content-Length': '11057152'}> 2023-08-24 03:39:56,501 [14193] DEBUG: Certificate path: /usr/local/lib/python3.9/dist-packages/certifi/cacert.pem 2023-08-24 03:39:56,501 [14193] DEBUG: Starting new HTTPS connection (1): .linodeobjects.com:443 2023-08-24 03:39:57,117 [14193] DEBUG: Waiting for 100 Continue response. 2023-08-24 03:39:57,321 [14193] DEBUG: 100 Continue response seen, now sending request body. 2023-08-24 03:40:04,729 [14194] DEBUG: https://.linodeobjects.com:443 "PUT /barmantest/cnpg-cluster/base/20230824T033954/data.tar?uploadId=2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon&partNumber=1 HTTP/1.1" 200 0 2023-08-24 03:40:04,729 [14194] DEBUG: Response headers: {'Date': 'Thu, 24 Aug 2023 03:40:04 GMT', 'Content-Length': '0', 'Connection': 'keep-alive', 'ETag': '"d2b51345e9a77ff94dc5e6c110a5dec7"', 'Accept-Ranges': 'bytes', 'x-amz-request-id': 'tx00000996f7d59e761b2f1-0064e6d10c-47fda627-default'} 2023-08-24 03:40:04,729 [14194] DEBUG: Response body: b'' 2023-08-24 03:40:04,729 [14194] DEBUG: Event needs-retry.s3.UploadPart: calling handler 2023-08-24 03:40:04,729 [14194] DEBUG: No retry needed. 2023-08-24 03:40:04,730 [14194] DEBUG: Event needs-retry.s3.UploadPart: calling handler > 2023-08-24 03:40:11,769 [14193] DEBUG: https://.linodeobjects.com:443 "PUT /barmantest/cnpg-cluster/base/20230824T033954/data.tar?uploadId=2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon&partNumber=2 HTTP/1.1" 200 0 2023-08-24 03:40:11,771 [14193] DEBUG: Response headers: {'Date': 'Thu, 24 Aug 2023 03:40:11 GMT', 'Content-Length': '0', 'Connection': 'keep-alive', 'ETag': '"5e6a658d0d6a87365c132ec2331df647"', 'Accept-Ranges': 'bytes', 'x-amz-request-id': 'tx00000afdfb525d05f9093-0064e6d10d-48182316-default'} 2023-08-24 03:40:11,771 [14193] DEBUG: Response body: b'' 2023-08-24 03:40:11,775 [14193] DEBUG: Event needs-retry.s3.UploadPart: calling handler 2023-08-24 03:40:11,775 [14193] DEBUG: No retry needed. 
2023-08-24 03:40:11,776 [14193] DEBUG: Event needs-retry.s3.UploadPart: calling handler > 2023-08-24 03:40:11,786 [14194] INFO: Completing 'cnpg-cluster/base/20230824T033954/data.tar' (worker 1) 2023-08-24 03:40:11,787 [14194] DEBUG: Event before-endpoint-resolution.s3: calling handler 2023-08-24 03:40:11,787 [14194] DEBUG: Event before-endpoint-resolution.s3: calling handler > 2023-08-24 03:40:11,789 [14194] DEBUG: Calling endpoint provider with parameters: {'Bucket': 'barmantest', 'Region': '', 'UseFIPS': False, 'UseDualStack': False, 'Endpoint': 'https://.linodeobjects.com', 'ForcePathStyle': True, 'Accelerate': False, 'UseGlobalEndpoint': True, 'DisableMultiRegionAccessPoints': False, 'UseArnRegion': True} 2023-08-24 03:40:11,789 [14194] DEBUG: Endpoint provider result: https://.linodeobjects.com/barmantest 2023-08-24 03:40:11,789 [14194] DEBUG: Selecting from endpoint provider's list of auth schemes: "sigv4". User selected auth scheme is: "None" 2023-08-24 03:40:11,790 [14194] DEBUG: Selected auth type "v4" as "v4" with signing context params: {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True} 2023-08-24 03:40:11,790 [14194] DEBUG: Event before-parameter-build.s3.CompleteMultipartUpload: calling handler 2023-08-24 03:40:11,791 [14194] DEBUG: Event before-parameter-build.s3.CompleteMultipartUpload: calling handler 2023-08-24 03:40:11,791 [14194] DEBUG: Event before-parameter-build.s3.CompleteMultipartUpload: calling handler > 2023-08-24 03:40:11,791 [14194] DEBUG: Event before-parameter-build.s3.CompleteMultipartUpload: calling handler 2023-08-24 03:40:11,793 [14194] DEBUG: Event before-call.s3.CompleteMultipartUpload: calling handler 2023-08-24 03:40:11,794 [14194] DEBUG: Event before-call.s3.CompleteMultipartUpload: calling handler 2023-08-24 03:40:11,794 [14194] DEBUG: Event before-call.s3.CompleteMultipartUpload: calling handler 2023-08-24 03:40:11,794 [14194] DEBUG: Making request for OperationModel(name=CompleteMultipartUpload) with params: {'url_path': '/cnpg-cluster/base/20230824T033954/data.tar', 'query_string': {'uploadId': '2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon'}, 'method': 'POST', 'headers': {'User-Agent': 'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource'}, 'body': b'1"d2b51345e9a77ff94dc5e6c110a5dec7"2"5e6a658d0d6a87365c132ec2331df647"', 'auth_path': '/barmantest/cnpg-cluster/base/20230824T033954/data.tar', 'url': 'https://.linodeobjects.com/barmantest/cnpg-cluster/base/20230824T033954/data.tar?uploadId=2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon', 'context': {'client_region': '', 'client_config': , 'has_streaming_input': False, 'auth_type': 'v4', 'signing': {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True}, 's3_redirect': {'redirected': False, 'bucket': 'barmantest', 'params': {'Bucket': 'barmantest', 'Key': 'cnpg-cluster/base/20230824T033954/data.tar', 'UploadId': '2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon', 'MultipartUpload': {'Parts': [{'PartNumber': 1, 'ETag': '"d2b51345e9a77ff94dc5e6c110a5dec7"'}, {'PartNumber': 2, 'ETag': '"5e6a658d0d6a87365c132ec2331df647"'}]}}}}} 2023-08-24 03:40:11,795 [14194] DEBUG: Event request-created.s3.CompleteMultipartUpload: calling handler > 2023-08-24 03:40:11,795 [14194] DEBUG: Event choose-signer.s3.CompleteMultipartUpload: calling handler > 2023-08-24 03:40:11,795 [14194] DEBUG: Event choose-signer.s3.CompleteMultipartUpload: calling handler 2023-08-24 03:40:11,795 [14194] DEBUG: Event before-sign.s3.CompleteMultipartUpload: calling handler 2023-08-24 03:40:11,796 [14194] DEBUG: Calculating 
signature using v4 auth. 2023-08-24 03:40:11,796 [14194] DEBUG: CanonicalRequest: POST /barmantest/cnpg-cluster/base/20230824T033954/data.tar uploadId=2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon host:.linodeobjects.com x-amz-content-sha256:7c44c32acf39a5b5d9b3b7da204fa2555d78ae9531d1f33628718acea19872ac x-amz-date:20230824T034011Z host;x-amz-content-sha256;x-amz-date 7c44c32acf39a5b5d9b3b7da204fa2555d78ae9531d1f33628718acea19872ac 2023-08-24 03:40:11,797 [14194] DEBUG: StringToSign: AWS4-HMAC-SHA256 20230824T034011Z 20230824//s3/aws4_request d6cd454dab2355dbc2c229b964b0bd8120275b0749b69e35f523d8b74f4b4baa 2023-08-24 03:40:11,797 [14194] DEBUG: Signature: 40a708c7bdb83fdc687af0a6c9966af55c7d50e1d2df174450ac1f04ad56dc50 2023-08-24 03:40:11,797 [14194] DEBUG: Event request-created.s3.CompleteMultipartUpload: calling handler 2023-08-24 03:40:11,797 [14194] DEBUG: Sending http request: .linodeobjects.com/barmantest/cnpg-cluster/base/20230824T033954/data.tar?uploadId=2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon, headers={'User-Agent': b'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource', 'X-Amz-Date': b'20230824T034011Z', 'X-Amz-Content-SHA256': b'7c44c32acf39a5b5d9b3b7da204fa2555d78ae9531d1f33628718acea19872ac', 'Authorization': b'AWS4-HMAC-SHA256 Credential=/20230824//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=40a708c7bdb83fdc687af0a6c9966af55c7d50e1d2df174450ac1f04ad56dc50', 'amz-sdk-invocation-id': b'4c203e07-e1f2-48af-bdd2-bc8dd649accd', 'amz-sdk-request': b'attempt=1', 'Content-Length': '271'}> 2023-08-24 03:40:11,798 [14194] DEBUG: Certificate path: /usr/local/lib/python3.9/dist-packages/certifi/cacert.pem 2023-08-24 03:40:12,056 [14194] DEBUG: https://.linodeobjects.com:443 "POST /barmantest/cnpg-cluster/base/20230824T033954/data.tar?uploadId=2~e6dDc5FsDCBP6V7ZtDZG8YFdpgB1aon HTTP/1.1" 200 388 2023-08-24 03:40:12,056 [14194] DEBUG: Response headers: {'Date': 'Thu, 24 Aug 2023 03:40:12 GMT', 'Content-Type': 'application/xml', 'Content-Length': '388', 'Connection': 'keep-alive', 'x-amz-request-id': 'tx00000eb842e2db7da655d-0064e6d11c-48181b32-default'} 2023-08-24 03:40:12,056 [14194] DEBUG: Response body: b'.linodeobjects.com/cnpg/barmantest/cnpg-cluster/base/20230824T033954/data.tarcnpgbarmantest/cnpg-cluster/base/20230824T033954/data.tar8a7cb8a8b68df3caa5cbc4d008464ed8-2' 2023-08-24 03:40:12,056 [14194] DEBUG: Event needs-retry.s3.CompleteMultipartUpload: calling handler 2023-08-24 03:40:12,057 [14194] DEBUG: Event needs-retry.s3.CompleteMultipartUpload: calling handler 2023-08-24 03:40:12,057 [14194] DEBUG: No retry needed. 
2023-08-24 03:40:12,057 [14194] DEBUG: Event needs-retry.s3.CompleteMultipartUpload: calling handler > 2023-08-24 03:40:12,058 [14182] INFO: Calculating backup statistics 2023-08-24 03:40:12,059 [14182] DEBUG: Calculating statistics for file data, index 0, data: { "end_time": "Thu Aug 24 03:40:12 2023", "key": "cnpg-cluster/base/20230824T033954/data.tar", "parts": { "1": { "end_time": "Thu Aug 24 03:40:04 2023", "part_number": 1, "start_time": "Thu Aug 24 03:39:55 2023" }, "2": { "end_time": "Thu Aug 24 03:40:11 2023", "part_number": 2, "start_time": "Thu Aug 24 03:39:56 2023" } }, "start_time": "Thu Aug 24 03:39:55 2023", "status": "done" } 2023-08-24 03:40:12,059 [14182] INFO: Uploading 'cnpg-cluster/base/20230824T033954/backup.info' 2023-08-24 03:40:12,060 [14182] DEBUG: Acquiring 0 2023-08-24 03:40:12,061 [14182] DEBUG: UploadSubmissionTask(transfer_id=0, {'transfer_future': }) about to wait for the following futures [] 2023-08-24 03:40:12,061 [14182] DEBUG: UploadSubmissionTask(transfer_id=0, {'transfer_future': }) done waiting for dependent futures 2023-08-24 03:40:12,061 [14182] DEBUG: Executing task UploadSubmissionTask(transfer_id=0, {'transfer_future': }) with kwargs {'client': , 'config': , 'osutil': , 'request_executor': , 'transfer_future': } 2023-08-24 03:40:12,061 [14182] DEBUG: Submitting task PutObjectTask(transfer_id=0, {'bucket': 'barmantest', 'key': 'cnpg-cluster/base/20230824T033954/backup.info', 'extra_args': {}}) to executor for transfer request: 0. 2023-08-24 03:40:12,061 [14182] DEBUG: Acquiring 0 2023-08-24 03:40:12,062 [14182] DEBUG: PutObjectTask(transfer_id=0, {'bucket': 'barmantest', 'key': 'cnpg-cluster/base/20230824T033954/backup.info', 'extra_args': {}}) about to wait for the following futures [] 2023-08-24 03:40:12,062 [14182] DEBUG: PutObjectTask(transfer_id=0, {'bucket': 'barmantest', 'key': 'cnpg-cluster/base/20230824T033954/backup.info', 'extra_args': {}}) done waiting for dependent futures 2023-08-24 03:40:12,063 [14182] DEBUG: Executing task PutObjectTask(transfer_id=0, {'bucket': 'barmantest', 'key': 'cnpg-cluster/base/20230824T033954/backup.info', 'extra_args': {}}) with kwargs {'client': , 'fileobj': , 'bucket': 'barmantest', 'key': 'cnpg-cluster/base/20230824T033954/backup.info', 'extra_args': {}} 2023-08-24 03:40:12,063 [14182] DEBUG: Event before-endpoint-resolution.s3: calling handler 2023-08-24 03:40:12,064 [14182] DEBUG: Event before-endpoint-resolution.s3: calling handler > 2023-08-24 03:40:12,064 [14182] DEBUG: Calling endpoint provider with parameters: {'Bucket': 'barmantest', 'Region': '', 'UseFIPS': False, 'UseDualStack': False, 'Endpoint': 'https://.linodeobjects.com', 'ForcePathStyle': True, 'Accelerate': False, 'UseGlobalEndpoint': True, 'DisableMultiRegionAccessPoints': False, 'UseArnRegion': True} 2023-08-24 03:40:12,064 [14182] DEBUG: Endpoint provider result: https://.linodeobjects.com/barmantest 2023-08-24 03:40:12,064 [14182] DEBUG: Selecting from endpoint provider's list of auth schemes: "sigv4". 
User selected auth scheme is: "None" 2023-08-24 03:40:12,064 [14182] DEBUG: Selected auth type "v4" as "v4" with signing context params: {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True} 2023-08-24 03:40:12,064 [14182] DEBUG: Event before-parameter-build.s3.PutObject: calling handler 2023-08-24 03:40:12,064 [14182] DEBUG: Event before-parameter-build.s3.PutObject: calling handler 2023-08-24 03:40:12,064 [14182] DEBUG: Event before-parameter-build.s3.PutObject: calling handler 2023-08-24 03:40:12,064 [14182] DEBUG: Event before-parameter-build.s3.PutObject: calling handler 2023-08-24 03:40:12,064 [14182] DEBUG: Event before-parameter-build.s3.PutObject: calling handler 2023-08-24 03:40:12,064 [14182] DEBUG: Event before-parameter-build.s3.PutObject: calling handler > 2023-08-24 03:40:12,064 [14182] DEBUG: Event before-parameter-build.s3.PutObject: calling handler 2023-08-24 03:40:12,065 [14182] DEBUG: Event before-call.s3.PutObject: calling handler 2023-08-24 03:40:12,065 [14182] DEBUG: Event before-call.s3.PutObject: calling handler 2023-08-24 03:40:12,065 [14182] DEBUG: Adding expect 100 continue header to request. 2023-08-24 03:40:12,065 [14182] DEBUG: Event before-call.s3.PutObject: calling handler 2023-08-24 03:40:12,065 [14182] DEBUG: Event before-call.s3.PutObject: calling handler 2023-08-24 03:40:12,065 [14182] DEBUG: Making request for OperationModel(name=PutObject) with params: {'url_path': '/cnpg-cluster/base/20230824T033954/backup.info', 'query_string': {}, 'method': 'PUT', 'headers': {'User-Agent': 'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource', 'Content-MD5': 'x/3j64atM7XzqtvSMaP7JA==', 'Expect': '100-continue'}, 'body': , 'auth_path': '/barmantest/cnpg-cluster/base/20230824T033954/backup.info', 'url': 'https://.linodeobjects.com/barmantest/cnpg-cluster/base/20230824T033954/backup.info', 'context': {'client_region': '', 'client_config': , 'has_streaming_input': True, 'auth_type': 'v4', 'signing': {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True}, 's3_redirect': {'redirected': False, 'bucket': 'barmantest', 'params': {'Bucket': 'barmantest', 'Key': 'cnpg-cluster/base/20230824T033954/backup.info', 'Body': }}}} 2023-08-24 03:40:12,066 [14182] DEBUG: Event request-created.s3.PutObject: calling handler 2023-08-24 03:40:12,066 [14182] DEBUG: Event request-created.s3.PutObject: calling handler > 2023-08-24 03:40:12,066 [14182] DEBUG: Event choose-signer.s3.PutObject: calling handler > 2023-08-24 03:40:12,066 [14182] DEBUG: Event choose-signer.s3.PutObject: calling handler 2023-08-24 03:40:12,066 [14182] DEBUG: Event before-sign.s3.PutObject: calling handler 2023-08-24 03:40:12,066 [14182] DEBUG: Calculating signature using v4 auth. 
2023-08-24 03:40:12,066 [14182] DEBUG: CanonicalRequest: PUT /barmantest/cnpg-cluster/base/20230824T033954/backup.info content-md5:x/3j64atM7XzqtvSMaP7JA== host:.linodeobjects.com x-amz-content-sha256:UNSIGNED-PAYLOAD x-amz-date:20230824T034012Z content-md5;host;x-amz-content-sha256;x-amz-date UNSIGNED-PAYLOAD 2023-08-24 03:40:12,066 [14182] DEBUG: StringToSign: AWS4-HMAC-SHA256 20230824T034012Z 20230824//s3/aws4_request d70b350923aa77a25db4896537c55de26a4486b0f3170269f05d76b9a762dd8c 2023-08-24 03:40:12,066 [14182] DEBUG: Signature: c6f0a3e9934604409eb8d38bb9cf7fd34ed366e5ac24efb2a2f42638540d8743 2023-08-24 03:40:12,066 [14182] DEBUG: Event request-created.s3.PutObject: calling handler 2023-08-24 03:40:12,066 [14182] DEBUG: Event request-created.s3.PutObject: calling handler 2023-08-24 03:40:12,067 [14182] DEBUG: Sending http request: .linodeobjects.com/barmantest/cnpg-cluster/base/20230824T033954/backup.info, headers={'User-Agent': b'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource', 'Content-MD5': b'x/3j64atM7XzqtvSMaP7JA==', 'Expect': b'100-continue', 'X-Amz-Date': b'20230824T034012Z', 'X-Amz-Content-SHA256': b'UNSIGNED-PAYLOAD', 'Authorization': b'AWS4-HMAC-SHA256 Credential=/20230824//s3/aws4_request, SignedHeaders=content-md5;host;x-amz-content-sha256;x-amz-date, Signature=c6f0a3e9934604409eb8d38bb9cf7fd34ed366e5ac24efb2a2f42638540d8743', 'amz-sdk-invocation-id': b'ee49593b-b15d-48aa-a5ec-6b515887a671', 'amz-sdk-request': b'attempt=1', 'Content-Length': '1311'}> 2023-08-24 03:40:12,067 [14182] DEBUG: Certificate path: /usr/local/lib/python3.9/dist-packages/certifi/cacert.pem 2023-08-24 03:40:12,067 [14182] DEBUG: Waiting for 100 Continue response. 2023-08-24 03:40:12,063 [14182] DEBUG: Releasing acquire 0/None 2023-08-24 03:40:12,271 [14182] DEBUG: 100 Continue response seen, now sending request body. 2023-08-24 03:40:12,481 [14182] DEBUG: https://.linodeobjects.com:443 "PUT /barmantest/cnpg-cluster/base/20230824T033954/backup.info HTTP/1.1" 200 0 2023-08-24 03:40:12,481 [14182] DEBUG: Response headers: {'Date': 'Thu, 24 Aug 2023 03:40:12 GMT', 'Content-Length': '0', 'Connection': 'keep-alive', 'ETag': '"c7fde3eb86ad33b5f3aadbd231a3fb24"', 'Accept-Ranges': 'bytes', 'x-amz-request-id': 'tx000005266c3d667ecba9f-0064e6d11c-46db096f-default'} 2023-08-24 03:40:12,482 [14182] DEBUG: Response body: b'' 2023-08-24 03:40:12,482 [14182] DEBUG: Event needs-retry.s3.PutObject: calling handler 2023-08-24 03:40:12,483 [14182] DEBUG: No retry needed. 2023-08-24 03:40:12,483 [14182] DEBUG: Event needs-retry.s3.PutObject: calling handler > 2023-08-24 03:40:12,483 [14182] DEBUG: Releasing acquire 0/None 2023-08-24 03:40:12,485 [14182] INFO: Backup end at LSN: 0/B000100 (00000001000000000000000B, 00000100) 2023-08-24 03:40:12,485 [14182] INFO: Backup completed (start time: 2023-08-24 03:39:54.503234, elapsed time: 17 seconds) 2023-08-24 03:40:12,485 [14193] INFO: Upload process stopped (worker 0) 2023-08-24 03:40:12,486 [14194] INFO: Upload process stopped (worker 1) ```

I've additionally tried to get the objects for you programmatically, via

aws s3 ls s3://barmantest/ --endpoint-url=https://<my-endpoint>.linodeobjects.com --profile linode

but the response is empty, as if there were no objects. However, navigating to that endpoint/bucket in the Linode UI, I can see that it is not empty.

barmantest/cnpg-cluster/base:

20230823T032123
20230823T032223
20230823T032323
20230823T032423
20230823T032523
20230823T032623
20230823T033032
20230824T033440
20230824T033954

Each of these directories contains tiny backup.info and data.tar files.

Additionally, there is a barmantest/cnpg-cluster/wals/0000000100000000/ directory, with these contents:

000000010000000000000005
000000010000000000000006
000000010000000000000006.00000028.backup
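
If it helps, the same check can also be done programmatically. This is just a rough boto3 sketch of what I would run (untested, assuming the same environment credentials, with the endpoint placeholder kept as in the aws command above):

```python
# Rough sketch: list the backup "directories" directly with boto3, bypassing
# the aws CLI. Assumes AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are set and
# <my-endpoint> is replaced with the real (redacted) Linode endpoint.
import boto3

s3 = boto3.client("s3", endpoint_url="https://<my-endpoint>.linodeobjects.com")

resp = s3.list_objects_v2(
    Bucket="barmantest",
    Prefix="cnpg-cluster/base/",
    Delimiter="/",
)

# Each backup id should appear as a common prefix, e.g.
# "cnpg-cluster/base/20230824T033954/"
for p in resp.get("CommonPrefixes", []):
    print(p["Prefix"])
```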

Apologies if that's not the information you were asking for.

mikewallace1979 commented 1 year ago

@btxbtx thanks - this was exactly the information I wanted because I hoped to verify:

  1. the underlying backup process is completing successfully
  2. the underlying backup process is actually writing objects into the object store

Both of these things are true, which is good, but it unfortunately means we're no closer to understanding why the barman-cloud-backup-show command fails.

The fact that both barman-cloud-backup-list and aws s3 ls act as if they are running against an empty bucket feels significant, but I can't think of any obvious reason why this would happen - I'll read through the logs in more detail and give it some more thought.

btxbtx commented 1 year ago

Thank you for the help. Just to rule one more thing out--

Given that list and ls are failing, I thought it might be a credential issue (somehow granting write access without granting read access), but the permissions look good to me.

From the Linode dash:

This key has unlimited access to all buckets on your account.
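
For good measure, I can also try reading one of the objects the Linode UI shows directly - a quick boto3 sketch of that check (untested, endpoint placeholder as before):

```python
# Quick read-access check (sketch): fetch one object that the Linode UI shows
# exists. A permissions problem would surface here as an AccessDenied error.
import boto3

s3 = boto3.client("s3", endpoint_url="https://<my-endpoint>.linodeobjects.com")

obj = s3.get_object(
    Bucket="barmantest",
    Key="cnpg-cluster/base/20230824T033954/backup.info",
)
print(obj["ContentLength"], "bytes read OK")
```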
mikewallace1979 commented 1 year ago

Thanks for ruling out permissions - I forgot to mention it in my last response, but yes, the cnpg manifest files would also be useful.

mikewallace1979 commented 1 year ago

I can see something which might be significant:

In the output for barman-cloud-backup we can see a POST request being made to /barmantest/cnpg-cluster/base/20230824T033954/data.tar - this matches the location of the backups listed by the Linode UI: barmantest/cnpg-cluster/base.

However, in the output for both barman-cloud-backup-list and barman-cloud-backup-show there are GET requests to /barmantest which use a prefix of cnpg-cluster-1, e.g.:

GET
/barmantest
delimiter=%2F&encoding-type=url&list-type=2&prefix=cnpg-cluster-1%2Fbase%2F

cnpg-cluster-1 is the pod name rather than the cluster name, so if the operator is also running those commands with the pod name instead of the cluster name, that would explain why it can't find the backups. However, I also realise this output is from manual commands rather than those run by the operator, so we can't draw that conclusion just yet.
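
To make the suspected failure mode concrete: the backup catalogue is essentially discovered via a prefix listing along the lines of the simplified boto3 sketch below (not the actual barman-cloud code), which is why a wrong server name doesn't produce an error - it just looks like an empty bucket.

```python
# Simplified illustration (not the real barman-cloud implementation): backups
# are discovered by listing keys under "<server_name>/base/", so passing the
# pod name instead of the cluster name yields an empty result, not an error.
import boto3

def list_backup_ids(s3, bucket: str, server_name: str) -> list:
    resp = s3.list_objects_v2(
        Bucket=bucket,
        Prefix=f"{server_name}/base/",  # "cnpg-cluster/base/" vs "cnpg-cluster-1/base/"
        Delimiter="/",
    )
    # "cnpg-cluster/base/20230824T033954/" -> "20230824T033954"
    return [p["Prefix"].rstrip("/").split("/")[-1] for p in resp.get("CommonPrefixes", [])]

# list_backup_ids(s3, "barmantest", "cnpg-cluster")    -> ["20230823T032123", ...]
# list_backup_ids(s3, "barmantest", "cnpg-cluster-1")  -> []  (looks like an empty bucket)
```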

Can you check the options for the Can't extract backup id log line for a recent failed backup and confirm that the cluster name there matches the cluster name in the options of the Backup started log line for that same backup?

btxbtx commented 1 year ago

Here are the manifests, starting with the resources I've created. I've changed the S3 destination path from barmantest to barman; otherwise the config is the same as before.

The schedule has an arbitrary value in it right now.

Cluster and ScheduledBackup definition

```yaml
# Source: cnpg-cluster/templates/cluster.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cnpg-cluster
  namespace: "database"
  labels:
    app.kubernetes.io/name: cnpg-cluster
    helm.sh/chart: cnpg-cluster-0.1.3
    app.kubernetes.io/instance: cluster
    app.kubernetes.io/managed-by: Helm
spec:
  instances: 1
  primaryUpdateStrategy: unsupervised
  superuserSecret:
    name: superuser-creds
  managed:
    roles:
      - name: readonly
        ensure: present
        login: true
        superuser: false
        passwordSecret:
          name: readonly-creds
        inRoles:
          - pg_read_all_data
  backup:
    barmanObjectStore:
      destinationPath: "s3://barman/"
      endpointURL: https://cnpg.us-east-1.linodeobjects.com
      s3Credentials:
        accessKeyId:
          name: es-cnpg
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: es-cnpg
          key: ACCESS_SECRET_KEY
  storage:
    size: 1Gi
---
# Source: cnpg-cluster/templates/backup.tpl
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: cnpg-backup
spec:
  schedule: "38 * * * * *"
  backupOwnerReference: self
  cluster:
    name: cnpg-cluster
```

I'm also including the non-CRD manifests generated by the template command, in case they are useful.

Operator manifests ```yaml --- # Source: cloudnative-pg/templates/rbac.yaml apiVersion: v1 kind: ServiceAccount metadata: name: cnpg-operator labels: helm.sh/chart: cloudnative-pg-0.18.0 app.kubernetes.io/name: cloudnative-pg app.kubernetes.io/instance: pg-operator app.kubernetes.io/version: "1.20.0" app.kubernetes.io/managed-by: Helm --- # Source: cloudnative-pg/templates/config.yaml # # Copyright The CloudNativePG Contributors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # apiVersion: v1 kind: ConfigMap metadata: name: cnpg-controller-manager-config labels: helm.sh/chart: cloudnative-pg-0.18.0 app.kubernetes.io/name: cloudnative-pg app.kubernetes.io/instance: pg-operator app.kubernetes.io/version: "1.20.0" app.kubernetes.io/managed-by: Helm data: {} --- # Source: cloudnative-pg/templates/monitoring-configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: cnpg-default-monitoring labels: helm.sh/chart: cloudnative-pg-0.18.0 app.kubernetes.io/name: cloudnative-pg app.kubernetes.io/instance: pg-operator app.kubernetes.io/version: "1.20.0" app.kubernetes.io/managed-by: Helm cnpg.io/reload: "" data: queries: | backends: query: | SELECT sa.datname , sa.usename , sa.application_name , states.state , COALESCE(sa.count, 0) AS total , COALESCE(sa.max_tx_secs, 0) AS max_tx_duration_seconds FROM ( VALUES ('active') , ('idle') , ('idle in transaction') , ('idle in transaction (aborted)') , ('fastpath function call') , ('disabled') ) AS states(state) LEFT JOIN ( SELECT datname , state , usename , COALESCE(application_name, '') AS application_name , COUNT(*) , COALESCE(EXTRACT (EPOCH FROM (max(now() - xact_start))), 0) AS max_tx_secs FROM pg_catalog.pg_stat_activity GROUP BY datname, state, usename, application_name ) sa ON states.state = sa.state WHERE sa.usename IS NOT NULL metrics: - datname: usage: "LABEL" description: "Name of the database" - usename: usage: "LABEL" description: "Name of the user" - application_name: usage: "LABEL" description: "Name of the application" - state: usage: "LABEL" description: "State of the backend" - total: usage: "GAUGE" description: "Number of backends" - max_tx_duration_seconds: usage: "GAUGE" description: "Maximum duration of a transaction in seconds" backends_waiting: query: | SELECT count(*) AS total FROM pg_catalog.pg_locks blocked_locks JOIN pg_catalog.pg_locks blocking_locks ON blocking_locks.locktype = blocked_locks.locktype AND blocking_locks.database IS NOT DISTINCT FROM blocked_locks.database AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid AND 
blocking_locks.pid != blocked_locks.pid JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid WHERE NOT blocked_locks.granted metrics: - total: usage: "GAUGE" description: "Total number of backends that are currently waiting on other queries" pg_database: query: | SELECT datname , pg_catalog.pg_database_size(datname) AS size_bytes , pg_catalog.age(datfrozenxid) AS xid_age , pg_catalog.mxid_age(datminmxid) AS mxid_age FROM pg_catalog.pg_database metrics: - datname: usage: "LABEL" description: "Name of the database" - size_bytes: usage: "GAUGE" description: "Disk space used by the database" - xid_age: usage: "GAUGE" description: "Number of transactions from the frozen XID to the current one" - mxid_age: usage: "GAUGE" description: "Number of multiple transactions (Multixact) from the frozen XID to the current one" pg_postmaster: query: | SELECT EXTRACT(EPOCH FROM pg_postmaster_start_time) AS start_time FROM pg_catalog.pg_postmaster_start_time() metrics: - start_time: usage: "GAUGE" description: "Time at which postgres started (based on epoch)" pg_replication: query: "SELECT CASE WHEN NOT pg_catalog.pg_is_in_recovery() THEN 0 ELSE GREATEST (0, EXTRACT(EPOCH FROM (now() - pg_catalog.pg_last_xact_replay_timestamp()))) END AS lag, pg_catalog.pg_is_in_recovery() AS in_recovery, EXISTS (TABLE pg_stat_wal_receiver) AS is_wal_receiver_up, (SELECT count(*) FROM pg_stat_replication) AS streaming_replicas" metrics: - lag: usage: "GAUGE" description: "Replication lag behind primary in seconds" - in_recovery: usage: "GAUGE" description: "Whether the instance is in recovery" - is_wal_receiver_up: usage: "GAUGE" description: "Whether the instance wal_receiver is up" - streaming_replicas: usage: "GAUGE" description: "Number of streaming replicas connected to the instance" pg_replication_slots: query: | SELECT slot_name, slot_type, database, active, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), restart_lsn) FROM pg_catalog.pg_replication_slots WHERE NOT temporary metrics: - slot_name: usage: "LABEL" description: "Name of the replication slot" - slot_type: usage: "LABEL" description: "Type of the replication slot" - database: usage: "LABEL" description: "Name of the database" - active: usage: "GAUGE" description: "Flag indicating whether the slot is active" - pg_wal_lsn_diff: usage: "GAUGE" description: "Replication lag in bytes" pg_stat_archiver: query: | SELECT archived_count , failed_count , COALESCE(EXTRACT(EPOCH FROM (now() - last_archived_time)), -1) AS seconds_since_last_archival , COALESCE(EXTRACT(EPOCH FROM (now() - last_failed_time)), -1) AS seconds_since_last_failure , COALESCE(EXTRACT(EPOCH FROM last_archived_time), -1) AS last_archived_time , COALESCE(EXTRACT(EPOCH FROM last_failed_time), -1) AS last_failed_time , COALESCE(CAST(CAST('x'||pg_catalog.right(pg_catalog.split_part(last_archived_wal, '.', 1), 16) AS pg_catalog.bit(64)) AS pg_catalog.int8), -1) AS last_archived_wal_start_lsn , COALESCE(CAST(CAST('x'||pg_catalog.right(pg_catalog.split_part(last_failed_wal, '.', 1), 16) AS pg_catalog.bit(64)) AS pg_catalog.int8), -1) AS last_failed_wal_start_lsn , EXTRACT(EPOCH FROM stats_reset) AS stats_reset_time FROM pg_catalog.pg_stat_archiver metrics: - archived_count: usage: "COUNTER" description: "Number of WAL files that have been successfully archived" - failed_count: usage: "COUNTER" description: "Number of failed attempts for archiving WAL files" - seconds_since_last_archival: usage: "GAUGE" description: "Seconds since the last successful 
archival operation" - seconds_since_last_failure: usage: "GAUGE" description: "Seconds since the last failed archival operation" - last_archived_time: usage: "GAUGE" description: "Epoch of the last time WAL archiving succeeded" - last_failed_time: usage: "GAUGE" description: "Epoch of the last time WAL archiving failed" - last_archived_wal_start_lsn: usage: "GAUGE" description: "Archived WAL start LSN" - last_failed_wal_start_lsn: usage: "GAUGE" description: "Last failed WAL LSN" - stats_reset_time: usage: "GAUGE" description: "Time at which these statistics were last reset" pg_stat_bgwriter: query: | SELECT checkpoints_timed , checkpoints_req , checkpoint_write_time , checkpoint_sync_time , buffers_checkpoint , buffers_clean , maxwritten_clean , buffers_backend , buffers_backend_fsync , buffers_alloc FROM pg_catalog.pg_stat_bgwriter metrics: - checkpoints_timed: usage: "COUNTER" description: "Number of scheduled checkpoints that have been performed" - checkpoints_req: usage: "COUNTER" description: "Number of requested checkpoints that have been performed" - checkpoint_write_time: usage: "COUNTER" description: "Total amount of time that has been spent in the portion of checkpoint processing where files are written to disk, in milliseconds" - checkpoint_sync_time: usage: "COUNTER" description: "Total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk, in milliseconds" - buffers_checkpoint: usage: "COUNTER" description: "Number of buffers written during checkpoints" - buffers_clean: usage: "COUNTER" description: "Number of buffers written by the background writer" - maxwritten_clean: usage: "COUNTER" description: "Number of times the background writer stopped a cleaning scan because it had written too many buffers" - buffers_backend: usage: "COUNTER" description: "Number of buffers written directly by a backend" - buffers_backend_fsync: usage: "COUNTER" description: "Number of times a backend had to execute its own fsync call (normally the background writer handles those even when the backend does its own write)" - buffers_alloc: usage: "COUNTER" description: "Number of buffers allocated" pg_stat_database: query: | SELECT datname , xact_commit , xact_rollback , blks_read , blks_hit , tup_returned , tup_fetched , tup_inserted , tup_updated , tup_deleted , conflicts , temp_files , temp_bytes , deadlocks , blk_read_time , blk_write_time FROM pg_catalog.pg_stat_database metrics: - datname: usage: "LABEL" description: "Name of this database" - xact_commit: usage: "COUNTER" description: "Number of transactions in this database that have been committed" - xact_rollback: usage: "COUNTER" description: "Number of transactions in this database that have been rolled back" - blks_read: usage: "COUNTER" description: "Number of disk blocks read in this database" - blks_hit: usage: "COUNTER" description: "Number of times disk blocks were found already in the buffer cache, so that a read was not necessary (this only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache)" - tup_returned: usage: "COUNTER" description: "Number of rows returned by queries in this database" - tup_fetched: usage: "COUNTER" description: "Number of rows fetched by queries in this database" - tup_inserted: usage: "COUNTER" description: "Number of rows inserted by queries in this database" - tup_updated: usage: "COUNTER" description: "Number of rows updated by queries in this database" - tup_deleted: usage: "COUNTER" description: "Number 
of rows deleted by queries in this database" - conflicts: usage: "COUNTER" description: "Number of queries canceled due to conflicts with recovery in this database" - temp_files: usage: "COUNTER" description: "Number of temporary files created by queries in this database" - temp_bytes: usage: "COUNTER" description: "Total amount of data written to temporary files by queries in this database" - deadlocks: usage: "COUNTER" description: "Number of deadlocks detected in this database" - blk_read_time: usage: "COUNTER" description: "Time spent reading data file blocks by backends in this database, in milliseconds" - blk_write_time: usage: "COUNTER" description: "Time spent writing data file blocks by backends in this database, in milliseconds" pg_stat_replication: primary: true query: | SELECT usename , COALESCE(application_name, '') AS application_name , COALESCE(client_addr::text, '') AS client_addr , EXTRACT(EPOCH FROM backend_start) AS backend_start , COALESCE(pg_catalog.age(backend_xmin), 0) AS backend_xmin_age , pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), sent_lsn) AS sent_diff_bytes , pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), write_lsn) AS write_diff_bytes , pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), flush_lsn) AS flush_diff_bytes , COALESCE(pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), replay_lsn),0) AS replay_diff_bytes , COALESCE((EXTRACT(EPOCH FROM write_lag)),0)::float AS write_lag_seconds , COALESCE((EXTRACT(EPOCH FROM flush_lag)),0)::float AS flush_lag_seconds , COALESCE((EXTRACT(EPOCH FROM replay_lag)),0)::float AS replay_lag_seconds FROM pg_catalog.pg_stat_replication metrics: - usename: usage: "LABEL" description: "Name of the replication user" - application_name: usage: "LABEL" description: "Name of the application" - client_addr: usage: "LABEL" description: "Client IP address" - backend_start: usage: "COUNTER" description: "Time when this process was started" - backend_xmin_age: usage: "COUNTER" description: "The age of this standby's xmin horizon" - sent_diff_bytes: usage: "GAUGE" description: "Difference in bytes from the last write-ahead log location sent on this connection" - write_diff_bytes: usage: "GAUGE" description: "Difference in bytes from the last write-ahead log location written to disk by this standby server" - flush_diff_bytes: usage: "GAUGE" description: "Difference in bytes from the last write-ahead log location flushed to disk by this standby server" - replay_diff_bytes: usage: "GAUGE" description: "Difference in bytes from the last write-ahead log location replayed into the database on this standby server" - write_lag_seconds: usage: "GAUGE" description: "Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written it" - flush_lag_seconds: usage: "GAUGE" description: "Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written and flushed it" - replay_lag_seconds: usage: "GAUGE" description: "Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written, flushed and applied it" pg_settings: query: | SELECT name, CASE setting WHEN 'on' THEN '1' WHEN 'off' THEN '0' ELSE setting END AS setting FROM pg_catalog.pg_settings WHERE vartype IN ('integer', 'real', 'bool') ORDER BY 1 metrics: - name: usage: "LABEL" description: "Name of the setting" - setting: usage: "GAUGE" description: "Setting value" --- # Source: cloudnative-pg/templates/rbac.yaml 
apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cnpg-operator labels: helm.sh/chart: cloudnative-pg-0.18.0 app.kubernetes.io/name: cloudnative-pg app.kubernetes.io/instance: pg-operator app.kubernetes.io/version: "1.20.0" app.kubernetes.io/managed-by: Helm rules: - apiGroups: - "" resources: - configmaps verbs: - create - delete - get - list - patch - update - watch - apiGroups: - "" resources: - configmaps/status verbs: - get - patch - update - apiGroups: - "" resources: - events verbs: - create - patch - apiGroups: - "" resources: - namespaces verbs: - get - list - watch - apiGroups: - "" resources: - nodes verbs: - get - list - watch - apiGroups: - "" resources: - persistentvolumeclaims verbs: - create - delete - get - list - patch - watch - apiGroups: - "" resources: - pods verbs: - create - delete - get - list - patch - watch - apiGroups: - "" resources: - pods/exec verbs: - create - delete - get - list - patch - watch - apiGroups: - "" resources: - pods/status verbs: - get - apiGroups: - "" resources: - secrets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - "" resources: - secrets/status verbs: - get - patch - update - apiGroups: - "" resources: - serviceaccounts verbs: - create - get - list - patch - update - watch - apiGroups: - "" resources: - services verbs: - create - delete - get - list - patch - update - watch - apiGroups: - admissionregistration.k8s.io resources: - mutatingwebhookconfigurations verbs: - get - list - patch - update - apiGroups: - admissionregistration.k8s.io resources: - validatingwebhookconfigurations verbs: - get - list - patch - update - apiGroups: - apiextensions.k8s.io resources: - customresourcedefinitions verbs: - get - list - update - apiGroups: - apps resources: - deployments verbs: - create - delete - get - list - patch - update - watch - apiGroups: - batch resources: - jobs verbs: - create - delete - get - list - patch - watch - apiGroups: - coordination.k8s.io resources: - leases verbs: - create - get - update - apiGroups: - monitoring.coreos.com resources: - podmonitors verbs: - create - delete - get - list - patch - watch - apiGroups: - policy resources: - poddisruptionbudgets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - postgresql.cnpg.io resources: - backups verbs: - create - delete - get - list - patch - update - watch - apiGroups: - postgresql.cnpg.io resources: - backups/status verbs: - get - patch - update - apiGroups: - postgresql.cnpg.io resources: - clusters verbs: - create - delete - get - list - patch - update - watch - apiGroups: - postgresql.cnpg.io resources: - clusters/finalizers verbs: - update - apiGroups: - postgresql.cnpg.io resources: - clusters/status verbs: - get - patch - update - watch - apiGroups: - postgresql.cnpg.io resources: - poolers verbs: - create - delete - get - list - patch - update - watch - apiGroups: - postgresql.cnpg.io resources: - poolers/finalizers verbs: - update - apiGroups: - postgresql.cnpg.io resources: - poolers/status verbs: - get - patch - update - watch - apiGroups: - postgresql.cnpg.io resources: - scheduledbackups verbs: - create - delete - get - list - patch - update - watch - apiGroups: - postgresql.cnpg.io resources: - scheduledbackups/status verbs: - get - patch - update - apiGroups: - rbac.authorization.k8s.io resources: - rolebindings verbs: - create - get - list - patch - update - watch - apiGroups: - rbac.authorization.k8s.io resources: - roles verbs: - create - get - list - patch - update - watch 
--- # Source: cloudnative-pg/templates/rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: cnpg-operator labels: helm.sh/chart: cloudnative-pg-0.18.0 app.kubernetes.io/name: cloudnative-pg app.kubernetes.io/instance: pg-operator app.kubernetes.io/version: "1.20.0" app.kubernetes.io/managed-by: Helm roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cnpg-operator subjects: - kind: ServiceAccount name: cnpg-operator namespace: database --- # Source: cloudnative-pg/templates/service.yaml apiVersion: v1 kind: Service metadata: name: cnpg-webhook-service labels: helm.sh/chart: cloudnative-pg-0.18.0 app.kubernetes.io/name: cloudnative-pg app.kubernetes.io/instance: pg-operator app.kubernetes.io/version: "1.20.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 443 targetPort: webhook-server name: webhook-server selector: app.kubernetes.io/name: cloudnative-pg app.kubernetes.io/instance: pg-operator --- # Source: cloudnative-pg/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: cnpg-operator labels: helm.sh/chart: cloudnative-pg-0.18.0 app.kubernetes.io/name: cloudnative-pg app.kubernetes.io/instance: pg-operator app.kubernetes.io/version: "1.20.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: cloudnative-pg app.kubernetes.io/instance: pg-operator template: metadata: labels: app.kubernetes.io/name: cloudnative-pg app.kubernetes.io/instance: pg-operator spec: containers: - args: - controller - --leader-elect - --config-map-name=cnpg-controller-manager-config - --secret-name=cnpg-controller-manager-config - --webhook-port=9443 command: - /manager env: - name: OPERATOR_IMAGE_NAME value: "ghcr.io/cloudnative-pg/cloudnative-pg:1.20.0" - name: OPERATOR_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: MONITORING_QUERIES_CONFIGMAP value: "cnpg-default-monitoring" image: "ghcr.io/cloudnative-pg/cloudnative-pg:1.20.0" imagePullPolicy: IfNotPresent livenessProbe: httpGet: path: /readyz port: 9443 scheme: HTTPS initialDelaySeconds: 3 name: manager ports: - containerPort: 8080 name: metrics protocol: TCP - containerPort: 9443 name: webhook-server protocol: TCP readinessProbe: httpGet: path: /readyz port: 9443 scheme: HTTPS initialDelaySeconds: 3 resources: {} securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true runAsGroup: 10001 runAsUser: 10001 volumeMounts: - mountPath: /controller name: scratch-data - mountPath: /run/secrets/cnpg.io/webhook name: webhook-certificates securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault serviceAccountName: cnpg-operator terminationGracePeriodSeconds: 10 volumes: - emptyDir: {} name: scratch-data - name: webhook-certificates secret: defaultMode: 420 optional: true secretName: cnpg-webhook-cert --- # Source: cloudnative-pg/templates/mutatingwebhookconfiguration.yaml apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: name: cnpg-mutating-webhook-configuration labels: helm.sh/chart: cloudnative-pg-0.18.0 app.kubernetes.io/name: cloudnative-pg app.kubernetes.io/instance: pg-operator app.kubernetes.io/version: "1.20.0" app.kubernetes.io/managed-by: Helm webhooks: - admissionReviewVersions: - v1 clientConfig: service: name: cnpg-webhook-service namespace: database path: /mutate-postgresql-cnpg-io-v1-backup port: 443 failurePolicy: Fail name: mbackup.kb.io rules: - apiGroups: - postgresql.cnpg.io 
apiVersions: - v1 operations: - CREATE - UPDATE resources: - backups sideEffects: None - admissionReviewVersions: - v1 clientConfig: service: name: cnpg-webhook-service namespace: database path: /mutate-postgresql-cnpg-io-v1-cluster port: 443 failurePolicy: Fail name: mcluster.kb.io rules: - apiGroups: - postgresql.cnpg.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - clusters sideEffects: None - admissionReviewVersions: - v1 clientConfig: service: name: cnpg-webhook-service namespace: database path: /mutate-postgresql-cnpg-io-v1-scheduledbackup port: 443 failurePolicy: Fail name: mscheduledbackup.kb.io rules: - apiGroups: - postgresql.cnpg.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - scheduledbackups sideEffects: None --- # Source: cloudnative-pg/templates/validatingwebhookconfiguration.yaml apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: name: cnpg-validating-webhook-configuration labels: helm.sh/chart: cloudnative-pg-0.18.0 app.kubernetes.io/name: cloudnative-pg app.kubernetes.io/instance: pg-operator app.kubernetes.io/version: "1.20.0" app.kubernetes.io/managed-by: Helm webhooks: - admissionReviewVersions: - v1 clientConfig: service: name: cnpg-webhook-service namespace: database path: /validate-postgresql-cnpg-io-v1-backup port: 443 failurePolicy: Fail name: vbackup.kb.io rules: - apiGroups: - postgresql.cnpg.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - backups sideEffects: None - admissionReviewVersions: - v1 clientConfig: service: name: cnpg-webhook-service namespace: database path: /validate-postgresql-cnpg-io-v1-cluster port: 443 failurePolicy: Fail name: vcluster.kb.io rules: - apiGroups: - postgresql.cnpg.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - clusters sideEffects: None - admissionReviewVersions: - v1 clientConfig: service: name: cnpg-webhook-service namespace: database path: /validate-postgresql-cnpg-io-v1-scheduledbackup port: 443 failurePolicy: Fail name: vscheduledbackup.kb.io rules: - apiGroups: - postgresql.cnpg.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - scheduledbackups sideEffects: None - admissionReviewVersions: - v1 clientConfig: service: name: cnpg-webhook-service namespace: database path: /validate-postgresql-cnpg-io-v1-pooler port: 443 failurePolicy: Fail name: vpooler.kb.io rules: - apiGroups: - postgresql.cnpg.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - poolers sideEffects: None ```

Not included are a role and rolebinding I had to make for the service account to read the secrets.

Great catch re: the pod name (rather than the cluster name) being used - I did indeed use the wrong arguments. I am trying to re-create the issue but am now running into a loop of still waiting for all required WAL segments to be archived; logs are given below.

Waiting on WAL segments ```json cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T02:56:46Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 02:56:46.799 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "822", "connection_from": "[local]", "session_id": "64e817f6.336", "session_line_num": "2", "command_tag": "SELECT", "session_start_time": "2023-08-25 02:54:46 UTC", "virtual_transaction_id": "3/872", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (120 seconds elapsed)", "hint": "Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T02:56:56Z", "msg": "WAL archiving is working", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T02:56:56Z", "msg": "Backup started", "backupName": "cnpg-backup-1692932216", "backupNamespace": "cnpg-backup-1692932216", "logging_pod": "cnpg-cluster-1", "options": [ "--user", "postgres", "--name", "backup-1692932216", "--endpoint-url", "https://.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barman/", "cnpg-cluster" ] } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T02:56:57Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 02:56:57.514 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "863", "connection_from": "[local]", "session_id": "64e81801.35f", "session_line_num": "2", "command_tag": "SELECT", "session_start_time": "2023-08-25 02:54:57 UTC", "virtual_transaction_id": "4/211", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (120 seconds elapsed)", "hint": "Check that your archive_command is executing properly. 
You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T02:56:57Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 02:56:57.563 UTC", "process_id": "25", "session_id": "64e81310.19", "session_line_num": "15", "session_start_time": "2023-08-25 02:33:52 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint starting: force wait", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T02:56:57Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 02:56:57.600 UTC", "process_id": "25", "session_id": "64e81310.19", "session_line_num": "16", "session_start_time": "2023-08-25 02:33:52 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.004 s, sync=0.001 s, total=0.038 s; sync files=0, longest=0.000 s, average=0.000 s; distance=32768 kB, estimate=45624 kB", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T02:56:57Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 02:56:57.711 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "922", "connection_from": "[local]", "session_id": "64e8183d.39a", "session_line_num": "1", "command_tag": "SELECT", "session_start_time": "2023-08-25 02:55:57 UTC", "virtual_transaction_id": "5/69", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (60 seconds elapsed)", "hint": "Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } ```
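To narrow down whether the archiver itself is failing (rather than the backup), a couple of checks from inside the instance pod might help - a sketch, assuming the default postgres container and local socket auth, with placeholders for the endpoint and bucket:

```bash
# Ask PostgreSQL how archiving is going; failed_count / last_failed_wal point
# at the problem segment if archive_command keeps erroring.
psql -U postgres -c "SELECT archived_count, failed_count, last_archived_wal,
                            last_failed_wal, last_failed_time
                     FROM pg_stat_archiver;"

# Test connectivity of the WAL archive destination directly
# (-t/--test only checks connectivity, it does not upload anything).
barman-cloud-wal-archive -t --cloud-provider aws-s3 \
  --endpoint-url <endpoint-url> s3://<bucket>/ cnpg-cluster
```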

And the manual backup attempts are failing with a brand-new 403:

manual backup, using `barman` destination ``` 2023-08-25 03:29:56,698 [610] DEBUG: Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane 2023-08-25 03:29:56,699 [610] DEBUG: Changing event name from before-call.apigateway to before-call.api-gateway 2023-08-25 03:29:56,699 [610] DEBUG: Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict 2023-08-25 03:29:56,700 [610] DEBUG: Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration 2023-08-25 03:29:56,700 [610] DEBUG: Changing event name from before-parameter-build.route53 to before-parameter-build.route-53 2023-08-25 03:29:56,700 [610] DEBUG: Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search 2023-08-25 03:29:56,700 [610] DEBUG: Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section 2023-08-25 03:29:56,701 [610] DEBUG: Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask 2023-08-25 03:29:56,702 [610] DEBUG: Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section 2023-08-25 03:29:56,702 [610] DEBUG: Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search 2023-08-25 03:29:56,702 [610] DEBUG: Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section 2023-08-25 03:29:56,722 [610] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/boto3/data/s3/2006-03-01/resources-1.json 2023-08-25 03:29:56,724 [610] DEBUG: IMDS ENDPOINT: http://169.254.169.254/ 2023-08-25 03:29:56,725 [610] DEBUG: Looking for credentials via: env 2023-08-25 03:29:56,725 [610] INFO: Found credentials in environment variables. 
2023-08-25 03:29:56,727 [610] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/endpoints.json 2023-08-25 03:29:56,737 [610] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/sdk-default-configuration.json 2023-08-25 03:29:56,737 [610] DEBUG: Event choose-service-name: calling handler 2023-08-25 03:29:56,747 [610] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/s3/2006-03-01/service-2.json 2023-08-25 03:29:56,764 [610] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/s3/2006-03-01/endpoint-rule-set-1.json.gz 2023-08-25 03:29:56,769 [610] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/partitions.json 2023-08-25 03:29:56,770 [610] DEBUG: Event creating-client-class.s3: calling handler 2023-08-25 03:29:56,770 [610] DEBUG: Event creating-client-class.s3: calling handler ._handler at 0xffff985214c0> 2023-08-25 03:29:56,778 [610] DEBUG: Event creating-client-class.s3: calling handler 2023-08-25 03:29:56,789 [610] DEBUG: Setting s3 timeout as (60, 60) 2023-08-25 03:29:56,790 [610] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/_retry.json 2023-08-25 03:29:56,791 [610] DEBUG: Registering retry handlers for service: s3 2023-08-25 03:29:56,791 [610] DEBUG: Registering S3 region redirector handler 2023-08-25 03:29:56,791 [610] DEBUG: Loading s3:s3 2023-08-25 03:29:56,791 [610] DEBUG: Event before-endpoint-resolution.s3: calling handler 2023-08-25 03:29:56,792 [610] DEBUG: Event before-endpoint-resolution.s3: calling handler > 2023-08-25 03:29:56,792 [610] DEBUG: Calling endpoint provider with parameters: {'Bucket': 'barman', 'Region': '', 'UseFIPS': False, 'UseDualStack': False, 'Endpoint': 'https://.linodeobjects.com', 'ForcePathStyle': True, 'Accelerate': False, 'UseGlobalEndpoint': True, 'DisableMultiRegionAccessPoints': False, 'UseArnRegion': True} 2023-08-25 03:29:56,792 [610] DEBUG: Endpoint provider result: https://.linodeobjects.com/barman 2023-08-25 03:29:56,792 [610] DEBUG: Selecting from endpoint provider's list of auth schemes: "sigv4". 
User selected auth scheme is: "None" 2023-08-25 03:29:56,792 [610] DEBUG: Selected auth type "v4" as "v4" with signing context params: {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True} 2023-08-25 03:29:56,792 [610] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler 2023-08-25 03:29:56,792 [610] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler 2023-08-25 03:29:56,792 [610] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler > 2023-08-25 03:29:56,792 [610] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler 2023-08-25 03:29:56,793 [610] DEBUG: Event before-call.s3.HeadBucket: calling handler 2023-08-25 03:29:56,793 [610] DEBUG: Event before-call.s3.HeadBucket: calling handler 2023-08-25 03:29:56,793 [610] DEBUG: Event before-call.s3.HeadBucket: calling handler 2023-08-25 03:29:56,793 [610] DEBUG: Making request for OperationModel(name=HeadBucket) with params: {'url_path': '', 'query_string': {}, 'method': 'HEAD', 'headers': {'User-Agent': 'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource'}, 'body': b'', 'auth_path': '/barman/', 'url': 'https://.linodeobjects.com/barman', 'context': {'client_region': '', 'client_config': , 'has_streaming_input': False, 'auth_type': 'v4', 'signing': {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True}, 's3_redirect': {'redirected': False, 'bucket': 'barman', 'params': {'Bucket': 'barman'}}}} 2023-08-25 03:29:56,793 [610] DEBUG: Event request-created.s3.HeadBucket: calling handler > 2023-08-25 03:29:56,793 [610] DEBUG: Event choose-signer.s3.HeadBucket: calling handler > 2023-08-25 03:29:56,793 [610] DEBUG: Event choose-signer.s3.HeadBucket: calling handler 2023-08-25 03:29:56,793 [610] DEBUG: Event before-sign.s3.HeadBucket: calling handler 2023-08-25 03:29:56,793 [610] DEBUG: Calculating signature using v4 auth. 
2023-08-25 03:29:56,793 [610] DEBUG: CanonicalRequest: HEAD /barman host:.linodeobjects.com x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20230825T032956Z host;x-amz-content-sha256;x-amz-date e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2023-08-25 03:29:56,793 [610] DEBUG: StringToSign: AWS4-HMAC-SHA256 20230825T032956Z 20230825//s3/aws4_request 7d2e2652a5c8ad420fdc3f127faed46de72051b41ced2b57b57cb18a6b709459 2023-08-25 03:29:56,794 [610] DEBUG: Signature: 007a1a4e97a4c5cd32f1bfe914fc2e5c9ad195a2238cdb5946f4a80d8dcd02ad 2023-08-25 03:29:56,794 [610] DEBUG: Event request-created.s3.HeadBucket: calling handler 2023-08-25 03:29:56,794 [610] DEBUG: Sending http request: .linodeobjects.com/barman, headers={'User-Agent': b'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource', 'X-Amz-Date': b'20230825T032956Z', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=AWS_ACCESS_KEY_ID=/20230825//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=007a1a4e97a4c5cd32f1bfe914fc2e5c9ad195a2238cdb5946f4a80d8dcd02ad', 'amz-sdk-invocation-id': b'00024999-3339-4c0e-bf89-0546b6309b0c', 'amz-sdk-request': b'attempt=1'}> 2023-08-25 03:29:56,794 [610] DEBUG: Certificate path: /usr/local/lib/python3.9/dist-packages/certifi/cacert.pem 2023-08-25 03:29:56,794 [610] DEBUG: Starting new HTTPS connection (1): .linodeobjects.com:443 2023-08-25 03:29:57,985 [610] DEBUG: https://.linodeobjects.com:443 "HEAD /barman HTTP/1.1" 403 0 2023-08-25 03:29:57,986 [610] DEBUG: Response headers: {'Date': 'Fri, 25 Aug 2023 03:29:57 GMT', 'Content-Type': 'application/xml', 'Content-Length': '199', 'Connection': 'keep-alive', 'x-amz-request-id': 'tx000000b445e30bdf658b9-0064e82035-4753c6a0-default', 'Accept-Ranges': 'bytes'} 2023-08-25 03:29:57,986 [610] DEBUG: Response body: b'' 2023-08-25 03:29:57,989 [610] DEBUG: Event needs-retry.s3.HeadBucket: calling handler 2023-08-25 03:29:57,990 [610] DEBUG: No retry needed. 2023-08-25 03:29:57,990 [610] DEBUG: Event needs-retry.s3.HeadBucket: calling handler > 2023-08-25 03:29:57,990 [610] ERROR: Barman cloud backup exception: An error occurred (403) when calling the HeadBucket operation: Forbidden 2023-08-25 03:29:57,990 [610] DEBUG: Exception details: Traceback (most recent call last): File "/usr/local/lib/python3.9/dist-packages/barman/clients/cloud_backup.py", line 155, in main if not cloud_interface.test_connectivity(): File "/usr/local/lib/python3.9/dist-packages/barman/cloud_providers/aws_s3.py", line 179, in test_connectivity self.bucket_exists = self._check_bucket_existence() File "/usr/local/lib/python3.9/dist-packages/barman/cloud_providers/aws_s3.py", line 194, in _check_bucket_existence self.s3.meta.client.head_bucket(Bucket=self.bucket_name) File "/usr/local/lib/python3.9/dist-packages/botocore/client.py", line 530, in _api_call return self._make_api_call(operation_name, kwargs) File "/usr/local/lib/python3.9/dist-packages/botocore/client.py", line 960, in _make_api_call raise error_class(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (403) when calling the HeadBucket operation: Forbidden ```

Here is another attempt at the list, this time using cnpg-cluster instead of cnpg-cluster-1. I used the barmantest bucket in this case since I can't seem to back up to my new barman destination.

barman-cloud-backup-list, barmantest destination ```json 2023-08-25 03:19:45,443 [389] DEBUG: Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane 2023-08-25 03:19:45,443 [389] DEBUG: Changing event name from before-call.apigateway to before-call.api-gateway 2023-08-25 03:19:45,444 [389] DEBUG: Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict 2023-08-25 03:19:45,445 [389] DEBUG: Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration 2023-08-25 03:19:45,445 [389] DEBUG: Changing event name from before-parameter-build.route53 to before-parameter-build.route-53 2023-08-25 03:19:45,445 [389] DEBUG: Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search 2023-08-25 03:19:45,445 [389] DEBUG: Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section 2023-08-25 03:19:45,446 [389] DEBUG: Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask 2023-08-25 03:19:45,446 [389] DEBUG: Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section 2023-08-25 03:19:45,446 [389] DEBUG: Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search 2023-08-25 03:19:45,447 [389] DEBUG: Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section 2023-08-25 03:19:45,456 [389] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/boto3/data/s3/2006-03-01/resources-1.json 2023-08-25 03:19:45,457 [389] DEBUG: IMDS ENDPOINT: http://169.254.169.254/ 2023-08-25 03:19:45,458 [389] DEBUG: Looking for credentials via: env 2023-08-25 03:19:45,458 [389] INFO: Found credentials in environment variables. 
2023-08-25 03:19:45,459 [389] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/endpoints.json 2023-08-25 03:19:45,468 [389] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/sdk-default-configuration.json 2023-08-25 03:19:45,469 [389] DEBUG: Event choose-service-name: calling handler 2023-08-25 03:19:45,477 [389] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/s3/2006-03-01/service-2.json 2023-08-25 03:19:45,492 [389] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/s3/2006-03-01/endpoint-rule-set-1.json.gz 2023-08-25 03:19:45,496 [389] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/partitions.json 2023-08-25 03:19:45,498 [389] DEBUG: Event creating-client-class.s3: calling handler 2023-08-25 03:19:45,498 [389] DEBUG: Event creating-client-class.s3: calling handler ._handler at 0xffffa3fa99d0> 2023-08-25 03:19:45,504 [389] DEBUG: Event creating-client-class.s3: calling handler 2023-08-25 03:19:45,505 [389] DEBUG: Setting s3 timeout as (60, 60) 2023-08-25 03:19:45,506 [389] DEBUG: Loading JSON file: /usr/local/lib/python3.9/dist-packages/botocore/data/_retry.json 2023-08-25 03:19:45,506 [389] DEBUG: Registering retry handlers for service: s3 2023-08-25 03:19:45,507 [389] DEBUG: Registering S3 region redirector handler 2023-08-25 03:19:45,507 [389] DEBUG: Loading s3:s3 2023-08-25 03:19:45,507 [389] DEBUG: Event before-endpoint-resolution.s3: calling handler 2023-08-25 03:19:45,507 [389] DEBUG: Event before-endpoint-resolution.s3: calling handler > 2023-08-25 03:19:45,508 [389] DEBUG: Calling endpoint provider with parameters: {'Bucket': 'barmantest', 'Region': '', 'UseFIPS': False, 'UseDualStack': False, 'Endpoint': 'https://.linodeobjects.com', 'ForcePathStyle': True, 'Accelerate': False, 'UseGlobalEndpoint': True, 'DisableMultiRegionAccessPoints': False, 'UseArnRegion': True} 2023-08-25 03:19:45,508 [389] DEBUG: Endpoint provider result: https://.linodeobjects.com/barmantest 2023-08-25 03:19:45,508 [389] DEBUG: Selecting from endpoint provider's list of auth schemes: "sigv4". 
User selected auth scheme is: "None" 2023-08-25 03:19:45,508 [389] DEBUG: Selected auth type "v4" as "v4" with signing context params: {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True} 2023-08-25 03:19:45,508 [389] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler 2023-08-25 03:19:45,508 [389] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler 2023-08-25 03:19:45,508 [389] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler > 2023-08-25 03:19:45,508 [389] DEBUG: Event before-parameter-build.s3.HeadBucket: calling handler 2023-08-25 03:19:45,508 [389] DEBUG: Event before-call.s3.HeadBucket: calling handler 2023-08-25 03:19:45,508 [389] DEBUG: Event before-call.s3.HeadBucket: calling handler 2023-08-25 03:19:45,508 [389] DEBUG: Event before-call.s3.HeadBucket: calling handler 2023-08-25 03:19:45,509 [389] DEBUG: Making request for OperationModel(name=HeadBucket) with params: {'url_path': '', 'query_string': {}, 'method': 'HEAD', 'headers': {'User-Agent': 'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource'}, 'body': b'', 'auth_path': '/barmantest/', 'url': 'https://.linodeobjects.com/barmantest', 'context': {'client_region': '', 'client_config': , 'has_streaming_input': False, 'auth_type': 'v4', 'signing': {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True}, 's3_redirect': {'redirected': False, 'bucket': 'barmantest', 'params': {'Bucket': 'barmantest'}}}} 2023-08-25 03:19:45,509 [389] DEBUG: Event request-created.s3.HeadBucket: calling handler > 2023-08-25 03:19:45,509 [389] DEBUG: Event choose-signer.s3.HeadBucket: calling handler > 2023-08-25 03:19:45,509 [389] DEBUG: Event choose-signer.s3.HeadBucket: calling handler 2023-08-25 03:19:45,509 [389] DEBUG: Event before-sign.s3.HeadBucket: calling handler 2023-08-25 03:19:45,509 [389] DEBUG: Calculating signature using v4 auth. 
2023-08-25 03:19:45,509 [389] DEBUG: CanonicalRequest: HEAD /barmantest host:.linodeobjects.com x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20230825T031945Z host;x-amz-content-sha256;x-amz-date e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2023-08-25 03:19:45,509 [389] DEBUG: StringToSign: AWS4-HMAC-SHA256 20230825T031945Z 20230825//s3/aws4_request 834c4d14a3470c15bcbb1268fc4f831de361690d17d3fb941787c4b290d5f408 2023-08-25 03:19:45,509 [389] DEBUG: Signature: 10b09bbf4e32a4b21156cb9cf43a776e2eecf16f8e8523197e8831367bfacec0 2023-08-25 03:19:45,509 [389] DEBUG: Event request-created.s3.HeadBucket: calling handler 2023-08-25 03:19:45,509 [389] DEBUG: Sending http request: .linodeobjects.com/barmantest, headers={'User-Agent': b'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource', 'X-Amz-Date': b'20230825T031945Z', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=/20230825//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=10b09bbf4e32a4b21156cb9cf43a776e2eecf16f8e8523197e8831367bfacec0', 'amz-sdk-invocation-id': b'bb646b90-c87d-4dee-8459-8be71d47529f', 'amz-sdk-request': b'attempt=1'}> 2023-08-25 03:19:45,510 [389] DEBUG: Certificate path: /usr/local/lib/python3.9/dist-packages/certifi/cacert.pem 2023-08-25 03:19:45,510 [389] DEBUG: Starting new HTTPS connection (1): .linodeobjects.com:443 2023-08-25 03:19:46,584 [389] DEBUG: https://.linodeobjects.com:443 "HEAD /barmantest HTTP/1.1" 200 0 2023-08-25 03:19:46,585 [389] DEBUG: Response headers: {'Date': 'Fri, 25 Aug 2023 03:19:46 GMT', 'Content-Type': 'binary/octet-stream', 'Content-Length': '0', 'Connection': 'keep-alive', 'Accept-Ranges': 'bytes', 'Last-Modified': 'Wed, 23 Aug 2023 03:21:23 GMT', 'x-rgw-object-type': 'Normal', 'ETag': '"d41d8cd98f00b204e9800998ecf8427e"', 'x-amz-request-id': 'tx000003bc386b2547307b2-0064e81dd2-46db096f-default'} 2023-08-25 03:19:46,586 [389] DEBUG: Response body: b'' 2023-08-25 03:19:46,586 [389] DEBUG: Event needs-retry.s3.HeadBucket: calling handler 2023-08-25 03:19:46,586 [389] DEBUG: No retry needed. 2023-08-25 03:19:46,587 [389] DEBUG: Event needs-retry.s3.HeadBucket: calling handler > 2023-08-25 03:19:46,587 [389] DEBUG: Event before-endpoint-resolution.s3: calling handler 2023-08-25 03:19:46,587 [389] DEBUG: Event before-endpoint-resolution.s3: calling handler > 2023-08-25 03:19:46,588 [389] DEBUG: Calling endpoint provider with parameters: {'Bucket': 'barmantest', 'Region': '', 'UseFIPS': False, 'UseDualStack': False, 'Endpoint': 'https://.linodeobjects.com', 'ForcePathStyle': True, 'Accelerate': False, 'UseGlobalEndpoint': True, 'DisableMultiRegionAccessPoints': False, 'UseArnRegion': True} 2023-08-25 03:19:46,588 [389] DEBUG: Endpoint provider result: https://.linodeobjects.com/barmantest 2023-08-25 03:19:46,588 [389] DEBUG: Selecting from endpoint provider's list of auth schemes: "sigv4". 
User selected auth scheme is: "None" 2023-08-25 03:19:46,588 [389] DEBUG: Selected auth type "v4" as "v4" with signing context params: {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True} 2023-08-25 03:19:46,588 [389] DEBUG: Event before-parameter-build.s3.ListObjectsV2: calling handler 2023-08-25 03:19:46,588 [389] DEBUG: Event before-parameter-build.s3.ListObjectsV2: calling handler 2023-08-25 03:19:46,588 [389] DEBUG: Event before-parameter-build.s3.ListObjectsV2: calling handler 2023-08-25 03:19:46,588 [389] DEBUG: Event before-parameter-build.s3.ListObjectsV2: calling handler > 2023-08-25 03:19:46,589 [389] DEBUG: Event before-parameter-build.s3.ListObjectsV2: calling handler 2023-08-25 03:19:46,589 [389] DEBUG: Event before-call.s3.ListObjectsV2: calling handler 2023-08-25 03:19:46,589 [389] DEBUG: Event before-call.s3.ListObjectsV2: calling handler 2023-08-25 03:19:46,589 [389] DEBUG: Event before-call.s3.ListObjectsV2: calling handler 2023-08-25 03:19:46,590 [389] DEBUG: Making request for OperationModel(name=ListObjectsV2) with params: {'url_path': '?list-type=2', 'query_string': {'prefix': 'cnpg-cluster/base/', 'delimiter': '/', 'encoding-type': 'url'}, 'method': 'GET', 'headers': {'User-Agent': 'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource'}, 'body': b'', 'auth_path': '/barmantest?list-type=2', 'url': 'https://.linodeobjects.com/barmantest?list-type=2&prefix=cnpg-cluster%2Fbase%2F&delimiter=%2F&encoding-type=url', 'context': {'client_region': '', 'client_config': , 'has_streaming_input': False, 'auth_type': 'v4', 'signing': {'region': '', 'signing_name': 's3', 'disableDoubleEncoding': True}, 'encoding_type_auto_set': True, 's3_redirect': {'redirected': False, 'bucket': 'barmantest', 'params': {'Bucket': 'barmantest', 'Prefix': 'cnpg-cluster/base/', 'Delimiter': '/', 'EncodingType': 'url'}}}} 2023-08-25 03:19:46,590 [389] DEBUG: Event request-created.s3.ListObjectsV2: calling handler > 2023-08-25 03:19:46,590 [389] DEBUG: Event choose-signer.s3.ListObjectsV2: calling handler > 2023-08-25 03:19:46,590 [389] DEBUG: Event choose-signer.s3.ListObjectsV2: calling handler 2023-08-25 03:19:46,590 [389] DEBUG: Event before-sign.s3.ListObjectsV2: calling handler 2023-08-25 03:19:46,591 [389] DEBUG: Calculating signature using v4 auth. 
2023-08-25 03:19:46,591 [389] DEBUG: CanonicalRequest: GET /barmantest delimiter=%2F&encoding-type=url&list-type=2&prefix=cnpg-cluster%2Fbase%2F host:.linodeobjects.com x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20230825T031946Z host;x-amz-content-sha256;x-amz-date e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2023-08-25 03:19:46,591 [389] DEBUG: StringToSign: AWS4-HMAC-SHA256 20230825T031946Z 20230825//s3/aws4_request f41c39cffa53076dcd2913e511b0fd717cd88729920a7ba0b9804d3dec0c1ba6 2023-08-25 03:19:46,591 [389] DEBUG: Signature: 6d346708f96ec2fce61ed74db21510f51f3a7c8c64bc969b42b88dbb932b2e1d 2023-08-25 03:19:46,591 [389] DEBUG: Event request-created.s3.ListObjectsV2: calling handler 2023-08-25 03:19:46,591 [389] DEBUG: Sending http request: .linodeobjects.com/barmantest?list-type=2&prefix=cnpg-cluster%2Fbase%2F&delimiter=%2F&encoding-type=url, headers={'User-Agent': b'Boto3/1.26.132 Python/3.9.2 Linux/5.10.104-linuxkit Botocore/1.29.132 Resource', 'X-Amz-Date': b'20230825T031946Z', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=/20230825//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=6d346708f96ec2fce61ed74db21510f51f3a7c8c64bc969b42b88dbb932b2e1d', 'amz-sdk-invocation-id': b'910dbc52-a83c-4867-9728-ad7958e2439a', 'amz-sdk-request': b'attempt=1'}> 2023-08-25 03:19:46,592 [389] DEBUG: Certificate path: /usr/local/lib/python3.9/dist-packages/certifi/cacert.pem 2023-08-25 03:19:46,862 [389] DEBUG: https://.linodeobjects.com:443 "GET /barmantest?list-type=2&prefix=cnpg-cluster%2Fbase%2F&delimiter=%2F&encoding-type=url HTTP/1.1" 200 0 2023-08-25 03:19:46,863 [389] DEBUG: Response headers: {'Date': 'Fri, 25 Aug 2023 03:19:46 GMT', 'Content-Type': 'binary/octet-stream', 'Content-Length': '0', 'Connection': 'keep-alive', 'Accept-Ranges': 'bytes', 'Last-Modified': 'Wed, 23 Aug 2023 03:21:23 GMT', 'x-rgw-object-type': 'Normal', 'ETag': '"d41d8cd98f00b204e9800998ecf8427e"', 'x-amz-request-id': 'tx00000d6fead79f8670048-0064e81dd2-48204254-default'} 2023-08-25 03:19:46,863 [389] DEBUG: Response body: b'' 2023-08-25 03:19:46,864 [389] DEBUG: Event needs-retry.s3.ListObjectsV2: calling handler 2023-08-25 03:19:46,864 [389] DEBUG: No retry needed. 2023-08-25 03:19:46,888 [389] DEBUG: Event needs-retry.s3.ListObjectsV2: calling handler > 2023-08-25 03:19:46,888 [389] DEBUG: Event after-call.s3.ListObjectsV2: calling handler Backup ID End Time Begin Wal Archival Status Name ```

I will have to do some debugging to see why the 403s are appearing on manual backups, and why my WAL archiving seems to have stopped working (both on my previous test cluster and a fresh one).
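For the 403 specifically, it may be worth confirming the new bucket is really visible to this key before suspecting barman - a rough sketch, again with a placeholder endpoint:

```bash
# HeadBucket is the first call barman-cloud-backup makes (see the traceback
# above); a 403 here usually means wrong credentials/region or a bucket that
# belongs to a different account.
aws s3api head-bucket --bucket barman --endpoint-url <endpoint-url>

# Check which buckets the key can actually see.
aws s3api list-buckets --endpoint-url <endpoint-url> --query 'Buckets[].Name'
```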

However, given the manifests, I think it is safe to say the operator is using cnpg-cluster rather than cnpg-cluster-1, and the latter was just a user error when I ran the manual commands. I will try and get proof of this as a sanity check, after debugging further.

mikewallace1979 commented 1 year ago

I have just reproduced this 🎉 :

Operator logs ```js { "level": "info", "ts": "2023-08-25T09:54:25Z", "msg": "WAL archiving is working", "logging_pod": "cluster-example-2" } { "level": "info", "ts": "2023-08-25T09:54:25Z", "msg": "Backup started", "backupName": "backup-17", "backupNamespace": "backup-17", "logging_pod": "cluster-example-2", "options": [ "--user", "postgres", "--name", "backup-1692957265", "--gzip", "--endpoint-url", "https://barman.us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barman/", "cluster-example" ] } { "level": "info", "ts": "2023-08-25T09:54:36Z", "msg": "Backup completed", "backupName": "backup-17", "backupNamespace": "backup-17", "logging_pod": "cluster-example-2" } { "level": "error", "ts": "2023-08-25T09:54:36Z", "logger": "barman", "msg": "Can't extract backup id", "logging_pod": "cluster-example-2", "command": "barman-cloud-backup-show", "options": [ "--format", "json", "--endpoint-url", "https://barman.us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barman/", "cluster-example", "backup-1692957265" ], "stdout": "", "stderr": "2023-08-25 09:54:36,834 [17349] ERROR: Barman cloud backup show exception: Unknown backup 'backup-1692957265' for server 'cluster-example'\n", "error": "exit status 4", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman.executeQueryCommand\n\tpkg/management/barman/backuplist.go:87\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman.GetBackupByName\n\tpkg/management/barman/backuplist.go:140\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres.(*BackupCommand).getExecutedBackupInfo\n\tpkg/management/postgres/backup.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres.(*BackupCommand).takeBackup\n\tpkg/management/postgres/backup.go:352\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres.(*BackupCommand).run\n\tpkg/management/postgres/backup.go:267" } { "level": "error", "ts": "2023-08-25T09:54:36Z", "msg": "Backup failed", "backupName": "backup-17", "backupNamespace": "backup-17", "logging_pod": "cluster-example-2", "error": "exit status 4", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/postgres.(*BackupCommand).run\n\tpkg/management/postgres/backup.go:271" } ```

The issue turned out to be the endpointURL value - if I use https://us-east-1.linodeobjects.com instead of https://barman.us-east-1.linodeobjects.com then everything works as expected.

Hopefully, if you make the same change to your endpointURL, that will resolve the original issue.
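For anyone applying that change to a running cluster, the setting lives at spec.backup.barmanObjectStore.endpointURL, so a merge patch along these lines should work (a sketch, using the cluster name and namespace from the manifests above; editing and re-applying the manifest is equivalent):

```bash
kubectl patch clusters.postgresql.cnpg.io cnpg-cluster -n database \
  --type merge \
  -p '{"spec":{"backup":{"barmanObjectStore":{"endpointURL":"https://us-east-1.linodeobjects.com"}}}}'
```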

I don't quite understand why using the bucket URL as the endpoint URL works at all, or why it behaves differently between the barman-cloud-backup and barman-cloud-backup-show commands. Presumably there's some undefined behaviour somewhere which means that PUT requests via the multipart API still end up in the right place but GET and HEAD requests don't. I'll do a bit more digging there.

btxbtx commented 1 year ago

Nice! 👏 I can now successfully upload backups when running barman-cloud-backup manually after exec'ing into one of the cnpg cluster pods:

```
Backup ID           End Time                 Begin Wal                     Archival Status  Name
20230825T105554     2023-08-25 10:55:55      000000010000000000000009                       backup-1692960953
20230825T110022     2023-08-25 11:00:23      000000010000000000000014                       backup-test
20230825T110609     2023-08-25 11:06:10      000000010000000000000017                       backup-1692961567
20230825T110810     2023-08-25 11:08:11      00000001000000000000001B                       backup-1692961687
20230825T110908     2023-08-25 11:09:09      00000001000000000000001E                       backup-1692961747
```

backup-test exists as expected! I believe the other backups are from my ScheduledBackup tests, which are still failing.
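
For reference, the manual run was roughly the following (a sketch - the pod and container names are just examples, and the S3 credentials were already present in the pod environment):

```bash
# Exec into one of the cluster pods (pod and container names are examples).
kubectl exec -it cnpg-cluster-1 -c postgres -- bash

# Take a manual backup against the corrected (region) endpoint.
barman-cloud-backup \
  --cloud-provider aws-s3 \
  --endpoint-url https://us-east-1.linodeobjects.com \
  --name backup-test \
  s3://barmannewtest/ cnpg-cluster

# List the backups that actually made it to the object store.
barman-cloud-backup-list \
  --cloud-provider aws-s3 \
  --endpoint-url https://us-east-1.linodeobjects.com \
  s3://barmannewtest/ cnpg-cluster
```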

I think it's safe to say the can't extract backup id issue is resolved. However, my scheduled backups still fail because they continuously wait for the WAL segments to be archived. I've included a full log here (nothing needed to be redacted), though it might be better suited to a separate issue. If you agree, please feel free to close this one as resolved.

WAL segment logs ```json cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:16:16Z", "msg": "Triggering the first WAL file to be archived", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:16:16Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:16:16.251 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "3", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint starting: immediate force wait", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:16:16Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:16:16.260 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "4", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint complete: wrote 2 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.002 s, sync=0.002 s, total=0.009 s; sync files=2, longest=0.002 s, average=0.001 s; distance=4 kB, estimate=88474 kB", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:16:23Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/00000001000000000000002A", "startTime": "2023-08-25T11:16:16Z", "endTime": "2023-08-25T11:16:23Z", "elapsedWalTime": 7.131268253 } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:17:16Z", "msg": "WAL archiving is working", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:17:16Z", "msg": "Backup started", "backupName": "cnpg-backup-1692962176", "backupNamespace": "cnpg-backup-1692962176", "logging_pod": "cnpg-cluster-1", "options": [ "--user", "postgres", "--name", "backup-1692962176", "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ] } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:17:16Z", "msg": "WAL archiving is working", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:17:16Z", "msg": "Backup started", "backupName": "cnpg-backup-1692962236", "backupNamespace": "cnpg-backup-1692962236", "logging_pod": "cnpg-cluster-1", "options": [ "--user", "postgres", "--name", "backup-1692962236", "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ] } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:17:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:17:17.783 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "5", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint starting: force wait", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:17:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:17:17.795 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": 
"6", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 2 recycled; write=0.002 s, sync=0.001 s, total=0.013 s; sync files=0, longest=0.000 s, average=0.000 s; distance=32763 kB, estimate=82903 kB", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:17:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:17:17.927 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "7", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint starting: force wait", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:17:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:17:17.935 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "8", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.001 s, sync=0.001 s, total=0.008 s; sync files=0, longest=0.000 s, average=0.000 s; distance=16384 kB, estimate=76251 kB", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:18:16Z", "msg": "WAL archiving is working", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:18:16Z", "msg": "Backup started", "backupName": "cnpg-backup-1692962296", "backupNamespace": "cnpg-backup-1692962296", "logging_pod": "cnpg-cluster-1", "options": [ "--user", "postgres", "--name", "backup-1692962296", "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ] } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:18:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:18:17.657 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "9", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint starting: force wait", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:18:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:18:17.674 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "10", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 3 recycled; write=0.004 s, sync=0.001 s, total=0.018 s; sync files=0, longest=0.000 s, average=0.000 s; distance=49152 kB, estimate=73541 kB", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:18:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:18:17.847 UTC", "user_name": "postgres", 
"database_name": "postgres", "process_id": "274", "connection_from": "[local]", "session_id": "64e88dbd.112", "session_line_num": "1", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:17:17 UTC", "virtual_transaction_id": "3/326", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (60 seconds elapsed)", "hint": "Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:18:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:18:17.961 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "284", "connection_from": "[local]", "session_id": "64e88dbd.11c", "session_line_num": "1", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:17:17 UTC", "virtual_transaction_id": "4/59", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (60 seconds elapsed)", "hint": "Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:18:44Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/00000001000000000000002B", "startTime": "2023-08-25T11:17:17Z", "endTime": "2023-08-25T11:18:44Z", "elapsedWalTime": 86.252344164 } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:19:16Z", "msg": "WAL archiving is working", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:19:16Z", "msg": "Backup started", "backupName": "cnpg-backup-1692962356", "backupNamespace": "cnpg-backup-1692962356", "logging_pod": "cnpg-cluster-1", "options": [ "--user", "postgres", "--name", "backup-1692962356", "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ] } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:19:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:19:17.637 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "11", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint starting: force wait", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:19:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:19:17.648 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "12", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 2 recycled; write=0.002 s, sync=0.001 s, total=0.011 s; sync files=0, longest=0.000 s, average=0.000 s; 
distance=32768 kB, estimate=69463 kB", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:19:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:19:17.932 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "274", "connection_from": "[local]", "session_id": "64e88dbd.112", "session_line_num": "2", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:17:17 UTC", "virtual_transaction_id": "3/326", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (120 seconds elapsed)", "hint": "Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:19:18Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:19:18.036 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "284", "connection_from": "[local]", "session_id": "64e88dbd.11c", "session_line_num": "2", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:17:17 UTC", "virtual_transaction_id": "4/59", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (120 seconds elapsed)", "hint": "Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:19:18Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:19:18.664 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "337", "connection_from": "[local]", "session_id": "64e88df9.151", "session_line_num": "1", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:18:17 UTC", "virtual_transaction_id": "5/69", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (60 seconds elapsed)", "hint": "Check that your archive_command is executing properly. 
You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:20:00Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/00000001000000000000002C", "startTime": "2023-08-25T11:18:44Z", "endTime": "2023-08-25T11:20:00Z", "elapsedWalTime": 76.513315785 } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:20:05Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/00000001000000000000002C.00000028.backup", "startTime": "2023-08-25T11:20:00Z", "endTime": "2023-08-25T11:20:05Z", "elapsedWalTime": 4.235454543 } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:20:16Z", "msg": "WAL archiving is working", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:20:16Z", "msg": "Backup started", "backupName": "cnpg-backup-1692962416", "backupNamespace": "cnpg-backup-1692962416", "logging_pod": "cnpg-cluster-1", "options": [ "--user", "postgres", "--name", "backup-1692962416", "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ] } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:20:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:20:17.471 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "13", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint starting: force wait", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:20:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:20:17.483 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "14", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 2 recycled; write=0.003 s, sync=0.001 s, total=0.014 s; sync files=0, longest=0.000 s, average=0.000 s; distance=32768 kB, estimate=65794 kB", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:20:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:20:17.640 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "398", "connection_from": "[local]", "session_id": "64e88e35.18e", "session_line_num": "1", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:19:17 UTC", "virtual_transaction_id": "6/69", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (60 seconds elapsed)", "hint": "Check that your archive_command is executing properly. 
You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:20:18Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:20:18.739 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "337", "connection_from": "[local]", "session_id": "64e88df9.151", "session_line_num": "2", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:18:17 UTC", "virtual_transaction_id": "5/69", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (120 seconds elapsed)", "hint": "Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:20:33Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/00000001000000000000002D", "startTime": "2023-08-25T11:20:05Z", "endTime": "2023-08-25T11:20:33Z", "elapsedWalTime": 28.51918393 } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:20:34Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:20:34.062 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "274", "connection_from": "[local]", "session_id": "64e88dbd.112", "session_line_num": "3", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:17:17 UTC", "virtual_transaction_id": "3/346", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "restore point \"barman_20230825T111717\" created at 0/36000090", "query": "SELECT pg_create_restore_point('barman_20230825T111717')", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:20:35Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/00000001000000000000002D.00000028.backup", "startTime": "2023-08-25T11:20:33Z", "endTime": "2023-08-25T11:20:35Z", "elapsedWalTime": 1.793890751 } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:21:01Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/00000001000000000000002E", "startTime": "2023-08-25T11:20:35Z", "endTime": "2023-08-25T11:21:01Z", "elapsedWalTime": 25.959960137 } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:21:02Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:21:02.214 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "284", "connection_from": "[local]", "session_id": "64e88dbd.11c", "session_line_num": "3", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:17:17 UTC", "virtual_transaction_id": "4/79", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "restore point \"barman_20230825T111717\" created at 0/37000090", "query": "SELECT pg_create_restore_point('barman_20230825T111717')", 
"application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:21:16Z", "msg": "WAL archiving is working", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:21:16Z", "msg": "Backup started", "backupName": "cnpg-backup-1692962476", "backupNamespace": "cnpg-backup-1692962476", "logging_pod": "cnpg-cluster-1", "options": [ "--user", "postgres", "--name", "backup-1692962476", "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ] } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:21:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:21:17.710 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "15", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint starting: force wait", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:21:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:21:17.721 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "16", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 4 recycled; write=0.002 s, sync=0.001 s, total=0.012 s; sync files=0, longest=0.000 s, average=0.000 s; distance=65536 kB, estimate=65768 kB", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:21:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:21:17.737 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "398", "connection_from": "[local]", "session_id": "64e88e35.18e", "session_line_num": "2", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:19:17 UTC", "virtual_transaction_id": "6/69", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (120 seconds elapsed)", "hint": "Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:21:18Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:21:18.517 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "478", "connection_from": "[local]", "session_id": "64e88e71.1de", "session_line_num": "1", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:20:17 UTC", "virtual_transaction_id": "7/69", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (60 seconds elapsed)", "hint": "Check that your archive_command is executing properly. 
You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:16Z", "msg": "WAL archiving is working", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:16Z", "msg": "Backup started", "backupName": "cnpg-backup-1692962536", "backupNamespace": "cnpg-backup-1692962536", "logging_pod": "cnpg-cluster-1", "options": [ "--user", "postgres", "--name", "backup-1692962536", "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ] } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:22:17.706 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "17", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint starting: force wait", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:22:17.717 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "18", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 2 recycled; write=0.002 s, sync=0.001 s, total=0.011 s; sync files=0, longest=0.000 s, average=0.000 s; distance=32768 kB, estimate=62468 kB", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:22:17.745 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "567", "connection_from": "[local]", "session_id": "64e88ead.237", "session_line_num": "1", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:21:17 UTC", "virtual_transaction_id": "8/69", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (60 seconds elapsed)", "hint": "Check that your archive_command is executing properly. 
You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:18Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:22:18.597 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "478", "connection_from": "[local]", "session_id": "64e88e71.1de", "session_line_num": "2", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:20:17 UTC", "virtual_transaction_id": "7/69", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (120 seconds elapsed)", "hint": "Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:18Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:22:18.952 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "337", "connection_from": "[local]", "session_id": "64e88df9.151", "session_line_num": "3", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:18:17 UTC", "virtual_transaction_id": "5/69", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (240 seconds elapsed)", "hint": "Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:29Z", "msg": "Backup completed", "backupName": "cnpg-backup-1692962236", "backupNamespace": "cnpg-backup-1692962236", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:43Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/00000001000000000000002F", "startTime": "2023-08-25T11:21:01Z", "endTime": "2023-08-25T11:22:43Z", "elapsedWalTime": 101.346784004 } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": "2023-08-25 11:22:55,976 [286] ERROR: Upload error: Connection was closed before we received a valid response from endpoint URL: \"https://us-east-1.linodeobjects.com/barmannewtest/cnpg-cluster/base/20230825T111717/data.tar?uploadId=2~zz7oLdil_38iWXhtwnpJUG8a7HCpqrq&partNumber=1\". 
(worker 0)", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": "2023-08-25 11:22:55,982 [245] ERROR: Error received from upload worker: Connection was closed before we received a valid response from endpoint URL: \"https://us-east-1.linodeobjects.com/barmannewtest/cnpg-cluster/base/20230825T111717/data.tar?uploadId=2~zz7oLdil_38iWXhtwnpJUG8a7HCpqrq&partNumber=1\".", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": "Process Process-1:", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": "Traceback (most recent call last):", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " File \"/usr/lib/python3.9/multiprocessing/process.py\", line 315, in _bootstrap", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " self.run()", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": "Process Process-2:", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " File \"/usr/lib/python3.9/multiprocessing/process.py\", line 108, in run", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " self._target(*self._args, **self._kwargs)", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " File \"/usr/local/lib/python3.9/dist-packages/barman/cloud.py\", line 808, in _worker_process_main", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " task = self.queue.get()", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " File \"/usr/lib/python3.9/multiprocessing/queues.py\", line 102, in get", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " with self._rlock:", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " File \"/usr/lib/python3.9/multiprocessing/synchronize.py\", line 95, in __enter__", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " return self._semlock.__enter__()", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": "KeyboardInterrupt", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { 
"level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": "Traceback (most recent call last):", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " File \"/usr/lib/python3.9/multiprocessing/process.py\", line 315, in _bootstrap", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " self.run()", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " File \"/usr/lib/python3.9/multiprocessing/process.py\", line 108, in run", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " self._target(*self._args, **self._kwargs)", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " File \"/usr/local/lib/python3.9/dist-packages/barman/cloud.py\", line 808, in _worker_process_main", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " task = self.queue.get()", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " File \"/usr/lib/python3.9/multiprocessing/queues.py\", line 103, in get", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " res = self._recv_bytes()", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " File \"/usr/lib/python3.9/multiprocessing/connection.py\", line 221, in recv_bytes", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " buf = self._recv_bytes(maxlength)", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " File \"/usr/lib/python3.9/multiprocessing/connection.py\", line 419, in _recv_bytes", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " buf = self._recv(4)", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " File \"/usr/lib/python3.9/multiprocessing/connection.py\", line 384, in _recv", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": " chunk = read(handle, remaining)", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:22:55Z", "logger": "barman-cloud-backup", "msg": "KeyboardInterrupt", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": 
"2023-08-25T11:22:56Z", "logger": "barman-cloud-backup", "msg": "2023-08-25 11:22:56,026 [245] ERROR: Backup failed uploading data (Connection was closed before we received a valid response from endpoint URL: \"https://us-east-1.linodeobjects.com/barmannewtest/cnpg-cluster/base/20230825T111717/data.tar?uploadId=2~zz7oLdil_38iWXhtwnpJUG8a7HCpqrq&partNumber=1\".)", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:23:16Z", "msg": "WAL archiving is working", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:23:16Z", "msg": "Backup started", "backupName": "cnpg-backup-1692962596", "backupNamespace": "cnpg-backup-1692962596", "logging_pod": "cnpg-cluster-1", "options": [ "--user", "postgres", "--name", "backup-1692962596", "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ] } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:23:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:23:17.746 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "613", "connection_from": "[local]", "session_id": "64e88ee9.265", "session_line_num": "1", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:22:17 UTC", "virtual_transaction_id": "9/69", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (60 seconds elapsed)", "hint": "Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:23:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:23:17.835 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "19", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint starting: force wait", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:23:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:23:17.843 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "567", "connection_from": "[local]", "session_id": "64e88ead.237", "session_line_num": "2", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:21:17 UTC", "virtual_transaction_id": "8/69", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (120 seconds elapsed)", "hint": "Check that your archive_command is executing properly. 
You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:23:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:23:17.845 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "20", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 2 recycled; write=0.002 s, sync=0.001 s, total=0.011 s; sync files=0, longest=0.000 s, average=0.000 s; distance=32768 kB, estimate=59498 kB", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:23:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:23:17.896 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "398", "connection_from": "[local]", "session_id": "64e88e35.18e", "session_line_num": "3", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:19:17 UTC", "virtual_transaction_id": "6/69", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (240 seconds elapsed)", "hint": "Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:24:16Z", "msg": "WAL archiving is working", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:24:16Z", "msg": "Backup started", "backupName": "cnpg-backup-1692962656", "backupNamespace": "cnpg-backup-1692962656", "logging_pod": "cnpg-cluster-1", "options": [ "--user", "postgres", "--name", "backup-1692962656", "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ] } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:24:16Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:24:16.841 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "717", "connection_from": "[local]", "session_id": "64e88f25.2cd", "session_line_num": "1", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:23:17 UTC", "virtual_transaction_id": "3/415", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (60 seconds elapsed)", "hint": "Check that your archive_command is executing properly. 
You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:24:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:24:17.854 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "613", "connection_from": "[local]", "session_id": "64e88ee9.265", "session_line_num": "2", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:22:17 UTC", "virtual_transaction_id": "9/69", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (120 seconds elapsed)", "hint": "Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:24:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:24:17.913 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "21", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint starting: force wait", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:24:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:24:17.923 UTC", "process_id": "25", "session_id": "64e88cec.19", "session_line_num": "22", "session_start_time": "2023-08-25 11:13:48 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 2 recycled; write=0.002 s, sync=0.001 s, total=0.011 s; sync files=0, longest=0.000 s, average=0.000 s; distance=32768 kB, estimate=56825 kB", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:24:18Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 11:24:18.782 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "478", "connection_from": "[local]", "session_id": "64e88e71.1de", "session_line_num": "3", "command_tag": "SELECT", "session_start_time": "2023-08-25 11:20:17 UTC", "virtual_transaction_id": "7/69", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (240 seconds elapsed)", "hint": "Check that your archive_command is executing properly. 
You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T11:24:28Z", "logger": "barman-cloud-backup", "msg": "2023-08-25 11:24:28,658 [399] ERROR: Upload error: Connection was closed before we received a valid response from endpoint URL: \"https://us-east-1.linodeobjects.com/barmannewtest/cnpg-cluster/base/20230825T111917/data.tar?uploadId=2~V6O6BQn73IqXIAelxPfLKIngCR3HInA&partNumber=1\". (worker 0)", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } ```
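
In case it helps with the WAL archiving side of the logs above, this is the kind of status check I can run on the primary (a sketch, run via psql inside the pod; the columns are the standard pg_stat_archiver ones):

```bash
# Show how far the PostgreSQL archiver has got and whether it is currently failing.
psql -c "SELECT archived_count, last_archived_wal, last_archived_time,
                failed_count, last_failed_wal, last_failed_time
         FROM pg_stat_archiver;"
```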

Thank you for the time you've spent on this issue; it was a huge help.

mikewallace1979 commented 1 year ago

We can keep debugging in this issue for now since the WAL archiving may be related somehow.

Based on the latest logs, a couple of things would help narrow this down:

It would be useful to know the current WAL segment - can you run the following command on the primary pod:

```
psql -c 'SELECT pg_walfile_name(pg_current_wal_lsn());'
```

It would also be useful to know the most recent WAL segment in the Linode object store - can you list the contents of the barmannewtest bucket under the cnpg-cluster/wals prefix?
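
If you have the AWS CLI available, something along these lines should work for the listing (the endpoint and paths are taken from your logs; credentials are assumed to be configured locally):

```bash
# List everything under the WAL prefix in the barmannewtest bucket.
aws s3 ls \
  --endpoint-url https://us-east-1.linodeobjects.com \
  --recursive \
  s3://barmannewtest/cnpg-cluster/wals/
```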

btxbtx commented 1 year ago

I've modified the cron schedule to run once an hour and then re-ran the scheduled backup. I'll attach its logs to this post. Hopefully this has not affected the data you were looking for.

```
psql -c 'SELECT pg_walfile_name(pg_current_wal_lsn());'
     pg_walfile_name
--------------------------
 000000010000000000000004
(1 row)
```

And the contents of barmannewtest/cnpg-cluster/wals/:

```
barmannewtest/
└── cnpg-cluster/
    └── wals/
        └── 0000000100000000/
            ├── 000000010000000000000002               16 MB    2023-08-25 10:56
            ├── 000000010000000000000003               16 MB    2023-08-25 10:56
            ├── 000000010000000000000003.00000028.backup 348 bytes 2023-08-25 10:56
            ├── 000000010000000000000004               16 MB    2023-08-25 10:59
            ├── 000000010000000000000016               16 MB    2023-08-25 11:06
            ├── 000000010000000000000017               16 MB    2023-08-25 11:08
            ├── 000000010000000000000017.00000028.backup 351 bytes 2023-08-25 11:08
            ├── 000000010000000000000018               16 MB    2023-08-25 11:11
            ├── 000000010000000000000019               16 MB    2023-08-25 11:12
            ├── 00000001000000000000002A               16 MB    2023-08-25 11:16
            ├── 00000001000000000000002B               16 MB    2023-08-25 11:18
            ├── 00000001000000000000002C               16 MB    2023-08-25 11:20
            ├── 00000001000000000000002C.00000028.backup 351 bytes 2023-08-25 11:20
            ├── 00000001000000000000002D               16 MB    2023-08-25 11:20
            ├── 00000001000000000000002D.00000028.backup 351 bytes 2023-08-25 11:20
            ├── 00000001000000000000002E               16 MB    2023-08-25 11:21
            ├── 00000001000000000000002F               16 MB    2023-08-25 11:22
            └── 000000010000000000000049               16 MB    2023-08-25 15:04
```

scheduled backup logs ```json cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:00Z", "msg": "WAL archiving is working", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:00Z", "msg": "Backup started", "backupName": "cnpg-backup-1692976980", "backupNamespace": "cnpg-backup-1692976980", "logging_pod": "cnpg-cluster-1", "options": [ "--user", "postgres", "--name", "backup-1692976980", "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ] } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:02Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:23:02.695 UTC", "process_id": "26", "session_id": "64e8c672.1a", "session_line_num": "1", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint starting: force wait", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:02Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:04Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:23:04,790 [334] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:04Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:04Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:04Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:04Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:23:04.833 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "1", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:05Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:07Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:23:07,627 [355] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:07Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:07Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:07Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:07Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:23:07.660 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "2", 
"session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:08Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:23:08.665 UTC", "process_id": "26", "session_id": "64e8c672.1a", "session_line_num": "2", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint complete: wrote 62 buffers (0.4%); 0 WAL file(s) added, 0 removed, 0 recycled; write=5.957 s, sync=0.008 s, total=5.971 s; sync files=28, longest=0.003 s, average=0.001 s; distance=16384 kB, estimate=16384 kB", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:08Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:10Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:23:10,263 [368] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:10Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:10Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:10Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:10Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:23:10.299 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "3", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:10Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:23:10.299 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "4", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "archiving write-ahead log file \"000000010000000000000002\" failed too many times, will try again later", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:10Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:12Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:23:12,066 [382] ERROR: WAL archive check failed 
for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:12Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:12Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:12Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:12Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:23:12.095 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "5", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:13Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:15Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:23:15,737 [392] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:15Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:15Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: 
exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:15Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:15Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:23:15.786 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "6", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:16Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:18Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:23:18,596 [402] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:18Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 
1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:18Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:23:18Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:18Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:23:18.643 UTC", "process_id": "32", "session_id": "64e8c672.20", 
"session_line_num": "7", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:23:18Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:23:18.643 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "8", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "archiving write-ahead log file \"000000010000000000000002\" failed too many times, will try again later", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:24:08Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:24:08.747 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "324", "connection_from": "[local]", "session_id": "64e8c756.144", "session_line_num": "1", "command_tag": "SELECT", "session_start_time": "2023-08-25 15:23:02 UTC", "virtual_transaction_id": "3/471", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (60 seconds elapsed)", "hint": "Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:24:18Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:24:20Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:24:20,655 [436] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:24:20Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:24:20Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:24:20Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:24:20Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:24:20.694 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "9", 
"session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:24:21Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:24:23Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:24:23,525 [446] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:24:23Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:24:23Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:24:23Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:24:23Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:24:23.553 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "10", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:24:24Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:24:26Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:24:26,599 [456] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:24:26Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:24:26Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:24:26Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:24:26Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:24:26.637 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "11", 
"session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:24:26Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:24:26.637 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "12", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "archiving write-ahead log file \"000000010000000000000002\" failed too many times, will try again later", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:25:08Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:25:08.841 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "324", "connection_from": "[local]", "session_id": "64e8c756.144", "session_line_num": "2", "command_tag": "SELECT", "session_start_time": "2023-08-25 15:23:02 UTC", "virtual_transaction_id": "3/471", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (120 seconds elapsed)", "hint": "Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:25:26Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:25:28Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:25:28,739 [488] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:25:28Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:25:28Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:25:28Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:25:28Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:25:28.776 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "13", 
"session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:25:29Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:25:33Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:25:33,837 [502] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:25:33Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:25:33Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:25:33Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:25:33Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:25:33.877 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "14", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:25:34Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:25:37Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:25:37,025 [512] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:25:37Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:25:37Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:25:37Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:25:37Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:25:37.056 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "15", 
"session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:25:37Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:25:37.056 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "16", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "archiving write-ahead log file \"000000010000000000000002\" failed too many times, will try again later", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:26:37Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:26:38Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:26:38,868 [543] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:26:38Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:26:38Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:26:38Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:26:38Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:26:38.899 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "17", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:26:39Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:26:41Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:26:41,882 [556] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:26:41Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:26:41Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:26:41Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:26:41Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:26:41.921 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "18", 
"session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:26:43Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:26:44Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:26:44,633 [566] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:26:44Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:26:44Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:26:44Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:26:44Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:26:44.665 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "19", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:26:44Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:26:44.665 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "20", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "archiving write-ahead log file \"000000010000000000000002\" failed too many times, will try again later", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:27:09Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:27:09.029 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "324", "connection_from": "[local]", "session_id": "64e8c756.144", "session_line_num": "3", "command_tag": 
"SELECT", "session_start_time": "2023-08-25 15:23:02 UTC", "virtual_transaction_id": "3/471", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "still waiting for all required WAL segments to be archived (240 seconds elapsed)", "hint": "Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:27:44Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:27:46Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:27:46,589 [597] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:27:46Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:27:46Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:27:46Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:27:46Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:27:46.634 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "21", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:27:47Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:27:49Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:27:49,475 [609] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:27:49Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:27:49Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:27:49Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:27:49Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:27:49.518 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "22", 
"session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:27:50Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:27:52Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:27:52,506 [622] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:27:52Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:27:52Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:27:52Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:27:52Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:27:52.548 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "23", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:27:52Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:27:52.548 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "24", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "archiving write-ahead log file \"000000010000000000000002\" failed too many times, will try again later", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:28:02Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:28:02.812 UTC", "process_id": "26", "session_id": "64e8c672.1a", "session_line_num": "3", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", 
"sql_state_code": "00000", "message": "checkpoint starting: time", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:28:02Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:28:02.824 UTC", "process_id": "26", "session_id": "64e8c672.1a", "session_line_num": "4", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.002 s, sync=0.001 s, total=0.012 s; sync files=0, longest=0.000 s, average=0.000 s; distance=16384 kB, estimate=16384 kB", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:28:10Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:28:11Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:28:11,752 [639] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:28:11Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:28:11Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:28:11Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:28:11Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:28:11.783 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "25", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:28:12Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:28:14Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:28:14,350 [649] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:28:14Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:28:14Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:28:14Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:28:14Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:28:14.380 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "26", 
"session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:28:15Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:28:17Z", "logger": "barman-cloud-check-wal-archive", "msg": "2023-08-25 15:28:17,859 [659] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:28:17Z", "logger": "wal-archive", "msg": "Error invoking barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "currentPrimary": "cnpg-cluster-1", "targetPrimary": "cnpg-cluster-1", "options": [ "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://barmannewtest/", "cnpg-cluster" ], "exitCode": -1, "error": "exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:383\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:28:17Z", "msg": "while barman-cloud-check-wal-archive", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": 
"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:384\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:169\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:81\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "error", "ts": "2023-08-25T15:28:17Z", "logger": "wal-archive", "msg": "failed to run wal-archive command", "logging_pod": "cnpg-cluster-1", "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1", "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:28:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:28:17.891 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "27", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "archive command failed with exit code 1", "detail": "The failed archive command was: /controller/manager wal-archive --log-destination /controller/log/postgres.json pg_wal/000000010000000000000002", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:28:17Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-25 15:28:17.891 UTC", "process_id": "32", "session_id": "64e8c672.20", "session_line_num": "28", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "WARNING", "sql_state_code": "01000", "message": "archiving write-ahead log file \"000000010000000000000002\" failed too many times, will try again later", "backend_type": "archiver", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:28:19Z", "logger": "barman-cloud-backup", "msg": "2023-08-25 15:28:19,837 [369] ERROR: Upload error: Connection was closed before we received a valid response from endpoint URL: 
\"https://us-east-1.linodeobjects.com/barmannewtest/cnpg-cluster/base/20230825T152302/data.tar?uploadId=2~zlkZEA7SvQm6-KtRK6Kmh0GG6-XS4o-&partNumber=1\". (worker 0)", "pipe": "stderr", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-25T15:29:18Z", "logger": "wal-archive", "msg": "barman-cloud-check-wal-archive checking the first wal", "logging_pod":"cnpg-cluste ```
mikewallace1979 commented 1 year ago

I haven't had a chance to look through the latest logs in detail, but the following entry suggests you're starting a new cnpg cluster while pointing at a bucket which already contains WALs:

cnpg-cluster-1 postgres {
  "level": "info",
  "ts": "2023-08-25T15:23:04Z",
  "logger": "barman-cloud-check-wal-archive",
  "msg": "2023-08-25 15:23:04,790 [334] ERROR: WAL archive check failed for server cnpg-cluster: Expected empty archive",
  "pipe": "stderr",
  "logging_pod": "cnpg-cluster-1"
}

This check is supposed to fail if WALs are already present, in order to prevent a WAL archive from mixing segments from different clusters - such an archive would not be usable at recovery time.
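
For reference, the same check can be reproduced by hand from the instance pod with the options the operator passes (a minimal sketch, assuming the image ships the barman-cloud tools and that the S3 credentials are already exported in the environment, which is how the operator invokes them):

```bash
# Re-run the operator's check with the exact options seen in the logs above.
# Assumes AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY (or equivalent) are set.
barman-cloud-check-wal-archive \
  --endpoint-url https://us-east-1.linodeobjects.com \
  --cloud-provider aws-s3 \
  s3://barmannewtest/ \
  cnpg-cluster
echo "exit status: $?"   # exit status 1 here corresponds to the "Expected empty archive" failure above
```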

I wonder if the following WAL archiving errors are related - it could be that PostgreSQL is trying to archive a WAL segment which has the same name as a WAL already archived by an earlier instance of the cluster:

cnpg-cluster-1 postgres {
  "level": "error",
  "ts": "2023-08-25T15:23:04Z",
  "logger": "wal-archive",
  "msg": "failed to run wal-archive command",
  "logging_pod": "cnpg-cluster-1",
  "error": "unexpected failure invoking barman-cloud-wal-archive: exit status 1",
  "stacktrace": "github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:83\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.3/x64/src/runtime/proc.go:250"
}

It's probably worth starting over with an empty bucket at this point and seeing what the logs look like after that.
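
If you want to see what is actually left over before (or instead of) wiping the bucket, listing the server's prefix should show the old base backups and WALs. This is a sketch using the AWS CLI against the same endpoint; the `cnpg-cluster/base/` and `cnpg-cluster/wals/` layout is the usual barman-cloud layout and is assumed here:

```bash
# List what barman-cloud has already stored under this server name.
aws s3 ls --recursive --endpoint-url https://us-east-1.linodeobjects.com \
  s3://barmannewtest/cnpg-cluster/ | head -n 20

# Either remove the stale contents for that server ...
# aws s3 rm --recursive --endpoint-url https://us-east-1.linodeobjects.com \
#   s3://barmannewtest/cnpg-cluster/
# ... or, more simply, point destinationPath at a brand-new bucket or prefix.
```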

btxbtx commented 1 year ago

@mikewallace1979 it looks like the backup completed without issue in the new bucket. 🥳

first run ```json cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:44:00Z", "msg": "WAL archiving is working", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:44:00Z", "msg": "Backup started", "backupName": "cnpg-backup-1693363440", "backupNamespace": "cnpg-backup-1693363440", "logging_pod": "cnpg-cluster-1", "options": [ "--user", "postgres", "--name", "backup-1693363440", "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://walbucket01/", "cnpg-cluster" ] } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:44:01Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-30 02:44:01.451 UTC", "process_id": "26", "session_id": "64e8c672.1a", "session_line_num": "5", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint starting: force wait", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:44:01Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-30 02:44:01.491 UTC", "process_id": "26", "session_id": "64e8c672.1a", "session_line_num": "6", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.003 s, sync=0.001 s, total=0.041 s; sync files=0, longest=0.000 s, average=0.000 s; distance=32767 kB, estimate=32767 kB", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:44:14Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/000000010000000000000005", "startTime": "2023-08-30T02:44:01Z", "endTime": "2023-08-30T02:44:14Z", "elapsedWalTime": 12.942701631 } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:44:19Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/000000010000000000000006", "startTime": "2023-08-30T02:44:14Z", "endTime": "2023-08-30T02:44:19Z", "elapsedWalTime": 4.980317126 } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:44:21Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/000000010000000000000006.00000028.backup", "startTime": "2023-08-30T02:44:19Z", "endTime": "2023-08-30T02:44:21Z", "elapsedWalTime": 1.527192709 } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:44:21Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-30 02:44:21.440 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "57344", "connection_from": "[local]", "session_id": "64eeacf1.e000", "session_line_num": "1", "command_tag": "SELECT", "session_start_time": "2023-08-30 02:44:01 UTC", "virtual_transaction_id": "3/644", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "restore point \"barman_20230830T024401\" created at 0/8000090", "query": "SELECT pg_create_restore_point('barman_20230830T024401')", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:44:26Z", 
"msg": "Backup completed", "backupName": "cnpg-backup-1693363440", "backupNamespace": "cnpg-backup-1693363440", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:44:26Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/000000010000000000000007", "startTime": "2023-08-30T02:44:21Z", "endTime": "2023-08-30T02:44:26Z", "elapsedWalTime": 5.39412292 } ```
second run ```json cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:49:00Z", "msg": "WAL archiving is working", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:49:00Z", "msg": "Backup started", "backupName": "cnpg-backup-1693363740", "backupNamespace": "cnpg-backup-1693363740", "logging_pod": "cnpg-cluster-1", "options": [ "--user", "postgres", "--name", "backup-1693363740", "--endpoint-url", "https://us-east-1.linodeobjects.com", "--cloud-provider", "aws-s3", "s3://walbucket01/", "cnpg-cluster" ] } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:49:01Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-30 02:49:01.330 UTC", "process_id": "26", "session_id": "64e8c672.1a", "session_line_num": "7", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint starting: force wait time", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:49:01Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-30 02:49:01.381 UTC", "process_id": "26", "session_id": "64e8c672.1a", "session_line_num": "8", "session_start_time": "2023-08-25 15:19:14 UTC", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.004 s, sync=0.001 s, total=0.052 s; sync files=0, longest=0.000 s, average=0.000 s; distance=49152 kB, estimate=49152 kB", "backend_type": "checkpointer", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:49:16Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/000000010000000000000008", "startTime": "2023-08-30T02:49:01Z", "endTime": "2023-08-30T02:49:16Z", "elapsedWalTime": 15.149023258 } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:49:30Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/000000010000000000000009", "startTime": "2023-08-30T02:49:16Z", "endTime": "2023-08-30T02:49:30Z", "elapsedWalTime": 14.170324256 } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:49:32Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/000000010000000000000009.00000028.backup", "startTime": "2023-08-30T02:49:30Z", "endTime": "2023-08-30T02:49:32Z", "elapsedWalTime": 1.447885209 } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:49:33Z", "logger": "postgres", "msg": "record", "logging_pod": "cnpg-cluster-1", "record": { "log_time": "2023-08-30 02:49:33.340 UTC", "user_name": "postgres", "database_name": "postgres", "process_id": "57568", "connection_from": "[local]", "session_id": "64eeae1d.e0e0", "session_line_num": "1", "command_tag": "SELECT", "session_start_time": "2023-08-30 02:49:01 UTC", "virtual_transaction_id": "3/868", "transaction_id": "0", "error_severity": "LOG", "sql_state_code": "00000", "message": "restore point \"barman_20230830T024901\" created at 0/B000090", "query": "SELECT pg_create_restore_point('barman_20230830T024901')", "application_name": "barman_cloud_backup", "backend_type": "client backend", "query_id": "0" } } cnpg-cluster-1 postgres { "level": "info", "ts": 
"2023-08-30T02:49:38Z", "msg": "Backup completed", "backupName": "cnpg-backup-1693363740", "backupNamespace": "cnpg-backup-1693363740", "logging_pod": "cnpg-cluster-1" } cnpg-cluster-1 postgres { "level": "info", "ts": "2023-08-30T02:49:47Z", "logger": "wal-archive", "msg": "Archived WAL file", "logging_pod": "cnpg-cluster-1", "walName": "pg_wal/00000001000000000000000A", "startTime": "2023-08-30T02:49:33Z", "endTime": "2023-08-30T02:49:47Z", "elapsedWalTime": 14.56521609 } ```
mikewallace1979 commented 1 year ago

Excellent news!

gaetancollaud commented 3 months ago

For anyone using DigitalOcean Spaces Object Storage who has the same issue, just configure your bucket like this:

      destinationPath: "s3://{BUCKET_NAME}/{FOLDER_NAME}/"
      endpointURL: "https://{REGION}.digitaloceanspaces.com"

For example:

      destinationPath: "s3://backup-test/test-db/"
      endpointURL: "https://fra1.digitaloceanspaces.com"
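
As a quick sanity check, the same endpoint and bucket can be exercised with the AWS CLI before pointing the cluster at them (a sketch using the example values above; the Spaces key pair is assumed to be exported as the standard AWS environment variables):

```bash
# Smoke-test the Spaces endpoint/bucket from the example above with the same style
# of credentials the cluster will use (placeholders below are illustrative only).
export AWS_ACCESS_KEY_ID=<your-spaces-access-key>
export AWS_SECRET_ACCESS_KEY=<your-spaces-secret-key>
aws s3 ls --endpoint-url https://fra1.digitaloceanspaces.com s3://backup-test/test-db/
```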