IBM / core-dump-handler

Save core dumps from a Kubernetes Service or RedHat OpenShift to an S3 protocol compatible object store
https://ibm.github.io/core-dump-handler/
MIT License

Unable to use ODF S3 compatible storage in Red hat openshift for IBM Core dump handler #124

Closed sanasz91mdev closed 1 year ago

sanasz91mdev commented 1 year ago

I am trying to use an ODF S3-compatible object bucket created via an ObjectBucketClaim, using the claim's secret:

AWS_ACCESS_KEY_ID: WjBmTHVPQXlGMXRjRUlSQlV4Tks= AWS_SECRET_ACCESS_KEY: VzBWa0FBZG5PQlZoK2sxdXIvT2lJa2toRG13QVBDT3IzallBR2VCag==
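To make the decoding step explicit, the base64 fields from the claim's secret can be decoded with standard tooling before passing them to helm (a sketch using the exact secret values shown above):

```shell
# Decode the base64-encoded fields from the ObjectBucketClaim secret.
AWS_ACCESS_KEY_ID=$(echo 'WjBmTHVPQXlGMXRjRUlSQlV4Tks=' | base64 -d)
AWS_SECRET_ACCESS_KEY=$(echo 'VzBWa0FBZG5PQlZoK2sxdXIvT2lJa2toRG13QVBDT3IzallBR2VCag==' | base64 -d)

# The decoded access key is what goes into --set daemonset.s3AccessKey
echo "$AWS_ACCESS_KEY_ID"   # Z0fLuOAyF1tcEIRBUxNK
```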

but I am unable to configure this S3-compatible storage for the IBM core dump handler: https://github.com/IBM/core-dump-handler/blob/main/charts/core-dump-handler/README.md

I used the following install command after decoding the above values:

helm install --repo https://ibm.github.io/core-dump-handler/ core-dump-handler --generate-name --namespace observe --set daemonset.s3AccessKey=Z0fLuOAyF1tcEIRBUxNK --set daemonset.s3Secret=W0VkAAdnOBVh+k1ur/OiIkkhDmwAPCOr3jYAG --set daemonset.s3BucketName=obc-default-my-obj-bucket-claim --set daemonset.s3Region=us-east-1 --values values.openshift.yaml

Error logs:

[2023-01-02T12:38:59Z INFO  core_dump_agent] no .env file found 
     That's ok if running in kubernetes
[2023-01-02T12:38:59Z INFO  core_dump_agent] Setting host location to: /mnt/core-dump-handler
[2023-01-02T12:38:59Z INFO  core_dump_agent] Current Directory for setup is /app
[2023-01-02T12:38:59Z INFO  core_dump_agent] Copying the composer from ./vendor/default/cdc to /mnt/core-dump-handler/cdc
[2023-01-02T12:38:59Z INFO  core_dump_agent] Starting sysctl for kernel.core_pattern /mnt/core-dump-handler/core_pattern.bak with |/mnt/core-dump-handler/cdc -c=%c -e=%e -p=%p -s=%s -t=%t -d=/mnt/core-dump-handler/cores -h=%h -E=%E
[2023-01-02T12:38:59Z INFO  core_dump_agent] Getting sysctl for kernel.core_pattern
[2023-01-02T12:38:59Z INFO  core_dump_agent] Created Backup of /mnt/core-dump-handler/core_pattern.bak
kernel.core_pattern = |/mnt/core-dump-handler/cdc -c=%c -e=%e -p=%p -s=%s -t=%t -d=/mnt/core-dump-handler/cores -h=%h -E=%E
[2023-01-02T12:38:59Z INFO  core_dump_agent] Starting sysctl for kernel.core_pipe_limit /mnt/core-dump-handler/core_pipe_limit.bak with 128
[2023-01-02T12:38:59Z INFO  core_dump_agent] Getting sysctl for kernel.core_pipe_limit
[2023-01-02T12:38:59Z INFO  core_dump_agent] Created Backup of /mnt/core-dump-handler/core_pipe_limit.bak
kernel.core_pipe_limit = 128
[2023-01-02T12:38:59Z INFO  core_dump_agent] Starting sysctl for fs.suid_dumpable /mnt/core-dump-handler/suid_dumpable.bak with 2
[2023-01-02T12:38:59Z INFO  core_dump_agent] Getting sysctl for fs.suid_dumpable
[2023-01-02T12:38:59Z INFO  core_dump_agent] Created Backup of /mnt/core-dump-handler/suid_dumpable.bak
fs.suid_dumpable = 2
[2023-01-02T12:38:59Z INFO  core_dump_agent] Creating /mnt/core-dump-handler/.env file with LOG_LEVEL=Warn
[2023-01-02T12:38:59Z INFO  core_dump_agent] Writing composer .env 
    LOG_LEVEL=Warn
    IGNORE_CRIO=false
    CRIO_IMAGE_CMD=images
    USE_CRIO_CONF=false
    FILENAME_TEMPLATE={uuid}-dump-{timestamp}-{hostname}-{exe_name}-{pid}-{signal}
    LOG_LENGTH=500
    POD_SELECTOR_LABEL=
    TIMEOUT=600
    COMPRESSION=true
    CORE_EVENTS=false
    EVENT_DIRECTORY=/var/mnt/core-dump-handler/events

[2023-01-02T12:38:59Z INFO  core_dump_agent] Executing Agent with location : /mnt/core-dump-handler/cores
[2023-01-02T12:38:59Z INFO  core_dump_agent] Dir Content ["/mnt/core-dump-handler/cores/457107a4-05bd-453e-80be-7cd61bf1de5b-dump-1672660419-segfaulter-segfaulter-1-4.zip", "/mnt/core-dump-handler/cores/50e67181-e592-417c-874d-4213ab044870-dump-1672661714-segfaulter-segfaulter-1-4.zip"]
[2023-01-02T12:38:59Z INFO  core_dump_agent] Uploading: /mnt/core-dump-handler/cores/457107a4-05bd-453e-80be-7cd61bf1de5b-dump-1672660419-segfaulter-segfaulter-1-4.zip
[2023-01-02T12:38:59Z INFO  core_dump_agent] zip size is 29904
[2023-01-02T12:39:04Z ERROR core_dump_agent] Upload Failed Got HTTP 403 with content '<?xml version="1.0" encoding="UTF-8"?>
    <Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>Z0fLuOAyF1tcEIRBUxNK</AWSAccessKeyId><RequestId>8PF7D25NKGN1W3VX</RequestId><HostId>9xJyD9y7pIRbJAfmKHERG6i8ovVwaPCXo9nPjNOhmugvkRvbGzp7xtZXnhngKXrjsWmbUn+gzU0=</HostId></Error>'
[2023-01-02T12:39:04Z INFO  core_dump_agent] Uploading: /mnt/core-dump-handler/cores/50e67181-e592-417c-874d-4213ab044870-dump-1672661714-segfaulter-segfaulter-1-4.zip
[2023-01-02T12:39:04Z INFO  core_dump_agent] zip size is 29858
[2023-01-02T12:39:04Z ERROR core_dump_agent] Upload Failed Got HTTP 403 with content '<?xml version="1.0" encoding="UTF-8"?>
    <Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>Z0fLuOAyF1tcEIRBUxNK</AWSAccessKeyId><RequestId>8PF2SJD5V5ESJ27A</RequestId><HostId>1u2Qcl6o6xKXlcULy2c0mW5wqXe/y1QyMzOly0CCmIqD3MNtgDbIAxwF/K7KSEodPZoVFLt9PiA=</HostId></Error>'
[2023-01-02T12:39:04Z INFO  core_dump_agent] INotify Starting...
[2023-01-02T12:39:04Z INFO  core_dump_agent] INotify Initialised...
[2023-01-02T12:39:04Z INFO  core_dump_agent] INotify watching : /mnt/core-dump-handler/cores
[2023-01-02T12:39:32Z INFO  core_dump_agent] Uploading: /mnt/core-dump-handler/cores/54447af1-3354-4345-82c5-a6aedebaa7ee-dump-1672663172-segfaulter-segfaulter-1-4.zip
[2023-01-02T12:39:32Z INFO  core_dump_agent] zip size is 29859
[2023-01-02T12:39:32Z ERROR core_dump_agent] Upload Failed Got HTTP 403 with content '<?xml version="1.0" encoding="UTF-8"?>
    <Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>Z0fLuOAyF1tcEIRBUxNK</AWSAccessKeyId><RequestId>HGXJ5KADW61J5H9K</RequestId><HostId>JevZvO579zRZKQA5cAXULenpSbGLoTA53CA/1g15gsx060EhW0DIWOtG9vXVovEgv+nriba9HAo=</HostId></Error>'
No9 commented 1 year ago

Hey @sanasz91mdev Just to check: is the ODF S3 endpoint hosted on the same cluster you are running core-dump-handler on? If so, the configuration daemonset.s3Region=us-east-1 is incorrect, as us-east-1 will try to use the AWS public S3 service.

The s3Region option is badly named, as it can also be the host name of the service.

If this is hosted on the same cluster and the ODF S3 endpoint has a ClusterIP service (you can check with oc get svc --all-namespaces), you should be able to use that name in daemonset.s3Region, but I haven't tested it.
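As a sketch of what that in-cluster name would look like: the service name s3 and namespace openshift-storage below are assumptions (ODF commonly uses these), so verify the real names with oc get svc --all-namespaces first.

```shell
# Hypothetical ODF S3 service name and namespace -- verify with:
#   oc get svc --all-namespaces | grep -i s3
S3_SVC=s3
S3_NS=openshift-storage

# In-cluster DNS name for a ClusterIP service follows the pattern
# <service>.<namespace>.svc.cluster.local
S3_HOST="${S3_SVC}.${S3_NS}.svc.cluster.local"

echo "$S3_HOST"   # s3.openshift-storage.svc.cluster.local
```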

sanasz91mdev commented 1 year ago

> is hosted on the same cluster you are running core-dump-handler

Yes, the S3 endpoint and the IBM core dump handler are on the same cluster, and I have an S3 endpoint available:

[Image: ODF-Routes-S3]

Do you suggest I use the value of the S3 route shown in the image above for daemonset.s3Region, or should I use the service behind it in this placeholder?

No9 commented 1 year ago

You should be able to use either of them. The service call would have fewer hops, but I'm not 100% sure where the credentials are enforced in ODF, so you might want to use the public S3 route displayed above. [Edit] Just put in the host name; don't include the https://
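For example, stripping the scheme from a route URL before passing it to --set daemonset.s3Region might look like this (the route host below is made up for illustration):

```shell
# Hypothetical route URL as copied from the OpenShift console
ROUTE_URL="https://s3-openshift-storage.apps.example.com"

# Keep only the host name -- the chart expects no https:// scheme here
S3_HOST="${ROUTE_URL#https://}"

echo "$S3_HOST"   # s3-openshift-storage.apps.example.com
```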

sanasz91mdev commented 1 year ago

> You should be able to use either of them. The service call would have fewer hops, but I'm not 100% sure where the credentials are enforced in ODF, so you might want to use the public S3 route displayed above. [Edit] Just put in the host name; don't include the https://

Yes, thanks! This worked. Can you please update the docs to reflect that s3Region can be the host name of the S3 service?

No9 commented 1 year ago

Excellent, glad you got it sorted. I've updated the chart README: https://github.com/IBM/core-dump-handler/commit/ac737112fb6d2ba046942befb226c1f645f56d33

@sanasz91mdev Don't forget to change the credentials for the service! :smiley:

gonzalesraul commented 1 year ago

FWIW, S3_ENDPOINT should be used for that purpose instead of S3_REGION.