IBM / core-dump-handler

Save core dumps from a Kubernetes Service or RedHat OpenShift to an S3 protocol compatible object store
https://ibm.github.io/core-dump-handler/
MIT License

Azure Storage Support #74

Open hari-narayanan94 opened 2 years ago

hari-narayanan94 commented 2 years ago

This tool looks like exactly what I need for my AKS clusters, but unfortunately I'm not able to find any option to send the core dumps to a native Azure solution. I see S3 is supported; is there any way to send those dumps over to Azure Blob Storage or a file share? Azure doesn't natively support the S3 protocol.

No9 commented 2 years ago

Hey @hari-narayanan94 Thanks for the feedback. If you need this today you could look at the Microsoft open source blog post that suggests using a MinIO shim to create an S3-compatible API for Blob Storage. https://cloudblogs.microsoft.com/opensource/2017/11/09/s3cmd-amazon-s3-compatible-apps-azure-storage/
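For reference, the shim described in that post boils down to running MinIO in its (since-deprecated) Azure gateway mode. A minimal sketch, with placeholder credentials:

# MinIO Azure gateway shim, per the blog post above.
# The access key/secret are your Azure storage account name and key.
export MINIO_ACCESS_KEY=<azure-storage-account-name>
export MINIO_SECRET_KEY=<azure-storage-account-key>
minio gateway azure
# MinIO then serves an S3-compatible API on port 9000, backed by Blob Storage.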

Medium term (next few months) I'll look at the Azure SDK https://crates.io/crates/azure_storage_blobs so that it's a first-class citizen.

Also note that you can disable the S3 upload aspect and provide your own uploader as documented here: https://github.com/IBM/core-dump-handler/blob/main/FAQ.md#how-should-i-integrate-my-own-uploader It's a bit cumbersome as you have to deal with file semantics right now, but I plan to integrate an event API as part of the next release that should make this a lot more straightforward. https://github.com/IBM/core-dump-handler/discussions/61
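As a rough illustration of the "own uploader" route under the current file semantics, a sidecar could poll the cores folder and push each zip to Blob Storage with azcopy. This is only a sketch; the folder path matches the agent logs later in this thread, and the SAS URL is a placeholder:

#!/bin/sh
# Hypothetical sidecar: poll the cores folder and upload each zip
# to Azure Blob Storage with azcopy, deleting on success.
CORES_DIR=/var/mnt/core-dump-handler/cores
CONTAINER_SAS_URL='https://<account>.blob.core.windows.net/<container>?<sas-token>'
while true; do
  for f in "$CORES_DIR"/*.zip; do
    [ -e "$f" ] || continue
    if azcopy copy "$f" "$CONTAINER_SAS_URL"; then
      rm -f "$f"
    fi
  done
  sleep 30
done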

hari-narayanan94 commented 2 years ago

Thanks for the quick response. I will try to get it set up on my AKS cluster with the workaround today and share my experience.

hari-narayanan94 commented 2 years ago

I tried the MinIO shim and everything looks to be working on that side, so I now have a functioning S3 endpoint. When I installed core-dump-handler on my AKS cluster and simulated a crash, the zip gets generated but it fails to upload with the following error:

[2022-03-10T14:25:22Z INFO core_dump_agent] Uploading: /var/mnt/core-dump-handler/cores/3d7bc367-569a-497e-b929-3b6c33b5dd8c-dump-1646921760-segfaulter-segfaulter-1-4.zip
[2022-03-10T14:25:22Z INFO core_dump_agent] zip size is 28070
[2022-03-10T14:25:22Z ERROR core_dump_agent] Upload Failed custom: missing field Bucket
[2022-03-10T14:25:22Z INFO core_dump_agent] Uploading: /var/mnt/core-dump-handler/cores/75c2971f-d165-497c-955b-93be45edde1a-dump-1646921675-segfaulter-segfaulter-1-4.zip
[2022-03-10T14:25:22Z INFO core_dump_agent] zip size is 28085
[2022-03-10T14:25:23Z ERROR core_dump_agent] Upload Failed custom: missing field Bucket
[2022-03-10T14:25:23Z INFO core_dump_agent] Uploading: /var/mnt/core-dump-handler/cores/e1184d7f-f110-49b7-ad50-a87bbb5c005e-dump-1646921436-segfaulter-segfaulter-1-4.zip
[2022-03-10T14:25:23Z INFO core_dump_agent] zip size is 28087
[2022-03-10T14:25:23Z ERROR core_dump_agent] Upload Failed custom: missing field Bucket
[2022-03-10T14:25:23Z INFO core_dump_agent] INotify Starting...
[2022-03-10T14:25:23Z INFO core_dump_agent] INotify Initialised...
[2022-03-10T14:25:23Z INFO core_dump_agent] INotify watching : /var/mnt/core-dump-handler/cores
[2022-03-10T14:25:54Z INFO core_dump_agent] Uploading: /var/mnt/core-dump-handler/cores/67e1aa29-d87f-44c9-9bb5-6f8f71e72a4b-dump-1646922354-segfaulter-segfaulter-1-4.zip
[2022-03-10T14:25:54Z INFO core_dump_agent] zip size is 28071
[2022-03-10T14:25:55Z ERROR core_dump_agent] Upload Failed custom: missing field Bucket

No9 commented 2 years ago

Did you install the chart with the bucket option --set daemonset.s3BucketName=NAME_OF_BUCKET, and did you create the bucket in the MinIO server?

hari-narayanan94 commented 2 years ago

Yes, I did use that while applying the Helm chart; I used the web app URL as the s3BucketName. Below is the helm installation command I used:

helm.exe install my-core-dump-handler core-dump-handler/core-dump-handler \
  --set daemonset.s3AccessKey=storageaccountname \
  --set daemonset.s3Secret=xxxxx \
  --set daemonset.s3BucketName=https://xxxx.azurewebsites.net \
  --set daemonset.s3Region=us-east-1

FYI, since I only have Azure, I used the suggested link to expose the S3 protocol for my storage account using the MinIO shim.

https://cloudblogs.microsoft.com/opensource/2017/11/09/s3cmd-amazon-s3-compatible-apps-azure-storage/

I don't see anywhere in MinIO where it asks me to set up a bucket name. When I try to verify the MinIO server by logging in via an S3 browser, it shows me all the blobs in the storage account.

No9 commented 2 years ago

OK so using https://xxxx.azurewebsites.net/ as a bucket won't work.

According to the article you should be able to use s3cmd to make a bucket.
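For s3cmd to reach the shim rather than AWS, its host settings need to point at the gateway endpoint first. A minimal config along the lines of the blog post (all values are placeholders matching your setup):

cat > ~/.s3cfg <<'EOF'
[default]
# Point s3cmd at the MinIO shim instead of AWS (placeholder values).
access_key = storageaccountname
secret_key = xxxxx
host_base = xxxx.azurewebsites.net
host_bucket = xxxx.azurewebsites.net
use_https = True
EOF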

$ ./s3cmd mb s3://testbucket

You should then be able to use this option --set daemonset.s3BucketName=testbucket
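In other words, the earlier install command with only the bucket value changed; this is a sketch reusing the flags from your original command (whether the chart also needs the MinIO endpoint configured separately is worth checking in the chart values):

helm.exe install my-core-dump-handler core-dump-handler/core-dump-handler \
  --set daemonset.s3AccessKey=storageaccountname \
  --set daemonset.s3Secret=xxxxx \
  --set daemonset.s3BucketName=testbucket \
  --set daemonset.s3Region=us-east-1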

No9 commented 1 year ago

As external events landed in the v8.9.0 release, the groundwork is in place to create an agent that uploads to Azure using the azure_storage_blobs library or any other Azure integration. The idea is to disable the uploading agent by setting useINotify: false and implement a container that looks for files in the event folder once events have been enabled. I won't be starting work on the Azure agent anytime soon, but I'm happy to work with someone who picks it up. A rough sketch of what such a companion container might do is below.
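This is only an illustration under stated assumptions: the events folder path, the event-to-zip naming, and the enablement flags are guesses; check the project docs for the actual event configuration, and az CLI auth (key/SAS/login) is omitted.

#!/bin/sh
# Hypothetical companion container: watch an (assumed) events folder and
# upload the referenced core zip to Azure Blob Storage with the az CLI.
EVENTS_DIR=/var/mnt/core-dump-handler/events   # assumed path
CORES_DIR=/var/mnt/core-dump-handler/cores
inotifywait -m -e create "$EVENTS_DIR" | while read -r dir _event file; do
  # Assumption: each event file is named after the corresponding core zip.
  zip="$CORES_DIR/${file%.*}.zip"
  [ -f "$zip" ] && az storage blob upload \
    --account-name "$AZURE_STORAGE_ACCOUNT" \
    --container-name core-dumps \
    --file "$zip" --name "$(basename "$zip")"
done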