kubernetes / autoscaler

Autoscaling components for Kubernetes
Apache License 2.0

Add Oracle Cloud Infrastructure as provider #2857

Closed pranaypratyush closed 4 years ago

pranaypratyush commented 4 years ago

Kinda surprised no one asked this already.

MaciekPytel commented 4 years ago

Support for different cloud providers is generally added by people involved with the given platform, not CA developers. We're happy to accept a provider if Oracle Cloud (or an OSS contributor) wants to contribute one.

pranaypratyush commented 4 years ago

Sorry, I didn't know about this policy. Correct me if I am wrong: this module basically uses the provider SDK to scale node groups based on Kubernetes cluster state. If so, what's stopping someone from implementing this for OCI? Are there legal restrictions?

MaciekPytel commented 4 years ago

Nothing :) I'm not aware of any formal reasons stopping anyone from adding a provider to CA (assuming they sign the Kubernetes CLA, etc.). It's just a lot of work to add (and maintain) a cloud provider integration (though the amount of work depends on how many features you want to support, what your scalability requirements are, etc.). There are only so many core CA developers, and we're no longer actively involved even in maintaining existing integrations. Rather, every existing cloudprovider has its own owner who maintains it. So for a new provider integration to happen, someone needs to step up, implement it, and own it later on.

pranaypratyush commented 4 years ago

Ah, I see. Not to take up more of your time, but is there a guide or something to help me get started on implementing an OCI provider? (Can't promise anything; I am new to Kubernetes, so 95% odds I won't be able to do this.)

MaciekPytel commented 4 years ago

There is no documentation, unfortunately (if you end up implementing an OCI provider and feel like writing down some notes, that would be very appreciated).

In general, what you need to do is implement the interfaces in https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/cloud_provider.go. Probably the best way is to use one of the existing providers as a reference. I'd recommend either digitalocean (a young and relatively simple implementation) or GCE (the first implementation; it's maintained by core CA developers, and the e2e tests used to qualify a new CA release run on GCE - all of this makes it a good reference implementation, but it's also one of the more complex ones, with lots of features, caching, etc.).
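To make the shape of that work concrete, here is a minimal sketch of what part of an OCI `NodeGroup` implementation might look like. The `ociClient` interface and its method names are hypothetical stand-ins for the real OCI SDK, and the actual `NodeGroup` interface in `cloud_provider.go` has more methods (`TemplateNodeInfo`, `Exist`, `Create`, `Delete`, etc.) than shown here:

```go
package oci

import (
	"fmt"

	apiv1 "k8s.io/api/core/v1"
)

// ociClient is a hypothetical thin wrapper around the OCI SDK; a real
// implementation would call OCI's instance-pool APIs here.
type ociClient interface {
	GetPoolSize(poolID string) (int, error)
	SetPoolSize(poolID string, size int) error
	TerminateInstance(poolID, providerID string) error
}

// ociNodeGroup maps one OCI instance pool onto the cluster-autoscaler
// NodeGroup interface (only a subset of the methods is sketched).
type ociNodeGroup struct {
	id      string
	minSize int
	maxSize int
	client  ociClient
}

func (ng *ociNodeGroup) Id() string   { return ng.id }
func (ng *ociNodeGroup) MinSize() int { return ng.minSize }
func (ng *ociNodeGroup) MaxSize() int { return ng.maxSize }

// TargetSize reports the pool's desired size as the cloud sees it.
func (ng *ociNodeGroup) TargetSize() (int, error) {
	return ng.client.GetPoolSize(ng.id)
}

// IncreaseSize is what CA calls on scale-up: bump the pool's desired
// size, respecting the configured maximum.
func (ng *ociNodeGroup) IncreaseSize(delta int) error {
	if delta <= 0 {
		return fmt.Errorf("size increase must be positive, got %d", delta)
	}
	size, err := ng.client.GetPoolSize(ng.id)
	if err != nil {
		return err
	}
	if size+delta > ng.maxSize {
		return fmt.Errorf("size increase too large: %d > max %d", size+delta, ng.maxSize)
	}
	return ng.client.SetPoolSize(ng.id, size+delta)
}

// DeleteNodes is what CA calls on scale-down: terminate the specific
// instances backing the given Kubernetes nodes.
func (ng *ociNodeGroup) DeleteNodes(nodes []*apiv1.Node) error {
	for _, node := range nodes {
		if err := ng.client.TerminateInstance(ng.id, node.Spec.ProviderID); err != nil {
			return err
		}
	}
	return nil
}
```

Alongside `NodeGroup`, a provider also implements the top-level `CloudProvider` interface (`NodeGroups()`, `NodeGroupForNode()`, `Refresh()`, etc.) and wires it into the provider builder; the digitalocean package is roughly the minimum amount of wiring needed.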

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

caiohasouza commented 4 years ago

+1

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/autoscaler/issues/2857#issuecomment-675782215):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.