kubernetes-retired / contrib

[EOL] This is a place for various components in the Kubernetes ecosystem that aren't part of the Kubernetes core.
Apache License 2.0

How to change cluster IP in a replication controller run time #2955

Closed kamesh2229 closed 5 years ago

kamesh2229 commented 5 years ago

Hi,

I am using Kubernetes 1.0.3 with a master and 5 minion nodes deployed. I have an Elasticsearch application deployed on 3 nodes using a replication controller, and a service is defined. Now I have added a new minion node to the cluster and want to run the Elasticsearch container on the new node. I am scaling my replication controller to 4 so that, based on the node label, the Elasticsearch container is deployed on the new node. Below is my issue; please let me know if there is any solution:
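For reference, a minimal sketch of that labeling-and-scaling step, assuming the RC is named `elasticsearch` and its pods are pinned via a node label `app=es` (the RC name, node name, and label here are illustrative assumptions, not taken from the report):

```sh
# Label the newly added minion so the RC's nodeSelector matches it
kubectl label node new-minion-01 app=es

# Scale the replication controller from 3 to 4 replicas; the scheduler
# should place the new pod on the freshly labeled node
kubectl scale rc elasticsearch --replicas=4
```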

1) The cluster IP defined in the RC is wrong, as it does not match the one in the `service.yaml` file.
2) When I scale the RC, the new node gets an ES container pointing to the wrong cluster IP, so the new node does not join the ES cluster.
3) Is there any way I can modify the cluster IP in the deployed RC, so that when I scale it the image is deployed on the new node with the correct cluster IP?
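As a starting point for (1), the authoritative cluster IP lives on the live Service object, not in `service.yaml`; a sketch for reading it back, assuming the Service is named `elasticsearch` (an assumption):

```sh
# spec.clusterIP on the live object is what pods must actually use,
# regardless of what service.yaml currently says
kubectl get svc elasticsearch -o yaml | grep clusterIP
```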

Since I am on an old version, the `kubectl edit` command is not available; I tried `kubectl patch`, but the IP didn't change.
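For comparison, a strategic-merge-patch sketch of what that attempt might look like, assuming the RC is named `elasticsearch`, the container is named `es`, and the IP is injected through a hypothetical `DISCOVERY_HOST` environment variable (all three names are assumptions):

```sh
# Patch only the pod template; containers merge by "name", so the
# container name here must match the one in the RC spec exactly
kubectl patch rc elasticsearch -p '{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {"name": "es",
           "env": [{"name": "DISCOVERY_HOST", "value": "10.0.0.42"}]}
        ]
      }
    }
  }
}'
```

Note that a patch like this only changes the pod *template*: pods that already exist keep their old environment, which would explain seeing no error yet no visible change on the running pods. Only pods created after the patch (for example, by scaling up) pick up the new value.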

The problem is that I need to do this on a production cluster, so I can't delete the existing pods. My only option is to change the cluster IP in the deployed RC and then scale, so that the new pod comes up with the correct IP and the image starts accordingly.
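If replacing the existing pods one at a time ever becomes acceptable, the 1.0-era mechanism for that is `kubectl rolling-update`, which swaps pods gradually rather than deleting them all at once; a sketch, assuming a corrected manifest at the hypothetical path `es-rc-fixed.yaml` (the new RC inside it must have a different name and at least one differing selector label):

```sh
# Gradually replace pods of the old RC with pods from the corrected
# template; the old RC is removed once the new one is at full count
kubectl rolling-update elasticsearch -f es-rc-fixed.yaml
```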

Please let me know if there is any way I can do this.

kamesh2229 commented 5 years ago

Hi

Can someone let me know if this can be done? I tried the `kubectl patch` command, but nothing changes, and no error is returned either...
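One way to narrow this down is to check whether the patch landed on the RC object at all, as opposed to the running pods; assuming the RC name `elasticsearch` as above:

```sh
# If the patch was accepted, the new value shows up in the RC's pod
# template here, even though already-running pods are unchanged
kubectl get rc elasticsearch -o yaml
```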

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 5 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 5 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/contrib/issues/2955#issuecomment-466930972):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.