gocd / kubernetes-elastic-agents

Kubernetes Elastic agent plugin for GoCD
https://www.gocd.org
Apache License 2.0

Unable to deploy elasticagent in eks from gocd server #413

Open HanumanthaRAON opened 1 month ago

HanumanthaRAON commented 1 month ago

Hi Team,

We have installed the GoCD server, and we have three EKS clusters (dev, pp, prod) for deploying workloads. However, we have not created any kube contexts; instead, when running pipelines we point kubectl at the relevant kubeconfig directly on the command line.

Currently, we are attempting to configure elastic agents, and we have set up the configuration. However, there is no option in the GoCD elastic agent configuration to specify which Kubernetes config to use, so GoCD attempts to deploy agents using the default context, which we have not set.

Is there a way we can instruct the GoCD server to utilize a specific Kubernetes configuration file?

Below is the error screenshot (attached as an image).

Can someone help us resolve this issue?

chadlwilson commented 1 month ago

Is your server running in kubernetes itself? How are you deploying the server? (Helm? Some other way?)

I don't quite follow how you are configuring the plugin. Which instructions did you follow?

In general the plugin expects you to configure the various k8s clusters you may want to create elastic agents in through its own cluster profiles, not by relying on kubeconfig or other k8s-native concepts. It's not designed to be 'reliant' on kubeconfig at all: it has its own concept of cluster profiles, which may point to different clusters, and it's intended to be largely independent.

But currently it does rely on https://github.com/fabric8io/kubernetes-client, which has some defaults that pull bits from kubeconfig, so there's always a possibility it has gotten confused. Similarly, theoretically setting the KUBECONFIG env var may allow you to change how this client falls back. But this is untested and unsupported; the supported config approach is via cluster profiles for now.
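
To make that concrete, an untested and unsupported sketch of that fallback might look like the below. The kubeconfig path is the one from your setup; how you actually inject the variable into the environment of the GoCD server process (systemd unit, init script, manual start) depends on how you installed it, so treat this as illustrative only:

# Untested/unsupported sketch: export KUBECONFIG in the environment the GoCD
# server is started from, so the fabric8 client's kubeconfig fallback picks up
# this file instead of the default ~/.kube/config.
export KUBECONFIG=/home/ec2-user/.kube/prod-config

# ...then restart the GoCD server from this same environment so the plugin's
# Kubernetes client inherits the variable.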

Anyway, can you step back and share what you're trying to achieve and why the documented install instructions don't seem to work, rather than why you wish to achieve it using a kubeconfig? It's not clear whether you have a bug (the plugin relying on, or getting confused by, default config when it should not) or are trying to do something in a way that is unintended or unsupported.

HanumanthaRAON commented 1 month ago

Hi @chadlwilson,

Thanks for your reply. Let me explain my setup so that you can suggest how to move forward with this issue.

We have three AWS accounts (dev, pp, prod), and our build server is deployed in the prod AWS account on an EC2 instance. Currently, we are planning to utilize the Elastic Agent concept in GoCD.

We have communication established between the three AWS accounts. From the build server, we can access the three EKS clusters (dev, pp, prod). We have not configured any Kubernetes contexts to access the clusters. Instead, we have added small functions to the bashrc file and call them when we need to access the respective clusters. Below are the lines we have added to the bashrc file:

function devkubectl() {
  kubectl --kubeconfig=/home/ec2-user/.kube/dev-new-config "$@"
}

function ppkubectl() {
  kubectl --kubeconfig=/home/ec2-user/.kube/pp-new-config "$@"
}

function prodkubectl() {
  kubectl --kubeconfig=/home/ec2-user/.kube/prod-config "$@"
}

We have followed the documentation below to set up the elastic agent cluster profile and agent profile.

https://github.com/gocd/kubernetes-elastic-agents

Below is a screenshot of the profile configuration (attached as an image).

Output of kubectl config get-contexts (only the header row; no contexts are listed):

kubectl config get-contexts
CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE

chadlwilson commented 1 month ago

That won't work, as there's no way to get the plugin to run an arbitrary script to switch kube contexts to decide where an elastic agent is to be created - and the plugin is not kube context aware.

The plugin needs to know which cluster is which (by its cluster URL and/or CA cert) or it can't keep track of which agents are running where, and which jobs they are expected to pick up. If switching were done with some separate hack/script, the plugin wouldn't know how to create the agents with the right GoCD resource and environment tags, and it does not have an option to configure the cluster profiles to point to a kube context right now. This would be a nice enhancement, but given community contribution levels for GoCD and its "official" plugins this seems unlikely to be implemented in the short term. (I don't even have capacity to test/validate the very useful PR currently submitted for agent re-use.)

In the current state, you would probably need to create 3x GoCD cluster profiles (one for each of these clusters), with 1 or more elastic profiles per cluster.
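
If it helps, here's a sketch of pulling the per-cluster details out of the kubeconfig files you already have, which is roughly the information each cluster profile needs. The jsonpath expressions and flags are standard kubectl; the file paths are taken from your earlier comment, and this is illustrative rather than plugin documentation:

# Cluster URL for the dev cluster profile (repeat for the pp/prod kubeconfigs)
kubectl --kubeconfig=/home/ec2-user/.kube/dev-new-config config view --raw \
  -o jsonpath='{.clusters[0].cluster.server}'

# Cluster CA certificate (decoded) for the same cluster profile
kubectl --kubeconfig=/home/ec2-user/.kube/dev-new-config config view --raw \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d

# The security token needs to come from a Kubernetes service account created in
# that specific cluster, not from the aws/exec auth embedded in these
# kubeconfigs; see the plugin README for how it expects that to be set up.

That last point is also why the kubeconfig-switching approach doesn't map onto the plugin's model: each cluster profile wants the URL, CA cert and a token for one specific cluster, rather than whatever the local kube tooling happens to be pointed at.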