kubernetes-sigs / cluster-api-provider-aws

Kubernetes Cluster API Provider AWS provides consistent deployment and day 2 operations of "self-managed" and EKS Kubernetes clusters on AWS.
http://cluster-api-aws.sigs.k8s.io/
Apache License 2.0

make test does not terminate kube-apiserver and etcd processes #2753

Open nab-gha opened 3 years ago

nab-gha commented 3 years ago

/kind bug

What steps did you take and what happened: forked the repo and ran make test. After the run, kube-apiserver and etcd processes remain running.

What did you expect to happen: make test should clean up these processes.

Environment:

Cluster-api-provider-aws version: v7.0
Kubernetes version (use kubectl version): v1.19.2
OS (e.g. from /etc/os-release): Ubuntu 20.04

richardcase commented 3 years ago

There is code to tear down the test environment: https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/e7d56364b37620241546835b67332ecf6be275bf/bootstrap/eks/controllers/suite_test.go

Do you have the source code in the GOPATH? How and where do you have etcd/api-server installed?

nab-gha commented 3 years ago

As per #2752 I was using the wrong directory, but I have now cloned my fork into $GOPATH/sigs.k8s.io/cluster-api-provider-aws.

However, I'm still seeing leftover processes:

ps -ax -o pid,pgid,cmd | grep "/tmp/kubebuilder"  | grep -v grep
1054517 1053483 /tmp/kubebuilder/bin/etcd --advertise-client-urls=http://127.0.0.1:35447 --data-dir=/tmp/k8s_test_framework_310196060 --listen-client-urls=http://127.0.0.1:35447 --listen-peer-urls=http://localhost:0
1054904 1053483 /tmp/kubebuilder/bin/kube-apiserver --allow-privileged=true --authorization-mode=RBAC --bind-address=127.0.0.1 --cert-dir=/tmp/k8s_test_framework_027252491 --client-ca-file=/tmp/k8s_test_framework_027252491/client-cert-auth-ca.crt --disable-admission-plugins=ServiceAccount --etcd-servers=http://127.0.0.1:35447 --insecure-port=0 --secure-port=38511 --service-account-issuer=https://127.0.0.1:38511/ --service-account-key-file=/tmp/k8s_test_framework_027252491/sa-signer.crt --service-account-signing-key-file=/tmp/k8s_test_framework_027252491/sa-signer.key --service-cluster-ip-range=10.0.0.0/24
1055191 1053483 /tmp/kubebuilder/bin/etcd --advertise-client-urls=http://127.0.0.1:35873 --data-dir=/tmp/k8s_test_framework_439884657 --listen-client-urls=http://127.0.0.1:35873 --listen-peer-urls=http://localhost:0
1055287 1053483 /tmp/kubebuilder/bin/etcd --advertise-client-urls=http://127.0.0.1:36017 --data-dir=/tmp/k8s_test_framework_224035017 --listen-client-urls=http://127.0.0.1:36017 --listen-peer-urls=http://localhost:0
1055386 1053483 /tmp/kubebuilder/bin/kube-apiserver --allow-privileged=true --authorization-mode=RBAC --bind-address=127.0.0.1 --cert-dir=/tmp/k8s_test_framework_796444188 --client-ca-file=/tmp/k8s_test_framework_796444188/client-cert-auth-ca.crt --disable-admission-plugins=ServiceAccount --etcd-servers=http://127.0.0.1:35873 --insecure-port=0 --secure-port=43193 --service-account-issuer=https://127.0.0.1:43193/ --service-account-key-file=/tmp/k8s_test_framework_796444188/sa-signer.crt --service-account-signing-key-file=/tmp/k8s_test_framework_796444188/sa-signer.key --service-cluster-ip-range=10.0.0.0/24
1055519 1053483 /tmp/kubebuilder/bin/kube-apiserver --allow-privileged=true --authorization-mode=RBAC --bind-address=127.0.0.1 --cert-dir=/tmp/k8s_test_framework_199601556 --client-ca-file=/tmp/k8s_test_framework_199601556/client-cert-auth-ca.crt --disable-admission-plugins=ServiceAccount --etcd-servers=http://127.0.0.1:36017 --insecure-port=0 --secure-port=45401 --service-account-issuer=https://127.0.0.1:45401/ --service-account-key-file=/tmp/k8s_test_framework_199601556/sa-signer.crt --service-account-signing-key-file=/tmp/k8s_test_framework_199601556/sa-signer.key --service-cluster-ip-range=10.0.0.0/24

I have etcd and kube-apiserver binaries in /usr/local/bin, but make test seems to install its own copies into /tmp/kubebuilder/bin and use those instead:

go test -v ./...
fetching tools
kubebuilder/
kubebuilder/bin/
kubebuilder/bin/etcd
kubebuilder/bin/kubectl
kubebuilder/bin/kube-apiserver
setting up env vars
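Not stated in the thread, but possibly relevant: controller-runtime's envtest checks the TEST_ASSET_* environment variables before falling back to its default binary location, so existing binaries can be reused instead of the /tmp/kubebuilder/bin copies. A sketch, assuming the binaries really are in /usr/local/bin:

```shell
# Point envtest at pre-installed binaries instead of letting the test
# harness download/extract its own copies under /tmp/kubebuilder/bin.
export TEST_ASSET_ETCD=/usr/local/bin/etcd
export TEST_ASSET_KUBE_APISERVER=/usr/local/bin/kube-apiserver
export TEST_ASSET_KUBECTL=/usr/local/bin/kubectl
```

Whether this also avoids the orphaned-process problem is untested; it only changes which binaries get launched.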
randomvariable commented 3 years ago

Yeah, this has always happened to me too, and I've had to killall -9 etcd separately, etc.
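As a slightly more targeted cleanup than killall -9 etcd, one can match on the temp install path shown in the ps output earlier in the thread, so only the envtest-managed binaries are killed (a sketch, not part of the project's tooling):

```shell
# Kill only processes whose command line references the envtest binary
# directory; pkill exits non-zero when nothing matches, hence || true.
pkill -9 -f '/tmp/kubebuilder/bin/' || true
```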

@sbueringer has been touching some of the envtest stuff in the core CAPI repo so I don't know if there's something better we could be doing here.

sbueringer commented 3 years ago

Not sure. fyi an issue in the main repo which could be related: https://github.com/kubernetes-sigs/cluster-api/issues/4278

Apart from that: my recent changes were mostly about centralizing envtest setup code (https://github.com/kubernetes-sigs/cluster-api/blob/3e54e8c939090c97a718a553ee6dce5b4c054731/internal/envtest/environment.go#L95-L124) and making it possible to run integration tests with a local kind cluster (i.e. not using envtest at all): https://github.com/kubernetes-sigs/cluster-api/pull/5102

randomvariable commented 3 years ago

Yup, I think https://github.com/kubernetes-sigs/cluster-api/issues/4278 is pretty much the same as I get.

randomvariable commented 3 years ago

/priority important-longterm /area testing

richardcase commented 2 years ago

/triage accepted /milestone backlog

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

sedefsavas commented 2 years ago

/remove-lifecycle stale

invidian commented 2 years ago

I also got hit by this a couple of times, to the point where my machine was very slow due to 10+ instances of kube-apiserver and etcd running.

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

invidian commented 2 years ago

/remove-lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

invidian commented 2 years ago

/remove-lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

invidian commented 1 year ago

/remove-lifecycle stale

k8s-triage-robot commented 9 months ago

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

k8s-triage-robot commented 6 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten