maleck13 opened this issue 7 years ago
Take a look at how I wired together the integration tests for the CRD server here: https://github.com/kubernetes/apiextensions-apiserver/blob/master/test/integration/basic_test.go#L45 . Basically it stands up an insecure server for a local integration test of just that server.
It does seem like this would be an ideal location to demonstrate that concept. @maleck13 do you feel up to opening a pull into k8s.io/kubernetes that adds an integration test like that to show people how they can test?
@deads2k sure I will take a look at doing that.
I have added something similar here https://github.com/openshift/open-service-broker-sdk/pull/29/files
I will base the PR for this repo from that work
@deads2k Looking for some guidance; I'm fairly new to Kubernetes apiservers. In this repo the clientsets are not generated yet. Should the generated clientsets be part of the sample-apiserver? Your example, which I used as a base for the referenced PR against open-service-broker-sdk, makes use of the clientsets, so it seems these would need to be generated before the test could be executed.
Correct, the sample-apiserver lacks the clientsets. In fact, I am going to need them in my upcoming PR. It would be really nice if you could add them :) You should find some info here:
https://github.com/kubernetes/community/blob/master/contributors/devel/generating-clientset.md
@p0lyn0mial ok, thanks. I had looked at those, but I'm having trouble with update-codegen, as it seems to want to be run from the root of the kubernetes repo. Hoping to give this some time tomorrow.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with a `/lifecycle frozen` comment.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
/kind feature
hi, almost 2 years late :)
KUBERNETES_VERSION=v1.13.10 minikube=v1.12.0
I am trying to get a stand-alone api server up and running with no authn/authz (originally the goal was to have it running with OIDC config, independent of minikube's api server, but I gave up after several attempts).
Following is how I start it:

1. Run the server (note: the original post was missing the closing quote on the last flag):

```
sample-apiserver --etcd-prefix=blah \
  --etcd-servers=http://localhost:2379 \
  --v=7 \
  --client-ca-file=/cax509.crt \
  --kubeconfig=/dummy-kube-config \
  --authentication-kubeconfig=/dummy-kube-config \
  --authorization-kubeconfig=/dummy-kube-config \
  --disable-admission-plugins="NamespaceLifecycle,MutatingAdmissionWebhook,ValidatingAdmissionWebhook"
```

dummy-kube-config refers to a dummy cluster (obviously): http://127.1.2.3:12345
2. I later port-forward the svc (`k port-forward svc/api 8001:443 -n wardle`) and query it:

```
curl -k https://localhost:8001/apis
```
Internal Server Error: "/apis": Post "http://127.1.2.3:12345/apis/authorization.k8s.io/v1/subjectaccessreviews": dial tcp 127.1.2.3:12345: connect: connection refused
I tried `--authorization-skip-lookup`, but it's not recognized at all.
Is it possible to get the api server running independent of the actual Kubernetes api server at all?
@jot-hub as far as I know, the sample-apiserver can't be brought up independently without changes. I got it working independently by stripping all authn/authz/admission functionality out of start.go, but that was only possible because my use case didn't require that functionality.
Hi, I had another look at this (I agree, it's been a long time). Some minor typos were causing confusion; I fixed them ☝️ - hopefully that helps someone. cc @sttts
This is a great resource and very useful, thanks for putting it together. Something that is not clear to me, however, is what a good approach is to testing the external API of the apiserver (integration, blackbox). Is there a good example of this? When developing a non-Kubernetes API server, I would normally use the httptest package: start the server with a mocked implementation of the storage layer, then use http.Client to make requests against the exposed API. Is something similar available, and if so, are there examples of how to do this?