kubernetes-retired / kubefed

Kubernetes Cluster Federation
Apache License 2.0

kubefedctl enable services conflicts with knative services #1078

Closed qiujian16 closed 4 years ago

qiujian16 commented 5 years ago

What happened:

Running `./bin/kubefedctl enable services --kubefed-namespace default` returns:

`F0802 15:16:29.293248 61760 enable.go:110] Error: Multiple resources are matched by "services": services, services.serving.knative.dev. A group-qualified plural name must be provided.`

The reason is that I had installed the knative services CRD beforehand. However, we cannot enable the core kube services from the command line, since the core type has no group to qualify it with.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Install knative serving (which provides services.serving.knative.dev) and run kubefedctl enable services; a minimal sketch follows.
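A minimal reproduction sketch, assuming knative serving is installed from its CRD manifest (the install step and manifest name are assumptions; the error text is the one reported above):

```sh
# Register the knative serving CRDs, which add services.serving.knative.dev
# (hypothetical manifest path; use whatever install method you normally use)
kubectl apply -f knative-serving-crds.yaml

# Enabling the core "services" type now fails because the plural name is ambiguous
./bin/kubefedctl enable services --kubefed-namespace default
# F0802 15:16:29.293248 ... Error: Multiple resources are matched by "services":
#   services, services.serving.knative.dev. A group-qualified plural name must be provided.

# The knative type can still be enabled via its group-qualified name...
./bin/kubefedctl enable services.serving.knative.dev --kubefed-namespace default
# ...but core services cannot, because the core API group has no name to qualify with
```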

Anything else we need to know?:

Environment:

/kind bug

marun commented 5 years ago

@qiujian16 Good catch. Because the 'core' API types lack a group, there is no way to provide a group-qualified name in cases like this where the kind/plural name is ambiguous. Would it make sense for kubefedctl enable to accept core as a qualifier (e.g. services.core) for these types and then strip the suffix before use?

qiujian16 commented 5 years ago

For a better user experience, should kubefedctl enable service point to v1/services instead of requiring kubefedctl enable service.core? It is similar for federated deployments: today we need to run kubefedctl enable deployment.apps or kubefedctl enable deployment.extensions, but it would be better if we could run kubefedctl enable deployment and have it point to a specific API group. WDYT?
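For context, which API groups serve an ambiguous plural name can be checked with kubectl; a hedged example (the output is abridged and approximate for a 2019-era cluster):

```sh
# List the API resources whose plural name is "deployments" or "services"
kubectl api-resources | grep -E '^(deployments|services) '
# deployments  deploy  apps        true  Deployment   -> enable as deployments.apps
# deployments  deploy  extensions  true  Deployment   -> enable as deployments.extensions
# services     svc                 true  Service      -> core group: no group name to qualify with
```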

marun commented 5 years ago

@qiujian16 The current approach requires that the user be explicit, and I don't think this case justifies deviating from it. We can't assume that we know better than the user what they should be enabling.

qiujian16 commented 5 years ago

Yes, that makes sense. So if there is no ambiguity, kubefedctl enable service should still work; otherwise, kubefedctl enable service.core should be used.

marun commented 5 years ago

And not just for services - kubefedctl should accept *.core as a type name and strip the .core suffix when interacting with the API.
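A sketch of the CLI behavior being proposed here (not implemented at this point in the thread; the commands are illustrative only):

```sh
# A ".core" suffix would disambiguate core types; kubefedctl would strip it
# before resolving the resource against the API
kubefedctl enable services.core --kubefed-namespace default
kubefedctl enable configmaps.core --kubefed-namespace default

# Core types whose plural name is not ambiguous would keep working unqualified
kubefedctl enable secrets --kubefed-namespace default
```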

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/kubefed/issues/1078#issuecomment-570915542):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

ysjjovo commented 4 years ago

So what's the solution? I still get an error message when executing kubefedctl enable service.core.

F0904 16:14:12.809596   49450 enable.go:111] Error: Unable to find api resource named "service.core".

hectorj2f commented 4 years ago

/reopen.

hectorj2f commented 4 years ago

/reopen

k8s-ci-robot commented 4 years ago

@hectorj2f: Reopened this issue.

In response to [this](https://github.com/kubernetes-sigs/kubefed/issues/1078#issuecomment-688215800):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/kubefed/issues/1078#issuecomment-704850199):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

hectorj2f commented 4 years ago

/remove-lifecycle rotten

makkes commented 4 years ago

I opened a PR with a potential fix: #1294
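For reference, once a fix along these lines is merged, enabling core services should look roughly like the sketch below; the FederatedTypeConfig and federated CRD names are assumptions based on kubefed's usual naming, not something confirmed in this thread:

```sh
# Resolve the ambiguity explicitly with the .core qualifier
kubefedctl enable services.core --kubefed-namespace default

# kubefedctl enable normally registers a FederatedTypeConfig and a federated CRD;
# the exact resource names below are assumptions for illustration
kubectl get federatedtypeconfigs -n default
kubectl get crd federatedservices.types.kubefed.io
```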