Closed: carlory closed this pull request 1 year ago
(⎈ |karmada:default)➜ karmada git:(karmadactl-apply) go run cmd/karmadactl/karmadactl.go apply -h
Apply a configuration to a resource by file name or stdin. The resource name must be specified. This resource will be created if it doesn't exist yet. To use 'apply', always create the resource initially with either 'apply' or 'create --save-config'.
JSON and YAML formats are accepted.
Alpha Disclaimer: the --prune functionality is not yet complete. Do not use unless you are aware of what the current state is. See https://issues.k8s.io/34274.
Usage:
karmadactl apply (-f FILENAME | -k DIRECTORY)
Examples:
# Apply the configuration in pod.json to a pod
kubectl apply -f ./pod.json
# Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml
kubectl apply -k dir/
# Apply the JSON passed into stdin to a pod
cat pod.json | kubectl apply -f -
# Note: --prune is still in Alpha
# Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx
kubectl apply --prune -f manifest.yaml -l app=nginx
# Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file
kubectl apply --prune -f manifest.yaml --all --prune-whitelist=core/v1/ConfigMap
Flags:
--all Select all resources in the namespace of the specified resource types.
--allow-missing-template-keys If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats. (default true)
--cascade string[="background"] Must be "background", "orphan", or "foreground". Selects the deletion cascading strategy for the dependents (e.g. Pods created by a ReplicationController). Defaults to background. (default "background")
--dry-run string[="unchanged"] Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource. (default "none")
--field-manager string Name of the manager used to track field ownership. (default "kubectl-client-side-apply")
-f, --filename strings The files that contain the configurations to apply
--force If true, immediately remove resources from API and bypass graceful deletion. Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.
--force-conflicts If true, server-side apply will force the changes against conflicts.
--grace-period int Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. Set to 1 for immediate shutdown. Can only be set to 0 when --force is true (force deletion). (default -1)
-h, --help help for apply
--karmada-context string Name of the cluster context in control plane kubeconfig file.
-k, --kustomize string Process a kustomization directory. This flag can't be used together with -f or -R.
--openapi-patch If true, use openapi to calculate diff when the openapi presents and the resource can be found in the openapi spec. Otherwise, fall back to use baked-in types. (default true)
-o, --output string Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file.
--overwrite Automatically resolve conflicts between the modified and live configuration by using values from the modified configuration (default true)
--prune Automatically delete resource objects, that do not appear in the configs and are created by either apply or create --save-config. Should be used with either -l or --all.
--prune-whitelist stringArray Overwrite the default whitelist with <group/version/kind> for --prune
-R, --recursive Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
-l, --selector string Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)
--server-side If true, apply runs in the server instead of the client.
--show-managed-fields If true, keep the managedFields when printing objects in JSON or YAML format.
--template string Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
--timeout duration The length of time to wait before giving up on a delete, zero means determine a timeout from the size of the object
--validate If true, use a schema to validate the input before sending it (default true)
--wait If true, wait for resources to be gone before returning. This waits for finalizers.
Global Flags:
--add-dir-header If true, adds the file directory to the header of the log messages
--alsologtostderr log to standard error as well as files
--kubeconfig string Paths to a kubeconfig. Only required if out-of-cluster.
--log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log-dir string If non-empty, write log files in this directory
--log-file string If non-empty, use this log file
--log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
--logtostderr log to standard error instead of files (default true)
--one-output If true, only write logs to their native severity level (vs also writing to each lower severity level)
--skip-headers If true, avoid header prefixes in the log messages
--skip-log-headers If true, avoid headers when opening log files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level number for the log level verbosity
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
(⎈ |karmada:default)➜ karmada git:(karmadactl-apply)
manifests.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    dce.daocloud.io/app: micro-dao-2048
  name: micro-dao-2048
spec:
  replicas: 2
  selector:
    matchLabels:
      dce.daocloud.io/component: micro-dao-2048
  template:
    metadata:
      labels:
        dce.daocloud.io/app: micro-dao-2048
        dce.daocloud.io/component: micro-dao-2048
    spec:
      containers:
      - image: daocloud.io/daocloud/dao-2048:latest
        imagePullPolicy: Always
        name: micro-dao-2048
        resources:
          limits:
            cpu: 128m
            memory: "67108864"
          requests:
            cpu: 64m
            memory: "67108864"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    dce.daocloud.io/app: micro-dao-2048
  name: micro-dao-2048
spec:
  ports:
  - name: micro-dao-2048
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    dce.daocloud.io/component: micro-dao-2048
  sessionAffinity: None
  type: ClusterIP
deploy:
(⎈ |karmada:default)➜ karmada git:(karmadactl-apply) go run cmd/karmadactl/karmadactl.go apply -f ~/manifests.yaml
deployment.apps/micro-dao-2048 created
propagationpolicy.policy.karmada.io/micro-dao-2048-deployment created
service/micro-dao-2048 created
propagationpolicy.policy.karmada.io/micro-dao-2048-service created
(⎈ |karmada:default)➜ karmada git:(karmadactl-apply) kubectl get svc,deploy,pp
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 64d
service/micro-dao-2048 ClusterIP 10.103.19.104 <none> 80/TCP 41s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/micro-dao-2048 3/2 6 3 41s
NAME AGE
propagationpolicy.policy.karmada.io/micro-dao-2048-deployment 41s
propagationpolicy.policy.karmada.io/micro-dao-2048-service 41s
Great, thanks @carlory. Take care. (It's getting late for you now.)
Great! I was going to do it, but I've been busy lately.
I think a delete subcmd is needed too.

> I think a delete subcmd is needed too.

Can I pick it up?
sure
/cc @Garrybest @lonelyCZ @duanmengkk
Why is there no --namespace parameter?
@chaunceyjiang https://github.com/karmada-io/karmada/compare/9f42c711389ae34d9a647b9b249ffbaa358fafb8..51ff2aad2b3477fe3e04e71470500bfd67f59f2f
/assign @rainbowmango
/assign @rainbowmango
Should we consider another name for this subcmd? Its behavior is different from kubectl apply, which may mislead new users.

This command is implemented on top of kubectl apply; I think it should support all the capabilities of kubectl apply in the future.

The main difference is that karmadactl apply is designed for the multi-cluster scenario served by Karmada, and it's not a kubectl command.
After manifests are applied to Karmada using this command, they are annotated with kubectl.kubernetes.io/last-applied-configuration:
manifests:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"dce.daocloud.io/app":"micro-dao-2048"},"name":"micro-dao-2048","namespace":"kiki"},"spec":{"replicas":2,"selector":{"matchLabels":{"dce.daocloud.io/component":"micro-dao-2048"}},"template":{"metadata":{"labels":{"dce.daocloud.io/app":"micro-dao-2048","dce.daocloud.io/component":"micro-dao-2048"}},"spec":{"containers":[{"image":"daocloud.io/daocloud/dao-2048:latest","imagePullPolicy":"Always","name":"micro-dao-2048","resources":{"limits":{"cpu":"128m","memory":"67108864"},"requests":{"cpu":"64m","memory":"67108864"}}}]}}}}
  creationTimestamp: "2022-06-16T10:49:42Z"
  generation: 1
  labels:
    dce.daocloud.io/app: micro-dao-2048
    propagationpolicy.karmada.io/name: micro-dao-2048-deployment
    propagationpolicy.karmada.io/namespace: kiki
  name: micro-dao-2048
  namespace: kiki
  resourceVersion: "14033226"
  uid: c71012a6-5dcf-48a1-af8c-e2367784bbb9
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      dce.daocloud.io/component: micro-dao-2048
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        dce.daocloud.io/app: micro-dao-2048
        dce.daocloud.io/component: micro-dao-2048
    spec:
      containers:
      - image: daocloud.io/daocloud/dao-2048:latest
        imagePullPolicy: Always
        name: micro-dao-2048
        resources:
          limits:
            cpu: 128m
            memory: "67108864"
          requests:
            cpu: 64m
            memory: "67108864"
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 4
  readyReplicas: 4
  replicas: 6
  unavailableReplicas: 2
  updatedReplicas: 6
generated policies:
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"kind":"PropagationPolicy","apiVersion":"policy.karmada.io/v1alpha1","metadata":{"name":"micro-dao-2048-deployment","namespace":"kiki","creationTimestamp":null},"spec":{"resourceSelectors":[{"apiVersion":"apps/v1","kind":"Deployment","namespace":"kiki","name":"micro-dao-2048"}],"placement":{"clusterAffinity":{}}}}
  creationTimestamp: "2022-06-16T10:49:42Z"
  generation: 1
  name: micro-dao-2048-deployment
  namespace: kiki
  resourceVersion: "13848540"
  uid: 5c469f8a-312d-42e2-901c-a3cee206bbaf
spec:
  placement:
    clusterAffinity: {}
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: micro-dao-2048
    namespace: kiki
/cc @RainbowMango
/cc @lonelyCZ @RainbowMango
@lonelyCZ PTAL #2083
Thanks @carlory
I will review and test it ASAP today.
/assign
@lonelyCZ
How to retrigger the e2e case?
> How to retrigger the e2e case?

Refer to https://prow.k8s.io/command-help?repo=kubernetes%2Fkubernetes
/retest
It seems it's not working.
/cc @RainbowMango @XiShanYongYe-Chang
> It seems it's not working.

Yes, it needs to be retriggered manually by @RainbowMango.
/cc @lonelyCZ
/cc @lonelyCZ @RainbowMango
I just tested it as below, and it works fine.

But I found the name of the auto-generated policy is not easily readable unless the user explicitly sets a resource name like web-deployment or web-service. So should we auto-generate policy names in the format <name>-<kind>-<hash-key>, or is there a better way?
all.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  ports:
  - port: 81
    targetPort: 8080
  selector:
    app: web
---
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: web
---
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: web
spec:
  type: ClusterSetIP
  ports:
  - port: 81
    protocol: TCP
[root@master67 mci]# karmadactl apply -f all.yaml --all-clusters
deployment.apps/web created
propagationpolicy.policy.karmada.io/web-6d7f8d5f5b created
service/web created
propagationpolicy.policy.karmada.io/web-76579ccd86 created
serviceexport.multicluster.x-k8s.io/web created
propagationpolicy.policy.karmada.io/web-7d98b55cd7 created
serviceimport.multicluster.x-k8s.io/web created
propagationpolicy.policy.karmada.io/web-75fdb9ccbd created
cc @RainbowMango @carlory
When there are different versions of a resource in a file, if the policy name is produced according to the <name>-<kind>-<hash-key> rule, the same resource will correspond to multiple policies, because the hash-key is generated from the GVK.

Maybe we should modify the implementation of the generator function so the hash covers only the group. Here is an example:
// GeneratePolicyName generates the propagationPolicy name.
// Hashing only the group means different versions of the same resource
// map to the same policy name. (Uses the fnv, hashutil, and rand helpers
// already imported in the real file.)
func GeneratePolicyName(name, kind, group string) string {
	hash := fnv.New32a()
	hashutil.DeepHashObject(hash, group)
	return strings.ToLower(fmt.Sprintf("%s-%s-%s", name, kind, rand.SafeEncodeString(fmt.Sprint(hash.Sum32()))))
}
However, when there are resources with the same name and kind in a file that do not belong to the same API group, <name>-<kind>-<hash-key> is not easily readable. An alternative embeds the group directly:
// GeneratePolicyName generates the propagationPolicy name.
func GeneratePolicyName(name, kind, group string) string {
	if group == "" {
		return strings.ToLower(fmt.Sprintf("%s-%s", name, kind))
	}
	group = strings.ReplaceAll(group, ".", "-")
	return strings.ToLower(fmt.Sprintf("%s-%s-%s", name, kind, group))
}
@lonelyCZ @RainbowMango what do you think?
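The two naming schemes above can be compared with a self-contained sketch. This version uses only the standard library (hash/fnv stands in for the k8s.io hashutil and rand helpers of the real code), and the resource names are just the ones from the test above:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"strings"
)

// generatePolicyName is the group-suffix scheme: core-group resources get
// the short <name>-<kind> form, everything else appends the API group with
// dots replaced by dashes.
func generatePolicyName(name, kind, group string) string {
	if group == "" {
		return strings.ToLower(fmt.Sprintf("%s-%s", name, kind))
	}
	group = strings.ReplaceAll(group, ".", "-")
	return strings.ToLower(fmt.Sprintf("%s-%s-%s", name, kind, group))
}

// hashedPolicyName is the <name>-<kind>-<hash-key> alternative, hashing
// only the group so different versions of one resource share a policy name.
func hashedPolicyName(name, kind, group string) string {
	h := fnv.New32a()
	h.Write([]byte(group))
	return strings.ToLower(fmt.Sprintf("%s-%s-%d", name, kind, h.Sum32()))
}

func main() {
	fmt.Println(generatePolicyName("web", "Deployment", "apps"))  // web-deployment-apps
	fmt.Println(generatePolicyName("web", "Service", ""))         // web-service
	fmt.Println(generatePolicyName("web", "ServiceExport", "multicluster.x-k8s.io")) // web-serviceexport-multicluster-x-k8s-io
	fmt.Println(hashedPolicyName("web", "Deployment", "apps"))    // web-deployment-<digits>, stable per group
}
```

The group-suffix names stay human-readable, while the hashed names stay short but opaque.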
> However, when there are resources with the same name and kind in a file, but they do not belong to the same API group, <name>-<kind>-<hash-key> is not easily readable.

It's rare, but it can happen. I want to know how k8s deals with this when the same kind exists in different groups, e.g. kubectl get clusters where the clusters belong to two different groups. Could you please look into it? @carlory, thanks!
@lonelyCZ sorry for the late reply.

kubectl uses a priority RESTMapper to automatically choose a particular resource or kind when multiple GVR matches are possible. kubectl uses the discovery client to fetch all the groups and resources. The returned result is sorted by kube-aggregator, which is integrated into the kube-apiserver binary, according to the group and version priorities of the resources and the lexicographical order of resource names.

Priorities of built-in resources: kubernetes/aggregator.go (kubernetes/kubernetes on GitHub). CRDs and aggregated APIs share the same group/version priority: kubernetes/crdregistration_controller.go (kubernetes/kubernetes on GitHub).
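The priority-based tie-breaking described above can be illustrated with a toy resolver. This is not client-go's actual RESTMapper API; the gvr type, the resolve function, and the priority list are all made up for the example:

```go
package main

import (
	"fmt"
	"sort"
)

// gvr is a minimal stand-in for a group/version/resource triple.
type gvr struct {
	Group, Version, Resource string
}

// resolve picks the match whose group appears earliest in the priority
// list, loosely mimicking how a priority RESTMapper breaks ties when one
// resource name ("clusters" here) maps to several groups.
func resolve(matches []gvr, groupPriority []string) gvr {
	rank := func(g string) int {
		for i, p := range groupPriority {
			if p == g {
				return i
			}
		}
		return len(groupPriority) // unknown groups sort last
	}
	sort.SliceStable(matches, func(i, j int) bool {
		return rank(matches[i].Group) < rank(matches[j].Group)
	})
	return matches[0]
}

func main() {
	// Two hypothetical "clusters" resources in different groups.
	matches := []gvr{
		{Group: "clusterregistry.k8s.io", Version: "v1alpha1", Resource: "clusters"},
		{Group: "cluster.karmada.io", Version: "v1alpha1", Resource: "clusters"},
	}
	priority := []string{"cluster.karmada.io", "clusterregistry.k8s.io"}
	fmt.Println(resolve(matches, priority).Group) // cluster.karmada.io
}
```

In the real stack the priority list is not hand-written: it comes from the ordering kube-aggregator applies to discovery results.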
> I just tested it as below, and it works fine.
>
> But I found the name of the auto-generated policy is not easily readable unless the user explicitly sets a resource name like web-deployment or web-service. So should we auto-generate policy names in the format <name>-<kind>-<hash-key>, or is there a better way?
@lonelyCZ @RainbowMango
Regarding how to generate the name of the policy, there was a discussion here before; are you sure you want to make this change in this PR?

Thanks for your research! @carlory

> Regarding how to generate the name of the policy, there was a discussion (https://github.com/karmada-io/karmada/pull/1824) before; are you sure you want to make this change in this PR?

Unnecessary, we can improve it in another PR. Could you please submit an issue to follow up on it?
done.
@lonelyCZ Is there anything else that needs to be modified in this PR?
@lonelyCZ updated
Thanks for the hard work! @carlory
/lgtm
/cc @RainbowMango
Thanks @carlory, can you update the latest test report in the PR description? I can't clearly see the whole feature from the long discussion.
In addition, this PR sounds like part of #1934, is that right? What else do we need to do?
@RainbowMango
> can you update the latest test report in the PR description?
done
> What else do we need to do?
// generatePropagationObject generates a propagation object for the given resource info.
// It takes the resource namespace, name, and GVK as input to generate the policy name.
// TODO(carlory): allow users to select one or many member clusters to propagate resources.
func (o *CommandApplyOptions) generatePropagationObject(info *resource.Info) runtime.Object {
	....
}
Some open-source products are deployed from a single long YAML manifest, such as Calico (https://docs.projectcalico.org/manifests/calico.yaml). This PR provides an easy way to deploy them to member clusters.
What's the difference between karmadactl apply and kubectl apply?
OK. I get it.
Maybe something like karmadactl apply xxx.yaml --to-clusters=member1,member2.
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: RainbowMango
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Signed-off-by: carlory baofa.fan@daocloud.io
What type of PR is this?
/kind feature
What this PR does / why we need it:
Some open-source products are deployed from a single long YAML manifest, such as Calico (https://docs.projectcalico.org/manifests/calico.yaml). This PR provides an easy way to deploy them to member clusters.
Which issue(s) this PR fixes: Ref #1934
Special notes for your reviewer:
Does this PR introduce a user-facing change?: