Closed · hzyfox closed this issue 2 years ago
I believe I'm experiencing a very similar issue, and it seems to be connected to how kubebuilder generates CRDs from the structs defined in the api/ package:
```go
type PodTemplateSpecForExample struct {
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec v1.PodSpec   `json:"spec,omitempty"`
}

// ExampleSpec defines the desired state of Example
type ExampleSpec struct {
	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
	// Important: Run "make" to regenerate code after modifying this file

	// Template For Pod
	Template PodTemplateSpecForExample `json:"template"`
}
```
In the CRD, this is generated:
```yaml
properties:
  template:
    description: Template For Pod
    properties:
      metadata:
        type: object
      spec:
        description: PodSpec is a description of a pod.
        properties: ...
```
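For comparison, when the `crd:generateEmbeddedObjectMeta=true` option discussed later in this thread is enabled, controller-gen declares properties under the embedded `metadata` instead of leaving it a bare object. A sketch of the resulting schema (the exact property set varies by controller-gen version):

```yaml
metadata:
  type: object
  properties:
    name:
      type: string
    labels:
      type: object
      additionalProperties:
        type: string
    annotations:
      type: object
      additionalProperties:
        type: string
```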
But a couple of odd things happen that make no sense. If I run kubectl explain:
```console
ssm-user@ip-172-31-5-82:~/project$ kubectl explain examples.spec.template.metadata
KIND:     Example
VERSION:  examples.my.domain/v1alpha1

DESCRIPTION:
     <empty>
ssm-user@ip-172-31-5-82:~/project$
```
which was odd. Then running kubectl apply on this manifest:
```yaml
apiVersion: examples.my.domain/v1alpha1
kind: Example
metadata:
  name: example-sample
spec:
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - image: centos:8
        name: test
```
shows up in Kubernetes as:
```yaml
apiVersion: examples.my.domain/v1alpha1
kind: Example
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"examples.my.domain/v1alpha1","kind":"Example","metadata":{"annotations":{},"name":"example-sample","namespace":"default"},"spec":{"template":{"metadata":{"labels":{"app":"foo"}},"spec":{"containers":[{"image":"centos:8","name":"test"}]}}}}
  creationTimestamp: "2021-12-27T01:46:58Z"
  generation: 1
  name: example-sample
  namespace: default
  resourceVersion: "88368"
  selfLink: /apis/examples.my.domain/v1alpha1/namespaces/default/examples/example-sample
  uid: 36c38649-f3cd-4e1a-bb20-b7e8b2d773c6
spec:
  template:
    metadata: {}
    spec:
      containers:
      - image: centos:8
        name: test
```
As you can see, the nested metadata is completely missing. It seems as if the generator deliberately omits or ignores any embedded ObjectMeta declaration that does not appear in the top-level struct.
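For what it's worth, the behavior above is consistent with the API server's structural-schema pruning: fields not declared in the CRD's OpenAPI schema are silently dropped, and since the generated schema declares the embedded `metadata` as a bare object with no properties, everything under it is removed. A self-contained toy model of that pruning in Go (a sketch for intuition, not the actual apiserver code):

```go
package main

import "fmt"

// schema is a minimal stand-in for a structural schema: each key maps to the
// schema of that property; an empty schema means "object with no declared
// properties".
type schema map[string]schema

// prune removes every field of obj that is not declared in s, loosely
// mimicking how the API server prunes unknown fields of a custom resource.
func prune(obj map[string]any, s schema) {
	for k, v := range obj {
		sub, declared := s[k]
		if !declared {
			delete(obj, k) // undeclared field: dropped
			continue
		}
		if child, isMap := v.(map[string]any); isMap {
			prune(child, sub)
		}
	}
}

func main() {
	// The generated CRD declares spec.template.metadata as a bare object
	// with no properties, so everything under it gets pruned away.
	s := schema{"template": {"metadata": {}, "spec": {"containers": nil}}}
	obj := map[string]any{
		"template": map[string]any{
			"metadata": map[string]any{"labels": map[string]any{"app": "foo"}},
			"spec":     map[string]any{"containers": []any{}},
		},
	}
	prune(obj, s)
	fmt.Println(obj) // metadata survives only as an empty object: metadata:map[]
}
```

This reproduces exactly what the apply above showed: `metadata: {}` instead of the labels that were submitted.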
@abstractalchemist I have found the solution: use the controller-gen option crd:generateEmbeddedObjectMeta=true and it will work.
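For reference, a sketch of how to enable it (the paths below follow kubebuilder's default scaffold and are assumptions; adjust them to your project layout):

```shell
# One-off invocation of controller-gen with the option enabled:
controller-gen crd:generateEmbeddedObjectMeta=true paths="./..." \
    output:crd:artifacts:config=config/crd/bases
```

In a kubebuilder project, the same flag can be added to the `CRD_OPTIONS` variable (in older scaffolds) or directly to the `manifests` target of the generated Makefile, so that `make manifests` regenerates the CRDs with embedded metadata schemas.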
That's moderately better, but I think this needs to be better documented: it's not explained anywhere (I don't even see this option in the kubebuilder book), and it's not clear to me why embedded ObjectMeta would be specifically singled out and ignored during processing.
@abstractalchemist I found this option through controller-gen -h; there is no mention of it in the official kubebuilder controller-gen CLI documentation.
@abstractalchemist And I think crd:generateEmbeddedObjectMeta should default to true. I don't know why this option is turned off by default; a CRD is very likely to use nested ObjectMeta.
Hi @hzyfox and @abstractalchemist,
We would like to have an FAQ section in the docs: https://github.com/kubernetes-sigs/kubebuilder/issues/1723
WDYT about collaborating with the project by creating this entry and adding it there? The idea would be similar to: https://sdk.operatorframework.io/docs/faqs/
@camilamacedo86 LGTM, it would be useful to add this to an FAQ. But I don't know how to describe this problem accurately and concisely.
@hzyfox, could you try contacting the controller-tools maintainers and asking for help on how best to describe this scenario? Maybe @alvaroaleman can give us a hand here.
I have the same problem, but I don't see the crd:generateEmbeddedObjectMeta option.
Version: v0.4.1

```
+crd[:allowDangerousTypes=<bool>][,crdVersions=<[]string>][,maxDescLen=<int>][,preserveUnknownFields=<bool>][,trivialVersions=<bool>]  package  generates CustomResourceDefinition objects.
```
Tried it; this parameter is available in controller-gen v0.6.2.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle rotten
/close
@k8s-triage-robot: Closing this issue.
Environment
- Kubectl Version
- Kubernetes Version (Kind Cluster)
- Kubebuilder Version
- OS
I use kubebuilder to define my own CRD like below, and it contains a VolumeClaimTemplates field whose type is []coreV1.PersistentVolumeClaim.

But when I apply the CR like the below, I found that the metadata field is empty. Here is the YAML fetched from the k8s etcd; it can be seen that the metadata of the volumeClaimTemplates is empty. Does anyone know why?

And when I mark the VolumeClaimTemplates field with the below comment, metadata can be decoded correctly.

Originally posted by @hzyfox in https://github.com/kubernetes-sigs/kubebuilder/discussions/2459