After discussing with @fabianvf, it sounds like the problem is that `force: true` causes a PUT request. If a PUT comes through without an ownerReference present, the owner reference is removed. The proxy-cache needs to detect when PUT requests would result in a removed owner reference and add it back.
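For context, this is roughly what the proxy-injected metadata looks like on an owned resource (a sketch; the `apiVersion`, `kind`, `name`, and `uid` below are placeholders for whatever CR owns the resource):

```yaml
# Sketch of the ownerReference the proxy normally injects.
# All values here are placeholders for the owning CR.
metadata:
  ownerReferences:
    - apiVersion: cache.example.com/v1alpha1
      kind: MyApp
      name: myapp-sample
      uid: 3b1f6a2e-0000-0000-0000-000000000000
      controller: true
      blockOwnerDeletion: true
```

This is the block that a `force`/PUT update silently drops if the submitted manifest doesn't carry it.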
yeah. For the time being, if using `force`/PUT, you'll need to manually add the ownerReference to your resource to ensure it's maintained.
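A minimal sketch of what that can look like, assuming a hypothetical CR kind `MyApp` in the group `cache.example.com/v1alpha1` (the extra lookup is only needed to get the CR's `uid`; `deployment_spec` is a placeholder for your actual spec):

```yaml
- name: Fetch the owning CR so its uid can be used in the ownerReference
  community.kubernetes.k8s_info:
    api_version: cache.example.com/v1alpha1   # hypothetical CR group/version
    kind: MyApp                               # hypothetical CR kind
    name: "{{ ansible_operator_meta.name }}"
    namespace: "{{ ansible_operator_meta.namespace }}"
  register: owning_cr

- name: Create the deployment with an explicit ownerReference
  community.kubernetes.k8s:
    state: present
    force: yes
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: "{{ base_name }}"
        namespace: "{{ ansible_operator_meta.namespace }}"
        ownerReferences:
          - apiVersion: cache.example.com/v1alpha1   # hypothetical
            kind: MyApp                              # hypothetical
            name: "{{ ansible_operator_meta.name }}"
            uid: "{{ owning_cr.resources[0].metadata.uid }}"
            controller: true
      spec: "{{ deployment_spec }}"   # placeholder for the real spec
```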
Came across the same problem and worked around it following @fabianvf's suggestion; listing the changes here for those who are interested:
- name: "Get the existing deployments ownerReferences if they exist"
community.kubernetes.k8s_info:
api_version: apps/v1
kind: Deployment
name: "{{ base_name }}"
namespace: "{{ ansible_operator_meta.namespace }}"
register: existing_deployment
- name: "Update deployment"
community.kubernetes.k8s:
state: "{{ base_state }}"
definition: "{{ lookup('template', './deployment.yaml') | from_yaml }}"
force: yes
while the Deployment template `deployment.yaml` contains:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ base_name }}"
  namespace: "{{ ansible_operator_meta.namespace }}"
  ownerReferences: {{ existing_deployment.resources[0].metadata.ownerReferences | default(omit) }}
  ...
spec:
  ...
```
Nevertheless, this method doesn't seem to work well with the reconciliation logic of the controller. It seems that a reconciliation is always triggered after a force-update of a resource, which results in a reconciliation loop. Does anyone have a suggestion? Is there anything I should consider, maybe also passing in `resourceVersion` or `generation`? And how does this behave when there really is an update: would it auto-detect a new generation based on the contents of `spec`, or is the force-update generally not considered when updating Deployments?
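One way to sidestep the loop entirely, if `force` isn't strictly required for your use case, is to go back to `apply`. Apply updates the live object via a patch rather than a full PUT replacement, so fields the proxy injected (like the ownerReference) are left in place. A sketch under that assumption:

```yaml
# Sketch: apply patches the existing object instead of replacing it,
# so the proxy-injected ownerReference survives and no manual
# ownerReference bookkeeping (or force/PUT) is needed.
- name: Update deployment without force
  community.kubernetes.k8s:
    state: present
    apply: yes
    definition: "{{ lookup('template', './deployment.yaml') | from_yaml }}"
```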
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting `/lifecycle frozen`.
If this issue is safe to close now please do so with `/close`.
/lifecycle stale

/lifecycle frozen
Bug Report
What did you do?
When attempting to use the k8s module to deploy objects using operator-sdk, I tried to change the `apply: true` parameter into `force: true`. When using force, the objects are all created without the CR owner. As a result, they are not deleted when the CR is deleted.
What did you expect to see?
A correct owner reference to the CR.
What did you see instead? Under which circumstances?
Objects without an owner (including StatefulSets, Deployments, ...).
Environment
Operator type:
/language ansible
Kubernetes cluster type:
OpenShift 4.5
operator-sdk version: v0.17.1
kubernetes version: v1.19
Additional context
Might be related to the fact that `force` uses the wanted YAML as-is, without the owner-reference injection.
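A quick way to confirm this from a playbook (a sketch; `base_name` as used in the workaround above) is to read the object back after the forced update and check whether `ownerReferences` survived:

```yaml
- name: Read the deployment back after the forced update
  community.kubernetes.k8s_info:
    api_version: apps/v1
    kind: Deployment
    name: "{{ base_name }}"
    namespace: "{{ ansible_operator_meta.namespace }}"
  register: dep

- name: Fail if the owner reference was stripped
  ansible.builtin.assert:
    that:
      - dep.resources[0].metadata.ownerReferences | default([]) | length > 0
    fail_msg: "ownerReferences were removed by the force/PUT update"
```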