Closed phlogistonjohn closed 1 year ago
Right. These tests probably don't make any sense on a single node setup. I need to think about how to handle that case.
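One way to handle that case is to skip the scheduling suites when the cluster has fewer than two nodes, since there is nothing to select between. A minimal sketch of that guard (the function name and shape are illustrative, not the operator's actual test code):

```go
package main

import "fmt"

// shouldSkipSchedulingTests reports whether the node-selector/affinity
// suites should be skipped. On a single-node setup every pod lands on
// the same node, so the tests cannot meaningfully pass or fail.
func shouldSkipSchedulingTests(nodeCount int) bool {
	return nodeCount < 2
}

func main() {
	fmt.Println(shouldSkipSchedulingTests(1)) // true: skip on single-node minikube
	fmt.Println(shouldSkipSchedulingTests(3)) // false: run on a 3-node cluster
}
```

In a real test this check would wrap `t.Skipf(...)` after querying the node list from the cluster.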
Erm.
--- PASS: TestIntegration/groupedShares/clustered/TestPodsReady (0.02s)
--- PASS: TestIntegration/scheduling (51.03s)
--- PASS: TestIntegration/scheduling/NodeSelectorSuite (24.09s)
--- PASS: TestIntegration/scheduling/NodeSelectorSuite/TestPodsRunOnLabeledNode (23.93s)
--- PASS: TestIntegration/scheduling/AffinityBasedSelectorSuite (26.94s)
--- PASS: TestIntegration/scheduling/AffinityBasedSelectorSuite/TestPodsRunOnLabeledNode (26.75s)
PASS
ok github.com/samba-in-kubernetes/samba-operator/tests/integration 1607.647s
yq not found in PATH, checking /root/samba-operator/.bin
controller-gen not found in PATH, checking /root/samba-operator/.bin
/root/samba-operator/.bin/controller-gen "crd:trivialVersions=true,crdVersions=v1" rbac:roleName=manager-role webhook \
paths="./..." output:crd:artifacts:config=config/crd/bases
YQ=/root/samba-operator/.bin/yq /root/samba-operator/hack/yq-fixup-yamls.sh /root/samba-operator/config
kustomize not found in PATH, checking /root/samba-operator/.bin
/root/samba-operator/.bin/kustomize build config/default | kubectl delete -f -
namespace "samba-operator-system" deleted
customresourcedefinition.apiextensions.k8s.io "smbcommonconfigs.samba-operator.samba.org" deleted
customresourcedefinition.apiextensions.k8s.io "smbsecurityconfigs.samba-operator.samba.org" deleted
customresourcedefinition.apiextensions.k8s.io "smbshares.samba-operator.samba.org" deleted
role.rbac.authorization.k8s.io "samba-operator-leader-election-role" deleted
clusterrole.rbac.authorization.k8s.io "samba-operator-manager-role" deleted
clusterrole.rbac.authorization.k8s.io "samba-operator-metrics-reader" deleted
clusterrole.rbac.authorization.k8s.io "samba-operator-proxy-role" deleted
rolebinding.rbac.authorization.k8s.io "samba-operator-leader-election-rolebinding" deleted
clusterrolebinding.rbac.authorization.k8s.io "samba-operator-manager-rolebinding" deleted
clusterrolebinding.rbac.authorization.k8s.io "samba-operator-proxy-rolebinding" deleted
configmap "samba-operator-controller-cfg" deleted
service "samba-operator-controller-manager-metrics-service" deleted
deployment.apps "samba-operator-controller-manager" deleted
* Deleting "minikube" in kvm2 ...
* Deleting "minikube-m02" in kvm2 ...
* Deleting "minikube-m03" in kvm2 ...
* Removed all traces of the "minikube" cluster.
time="2023-03-02T22:35:46Z" level=fatal msg="Unable to delete registry-samba.apps.ocp.cloud.ci.centos.org/sink/samba-operator:ci-k8s-1.26-pr291. Image may not exist or is not stored with a v2 Schema in a v2 registry"
Our tests passed, but the minikube delete command must have failed?
I'm marking this as ready for review and kicking off another CI run, but if this keeps happening I'll ask reviewers to focus on the actual Go test status rather than the overall CI state.
/test centos-ci/sink-clustered/mini-k8s-1.26
Erm.

* Deleting "minikube" in kvm2 ...
* Deleting "minikube-m02" in kvm2 ...
* Deleting "minikube-m03" in kvm2 ...
* Removed all traces of the "minikube" cluster.
time="2023-03-02T22:35:46Z" level=fatal msg="Unable to delete registry-samba.apps.ocp.cloud.ci.centos.org/sink/samba-operator:ci-k8s-1.26-pr291. Image may not exist or is not stored with a v2 Schema in a v2 registry"

Our tests passed, but the minikube delete command must have failed?
I think it's the following skopeo command from the job script that failed in the above scenario:
skopeo delete "docker://${CI_IMG_OP}"
I can see that the job was triggered twice within a span of 20 minutes. Image tags are differentiated by PR number alone, so the first run completed successfully and the second run then failed to find the image for deletion afterwards.
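One way to harden the cleanup step would be to treat a missing image as "already cleaned up" instead of failing the whole job. A sketch of that idea (the `CI_IMG_OP` value here is a placeholder host for illustration; the real job sets it to the PR-specific tag):

```shell
#!/bin/sh
# Placeholder image reference; the CI job script derives the real value.
CI_IMG_OP="registry.invalid/sink/samba-operator:ci-k8s-1.26-pr291"

# Only attempt the delete if the image is still present, so a second
# concurrent or back-to-back run does not fail on the missing tag.
if skopeo inspect "docker://${CI_IMG_OP}" >/dev/null 2>&1; then
    skopeo delete "docker://${CI_IMG_OP}"
else
    echo "image ${CI_IMG_OP} not found, skipping delete"
fi
```

Alternatively, including a unique run/build ID in the tag (not just the PR number) would stop two runs of the same PR from racing over one image.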
@mergifyio rebase
@mergifyio rebase
WHY did it dismiss the existing review? There were no conflicts and the rebase was done via mergify. Errgh. @anoopcs9 can you please take another look? Thanks!
Fixes: #283
The new podSettings key allows for control of certain parameters the operator cannot or should not guess at. Currently, this allows one to control scheduling via the nodeSelector and affinity sections under podSettings in SmbCommonConfig.
It's located in SmbCommonConfig as SmbShare is really supposed to be about the share and less about the server. It's similar to how configuration of network integration is done by SmbCommonConfig. It's certainly not for SmbSecurityConfig :-)
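A hypothetical example of what such a resource might look like (the podSettings, nodeSelector, and affinity field names come from the description above; the apiVersion, metadata, and label values are assumptions for illustration):

```yaml
apiVersion: samba-operator.samba.org/v1alpha1
kind: SmbCommonConfig
metadata:
  name: example-config
spec:
  podSettings:
    # Standard Kubernetes pod scheduling controls, passed through
    # to the server pods the operator creates.
    nodeSelector:
      mycluster.example.com/smb-capable: "true"
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - node-1
                    - node-2
```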
I also added some extra labels to the smb pods so that they can be quickly backtracked to the SmbCommonConfig and SmbSecurityConfig that were used to generate them. These labels helped me write the test cases.