a-hilaly opened this issue 1 year ago
Our current Helm release testing is quite basic: we use a presubmit prowjob to create a kind cluster, deploy a Helm chart, check whether the controller pod is running, and scan the logs for "ERROR". It's a good start, but not enough. We want to level up our testing for better reliability and to catch bugs before shipping new controller releases.

Ideally we need to add more fine-grained tests when it comes to log scanning. For example, we could look for specific errors like the following (a sketch of such a check follows the list):
- `no matches for kind \"VirtualCluster\" in version \"emrcontainers.services.k8s.aws/v1alpha1\"`
- `cannot get resource "leases" in API group "coordination.k8s.io"`
- `unknown flag: --test`
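A minimal sketch of what such fine-grained scanning could look like, assuming the test fetches controller logs with `kubectl logs` from the kind cluster. The namespace `ack-system` and the label selector are assumptions for illustration, not the actual prowjob setup; the fragments are taken from the examples above:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Known-bad log fragments, taken from the examples in this issue; a real
// suite would maintain such a list per controller.
var errorFragments = []string{
	`no matches for kind \"VirtualCluster\" in version \"emrcontainers.services.k8s.aws/v1alpha1\"`,
	`cannot get resource "leases" in API group "coordination.k8s.io"`,
	`unknown flag: --test`,
}

func main() {
	// Fetch the controller logs from the kind cluster. The namespace and
	// label selector here are assumptions for this sketch.
	out, err := exec.Command("kubectl", "logs",
		"-n", "ack-system",
		"-l", "app.kubernetes.io/name=ack-emrcontainers-controller",
	).CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to fetch controller logs: %v\n%s", err, out)
		os.Exit(1)
	}

	// Fail the test run if any known error fragment appears in the logs,
	// rather than grepping for a bare "ERROR".
	logs := string(out)
	failed := false
	for _, fragment := range errorFragments {
		if strings.Contains(logs, fragment) {
			fmt.Fprintf(os.Stderr, "found known error in logs: %s\n", fragment)
			failed = true
		}
	}
	if failed {
		os.Exit(1)
	}
	fmt.Println("no known error patterns found in controller logs")
}
```

Matching on specific fragments like these would let the presubmit distinguish real regressions (missing CRDs, broken RBAC, bad flags) from benign log lines that merely contain the word "ERROR".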