bacongobbler closed this issue 7 years ago
:+1:. Such a library could be modeled after the Go one in Helm: https://github.com/helm/helm/tree/master/kubectl.
Closed by #497
Reopening this since I think we may need to inspect a bit more closely what we send to k8s. Right now we use a very basic mock, but in many cases it would be better to have something that at least gives us a decent idea of what we are applying based on various configs.
Thoughts on how we should go about this?
@helgi In the past few sprints I know you've worked on these mocks. Are they sufficient to close this issue now?
I haven't been happy enough with those. They prove certain interactions happen but offer little introspection into the built payload, etc. When we do more of that, I'd consider this done.
I updated the core description with some additional information.
One thing to note: functions like `manifest` in each resource are not tested directly; instead we rely on `create` and `update` to call them with the right arguments. As such there is no introspection at that level on what is being sent. Comparing against the manifest directly could be a good addition overall, since that's what is actually sent to Kubernetes. Thoughts?
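As a rough sketch of the introspection idea above: a test double for the scheduler's HTTP session could record every payload the client "sends", so tests can assert on the built manifest directly instead of only verifying that an interaction occurred. All names here (`RecordingSession`, `create_deployment`) are illustrative, not the scheduler's actual API:

```python
import json
from unittest import mock

class RecordingSession:
    """Hypothetical stand-in for the scheduler's HTTP session.

    Records every payload so tests can introspect what would have
    been sent to the Kubernetes API server.
    """

    def __init__(self):
        self.requests = []  # list of (method, path, payload) tuples

    def post(self, path, data=None):
        payload = json.loads(data) if isinstance(data, str) else data
        self.requests.append(('POST', path, payload))
        return mock.Mock(status_code=201)


def create_deployment(session, name, image):
    """Illustrative client call that builds and sends a manifest."""
    manifest = {
        'kind': 'Deployment',
        'metadata': {'name': name},
        'spec': {'template': {'spec': {'containers': [{'image': image}]}}},
    }
    session.post('/apis/apps/v1/namespaces/default/deployments',
                 data=json.dumps(manifest))
    return manifest


# A test can now compare the recorded payload against expectations:
session = RecordingSession()
create_deployment(session, 'web', 'nginx:1.25')
method, path, payload = session.requests[0]
assert payload['metadata']['name'] == 'web'
assert payload['kind'] == 'Deployment'
```

The point of the design is that assertions run against the serialized payload (the thing Kubernetes would actually receive), not against internal helper calls.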
Going to close this. We have very good coverage, mostly 70% and higher. In many cases 85% will be hard to achieve because the mocking is not smart enough to create delays etc. to exercise some of the wait code.
Having some library to inspect the requests sent to Kubernetes would be helpful for writing unit tests for new features in the Kubernetes client. That way we aren't spending all of our time manually testing these features.
[added by @helgi]: Minimum coverage needs to be 85% per file / class in the scheduler to be considered good coverage
The following need work:
__init__.py
Deployment
HPA
Pod
RC
https://codecov.io/gh/deis/controller/tree/master/rootfs/scheduler has the most up to date information
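On the 85%-per-file requirement: coverage.py's `--fail-under` flag only checks the total, so a per-file gate needs a small script. One hedged sketch, assuming the JSON report shape produced by `coverage json` (coverage.py 5+); the helper name and paths are illustrative:

```python
# Enforce a per-file minimum using the structure of `coverage json`
# output, which maps file paths to per-file summaries.
def files_below_threshold(report, minimum=85.0):
    """Return the file paths whose coverage is below `minimum` percent."""
    return sorted(
        path
        for path, data in report.get('files', {}).items()
        if data['summary']['percent_covered'] < minimum
    )


# Example with an inline report shaped like coverage.json output:
sample = {
    'files': {
        'scheduler/__init__.py': {'summary': {'percent_covered': 72.0}},
        'scheduler/resources/pod.py': {'summary': {'percent_covered': 91.5}},
    }
}
assert files_below_threshold(sample) == ['scheduler/__init__.py']
```

In CI this could load the real `coverage.json` and fail the build when the returned list is non-empty.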