Closed. nicholasjng closed this PR 2 months ago.
Attention: Patch coverage is 75.00000% with 2 lines in your changes missing coverage. Please review.
Project coverage is 56.42%. Comparing base (af9a3cd) to head (4accad5). Report is 3 commits behind head on main.
:white_check_mark: All tests successful. No failed tests found.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| client/src/cli/commands/list.py | 0.00% | 2 Missing :warning: |
Summary: I list all pods for a workload; any failed pod associated with it will have `status.phase == 'Failed'` set on it.
Technically, this is portable across job types, in the sense that any failed pod associated with a cluster resource should trigger an alarm (if, for example, the kuberay operator were to fail, we would get a notification here as well).
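For reference, a minimal sketch of that check using the official kubernetes Python client; the label selector, function name, and namespace handling are placeholders for illustration, not this project's actual lookup logic:

```python
from kubernetes import client, config


def has_failed_pods(namespace: str, workload_name: str) -> bool:
    """Return True if any pod belonging to the workload is in phase 'Failed'."""
    # Needs a reachable k8s API server (hence the mock in CI).
    config.load_kube_config()
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(
        namespace=namespace,
        label_selector=f"app={workload_name}",  # hypothetical selector
    )
    return any(pod.status.phase == "Failed" for pod in pods.items)
```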
This requires a working connection to the k8s API server, which makes it fail in CI if not mocked away. (I'm not sure why that is, but maybe you can chime in here.)
I'm not surprised it needs a mock, since the endpoint accesses the `pods` property of the workload, which calls out to the k8s API. I guess one way is to mock the `has_failed_pods` property as you did, or you could reach for mocking the `pods` property, so that derived attributes can be computed from it.
In the long run, we would benefit from introducing a few test fixtures that produce mock workloads, rather than repeating that logic across the individual tests.
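Something along these lines, as a rough sketch assuming pytest and unittest.mock; the `myproject.workload.Workload` import path and the `make_pod` helper are made up for illustration:

```python
from types import SimpleNamespace
from unittest.mock import PropertyMock, patch

import pytest


def make_pod(phase: str) -> SimpleNamespace:
    """Minimal stand-in for a k8s pod object, carrying only status.phase."""
    return SimpleNamespace(status=SimpleNamespace(phase=phase))


@pytest.fixture
def failed_workload():
    """Patch the (hypothetical) Workload.pods property to yield one failed pod."""
    with patch(
        "myproject.workload.Workload.pods",  # placeholder import path
        new_callable=PropertyMock,
        return_value=[make_pod("Running"), make_pod("Failed")],
    ):
        yield


def test_list_warns_on_failed_pods(failed_workload):
    # Exercise the list command here; derived attributes such as
    # has_failed_pods are then computed from the mocked pods property.
    ...
```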
As per title. We only warn if the job hasn't already failed, in which case you can inspect the job directly to debug.
Addresses the final point of #86, at least for Kueue jobs.
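A small sketch of that conditional, with placeholder job/workload attributes and a plain logging warning standing in for the actual notification path:

```python
import logging

logger = logging.getLogger(__name__)


def maybe_warn(job, workload) -> None:
    """Warn about failed pods only when the job itself has not already failed."""
    if getattr(job, "status", None) == "Failed":
        # The job already failed; inspect the job directly to debug instead.
        return
    if workload.has_failed_pods:
        logger.warning("Workload %s has failed pods attached to it.", workload.name)
```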