bpradipt closed this 2 months ago
I think you missed:
- https://github.com/bpradipt/cloud-api-adaptor/blob/3293613ca799154180f1595e5e788b8be173e7fd/src/cloud-api-adaptor/podvm/README.md?plain=1#L60 (though when my podvm version removal PR merges that will be deleted)
Were you planning to follow up with a separate PR to bump:
- https://github.com/bpradipt/cloud-api-adaptor/blob/3293613ca799154180f1595e5e788b8be173e7fd/src/cloud-api-adaptor/docs/addnewprovider.md?plain=1#L211
- https://github.com/bpradipt/cloud-api-adaptor/blob/3293613ca799154180f1595e5e788b8be173e7fd/src/peerpod-ctrl/Dockerfile#L2

If so, I think it makes sense to move the change to https://github.com/bpradipt/cloud-api-adaptor/blob/3293613ca799154180f1595e5e788b8be173e7fd/src/cloud-api-adaptor/Dockerfile#L2 in that PR as well?
I missed the READMEs. Thanks for the pointers. Will update it.
@stevenhorsman should I move the changes dependent on the golang fedora image version into a separate PR?
We've had two different approaches for this:
1. Bump the fedora golang image in a separate PR first and rebase this PR once that merges.
2. Include the image bump in this PR alongside the other changes.

I think, given this is a major golang bump, maybe approach 1 is safer this time?
Created a separate PR to update the fedora golang image: https://github.com/confidential-containers/cloud-api-adaptor/pull/2031. Once #2031 is merged, I will update this PR.
https://github.com/confidential-containers/cloud-api-adaptor/actions/runs/10771064012 shows that the image has been built now.
The e2e tests are timing out. I do see the pod creation tests executing successfully, so maybe the timeout for the e2e tests could be increased. @stevenhorsman @mkulke should I merge this PR?
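As a rough sketch of one way to make that timeout easier to raise, the snippet below reads the e2e wait timeout from an environment variable instead of a hard-coded value. The variable name `E2E_WAIT_TIMEOUT`, the package name, and the default are assumptions for illustration, not the project's actual configuration.

```go
// Minimal sketch, not the project's actual e2e code: make the wait timeout
// configurable via an assumed E2E_WAIT_TIMEOUT environment variable so CI
// could raise it without a code change.
package e2e

import (
	"os"
	"time"
)

// waitTimeout returns how long e2e helpers should wait for resources,
// falling back to a generous default when the variable is unset or invalid.
func waitTimeout() time.Duration {
	if v := os.Getenv("E2E_WAIT_TIMEOUT"); v != "" {
		if d, err := time.ParseDuration(v); err == nil {
			return d
		}
	}
	return 15 * time.Minute // assumed default
}
```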
I think that's fine. I was going to say that the daily e2e runs haven't been working, but this morning's passed. I think we should merge it, and then I'll try to take a look at the test failures when I get a chance. From an initial glance it looks like TestLibvirtCreateNginxDeployment is flaky, and when it fails we are likely to trigger the timeout, but I'll try to get some more concrete info on this. Maybe that test is a candidate to only run nightly and not on PRs?
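As a minimal sketch of how that gating could look, the test could skip itself unless a nightly-only environment variable is set. The variable name `RUN_NIGHTLY_TESTS` is hypothetical and the nightly workflow would need to export it, so this is one possible approach rather than the project's existing mechanism; build tags or the `-short` flag would be alternatives.

```go
// Minimal sketch, assuming a hypothetical RUN_NIGHTLY_TESTS environment
// variable that only the nightly workflow sets; not the project's existing
// gating mechanism.
package e2e

import (
	"os"
	"testing"
)

func TestLibvirtCreateNginxDeployment(t *testing.T) {
	if os.Getenv("RUN_NIGHTLY_TESTS") == "" {
		t.Skip("flaky deployment test; skipped outside nightly runs")
	}
	// ... the existing deployment test body would stay unchanged ...
}
```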
I'll also take a look at this test. Let's discuss this in our next community interlock. Probably by that time we'll have more insights on the flakiness.
Fixes Vulnerability #1: GO-2024-3106