kubernetes / k8s.io

Code and configuration to manage Kubernetes project infrastructure, including various *.k8s.io sites
https://git.k8s.io/community/sig-k8s-infra
Apache License 2.0

Migrate away from google.com gcp project k8s-authenticated-test #1459

Open spiffxp opened 3 years ago

spiffxp commented 3 years ago

Part of umbrella issue to migrate away from google.com gcp projects: https://github.com/kubernetes/k8s.io/issues/1469

Part of umbrella to migrate kubernetes e2e test images/registries to community-owned infrastructure: https://github.com/kubernetes/k8s.io/issues/1458

The registry is used by the following kubernetes e2e tests:

The k8s-authenticated-test project was accidentally deleted earlier today, which has caused these tests to fail (ref: https://github.com/kubernetes/kubernetes/issues/97002#issuecomment-737435131)

We should:

spiffxp commented 3 years ago

/wg k8s-infra
/area artifacts
/sig testing
/sig release
/area release-eng

spiffxp commented 3 years ago

For reference, here's the output of `gsutil iam get gs://artifacts.k8s-authenticated-test.appspot.com/`:

```json
{
  "bindings": [
    {
      "members": [
        "projectEditor:k8s-authenticated-test",
        "projectOwner:k8s-authenticated-test"
      ],
      "role": "roles/storage.legacyBucketOwner"
    },
    {
      "members": [
        "allAuthenticatedUsers",
        "projectViewer:k8s-authenticated-test"
      ],
      "role": "roles/storage.legacyBucketReader"
    }
  ],
  "etag": "CAk="
}
```

To keep the behavior of these tests as-is using a new registry, the key part is granting `allAuthenticatedUsers` instead of `allUsers`.
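That distinction can be checked mechanically from the `gsutil iam get` output above; a small sketch (the policy JSON is the one shown earlier, and `reader_members` is just an illustrative helper):

```python
import json

# The bucket IAM policy as returned by `gsutil iam get` (copied from above).
policy = json.loads("""
{
  "bindings": [
    {"members": ["projectEditor:k8s-authenticated-test",
                 "projectOwner:k8s-authenticated-test"],
     "role": "roles/storage.legacyBucketOwner"},
    {"members": ["allAuthenticatedUsers",
                 "projectViewer:k8s-authenticated-test"],
     "role": "roles/storage.legacyBucketReader"}
  ],
  "etag": "CAk="
}
""")

def reader_members(policy):
    """Collect all members holding the legacy bucket reader role."""
    return {m for b in policy["bindings"]
            if b["role"] == "roles/storage.legacyBucketReader"
            for m in b["members"]}

members = reader_members(policy)
# Authenticated users may read, but anonymous (allUsers) access is absent:
assert "allAuthenticatedUsers" in members
assert "allUsers" not in members
```

Any replacement bucket would need to reproduce exactly this shape: reader access for `allAuthenticatedUsers` without granting `allUsers`.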

That said, I question whether we should keep these tests at all, ref: https://github.com/kubernetes/kubernetes/issues/97026#issuecomment-738500525

spiffxp commented 3 years ago

/milestone v1.21
/sig apps
/sig node
I think this is a more appropriate test owner for this functionality

pacoxu commented 3 years ago

/cc

spiffxp commented 3 years ago

/milestone v1.22

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten

spiffxp commented 3 years ago

/remove-lifecycle rotten
/milestone v1.23
Push to close https://github.com/kubernetes/k8s.io/issues/1458 for v1.23

spiffxp commented 2 years ago

/milestone v1.24

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

ameukam commented 2 years ago

/remove-lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

ameukam commented 2 years ago

/remove-lifecycle stale
/lifecycle frozen

BenTheElder commented 6 months ago

One possible alternative is discussed at https://github.com/kubernetes/kubernetes/issues/113925#issuecomment-1536115193

This project is at risk in the near future, and GCR is deprecated and shutting down within a year anyhow.

Raised in #sig-node today.

BenTheElder commented 1 month ago

Copying from https://github.com/kubernetes/kubernetes/issues/113925#issuecomment-2304834317

This internal GCR will be shut down early (October?) rather than waiting for the normal public end-user GCR turndown timeline, unless a Googler intervenes (internal bug: b/355704184).

It does not seem like we have sufficient interest in this test to bother continuing to deal with this problematic infrastructure; I'm somewhat inclined to preemptively shut it down now and move on.

We should never have been depending on a hardcoded service account key for a public authenticated endpoint in the test binaries; that was never a sustainable solution.
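For comparison, GCR (and Artifact Registry) support short-lived token authentication instead of baked-in keys. A hedged sketch of that standard pattern, not what the tests currently do; it assumes an ambient identity is already available (workload identity, `gcloud auth login`, etc.):

```shell
# Sketch: log Docker in to GCR with a short-lived OAuth access token
# rather than a long-lived, hardcoded service account key.
# The token expires on its own, so nothing durable is embedded in binaries.
gcloud auth print-access-token \
  | docker login -u oauth2accesstoken --password-stdin https://gcr.io
```

Any community-owned replacement registry would want a credential flow along these lines rather than a static key checked into the test images.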