ameukam opened this issue 3 years ago
@dims where is the original source code for cs.k8s.io? :eyes:
/wg k8s-infra
/sig contributor-experience /priority backlog
/assign @spiffxp cc @mrbobbytables @alisondy @cblecker @munnerz
@nikhita You can find the config here https://github.com/dims/k8s-code.appspot.com/
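For context, cs.k8s.io runs Hound, and the linked repo is essentially the config.json that lists the repositories to index. A minimal sketch of what such a config looks like (illustrative only; the real repo list lives in the repository above, and the values here are examples):

# Rough shape of a Hound config.json (example values, not the real config):
cat > config.json <<'EOF'
{
  "max-concurrent-indexers": 2,
  "dbpath": "data",
  "repos": {
    "kubernetes": { "url": "https://github.com/kubernetes/kubernetes.git" }
  }
}
EOF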
What's the argument against hosting it on AAA?
@BenTheElder nothing other than someone has to do it :) oh, i don't know how to wire the ingress/dns stuff
i tried a long time ago :) https://github.com/kubernetes/k8s.io/pull/96
What's the argument against hosting it on AAA?
I would say a lack of an artifact destined for aaa (aka no up-to-date container image for hound). We could host the image on k8s-staging-infra-tools.
@ameukam should this issue be migrated to the k/k8s.io repo?
@nikhita I'm not sure about the right place for this issue. Just wanted to put this on the SIG ContribEx TLs' and Chairs' radar.
it should be under k/k8s.io imho. I think we should host it on AAA fwiw.
Moving to the k8s.io repo. Slack discussion: https://kubernetes.slack.com/archives/CCK68P2Q2/p1623300972130500
/sig contributor-experience /wg k8s-infra
I took a stab at onboarding codesearch; @spiffxp could I get your input? I want to make sure I didn't miss anything. I want to stage all the infra and get it deployed via prow first. Then we can follow up with another PR to cut over DNS when we're ready.
https://github.com/kubernetes/k8s.io/pull/2513 https://github.com/kubernetes/test-infra/pull/23201
I could also work on adding the docker build logic after, but I haven't worked in that repo yet so I'll have to do some digging.
cc @dims
/priority important-soon /milestone v1.23
What about using https://sourcegraph.com/kubernetes to minimize the maintenance burden here? This is something I suggested to @dims in the past, but I didn't have the bandwidth to pursue it at the time.
choices are:
if i missed any other options, please feel free to chime in.
/unassign
FYI: If choice 2 is picked, my two PRs are pretty much ready to stage codesearch in the aaa cluster. There are a few small things that need to happen after the merge, but that's documented in my PRs.
thanks @jimdaga
+1 to give #2 a shot. will let Aaron and Arnaud review and merge all 3 PRs
/milestone v1.24
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Mark this issue or PR as rotten with /lifecycle rotten
Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@ameukam what is remaining here?
Deploy a canary instance from https://github.com/kubernetes/k8s.io/pull/2513. Once we have confidence with that instance we can rollout a prod instance.
/assign
@nikhita, I'm interested in helping with setting up a canary instance.
Post-merge checklist items from PR https://github.com/kubernetes/k8s.io/pull/2513 that still need work:
cs-fetch-repo docker image (Open PR: https://github.com/kubernetes/test-infra/pull/25576)
Update the deployment to use the hosted docker image (using a temp image for now)

@pmgk07, once https://github.com/kubernetes/test-infra/pull/25576 is merged and the cs-fetch-repos image is hosted under k8s infra, the next step would be updating codesearch/deployment.yaml#L27 to use that hosted image.
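Roughly what that follow-up amounts to (a sketch only: the staged image name and tag below are placeholders, and jdagostino2/codesearch-fetch is assumed to be the current temp image):

# Illustrative only: swap the temporary image reference for the one hosted under k8s infra.
sed -i 's|image: jdagostino2/codesearch-fetch:.*|image: gcr.io/k8s-staging-infra-tools/cs-fetch-repos:<tag>|' \
  codesearch/deployment.yaml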
@Priyankasaggu11929 Let's give @jimdaga the final call about this. There are possible changes that may need to be made to the Docker image.
Now that https://github.com/kubernetes/k8s.io/pull/3492 is merged, I see codesearch is deployed in the cluster!
However, it looks like the init containers are crashing:
kubectl get pods -n codesearch
NAME READY STATUS RESTARTS AGE
codesearch-5b975d449-lgm9b 0/1 Init:CrashLoopBackOff 8 19m
codesearch-5b975d449-zzqkl 0/1 Init:CrashLoopBackOff 8 19m
I'm out of the office right now, so I can't do a full debug. But it does seem like something needs fixing :( (I also don't have access to view pod logs, so not sure how to get that)
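For whoever does have cluster access, the usual way to pull the failing init container's details (pod name taken from the output above; the init container name is a placeholder):

# Show events and which init container is crashing.
kubectl -n codesearch describe pod codesearch-5b975d449-lgm9b
# Grab the logs from the previous (crashed) run of that init container.
kubectl -n codesearch logs codesearch-5b975d449-lgm9b -c <init-container-name> --previous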
Let's give @jimdaga the final call about this. There are possible changes that may need to be made to the Docker image.
+1. Yes 🙂
There's also an error decoding the Ingress in the build logs of the post-k8sio-deploy-app-codesearch job.
I've raised a minor patch fix: https://github.com/kubernetes/k8s.io/pull/3502
You can use GCP Logging console for the logs: https://console.cloud.google.com/logs/query;query=resource.type%3D%22k8s_container%22%0Aresource.labels.namespace_name%3D%22codesearch%22;cursorTimestamp=2022-03-11T06:20:53.646489047Z?project=kubernetes-public.
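Roughly the same query from the CLI, for reference (a sketch assembled from the filter in that console link):

gcloud logging read \
  'resource.type="k8s_container" AND resource.labels.namespace_name="codesearch"' \
  --project=kubernetes-public --limit=50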
I did some quick research based on the logs, and it suggests the issue may be related to the architecture of the Docker image.
skopeo inspect docker://jdagostino2/codesearch-fetch:0.1.7 | jq .Architecture
"arm64"
The image seems to be built for arm64, but the GKE nodes are amd64. We should try switching to gcr.io/k8s-staging-infra-tools and see what happens.
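If the temp image needs rebuilding, one way to verify the architecture and publish an amd64 build (assuming docker buildx is available; the registry and tag below are placeholders):

# Confirm the architecture of the published image.
skopeo inspect docker://jdagostino2/codesearch-fetch:0.1.7 | jq .Architecture
# Rebuild and push an amd64 image instead.
docker buildx build --platform linux/amd64 -t <registry>/cs-fetch-repos:<tag> --push .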
@ameukam I unknowingly added fixes #xyz in my PR which led the k8s-ci-robot to close this issue. Feel free to reopen this issue if there's anything pending.
/reopen
@ameukam: Reopened this issue.
https://cs-canary.k8s.io is now up and running. We should spread the word about its existence and think about when we will flip cs.k8s.io to the aaa GKE cluster.
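A quick sanity check of both endpoints before and after the flip (a simple sketch, not an exhaustive test):

# See where each name currently resolves.
dig +short cs-canary.k8s.io
dig +short cs.k8s.io
# Confirm the canary answers over HTTPS.
curl -sSI https://cs-canary.k8s.io | head -n 1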
sounds like a great plan @ameukam ! we can flip ASAP. i will leave the other one running for a week or two just in case we have a problem
The one last change we need before we flip is a job that restarts the deployment nightly to pick up any changes. The way I set up the deployment, there should be no downtime while the new pods are coming up.
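For reference, the effect of that nightly job is roughly the following (a sketch, not the actual job definition; the deployment and namespace names are assumed from the pod output above):

# Rolling restart so new pods pick up repo/config changes, with no downtime.
kubectl -n codesearch rollout restart deployment/codesearch
# Wait for the new pods to become ready.
kubectl -n codesearch rollout status deployment/codesearch --timeout=15m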
Hopefully have the final two PRs needed to consider this "go-live" ready.
Looking for an LGTM for these two:
/milestone v1.25
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Mark this issue or PR as rotten with /lifecycle rotten
Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@jimdaga do you have cycles to address the review on https://github.com/kubernetes/k8s.io/pull/3679?
/milestone v.126
@ameukam: The provided milestone is not valid for this repository. Milestones in this repository: [v1.24, v1.25, v1.26]
Use /milestone clear to clear the milestone.
/milestone v1.26
Can we explore deprecating this in favour of GitHub Code Search?
https://cs.github.com/ https://github.com/features/code-search
Not really. One issue is that GitHub CS requires auth to use it, while we have anonymous queries using cs.k8s.io (e.g. https://go.k8s.io/owners/dims).
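For example, an anonymous query can be scripted against the search API (assuming cs.k8s.io exposes Hound's usual /api/v1/search endpoint; the endpoint and parameters here are best-effort, not verified):

# Unauthenticated search for "dims" in OWNERS files across all indexed repos.
curl -s 'https://cs.k8s.io/api/v1/search?q=dims&files=OWNERS&repos=*' | jq .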
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue as fresh with /remove-lifecycle stale
Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen
closing this is not helpful, unless we've also shut down the existing infra.
/assign @SohamChakraborty
@ameukam: GitHub didn't allow me to assign the following users: SohamChakraborty.
Note that only kubernetes members with read permissions, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
I think this is now ready for migration from the bare metal server to aaa cluster. I spoke with Arnaud and he will decide on a path for migration.
https://cs.k8s.io is running on a bare metal server provided by Equinix Metal (ex-Packet) under the CNCF budget, operated until now by @dims.
The question was asked whether we should host CodeSearch on the aaa cluster.
Ref: https://kubernetes.slack.com/archives/CCK68P2Q2/p1615204807111900?thread_ts=1615189697.108500&cid=CCK68P2Q2
This issue is open to track the discussion and the consensus about this.