Closed glindstedt closed 6 years ago
Hi @glindstedt. Thanks for your PR.
I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
I understand the commands that are listed here.
cc @nikhiljindal @csbell @bowei
/ok-to-test
Hey, thanks a lot for the PR @glindstedt. Appreciate the contribution.
Regarding code duplication with `ingress-gce`, we are working with @bowei to reduce it. We have existing copied code here and want to get rid of it as soon as possible, but we realize that requires refactoring in `ingress-gce` and then revendoring it here.
I am curious: how did you find this problem? Are you using kubemci?
@nikhiljindal I spent last week researching how to do multi-cluster ingress. Kubernetes Federation did not fit our use case at all: we want individual control over the clusters, and it doesn't mesh well with how we deploy things. I think I was digging around in the `ingress-gce` source code and found the `gce-multi-cluster` annotation, which led me to kubemci, which seems to do exactly what we want.
Getting the health check path from the probe was pretty much the only missing feature before I could test how kubemci worked (neither our service nor the default backend returns 200 on `/`), so I figured I'd give it a go.
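For context, the behaviour this PR adds can be sketched as: derive the backend health check path from a container's HTTP readiness probe, falling back to `/` when none is set. This is a simplified, self-contained illustration; the struct names are hypothetical stand-ins for the `corev1` types, not the actual kubemci code:

```go
package main

import "fmt"

// Simplified stand-ins for the corev1 probe types (for illustration only).
type HTTPGetAction struct {
	Path string
}

type Probe struct {
	HTTPGet *HTTPGetAction // nil when the probe is not an HTTP probe
}

type Container struct {
	ReadinessProbe *Probe
}

// healthCheckPath returns the path of the container's HTTP readiness
// probe, defaulting to "/" when no usable HTTP probe is configured.
func healthCheckPath(c Container) string {
	if c.ReadinessProbe != nil && c.ReadinessProbe.HTTPGet != nil && c.ReadinessProbe.HTTPGet.Path != "" {
		return c.ReadinessProbe.HTTPGet.Path
	}
	return "/" // default when no HTTP probe path is found
}

func main() {
	withProbe := Container{ReadinessProbe: &Probe{HTTPGet: &HTTPGetAction{Path: "/healthz"}}}
	withoutProbe := Container{}
	fmt.Println(healthCheckPath(withProbe))    // /healthz
	fmt.Println(healthCheckPath(withoutProbe)) // /
}
```

Without the fallback-aware lookup, a backend whose service only returns 200 on a dedicated probe path (rather than `/`) would fail the load balancer's health checks.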
That's great @glindstedt! Are you going to use this with your GCP clusters?
We'd like to include you (and others) in our early access program if possible. Could you sign up for the EAP using this form? https://goo.gl/forms/RHg032DrRnnqF0by1. We would love your feedback as you try out kubemci, and to know what works and what doesn't. cc @mdelio
Thanks a lot for the feedback! :)
Yes, if no major blocker comes up we'll probably use kubemci to set up the LB for our clusters; the alternative currently is managing the LB manually, which we'd like to avoid. I've signed up for the EAP.
Thanks for the changes @glindstedt! This looks great. I have a few comments, mostly minor, about being explicit about which code is copied and about better error messages.
We should be good to merge once we resolve them. Thanks!
@nikhiljindal I think I'm done now; please have a look at whether I addressed all your comments and whether there's anything else I can fix. Thank you for the excellent review comments. Also, would you like me to squash the commits before merging?
I saw you'd fixed `vet` and `fmt` on master, so I rebased and made sure they pass for this branch as well.
Yes, please squash the commits. You can also remove WIP from the PR title and commit message :)
This PR is not ready, but I wanted to open it because I would like some feedback on the approach before continuing. I tried reusing as much as possible from `ingress-gce`; however, I chose to copy `GetProbe` and `getHTTPProbe` from `ingress-gce/controller/utils.go`. This is because the `GCETranslator` that was the receiver needed a `LoadBalancerController` to initialize, which seemed like unnecessary bloat. One possible solution would be to refactor the code in `ingress-gce` to make it easier to reuse, so we could avoid the code duplication. I haven't bothered with the tests yet since I'm not sure if this is the way you'd want to implement this.

I tested the implementation manually and it works ~~with one exception: sometimes it seems like even though the cached informer has been synced, it fails to get the correct path and defaults to `/`. I'm not yet sure if there's a race condition in the syncing or if the error is in this PR.~~ EDIT: found the issue, I used the wrong `HasSynced` method when checking whether the pod informer had synced.
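The bug described in the edit above (checking the wrong informer's `HasSynced`) can be illustrated with a minimal, self-contained sketch. The names here are hypothetical stand-ins for client-go's `cache.InformerSynced` and `cache.WaitForCacheSync`, assuming the controller waits for cache sync before reading pods:

```go
package main

import "fmt"

// syncChecker mimics client-go's cache.InformerSynced: a function that
// reports whether an informer's local cache has been populated.
type syncChecker func() bool

// cachesSynced reports whether every supplied cache is synced (a
// simplified, non-blocking stand-in for cache.WaitForCacheSync).
func cachesSynced(checkers ...syncChecker) bool {
	for _, synced := range checkers {
		if !synced() {
			return false
		}
	}
	return true
}

func main() {
	serviceInformerSynced := func() bool { return true } // service cache already synced
	podInformerSynced := func() bool { return false }    // pod cache still filling

	// Bug pattern: only the service informer is checked, so the controller
	// proceeds, reads an empty pod cache, and defaults the path to "/".
	fmt.Println(cachesSynced(serviceInformerSynced)) // true (looks ready, but isn't)

	// Fix: include the pod informer's HasSynced in the wait.
	fmt.Println(cachesSynced(serviceInformerSynced, podInformerSynced)) // false
}
```

The symptom matches what was observed: the check passes, but the pod lookup misses and the code falls back to the default `/` path.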