kubernetes-sigs / kustomize

Customization of kubernetes YAML configurations
Apache License 2.0
10.77k stars · 2.22k forks

Produce darwin/arm64 binaries for v3 #4612

Closed camilamacedo86 closed 1 year ago

camilamacedo86 commented 2 years ago

Is your feature request related to a problem? Please describe. We cannot upgrade the stable go/v3 plugin in kubebuilder to use kustomize v4, but we would still like to provide darwin/arm64 support for it. We are therefore asking for v3 binaries for this architecture.

Describe the solution you'd like

Be able to use the install.sh script to also install darwin/arm64 binaries for v3
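For illustration, a minimal sketch of what the install script ultimately has to resolve: the release-asset URL for a given version/OS/arch triple. The `kustomize_url` helper and the asset-name pattern below are assumptions based on the naming of recent release pages, not code taken from the actual install.sh:

```shell
# Hypothetical helper mirroring what an install script must compute:
# the GitHub release-asset URL for a given kustomize version, OS, and
# architecture. The naming pattern is an assumption based on recent
# release pages, not code from install.sh itself.
kustomize_url() {
  local version="$1" os="$2" arch="$3"
  echo "https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2F${version}/kustomize_${version}_${os}_${arch}.tar.gz"
}

# The asset this issue asks for:
kustomize_url v3.10.0 darwin arm64
```

The request in this issue amounts to making that darwin/arm64 asset exist for v3 tags, so the script's platform detection can succeed on Apple Silicon.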

k8s-ci-robot commented 2 years ago

@camilamacedo86: This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
KnVerey commented 2 years ago

Can you please provide more information on why you cannot upgrade to v4, and whether or not this is permanent? v4 is more than a year old now, and we do not currently have a long-term support policy for v3.

camilamacedo86 commented 2 years ago

Hi @KnVerey,

From v3 to v4 there is a MAJOR version bump, which means breaking changes. (That said, I know you tried to make it as backwards compatible as possible.)

For Kubebuilder we have stable plugins that use v3; we will provide a new alpha version that uses v4 so that people can begin to upgrade and experiment with it as we adopt v4 and its new features.

But we do not wish to remove support for the stable plugins (which scaffold using kustomize v3) right now, and we would like users running on Apple Silicon to still be able to use the scaffolds done with v3 and the stable versions.

So, to do that, we would like to have v3 binaries for this architecture. Since producing the binaries for this architecture is not a huge effort, it would be lovely if you could accept this request. I am opening a PR for that.

camilamacedo86 commented 2 years ago

Hi @KnVerey,

I was looking into it and found the code from the latest commit used to build the latest v3 release: https://github.com/kubernetes-sigs/kustomize/blob/602ad8aa98e2e17f6c9119e027a09757e63c8bec/releasing/cloudbuild.sh#L97-L98.

So shouldn't this asset be generated already? Why don't we have it on the release page? https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize%2Fv3.10.0

Would that only be adding a new Cloud Build trigger?

OR

The problem here is that the latest releases were not generated by pushing a new tag to the repo: note that the latest v3 tag is v3.3.1 (https://github.com/kubernetes-sigs/kustomize/blob/v3.3.1/releasing/cloudbuild.sh), which does not include the changes to cloudbuild.sh.

KnVerey commented 2 years ago

I think the reason it doesn't already exist is that go-releaser itself didn't support that architecture at the time of the release in question. darwin/arm64 support first appeared in v0.156.0, and the last v3 release used v0.155.0. We started producing darwin/arm64 with v4.2, where we upgraded to v0.172.1.
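If a rebuild were ever attempted, the goreleaser side would largely be a matter of the build matrix listing the extra platform once a new enough goreleaser is in use. A sketch of the relevant `.goreleaser.yml` fragment, assuming goreleaser >= v0.156.0 (the first release with darwin/arm64 support); the `id` and field values are illustrative, not kustomize's actual configuration:

```yaml
# Illustrative goreleaser build matrix including darwin/arm64.
# Requires goreleaser >= v0.156.0; values here are assumptions,
# not kustomize's real .goreleaser.yml.
builds:
  - id: kustomize
    goos:
      - linux
      - darwin
      - windows
    goarch:
      - amd64
      - arm64
```

goreleaser builds the cross product of `goos` and `goarch`, skipping invalid combinations, so adding `arm64` here is what produces the darwin/arm64 asset on newer versions.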

camilamacedo86 commented 2 years ago

Hi @KnVerey,

Could we not update the goreleaser?

KnVerey commented 2 years ago

Yes, in theory we could make some new commits to the release branch and create a new tag. But the effort/risk isn't nothing, since we've never actually tried to do this before. For one thing, I recently discovered that the cloud build wasn't using the specified tag, so we'd need to cherry-pick a version of this change. There were major internal/dependency changes between v3 and v4, and dependencies could be another source of surprises since we do not vendor. Other similar dragons could be lurking, which makes me quite unenthusiastic about attempting this.

> From v3 to v4 there is a MAJOR version bump, which means breaking changes. (That said, I know you tried to make it as backwards compatible as possible.)

Yes, there were a few. I wasn't yet heavily involved in the project, but I know we had to drop support for some remote URL formats and changed underscored flags to use dashes. Is your project definitely affected by the specific changes that were made?

camilamacedo86 commented 2 years ago

Hi @KnVerey,

We have a proposed solution for moving forward with kustomize v4. See: https://github.com/kubernetes-sigs/kubebuilder/pull/2583 (there, you can check why we cannot simply begin providing kustomize v4 with the current stable plugin used to scaffold projects).

So, to allow Apple Silicon users to use the current and stable default implementation (which is the goal we are trying to achieve with this request), we would like to also have the kustomize v3 binary for this architecture.

In this way, we are moving forward with using and providing the kustomize v4 solution, while still allowing users to use the stable, current implementation. Note that we will need to support the current implementation for a long period, so having the kustomize v3 binary for this architecture would be very helpful for us.

What do you think is the easiest way for us to move forward?

gquillar commented 2 years ago

Hi @KnVerey, @camilamacedo86, we have the same issue for the ppc64le and s390x architectures. We want to support operator-sdk on those architectures, and operator-sdk uses kustomize v3.8.7. We had planned to update kustomize to v4.5.2 to get those architectures supported (operator-framework/operator-sdk#5674), but Camila pointed out the breaking-change issue. We can consider using a new kustomize/v2-alpha plugin as proposed in kubernetes-sigs/kubebuilder#2583, but we would definitely prefer to have a kustomize v3 binary supporting ppc64le/s390x.

natasha41575 commented 2 years ago

@camilamacedo86 @gquillar Is it an option for you to build these binaries yourself from the release branches' source code? Given our limited resources, we don't have a precedent for supporting old versions and generally only cherry pick commits for security-related issues.
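Building from the release branch along these lines could look as follows. This is a plausible sketch, assuming plain Go cross-compilation (Go >= 1.16 is needed for darwin/arm64) and the kustomize CLI module living under `kustomize/` in the repo; the snippet just prints the steps so they can be reviewed before running:

```shell
# Hypothetical build steps for a darwin/arm64 kustomize v3 binary, built
# from the release tag rather than from an official release asset.
# Assumptions: Go >= 1.16 (first version with darwin/arm64 support) and
# the CLI module at kustomize/ inside the repo, as in the v3 layout.
tag="kustomize/v3.10.0"

build_plan() {
  cat <<EOF
git clone https://github.com/kubernetes-sigs/kustomize.git
cd kustomize/kustomize
git checkout ${tag}
GOOS=darwin GOARCH=arm64 CGO_ENABLED=0 go build -o kustomize .
EOF
}

build_plan
```

`GOOS`/`GOARCH` select the target platform, and `CGO_ENABLED=0` keeps the build purely in Go so it can be cross-compiled from any host.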

camilamacedo86 commented 2 years ago

Hi @natasha41575, @KnVerey,

Thank you for your time.

a) Currently, the releases have not been made from tags (so it is hard to know which version we would be releasing ourselves). Would it be possible to fix this?

b) What would be the steps to build the binary? Could you provide the steps to run after checking out the tag? Which commands are required? That would be very helpful.

Again, thank you all for the support and attention.

camilamacedo86 commented 2 years ago

Hi @natasha41575, @KnVerey,

Could you please give us a hand with this one?

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 year ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/kustomize/issues/4612#issuecomment-1292184516):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.