Closed — Ant0wan closed this 2 months ago
Hi @sagikazarmark - do you need help maintaining this repo? We could help maintain it, as we use it in production. Best,
I believe this PR needs a rebase: a bunch of changes have already been merged (PVC, labeler fix).
@Ant0wan from what I can tell, the most popular chart from this repo is sftpgo.
Given how widely it's used and that there is no other official chart, I wonder if it would make sense to host it under the sftpgo org (even as a community project).
@drakkan I know you've been against hosting this chart under the main repo, but there is a separate org now and it's easier to manage external contributors that way. I'm obviously a bottleneck to the maintenance of this chart. WDYT?
@sagikazarmark, at that time I had never used a Helm chart and was not comfortable maintaining it. Things have changed now: I already maintain a Helm chart for our Azure AKS offering, so I could also maintain this chart myself within the sftpgo organization. That way it would at least be constantly updated with the latest version of SFTPGo, which should be expected if we move it into the sftpgo org.
The main problem is that over the years I have also seen, above all, the bad side of Open Source, and I have lost much of my initial enthusiasm. I am rather hesitant to take on further maintenance burdens.
@sagikazarmark, I think you know well that reviewing a PR is also work, often more work than adding the feature yourself. You have to make sure that no other use case will be broken, and once the PR is merged you, as project maintainer, have to maintain and evolve the feature forever.
Adding more collaborators to a project also increases the coordination and review work. See the xz-utils backdoor incident to understand what can happen if you blindly trust collaborators. I don't want anything like that to happen to SFTPGo or to projects within the sftpgo organization.
@Ant0wan, you wrote that you use this chart in production, so you are offering to help maintain it because it is useful to your company. That's great and appreciated, but you are implicitly requiring one of these two things:
@sagikazarmark thanks for your reviews. I'll rebase ASAP.
@drakkan I get that maintaining this chart can be a lot. Our company uses it in production, so we're invested in keeping it up-to-date. I'm not asking for blind trust; I want my PRs reviewed thoroughly.
If my help isn't wanted, no worries. I can just fork the repo. But I'd prefer to collaborate if possible. Let me know what you think. Best,
@drakkan I understand your concerns about OSS. I'm not sure what you are proposing, though.
Would you like to maintain a chart under the official org/repo? If so, I'd be happy to contribute this chart to the effort.
Otherwise I'll probably move it out to a different repo.
@sagikazarmark I can add you as an external collaborator in the sftpgo org, but I won't maintain the chart myself until a company with an active support plan uses it. However, I think I will update the SFTPGo version each time I tag a new release.
As for the license, I usually use AGPLv3 in sftpgo org, but in this case I think it's fair to keep the MIT license.
I'm not sure about the CLA that is enabled on all repositories in the sftpgo org; it's probably superfluous for an MIT-licensed project, but just in case I'd keep it.
Let me know what you think. Thank you
@drakkan Thanks! I'll move the chart to that repo when I get the time.
@Ant0wan, you are a bit disrespectful. You are more than welcome to fork, but using it to put pressure on maintainers is not very nice.
Please run make docs to make the linter happy.
Thanks! Of course, keep yourself as the copyright holder, as you are now, when you move the chart.
@sagikazarmark thanks a lot for this quick review. Have a great time
Title: Update Horizontal Pod Autoscaler to support apiVersion: autoscaling/v2
Description:
This pull request addresses a critical issue with the Horizontal Pod Autoscaler in Kubernetes environments. The apiVersion autoscaling/v2beta1 previously used for this resource was deprecated in Kubernetes 1.22 and removed in 1.25, so it is no longer served by current releases.
Key Changes:
values.yaml
Testing:
Additional Notes:
Relevant to:
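For context, a minimal sketch of what an HPA manifest looks like under the supported autoscaling/v2 API (resource names like sftpgo are illustrative and may not match the chart's actual templates; the notable breaking change from v2beta1 is that per-resource targets moved under a target block with an explicit type):

```yaml
# Hypothetical HPA for the sftpgo Deployment using autoscaling/v2.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sftpgo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sftpgo
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        # In v2beta1 this was targetAverageUtilization directly on the
        # resource metric; in v2 it sits under target with a type field.
        target:
          type: Utilization
          averageUtilization: 80
```

In a Helm template, the chart could also guard the apiVersion with a capability check such as `.Capabilities.APIVersions.Has "autoscaling/v2"` to stay compatible with older clusters, though whether this chart does so is not shown here.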