Okay, so to try to tl;dr my concern here a bit:
My worry is that we should seek to adapt to the workflow people already have, which in the Chef community is largely based around GitHub and semver tagging. This can totally be added later, but I think that long term we should consider manual uploads to be the rare case for people who host their own repos, or where the repo is private even if the cookbook is public (I could see the latter happening mostly for in-firewall SCM systems). The initial effort should focus on the common use cases and workflows, and right now that puts a bullseye on GitHub as a source of truth.
The issue is that this isn't the actual workflow for people using the site today. I realize there is a tautology there, but it's a real thing: they have a workflow that involves uploading to the community site. The right thing for the first MVP, from my perspective, is to get to a place where we support the existing workflow on a codebase we feel comfortable riffing on. I can think of lots of ways to implement hooks between GitHub and the community site, all of which don't break if you also support the "legacy" upload mechanism.
So, fwiw, my point of view is:
Yeah, I don't disagree with that given the current state of the code, and this can definitely be added later. I think the title might be a bit overly pointed; I don't think uploads should be removed, just that we should revisit this at some point and look at how to make sharing easier.
Also - having artifact storage that isn't 3rd party is absolutely a requirement, from my perspective - otherwise we're at the mercy of a plethora of 3rd party artifact storage implementations.
It's a great idea to have a hook that knows how to deal with new cookbook releases - we could write a webhook that does the upload on behalf of the author, for example, in the same way we have one for the CLA bot. It's a better design for the use case regardless.
Yep, not suggesting using GitHub for actual storage, just more of a Travis-style thing where it builds an archive on release and updates the site. The biggest issue with it (other than time) would be how to convert metadata.rb to JSON data safely.
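Roughly what I have in mind, as a sketch only (the route, the `share_cookbook` helper, and the example repo name are all made up, not an existing API):

```ruby
# Hypothetical sketch: a small endpoint registered as a GitHub "release"
# webhook that fetches the tagged source and shares it on the author's behalf.
require 'sinatra'
require 'json'
require 'tmpdir'

# Placeholder for the actual packaging/upload step (e.g. shelling out to
# knife with the author's stored credentials); deliberately left abstract.
def share_cookbook(path, tag)
  warn "would package #{path} at #{tag} and upload it here"
end

post '/github/release' do
  # Only act on release events; ignore pings and other hook types.
  halt 202 unless request.env['HTTP_X_GITHUB_EVENT'] == 'release'

  payload = JSON.parse(request.body.read)
  repo = payload['repository']['full_name'] # e.g. "opscode-cookbooks/apt"
  tag  = payload['release']['tag_name']     # e.g. "v2.3.1" (semver tag)

  # Fetch just the tagged tree and hand it off for packaging and upload.
  Dir.mktmpdir do |dir|
    system('git', 'clone', '--branch', tag, '--depth', '1',
           "https://github.com/#{repo}.git", dir) or halt 500
    share_cookbook(dir, tag)
  end

  status 201
end
```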
Fast path to implementation would be to force people who want to use the webhook to post-process their metadata.rb to JSON and check it in.
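Authors can already do that locally with `knife cookbook metadata`, which boils down to roughly this (a minimal sketch assuming the chef gem's `Chef::Cookbook::Metadata` and `Chef::JSONCompat` APIs):

```ruby
# Run locally by the cookbook author (where evaluating their own Ruby is
# fine), not on the server: turn metadata.rb into metadata.json.
require 'chef/cookbook/metadata'
require 'chef/json_compat'

md = Chef::Cookbook::Metadata.new
md.from_file('metadata.rb') # evaluates the Ruby DSL
File.write('metadata.json', Chef::JSONCompat.to_json_pretty(md))
```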
I think, longer term, an interesting idea would be a configurable storage backend for artifacts. Want to use the local file system - great! Want to use S3 or Fast.ly - configure these things and bam! Want to use GitHub - put in your access token.
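Something like this, very roughly (none of these class or variable names exist in Supermarket today; it's just a sketch of the shape):

```ruby
# Hypothetical pluggable artifact storage: every backend exposes #write.
require 'fileutils'

class LocalFileStore
  def initialize(root)
    @root = root
  end

  # Write the artifact tarball to the local filesystem and return its path.
  def write(name, version, io)
    FileUtils.mkdir_p(@root)
    path = File.join(@root, "#{name}-#{version}.tgz")
    File.open(path, 'wb') { |f| f.write(io.read) }
    path
  end
end

class S3Store
  def initialize(bucket)
    @bucket = bucket
  end

  # Same #write interface, different backend (body omitted; a real version
  # would use an S3 client gem and the configured credentials).
  def write(name, version, io)
    raise NotImplementedError, 'upload to S3 here'
  end
end

# Chosen once from configuration; the rest of the app only sees #write.
STORE = case ENV['ARTIFACT_STORE']
        when 's3' then S3Store.new(ENV['ARTIFACT_BUCKET'])
        else LocalFileStore.new('/var/opt/supermarket/artifacts')
        end
```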
I think there is, and should be, an intentional separation between the "artifact" and the "code". An artifact is published and should be compressed, checksummed, and verifiable (preferably signed). Code, on the other hand, guarantees none of those properties.
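To make those properties concrete (purely an illustration, not existing site code, and the filename is made up):

```ruby
# Illustration of the extra guarantees an artifact carries over raw code.
require 'digest'

tarball  = 'apt-2.3.1.tgz' # a compressed, immutable archive of one release
checksum = Digest::SHA256.file(tarball).hexdigest

# The site can store and serve this alongside the artifact so clients can
# verify downloads; a detached signature (e.g. GPG) could layer on top.
puts "#{tarball} sha256=#{checksum}"
```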
I second Adam's comments around being at the mercy of the ecosystem. Additionally, the desire to make Supermarket an app that organizations can run internally conflicts with the GitHub tie. What if they are using GitHub Enterprise? Or Bitbucket? Or Perforce/SVN/... anything "not GitHub"?
I am going to close out this issue. Just a couple of notes:
Moving this spirited discussion into a GH issue since it is a more appropriate format for having an ongoing, detailed discussion.