Closed: lukego closed this issue 9 years ago.
Anybody want to volunteer for this? If so, what kind of changes are you interested in testing? (Check out the Pull Requests list for inspiration on the kinds of changes that come down the pipeline.)
I think I'm confused by step 3 in your proposed process. Why would a tester merge a PR onto their own branch? If I review a PR and I think it looks good, you are saying I should merge it onto my branch and then open a PR that consists entirely of code someone else has written, i.e. my PR is identical to the original coder's PR. Why? It seems easier to just comment on the original PR instead :)
You can send your own changes as a PR to the project and wait for someone to review it and write "+1" on the PR before you push the merge button. Alternatively, you give someone else RW access to the repo in addition to yourself, and then you can assign your own PRs to them for review (this is how I typically work in small projects).
Why would a tester merge a PR onto their own branch?
This is to adopt the distributed model from the Linux kernel workflow.
The idea is that we treat our branches like IP routers and our commits like packets. Each branch (router) makes sure that the commit (packet) looks okay and then forwards it (pull request) to the next hop. The rules are all applied locally and independently of the size of the network. The chain of merges that a commit passes through on its way to master will read like a traceroute of the path that it took and who tested it.
It works for Linux and for the internet so I hope it will work for us too :-)
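To make the analogy a bit more concrete, here is a rough sketch of how the path of a change could be read back from git afterwards (no particular branch names are assumed; this is just an illustration):

```
# Show only the merge commits on master's first-parent line; each one is
# a "hop" where some branch owner accepted a change and forwarded it on.
git log --merges --first-parent --oneline master
```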
To clarify:
Reviews posted on Github issues are still important in the same way that they have always been. The difference is that each PR will have a definite next-hop branch and the owner of that branch will be the one weighing the comments and deciding when they are satisfied.
Great that you ask @plajjan! I am still working on really understanding the kernel git workflow but I feel that there is something deeply right about it.
Fun router analogy :P
I'm not sure the proposed workflow, let's call it linux-kernel-flow, would improve things for Snabb. It was invented for the kernel a long time ago. If it is so superior, why have other open source projects not adopted it? The kernel is a big machine and requires an involved model for the relatively few people at the top to manage it. Snabb is the complete opposite - one of the design goals is even to keep it small - so I don't think you should try to imitate what the kernel is doing; it simply isn't a good fit.
With that said, I think the concept of multiple reviewers is a good thing and I fully support it, but it doesn't mean that all code must take a detour via someone else's branch. Many other projects (much bigger than this one) merge patches directly onto master, and I think we could continue that, just with distributed reviewing responsibility.
@plajjan This is an interesting discussion :-)
Let me concede that this workflow was invented for the kernel, that few other open source projects have adopted it, and that it may turn out not to be a good fit for a project as small as Snabb.
However, I am personally optimistic about it and I do want to try it out seriously as an experiment.
So, coming back on topic, what I need is an enthusiastic volunteer or two. For example I would love to persuade @eugeneia to verify code cleanups and documentation improvements. Perhaps @alexandergall would be willing to verify extensions of shm/counter and snmp/mtu related changes too? If so I would be happy to explain the idea of this workflow in however much detail is desired and start setting Assignee on the relevant Pull Requests.
Oh, hey, @eugeneia did volunteer the last time I suggested this on the mailing list!
Max, let me belatedly take you up on that offer, and include code cleanups and basic usability enhancements in addition to documentation fixups :-)
Let me also revise the idea for how to dispatch the Pull Requests. Instead of sending them directly to your branch on your repository, let us try keeping the PRs open on the main repository but making you the Assignee. This is to indicate that the PR will first be accepted by you onto one of your branches (merge) and then that branch (containing one or many approved fixes) will be PR'd by you for merge to master.
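To sketch what the assignee's side of that might look like in practice (the remote and branch names below are made-up examples, not an agreed convention):

```
# Hypothetical assignee workflow for an assigned PR.
git checkout -b doc-fixes origin/master                     # or reuse an existing topic branch
git pull https://github.com/contributor/snabbswitch fixups  # merge the contributor's PR branch
# ... review, run tests, possibly collect further approved changes ...
git push myfork doc-fixes
# then open a Pull Request from myfork/doc-fixes towards master
```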
Clear as mud? :-)
Crystal clear! :)
I like the proposed workflow because I imagine it will lift some pressure off @lukego to vouch for every single PR and decentralize some of the meta arguments (like how to merge X). I know Luke loves to get his hands dirty, so I reckon he will have more time for that and spend less time on micromanaging? :)
Just to clarify, distributed reviewing is a good thing, but from my perspective that doesn't mean you have to change the flow of git commits and merges.
Again, as a clarification, the alternative in my view would be to have a core team of trusted people, so in addition to @lukego we also have @eugeneia and perhaps someone else. This core team has RW access on the repo, so any core team developer is free to merge things onto master. New PRs are assigned to one of the people in the core team for review, and once they are happy with the quality of the proposed code they push the merge button. Luke puts his code in a PR and lets Max review it; Max lets Luke review his PRs. All code is thus reviewed by at least one core developer. This is very similar to what Luke proposes, if not identical, as far as the distributed review goes. The difference is the actual code flow: either PR -> master or PR -> personal-core-dev-branch -> master.
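For comparison, the PR -> master path in that model is just the ordinary GitHub merge; from the command line it would amount to something like this (contributor repo and branch names are invented for the example):

```
# Hypothetical core-team member with RW access merging a reviewed PR
# straight onto master, i.e. PR -> master with no intermediate branch.
git checkout master
git pull origin master
git pull --no-ff https://github.com/contributor/snabbswitch fix-arp   # merge the PR branch
git push origin master
```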
You are naturally free to do however you please, but I would very much like to understand what you perceive as the gain from merging things onto personal branches first? :)
@plajjan The momentum of SnabbCo:master will decelerate. Instead of many small PRs it will advance in bigger chunks of better tested and integrated merges. This means less work for people hacking on master and hopefully less work related to merge conflicts in general.
Each "topic branch" will also be more independent and can choose its own pace / relation to master.
@plajjan Good reference here: Distributed Workflows.
The idea is to keep it simple and scalable and flexible.
The SnabbCo/master branch is not special and may not even end up being the most important branch. The programs/ structure actually allows us to scale up Snabb Switch development massively over time. Perhaps in the years to come we will have multiple major network elements upstreaming onto the master branch but each working primarily in their own downstream branches and controlling their own release cycles, etc. This is already happening with both the NFV and the VPN applications, which have control over when they will pull changes from master and when they want to maintain downstream extensions.
I really appreciate the autonomy that this structure grants to every developer. If I am developing on a project then I want to be maintaining the version that works for me and merging that with upstream in a controlled way. I don't want to be forced to work directly on the upstream master branch, I don't want anybody to have a veto over what code I choose to deploy, and I don't want to lose sync with development and be left behind either. This distributed workflow is the process that I see as solving these problems.
The trouble with a centralized workflow for me is that it works well for some people and badly for others and runs more risk of becoming political. OpenStack is perhaps the most advanced centralized workflow in the world today and from my perspective it is the least pleasant project to contribute to for the reasons sketched above.
I'll close this as an issue but will gladly continue the discussion here :-)
There are a number of things that seem to be getting mixed up here, at least from my perspective.
Simple: no core team, no voting, no special processes for reaching consensus/quorum/etc. Code lands upstream when a short chain of people individually approve it. That chain is defined socially by who likes to pull what kind of changes from whom.
Scalable: as the community grows we only need a small number of people to become subsystem maintainers by collecting good changes on branches. This process should hold up at least until we reach the size of the Linux kernel community :-)
This seems to be a lot about nomenclature. You just mentioned a "small number of subsystem maintainers". That is what I would call a "core team", so to me the above is contradictory: you say "no core team", yet the second point specifically mentions a group of people who are more trusted than the rest, i.e. a "core team". We are naturally free to call it what we want, but I think you interpret "core team" as implying some form of magic voting and special processes, whereas I don't think any such meaning can be derived merely from the name.
For the various topic branches I think we are looking at this from very different perspectives. If I develop an app it is only natural that I maintain my own branch of it and only occasionally merge it onto master, like when I feel I've reached a certain point with new features, it's stable, and so forth. In essence I have my own versioning for my particular app, but for various reasons I choose to have it also included in the Snabb main master. I could just as well maintain this app completely outside the Snabb git. The choice of where it goes is probably more about what type of app it is: things like ARP or GENEVE, which are fairly generic, should probably ship together with the Snabb framework, while my super-specific FooBar app that only works in the TeraStream network should probably be maintained outside.
This is no different from any feature in any project. If I were developing a PostgreSQL feature I would do so on my own branch in my own repo, and once I'm happy with it I would submit it to master in one of their commit fests. If anyone else wants to contribute to my Postgres feature while I'm working on it, it makes sense that they send their code to me and I merge it into my branch before merging onto master.
What you seem to suggest though is that person X will maintain branch X, and since X works with ARP and GENEVE, I should send my bugfixes for ARP to branch X. Since person X is also developing a new VXLAN driver, I will have to wait a few months for that to complete before person X merges branch X to master. If I had targeted my ARP bug fix directly at master, it would be merged faster and could be tested earlier by everyone and not just person X. It is rather common to have merge windows where new features are merged, after which only bug fixes are allowed, so as to achieve a stable version that can be released every now and then.
The points you bring up about autonomy for developers are completely orthogonal to the rest of this discussion, and I don't think they hold true. Anyone is always free to base their work on specific versions of an upstream repo. Splitting that repo up into many branches doesn't help this very much. You maintain the version of a core lib I want in your branch, but you also have the new counter module which I'm not particularly interested in - in fact I have my own version and don't want yours at all. Having separate branches doesn't make this any easier - I'd argue the contrary: it will likely just become a mess where I am merging features from many branches to get the latest components that I want.
I hear what you are saying about OpenStack and I'm not sure if this is just a general observation (which I agree with) or if you are trying to argue against me. I don't think I'm advocating anything that is remotely similar to OpenStack's workflow. I'm just talking about the way a piece of code travels before it hits master. As far as review of that code goes, I think we are saying the same thing, just using different nomenclature. OpenStack is difficult to work with for completely different reasons, but let's not delve into that as it would take us way off topic ;)
We've kept this discussion at a very generic level so when we think of good structure we probably have different particular examples in mind, which is also likely why we end up with different conclusions on what is the best way to structure the workflow. I am not particularly active on the project so my opinion isn't very relevant. You write code, and quite a lot of it, so you should stick to whatever you are happy with :)
@plajjan Concretely I hope this is a solution for cases like #483. The distributed model means that people can cooperatively develop new features fairly independently of how ready other people are to adopt them.
I really hope this will work out and we will have an active musl-compatible branch. Then it would make it easy for the community to include people who need things that cannot land on master overnight. More extreme examples would be a Windows port, a DPDK I/O backend, a branch that links GPL code, etc. These ideas should be given a chance to attract users and gain momentum.
How do you deal with these issues in a centralized model?
I admit that this is experimental and the distributed model has not actually solved any practical problems for us yet. I hope to soon be able to say otherwise :-)
I would like to experiment with adding more merge steps into our workflow. The intention of this idea is to speed up getting good changes onto master by using smaller faster steps.
The process today is:
1. Somebody sends a Pull Request towards master.
2. I review it.
3. I merge it onto master.
The idea is to add this simple way for more people to do merges:
1. Pick an open Pull Request that you are interested in.
2. Review and test the change.
3. Merge it onto a branch in your own repository.
4. Send a Pull Request from your branch towards master, possibly bundling several approved changes together.
The basic idea is that this would be a fairly fluid and flexible way to have more than one person approve the changes being submitted. This is based on the Linux model where changes are accepted by a chain of subsystem gurus before landing on master and every step in the chain represents somebody's review and approval.
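As a side note on the Linux model: there the chain of review and approval is also recorded in the commit message itself, as trailers that each hop appends and that git can add for you. This is just an illustration of that mechanism, not a proposal to require it here:

```
# Example trailers accumulating as a patch travels up the chain:
#   Signed-off-by: Original Author <author@example.com>
#   Reviewed-by: Subsystem Reviewer <reviewer@example.com>
#   Signed-off-by: Subsystem Maintainer <maintainer@example.com>
#
# git adds the Signed-off-by trailer for the committer with -s / --signoff:
git commit --signoff -m "example: describe the change"
```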
I would be especially pleased if this process let me send my own changes to other people for review and acceptance instead of dumping them straight onto master.
Once more: The purpose of this idea is to make the changes land on master faster and in better shape. I would have an easier time pressing Merge if I know that I am not the only one who has reviewed a change and thinks it is right.