gumball-guardian / meshd


Design Notes Brain Dump #1

Open jgowdy opened 3 years ago

jgowdy commented 3 years ago

Nodes should be configured with a hostname upon deployment which is resolved to a set of AAAA or A records which provide public facing ports to join the mesh.

Once nodes join the mesh, they can use host discovery to find the other hosts.

When bootstrapping, a node creates a public/private key pair. The public key becomes the node's identity.

In the interim, we should configure clients with a static membership list until we have message passing working.

Nodes can be configured to load different modules, and those modules can provide different capabilities. For example, a node configured with an EmailSend module with a configured SMTP / Submission endpoint would advertise the SendEmail capability.

Nodes will continuously attempt to make one or more outbound connections to the other peers that the node is aware of, on the paths the other node is advertising, until the configured number of outbound paths are up. This happens regardless of any inbound paths from the same host.

When a Node connects to an endpoint, both the originating Node and the terminating Node will immediately send announce messages describing their identity, capabilities, etc. Nodes will keep a list of active paths / connections to each other Node, and when a packet is to be sent the connection to be used is prioritized based on the different preference levels of protocol, layer 2 or 3 adjacency, encrypted transport, and other relevant traits. Nodes will prefer to send messages on the outbound connections they make. This way packets are distributed among at least two of the links, keeping both alive.

Nodes can have multiple IP addresses and multiple ways they can be accessed. Our current focus is on WebSockets as a transport but we can add more later. But the WebSocket links can form across multiple IPs between the same Nodes. Either way, for each send() a single connection will be selected and used.

By default all traffic should be point to point. For hosts with public IP addresses, this is not a problem. Hosts behind NAT that have NAT-PMP or UPnP should be able to map a port and also accept inbound connections. However, some Nodes won't be reachable inbound. Those Nodes should still be able to make outbound connections to port 443 on other Nodes that can accept inbound connections. Other Nodes will keep attempting to connect to the published paths of the Node that cannot accept inbound connections, but with a back-off that settles down to trying once every N hours over time. As long as one bidirectional WebSocket link is up, the Nodes can pass traffic.
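As a rough illustration of that back-off, here is a minimal Go sketch; the doubling policy, the cap, and the function name are all assumptions, not decided behavior:

```go
package mesh

import "time"

// retryDelay sketches the reconnect back-off described above: exponential
// growth from a short initial delay, capped at a ceiling such as "once
// every N hours". The doubling policy and parameters are assumptions.
func retryDelay(attempt int, initial, ceiling time.Duration) time.Duration {
	d := initial
	for i := 0; i < attempt; i++ {
		d *= 2
		if d >= ceiling {
			return ceiling
		}
	}
	return d
}
```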

Nodes send JSON messages to one another. The message JSON always starts with a map, and the root of the map contains source (public key), destination (public key or wildcard for broadcast), a TTL counter that defaults to 2 (see adjacent hop routing), and a signature for the payload that matches the source public key. The payload is one of the items in the root map, and it is a Base64 encoded encrypted JSON map containing the application specific payload values. The message JSON also has a Headers map under the root for providing metadata such as whether the payload has been compressed before encryption and what compression was used. The payload is encrypted using the destination Node's public key so that only it can decrypt the payload.
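For illustration, the envelope might look something like this as a Go struct with JSON tags; every field name and encoding here is an assumption for sketching purposes, not a spec:

```go
// Envelope is a sketch of the outer message map described above.
// Field names and encodings are illustrative assumptions, not a spec.
type Envelope struct {
	Source      string            `json:"source"`      // sender's public key (e.g. base64)
	Destination string            `json:"destination"` // recipient's public key, or "*" for broadcast
	TTL         int               `json:"ttl"`         // defaults to 2 for adjacent-hop routing
	Signature   string            `json:"signature"`   // signature over the payload, verifiable with Source
	Headers     map[string]string `json:"headers"`     // metadata, e.g. compression applied before encryption
	Payload     string            `json:"payload"`     // base64 of the payload encrypted to Destination's key
}
```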

When receiving an inbound message, a Node first evaluates the source field. If the public key in the source field is trusted as a member of the mesh, continue, otherwise drop message. Then the signature is checked on the payload using the source public key, if it matches, continue, otherwise drop message. Now we know we have an authentic message from a trusted member of the mesh.
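A minimal sketch of those two checks, assuming the hypothetical Envelope shape above and ed25519 identities (the key algorithm has not actually been chosen):

```go
package mesh

import (
	"crypto/ed25519"
	"encoding/base64"
	"errors"
)

// verifyInbound sketches the two checks described above, assuming ed25519
// identities and base64-encoded fields; drop the message on any failure.
func verifyInbound(env Envelope, trusted map[string]bool) error {
	if !trusted[env.Source] {
		return errors.New("source not in master peer list: drop")
	}
	pub, err := base64.StdEncoding.DecodeString(env.Source)
	if err != nil || len(pub) != ed25519.PublicKeySize {
		return errors.New("malformed source key: drop")
	}
	sig, err := base64.StdEncoding.DecodeString(env.Signature)
	if err != nil {
		return errors.New("malformed signature: drop")
	}
	payload, err := base64.StdEncoding.DecodeString(env.Payload)
	if err != nil {
		return errors.New("malformed payload: drop")
	}
	if !ed25519.Verify(ed25519.PublicKey(pub), payload, sig) {
		return errors.New("signature mismatch: drop")
	}
	return nil // authentic message from a trusted member of the mesh
}
```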

The destination field is evaluated. If the message is for us, we go down the process received message path.

If the message is for broadcast/wildcard, decrement the TTL and then forward the packet to all connected Nodes over the most preferred path.

If the message is unicast and we have a connected path to that Node, send it. If we don't have a connected path to the target Node, we don't try to make one while the message waits; instead we attempt to locate a Node in our "routing table" that does have a connected path to the destination node. We then forward our message to that node, which we would hope would forward it on using this same logic. If we can't find any adjacent Nodes to send the message to, we can search our routing table / network map for anything up to N hops away from the destination node to forward the message to.

If we can't determine where we should send the message, we can optionally "scattershot" it to all of our connected Nodes over the most preferred path to each, in the hopes that the message can be forwarded by those Nodes.
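Putting the broadcast, unicast, and scattershot rules above together, a rough Go sketch of the forwarding decision might look like this; the Router shape, helper callbacks, and names are purely illustrative:

```go
// Router sketches the forwarding order described above. Every field and
// callback here is a hypothetical stand-in, not a real implementation.
type Router struct {
	SelfID        string
	Connected     map[string]bool                            // peers with a live path
	AdjacentTo    func(dest string) (string, bool)           // a peer with a connected path to dest
	WithinHops    func(dest string, hops int) (string, bool) // a peer within N hops of dest
	Send          func(peer string, env Envelope)            // send on the most preferred path to peer
	Process       func(env Envelope)                         // local delivery
	Scattershot   bool
	MaxSearchHops int
}

func (r *Router) Forward(env Envelope) {
	switch {
	case env.Destination == r.SelfID:
		r.Process(env) // the message is for us
	case env.Destination == "*": // broadcast
		if env.TTL > 0 {
			env.TTL--
			for peer := range r.Connected {
				r.Send(peer, env)
			}
		}
	case r.Connected[env.Destination]: // direct connected path
		r.Send(env.Destination, env)
	default:
		if via, ok := r.AdjacentTo(env.Destination); ok {
			r.Send(via, env)
		} else if via, ok := r.WithinHops(env.Destination, r.MaxSearchHops); ok {
			r.Send(via, env)
		} else if r.Scattershot { // optional last resort
			for peer := range r.Connected {
				r.Send(peer, env)
			}
		}
	}
}
```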

Messages will include a field specifying the desired response type. "FireAndForget" messages are never responded to. "Acknowledge" messages are responded to with an ACK message once the packet is received, signature checked, and decrypted (but before the message payload is processed). "RPC" messages expect an optional ACK before packet processing and then a Response packet with the JSON result payload for the operation / transaction. For any message type other than FireAndForget, a Node receiving a unicast message destined for a public key / Node that it has no knowledge of, or no rational path to, sends a NoPathToNode response.
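For reference, the three response types might be modeled as simple constants; the names follow the text above and the type itself is a sketch:

```go
// ResponseType values as described above; the string encoding is illustrative.
type ResponseType string

const (
	FireAndForget ResponseType = "FireAndForget" // never responded to
	Acknowledge   ResponseType = "Acknowledge"   // ACK after verify/decrypt, before processing
	RPC           ResponseType = "RPC"           // optional ACK, then a Response with the JSON result
)
```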

Once Nodes form best-effort meshes of WebSockets and are able to transmit messages to one another, we add a Raft implementation with the communication tunneled over our WebSocket messaging system rather than via direct network traffic. The Raft implementation will have an election. We will bias the election towards nodes with public IPs, more powerful CPUs, lower latency to test endpoints, longest uptime, and other factors. If no public IP hosts are candidates, RFC 1918 addressed Nodes which have seemingly functional UPnP or NAT-PMP are the next preference. These hosts, while behind NAT, are able to accept incoming connections. Nodes that are behind NAT with no mapping protocol will always choose to be followers and never put themselves up as candidates.
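A hand-wavy sketch of how that election bias could be expressed as an eligibility gate plus a score; the traits, weights, and formula are placeholders to make the idea concrete, not proposed values:

```go
// NodeTraits and the weights below are assumptions sketching the election
// bias described above; actual factors and weights are TBD.
type NodeTraits struct {
	PublicIP      bool
	NATMapped     bool    // behind NAT but a UPnP/NAT-PMP mapping works
	CPUScore      float64 // some benchmark of CPU power
	LatencyMillis float64 // latency to test endpoints
	UptimeHours   float64
}

// eligible reports whether the node should stand as a Raft candidate at all;
// un-mappable NAT nodes always remain followers.
func eligible(t NodeTraits) bool {
	return t.PublicIP || t.NATMapped
}

// candidateScore biases elections toward public-IP, fast, low-latency,
// long-uptime nodes. The formula is purely illustrative.
func candidateScore(t NodeTraits) float64 {
	score := t.CPUScore + t.UptimeHours/24 - t.LatencyMillis/10
	if t.PublicIP {
		score += 100
	} else if t.NATMapped {
		score += 50
	}
	return score
}
```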

Even if every other Node is behind NAT with no port mapping option, as long as there is one Node that has UPnP or NAT-PMP the mesh can form.

Nodes will send announcements on their connected paths periodically, and immediately upon initial connection. These announcements will contain the advertised paths for the Node, its public key, its capabilities, etc. These announcements will optionally contain the list of other Nodes / paths that the announcing node is aware of, feeding Node and path discovery.

Once the WebSocket mesh is up and we have a working Raft election, we now have a distributed peer to peer messaging system, but with an elected leader. The protocol then supports a key-value-store form of persistence. Each Node will have a replica of the information in the key-value-store. The Nodes can perform weak reads by reading their local values. The Nodes can perform strong reads by requesting a read from the leader. All mutations must be sent to the leader. The leader then replicates the mutation to all nodes (including the node that made the mutation, since it doesn't exist until it's replicated to us). Reads will include a modified time for the values returned, and Conditional Updates with that modified time will be checked as a form of optimistic locking. In effect, if the read that a write is based on is outdated, the write is rejected and the Node CAN choose to re-read and mutate again based on the updated record.
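As a sketch of the optimistic-locking rule, here is what the leader-side conditional update could look like in Go; the Record shape and function are assumptions:

```go
package mesh

import (
	"errors"
	"time"
)

// Record and ConditionalPut sketch the optimistic-locking rule described
// above; the shapes and the leader-side check are assumptions.
type Record struct {
	Value    []byte
	Modified time.Time
}

var ErrStaleRead = errors.New("record modified since read; re-read and retry")

// ConditionalPut applies a mutation only if the caller's read time still
// matches the stored record's modified time (the leader would run this
// before replicating the mutation to the followers).
func ConditionalPut(store map[string]Record, key string, value []byte, readModified time.Time) error {
	if cur, ok := store[key]; ok && !cur.Modified.Equal(readModified) {
		return ErrStaleRead
	}
	store[key] = Record{Value: value, Modified: time.Now()}
	return nil
}
```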

Nodes that join the mesh and are verified are able to queue replicated writes, download the current persistence database from any other connected node, and then apply the queued replicated writes until up to date and resynchronized.

The other persistence that could be offered would be a savvy form of object storage using erasure codes and distributed data across the nodes, or a much simpler file system based distributed file store with configured number of replicas.

Once the leader election takes place and persistence comes up, the leader can undertake to map the network using broadcast and TTLs, and render a network map into persistence.

Nodes would constantly be learning about peers, but only registered peers in the master peer list are considered valid/trusted. Once a Node bootstraps and joins the mesh, it is added to the master peer list in key-value-storage, and then it downloads that master peer list.

Nodes would potentially make untrusted ad hoc connections to discovered paths, but when they receive the announcement after connecting they'll learn that this discovered path leads to a node not on our list.

Only if the other Node is part of the master peer list, do we handle or process traffic from that Node.

Master peer lists are only additive to peer host discovery, therefore if strong reads aren't desired or available, Nodes can leverage their own local copy since the last replica update.

At the moment paths will be WebSockets, whose framing allows a 64-bit payload length, so the practical message size limit is very high. However, regardless of how high the limit is, any path will advertise what it believes to be its maximum message size (we may probe this). Paths / transports that can't handle the size of the message we are sending are simply disqualified / skipped.
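A small sketch of disqualifying paths by advertised maximum message size during path selection; the Path fields are illustrative:

```go
// Path and selectPath sketch skipping transports whose advertised (or
// probed) maximum message size is too small; the fields are illustrative.
type Path struct {
	MaxMessageSize int64 // advertised or probed limit in bytes
	Preference     int   // higher is preferred
}

func selectPath(paths []Path, msgSize int64) (best Path, ok bool) {
	for _, p := range paths {
		if p.MaxMessageSize < msgSize {
			continue // transport can't carry this message; skip it
		}
		if !ok || p.Preference > best.Preference {
			best, ok = p, true
		}
	}
	return best, ok
}
```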

The Raft elected leader would be able to act as a distributed lock manager between nodes, allowing for synchronization. With the key-value-store, we have persistence. With multiple paths we have multi-homing.

With the distributed lock manager and key value store provided by the leader, we can then leverage the JSON messaging mesh to bootstrap a WireGuard VPN mesh, similar to Tailscale. Effectively the JSON messaging mesh becomes the D channel and the WireGuard links become the Bearer channels.

Down the road, if hosts can't set up WireGuard links due to firewalls or NAT issues, we can have the bearer / IP channel fall back to OpenVPN over outbound TCP 443, which should work on any host that can make HTTPS requests.

Once the IP mesh is up, the JSON messaging mesh should advertise a new path, over that IP mesh. That path would likely be preferred. This would allow the control plane to pass through the VPN security of the data plane in nominal IP mesh up conditions, falling back to direct WebSocket linkage. Inception. In theory once the IP mesh is up, all non-IP mesh control plane connections could fall away, leaving only the IP mesh as active paths. These paths would be used until there were connectivity issues that caused the Node to shift back to trying to do direct WebSockets.

Once you have a self forming JSON messaging mesh that provides distributed persistence with optimistic locking for consistency, Raft for resiliency and consistency, and distributed lock management, which then bootstraps you a peer to peer IP VPN mesh, you basically have a powerful platform for many applications.

One, a personal botnet for nmap scans, trace routes, personal VPN service, monitoring my servers and desktops etc.

Two, a family could mesh their computers together and appear to be on the same LAN allowing for file and printer sharing for example.

Three, certain multiplayer games may be playable as LAN games through the WireGuard mesh.

Four, a small business could, rather than building an office network with the complexity that comes with that, use this system to build a zero trust network between all of the office employee workstations and the office servers. This would allow the office LANs to be configured the same as a Starbucks. Standard Internet with a standard NAT router. The Office LAN would be treated like a very fast, very optimal public WiFi, rather than a trusted network. The system would offer different modules that allow for health monitoring of the Node for example. Disk space sufficient? OS updates applied? OS isn't EOL? Antivirus is enabled? Etc.

Five, rather than setting up a distributed IP VPN mesh, the control plane could be used to create a virtual/orchestrated firewall which would act like a perimeter firewall, but would accomplish the rule set by translating those rules into host based firewall rules.

When a Node is on an RFC 1918 address, it advertises that path too. When a Node is attempting to connect to another Node on one of its advertised paths, if the source Node has an RFC 1918 address, it will try to connect to the other Node's RFC 1918 addresses. This allows Nodes on an Office LAN to pass traffic between hosts on the local switch.

When you install a Node, you configure it with the bootstrap hostname, and it tells you its public key as it starts up. The Node will try to connect repeatedly, but the peers won't trust it because it's not on the master list. The person managing the node then issues a command to one of the trusted nodes, performing a mutation to the master peer list to add the public key of the new Node. At this point the new Node's repeated efforts to link to the mesh will start working (hopefully).

Using the key-value-store functionality or optionally transparently plugging in a third party key value store, the mesh could offer application layer key value store eventually consistent persistence with optimistic locking. This is effectively the "DynamoDB".

Using the object storage functionality previously discussed, or plugging in a third party object store, the mesh effectively provides "S3".

Endpoints can be configured to run scripts based on events such as a particular URL path. These scripts would be written in something like JavaScript, Python, Lua, etc. The scripts would be able to run other scripts, potentially modify messages before send or after receive, etc. The scripts would be able to access the key-value-storage and the object storage and send messages, covering any time we need user logic or want to offer the ability to customize logic. We would likely pick one language to really put most of our effort into. This would provide "Lambda".
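To make the idea concrete, here is a hypothetical sketch of wiring URL paths to user scripts; the ScriptEngine interface and Endpoint type are invented for illustration and do not correspond to any chosen scripting runtime:

```go
package mesh

import "fmt"

// ScriptEngine and Endpoint are hypothetical shapes invented to illustrate
// the idea; they do not correspond to a chosen scripting runtime.
type ScriptEngine interface {
	// Run executes a user script with access to the key-value store, the
	// object store, and message sending, returning the response body.
	Run(script string, req map[string]any) ([]byte, error)
}

type Endpoint struct {
	engine   ScriptEngine
	handlers map[string]string // URL path -> script source
}

// Handle registers a script for a URL path.
func (e *Endpoint) Handle(path, script string) {
	if e.handlers == nil {
		e.handlers = map[string]string{}
	}
	e.handlers[path] = script
}

// Serve runs the script registered for the requested path, if any.
func (e *Endpoint) Serve(path string, req map[string]any) ([]byte, error) {
	script, ok := e.handlers[path]
	if !ok {
		return nil, fmt.Errorf("no script registered for %s", path)
	}
	return e.engine.Run(script, req)
}
```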

The zero trust IP networking through a bootstrapped WireGuard VPN mesh (with OpenVPN/TCP/443 leaf nodes as needed) provides what is effectively the "VPC".

At this point we have the makings of a zero trust private mesh network that would work across the world, with a built in private cloud.

When we do start getting working builds going, I would like us to make sure the meshd engine announces itself in syslog, in Windows event viewer, in the Windows System tray or notifications area, as a MacOS notification, whatever. The point is, our copy of the command and control code makes every effort to make it clear that we are installed and running and to not be "hidden" on someone's computer. Most people think "Botnet! Evil!" We can't stop people from taking this open source software and recompiling it without the alerting code. But perhaps by making sure our software is very plainly open about being installed and running, we can avoid malware engines deciding we are a botnet.

We might also make it super easy to rip out, like one click removal. I just think we need to be very proactive about this from the start or we could get malware flagged and it'll be a nightmare trying to deal with that.

jgowdy commented 3 years ago

We will have to look at what kind of fast compression we want to apply to the message payloads before we encrypt them.

There are some interesting numbers here: https://www.lucidchart.com/techblog/2019/12/06/json-compression-alternative-binary-formats-and-compression-methods/

We will have to look at Brotli, Zstd, gzip -0, xz, etc. and see what provides the most benefit for our use case with the least impact. Networks are usually fast these days, so you don't want to spend too much time compressing. But some light compression could take a bite out of the encrypt / decrypt / signing size.
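As a placeholder until we benchmark the options, a compress-before-encrypt step could look like this using stdlib gzip at its fastest level; swapping in Brotli or Zstd later would only change this helper:

```go
package mesh

import (
	"bytes"
	"compress/gzip"
)

// compressPayload sketches light compression of the JSON payload before it
// is encrypted and signed; stdlib gzip at BestSpeed is only a placeholder
// until Brotli/Zstd/etc. are benchmarked for this use case.
func compressPayload(payload []byte) ([]byte, error) {
	var buf bytes.Buffer
	zw, err := gzip.NewWriterLevel(&buf, gzip.BestSpeed)
	if err != nil {
		return nil, err
	}
	if _, err := zw.Write(payload); err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```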

jgowdy commented 3 years ago

The WebSocket sub protocol will be where we describe what serialization format the messages are going to be in. WebSockets are binary safe and in fact can send both text and binary frames. We could in the future have a binary sub protocol for sending binary messages in something like BSON or whatever we determine is best.

jgowdy commented 3 years ago

We should have labeled channels in our messages. Probably labeled priority too. The current level we are working at can be the default channel, bulk priority. Then later we can add other channels and priority levels for messages. But for example we could choose to prioritize key-value-store replication traffic which would also be on its own message "channel". Similar to iSCSI and FCoE receiving different treatment, we would likely prioritize the Raft traffic followed by the persistence traffic, etc.
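A tiny sketch of how channel and priority could ride in the message Headers map; the keys and default values are illustrative only:

```go
// Possible header keys and default values for labeled channels and
// priorities; names and values are illustrative only.
const (
	HeaderChannel  = "channel"  // e.g. "default", "raft", "kv-replication"
	HeaderPriority = "priority" // e.g. "bulk", "normal", "high"

	DefaultChannel  = "default"
	DefaultPriority = "bulk"
)
```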

jgowdy commented 3 years ago

There will actually be 3 "databases". The shared global key value store that's brokered by the elected leader. The local key value store which is for local persistence for the Node (not the same as the Node's replica of the global persistence). Then the key-value-store-as-a-service aka "DynamoDB" is the third. The control plane on the node operates on the global data store and the local data store. The lambda scripts would operate on the key-value-store-as-a-service, in addition to it being available as an API etc.

jgowdy commented 3 years ago

When a Node has a path that uses TLS, we don't absolutely need the TLS certificate verification (due to messages being sender signed) nor do we require the TLS encryption in flight (due to the message being public key encrypted for the destination). Yet it only adds to our defense in depth to use TLS protocols like HTTPS, and if we do expect certificates to be valid, we should validate them. Thus Nodes that are configured with valid certificates advertise the path with their proper hostname and pass a flag indicating that this path should verify. Nodes that are configured for no-verify will use TLS as a layered defense against passive listeners and provide forward secrecy which our use of public key encryption does not. That being said, Nodes can update their encryption public key in the persistence, also effectively renaming themselves.

jgowdy commented 3 years ago

I think one of the better goals of this project is to create that WireGuard IP mesh network and then add modules for private cloud services that leverage the Nodes of the mesh to provide distributed JSON and object storage, and to provide Lambda style script execution / FaaS. For example, the object storage would just ensure configured replicas stay up to date, and up to 100% of the Nodes can bear a replica of the data. Both the Document/JSON (or K/V storage if you prefer) and the object storage will be offered through the Raft leadership. The ability to run scripts in response to HTTPS requests is largely what will make this work. Clients will be able to create easy web handlers using a scripting language, and put workflows in those handlers just like with Lambda. Container hosting can be something similar, taking the traits of the potential host Nodes into account.

It would actually probably make sense to make the container module (and maybe the FaaS module) and build them out of off the shelf components like Kubernetes and OpenFaaS which we’d orchestrate through our messaging mesh. Kubernetes has been forked and modified for simpler usage before in projects like k3s and minikube. We might have even better luck taking something thinned out like k3s and embedding that as a module in our system.

If we have a container module that encapsulates k3s to provide container hosting services across the nodes, and a FaaS module that does the same using OpenFaaS, we can offer these modules as part of our packages. Not just a self forming zero trust decentralized / distributed private IP network, but such a network that offers distributed fundamental cloud like services, leveraging the resources of the Nodes themselves. In more serious / professional setups, there may be dedicated hosts added to the mesh to handle the FaaS or container hosting module. But it should all be a matter of configuration for the particular use case.

This would give someone a one stop shop for setting up a company, providing both the “office LAN” in a distributed state, and allowing a company to configure to use private cloud like services over that distributed network. This is probably sufficient to meet 95% of the needs of small businesses across the globe. Later we add more added value for this use case like an off the shelf IMAP host and SMTP/Submission relay. Things that would potentially make the setup turn-key for the small business consumer. This is like the network equivalent of Windows SBS.

This is where I see an awesome intersection of value. The things that I, as a self styled “power user”, want for my own personal use cases, also seem to blend nicely with services that would provide great value to small to mid businesses who can’t afford or don’t trust the public cloud. With a set of distributed hosts, a meshed command and control network, and eventually consistent storage with an elected leader, we should be able to offer a basis for shared network services. We need to color in some of the picture here related to how those shared services are offered externally by Nodes with public IP connectivity (resources will probably depend on the “capability to host blah.com”, which steers applications to the right configured network endpoint).

If we can get to self forming distributed connectivity, and can provide distributed k3s and distributed OpenFaaS, we will have created something that is transformative to the industry IMO.

Both Kubernetes and OpenFaaS are written in golang. Kubernetes is Apache 2.0 licensed, OpenFaaS is MIT licensed. We just need to modularize these pieces so that they’re only used by setups that need those services. That being said, we might make OpenFaaS part of the core so we can leverage it in our offerings.

I’d love for one of the use cases for what we are building to be a one stop shop for standing up a modern startup or small business via distributed zero trust networking and private cloud services (that port easily to something like AWS when your company gets large enough). Just by virtue of designing systems based on document storage (DynamoDB), object storage (S3), FaaS (Lambda), and containers (k3s/k8s), the consumer of this system will be modeled well for moving to a public cloud.

jgowdy commented 3 years ago

Perhaps also a Samba based module providing authentication services with file and printer sharing, for clients not seeking to conform to the cloud model but simply wanting classic network services.

Since I’m obsessed with NTP we should include an NTP module for standing up NTP service for the network, and SNTP modules for the clients to estimate the error of their Node and compensate. Meshd tracks its own time using SNTP and the TSC or other system timer to compute the delta from the last sampled time stamp. There can be a module that forcibly corrects the actual system time too, or we can make that a separate module that depends on the NTP module (or maybe it’s just a config option of the NTP module).

jgowdy commented 3 years ago

Coalescing with one of my concurrent wild ideas, we should offer a DNS server module and an internet gateway module. The internet gateway module offers basic NAT IP forwarding and a REST API for HTTP Proxy based internet access to hosts, with default-to-deny / allowlist-only policy. This allows the system to implement egress filtering for the internet gateways. Using internet gateways, Nodes can optionally transition from using their local internet connection as their default gateway, and instead prioritize the gateway in the Node’s routing table. This would unify group internet traffic for monitoring and filtering and policy, if the consumer of the system decided that was valuable. In this way, the mesh can act as a decentralized “NordVPN” or PIA type offering too.

jgowdy commented 3 years ago

The DNS server module could be extended to offer PiHole type functionality as an added benefit. Offices often pay for web filtering products which block both known bad actor sites and potentially advertising and associated tracking garbage, among other things, and which virus scan downloads; we could use ClamAV for that if we figure out a working design for how to broker the scans. Maybe we have a trusted CA module, which instructs clients to add the trusted CA to each Node’s certificate store trusted roots. This would allow for internal TLS validation of web endpoints, in addition to allowing a WebProxy module to proxy all outbound HTTPS traffic for the network. This would allow us to provide virus scanning and anti-malware services. The client module on the Nodes would have to change the system web proxy config. Maybe we offer SOCKS too as part of our Internet Gateway module. So it can be an IP forwarding / NAT gateway, it can be a Web Proxy (+ cache), it can be a SOCKS proxy, etc. Each of these can be off the shelf open source projects packaged inside a module.

jgowdy commented 3 years ago

A module akin to a Sharepoint type solution? Works with LibreOffice and Office desktop apps?

jgowdy commented 3 years ago

Remote Desktop (FreeRDP) gateway module for remote access

jgowdy commented 3 years ago

Backup module for client or server backups using one off the shelf backup system per module. Try to have a super simple turnkey backup module with very few options and configuration for EZ mode setups, and then have advanced use cases too. Can potentially integrate with cloud based storage like Glacier

jgowdy commented 3 years ago

It may be that a pNFS or other NFSv4 era solution may allow us to offer a virtual NFS mount spanning the cluster with enough replicas to provide redundancy. Similar to Amazon EFS.

Obviously, for all shared storage that each Node contributes to one of these services, the data is encrypted in flight through our protocol, and it is encrypted before being written to local persistence (at rest). Key storage for Nodes to bootstrap on startup is TBD.

jgowdy commented 3 years ago

We can have some kind of modules or options for a cluster wide KMS service, which could be based on one or more HSM enabled (and advertising the capability) Nodes providing the root of trust.

I think modules are going to generally be generic interfaces for OnLoad, OnUnload, etc., so that a loaded module can do anything, but also have the ability for those generic modules to register more specific interfaces that the module implements. So if the mesh had a general concept of a KMS service interface, then when you load amazon_aws_kms.mod, it would call a method like RegisterKMS in its OnLoad (or maybe asynchronously in a background thread after loading is complete), which attempts to register an available KMS service with the leader who is managing service discovery. It writes the service discovery metadata to the Raft persistence such that if a new leader takes over, the state is not lost for registered services.

Registered services will have a heartbeat field in the metadata persistence that causes them to fall away if not refreshed periodically. This way if components of the mesh are down, they will be removed from service discovery. That way a consuming component of a particular registered service will error on the fact that there are no registered KMS modules active, rather than finding the metadata for a KMS module but then not being able to connect to it. Stale metadata can of course still be served for up to the heartbeat timeout period, but not on an ongoing basis, which would be suboptimal.
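A minimal sketch of the registered-service metadata with a heartbeat expiry check; the record shape and field names are assumptions:

```go
package mesh

import "time"

// ServiceRecord sketches the service-discovery metadata a module like
// amazon_aws_kms.mod might register with the leader; fields are assumptions.
type ServiceRecord struct {
	Kind          string    // e.g. "kms"
	NodeID        string    // public key of the Node offering the service
	Endpoint      string    // how consumers reach the service over the mesh
	LastHeartbeat time.Time // refreshed periodically by the registering module
}

// Expired reports whether the record should fall out of service discovery
// because its heartbeat has not been refreshed within the timeout.
func (r ServiceRecord) Expired(timeout time.Duration, now time.Time) bool {
	return now.Sub(r.LastHeartbeat) > timeout
}
```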

jgowdy commented 3 years ago

Modules themselves can leverage the coordination provided by the Raft leader election, such that an instance of a module will detect that it is running on what is now the leader, and broker the other instances of itself running on nodes that are not the leader. There would be a communication “channel” through the messaging mesh that allows the same module instance on different nodes to communicate with the leader’s instance of that module. This allows modules to leverage the full capabilities of the original self forming mesh to do all sorts of cool additional features and functionality.

jgowdy commented 3 years ago

Could be something like two types of module: the type that can only handle a reload/restart to take on Leader behavior for the module, and the type that can dynamically reconfigure to be the leader.

The default handler for OnBecameLeader will be to call shutdown. When the module exits, the Node will decide by configuration that it's supposed to be running that module and restart it. One of the module startup parameters is “am I the leader?”, so when the module restarts it will know it is the leader from the start. OnLostLeader will also have a default handler to shutdown, with the same effect, starting back up with leader = false. If modules can handle dynamically adjusting to leadership role and follower role, they can override those two handlers with their own implementation that does what it needs to. The restart case is better for extreme simplicity in modules. The dynamic case is better for modules that need to preserve their state between role transitions in order to be optimal implementations. We default to the restart case because it has to work anyway for the system to work, and it doesn't require any special action on behalf of the developer.
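A rough Go sketch of the two module styles; the interface, the embedded default, and all names here are illustrative, not a settled module API:

```go
// Module and RestartingModule sketch the two styles described above; the
// names and shapes are illustrative. A module that embeds RestartingModule
// gets the "shut down and restart with the new role" default; a module that
// overrides the two role hooks reconfigures dynamically instead.
type Module interface {
	OnLoad(isLeader bool) error
	OnUnload() error
	OnBecameLeader()
	OnLostLeader()
}

// RestartingModule supplies the default handlers: exit and let the Node
// restart the module with the correct "am I the leader?" startup parameter.
type RestartingModule struct {
	Shutdown func() // provided by the node runtime
}

func (m RestartingModule) OnBecameLeader() { m.Shutdown() }
func (m RestartingModule) OnLostLeader()   { m.Shutdown() }
```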

jgowdy commented 3 years ago

These generic modules that can be loaded, unloaded, and are provided an interface to the mesh’s messaging network and Raft capabilities are the basis of any functionality added to the messaging mesh and Raft setup. This way all of the “forks” or special interest branches of the project continue to share the same core code base.

In some ideal great success for our project, there could even be closed source commercial modules providing added value at a cost. The MIT license of the project facilitates this. I believe the project should generally be business friendly and not discourage such proprietary extensions of the project, even as we push forward with open source equivalents (sometimes duplicating or superseding the proprietary functionality). We should not inhibit the implementation of open source functionality in modules to prop up any proprietary modules, and the interfaces must stay generic, even if certain interfaces are introduced to facilitate particular modules being written.

jgowdy commented 3 years ago

Everyone should be aware that this project is MIT licensed and thus companies like Amazon could fork or start selling this package as a service. The MIT license allows this; this is intentional; they are free to do as they please as long as they honor the license. We should not become one of these projects with “sour grapes” over actions that are plainly allowed under our open source license.

jgowdy commented 3 years ago

Everything in this brain dump Issue is my opinion. Statements I make about what we “will” do are non-binding and subject to the consensus of the team. The goal of this stream of consciousness is to share ideas with the team, and any “absolute” or “commanding” or “authoritatively directing / policy making” statements should be reinterpreted as suggestions or conjecture or proposals.

I would suggest that we adopt a “without objection” decision making process to allow the project to proceed at the fastest pace. Decisions are always subject to re-evaluation if the consensus determines that’s best.

jgowdy commented 3 years ago

Per our original conversations on Slack with the team, here is my proposed team charter:

The core team members shall be composed of the original team (enumerate team members here) to start, and all other joining and departure of core team membership shall be determined by a simple majority vote. No particular reason is required for the core team to remove a member, as this isn’t a court room. The project will have a Leader elected yearly by simple majority. I propose to the team that I be given the first year of leadership to help guide the project along the original vision, after which we select others to keep the project evolving. The power of the Leader is simply to tie break any even votes (after casting their own vote as a Member). Otherwise the Leader helps promote the project, frame the conversations, and generally provides friendly advisory leadership to the project. The real power is in the consensus of the simple majority of the core team members. The project structure should remain as simple as possible, focus on the fact that everyone is a volunteer, and we all have our own ideas, visions, and goals for the project. Light touch governance and self determination are the heart of this proposed philosophy.

Any future additions or modifications to this charter require only a simple majority, as do any other sets of rules. This includes changes to the code of conduct, license, project name, and any other aspect or facet of the project without limitation.

The project should not demand explicit time commitments or deadlines from core team members, nor should anyone be expected to apply labor anywhere they do not choose to. We are all adults with lives and careers and other commitments. We all own our own labor. Please do not put time related pressure on fellow team members.

Inactive core team members can be removed by the core team consensus (or any other reason as previously established). As a general guideline someone who has not participated in the project in some form or fashion (email, GitHub comment, PR, anything) over a period of six months could be a potential candidate for removal. Removal for inactivity should generally be considered non-prejudicial. The nature of other removals and eligibility for rejoining are entirely within the power of the core team consensus.

Any core team member or members can start any sort of commercial enterprise they choose related to the project, and they have the freedom of association as those commercial enterprises are outside the scope of the project. If there is to be a primary commercial endeavor it is suggested that all who have made a material contribution are included as equal partners, but this suggestion is non-binding.

Pull requests should require two approvals, including at least one from a core team member who did not author the pull request or its commits, for the sake of quality, prior to merge. Code changes can be reverted / backed out and reconsidered by the simple majority.

The philosophy should emphasize, “we don’t need drama, just fork and do your own thing, no hard feelings.”

jgowdy commented 3 years ago

https://datatracker.ietf.org/doc/html/rfc4787