jefflill closed this issue 4 years ago
We discussed this and have decided to go with this plan. I'm closing this issue and we'll track progress in #937.
Some additional comments and tweaks:
I did some more research: Git implements something called subtrees, which looks to be closer to what we want than submodules. It still doesn't seem ideal though.
While talking to Stuart yesterday, I thought of the name neonLIBRARY when discussing our nuget packages. I really like this and went ahead and registered neonlibrary.com, .net and .io. We'll want to use neonLIBRARY when referring to library releases.
I also believe it's time to distinguish between neonDESKTOP and neonKUBE. We've been conflating these as neonKUBE. Going forward, neonDESKTOP will refer to the client side artifacts used to deploy and manage clusters and neonKUBE will refer to the cluster itself. We'll also rename neonKUBE-setup-#.#.# --> neonDESKTOP-setup-#.#.#
I also noticed that we've been using the term neonKUBE Desktop. This is yucky. Let's formalize this as just neonDESKTOP. I've been using that when speaking; we just need to update the documentation.
While working on #937, I realized that it's probably time for us to rethink how we're versioning and doing releases.
History
Up until now, the source repo has been used for building and releasing the **Neon.* libraries, neon-cli, neonDESKTOP** and, implicitly, the neonKUBE cluster. We've been releasing these together under a single version number: the libraries are published to nuget, the library documentation is published to https://doc.neonkube.com, and other build artifacts (including client setup and the library help CHM) are published as releases on this GitHub repo.
This single version scheme has worked OK so far, but only because we haven't started doing public preview releases of neonKUBE. Once we do that, and people start having things like neonDESKTOP installed on their workstations, we won't want to release a new version of neonDESKTOP (and perhaps auto-update users) just because we updated an unrelated library like Neon.Temporal.
Our neonKUBE versioning strategy to date has also been confusing. The original idea, going way back to the neonHIVE days, was to allow a user deploying a cluster to customize the versions of various components like Docker, Kubernetes, Istio, Helm, and the Kubernetes Dashboard via `KubernetesOptions` in the cluster definition. We've decided that allowing this is a really bad idea from a testing and support perspective: way too many combinations of components. What's also totally lacking right now is a way to describe the version of a deployed cluster. Say we just deployed a cluster. We could say it's running some version of Kubernetes, say 16.0, but what if we've added new stuff to neonKUBE while it still runs Kubernetes 16.0? Saying the new cluster is still a 16.0 cluster doesn't really convey the change.
GitHub Repo Thoughts
One idea is to split the neonKUBE repo up into two or more independent repos (but we'd continue with centralized issues). We could potentially:
Another thing we need to think about is where will we put any proprietary code that will not be made public. This could include things like neonCLOUD that will manage cluster setup and updates perhaps very soon and perhaps other premium features.
Proposal
Here's a summary of what I think we should do:
I really want to keep neonKUBE together as a single repository. It's just too convenient to have the libraries and code in the same Visual Studio solution and then be able to hit F5 and start debugging. This also makes building release artifacts easier. There are a couple alternatives, but these don't seem very attractive:
We could use Git submodules. This would involve having the neonKUBE repo link to any library and other repos and then having the neonKUBE solution expect these to end up in well-known locations so those projects could be referenced by the solution. This seemed to have some promise, but after reading the submodules link above I'm not so sure. Requiring every dev to explicitly initialize/update these submodule links seems really problematic to me, and Visual Studio's integrated support for submodules also seems lacking.
We could have a separate library repo publish nuget packages to a local directory during builds on the developer workstation and have the neonKUBE solution configured to pick these up. There are some serious problems with this:
Create a new private neonCLOUD repo for the proprietary neonKUBE stuff. This will include its own issues.
I think we should have the libraries continue to use the current version scheme and also continue to be versioned together. The way to think about this is that libraries with the same version have been tested against each other and should be installed and updated together. MSFT does this with related libraries they publish. This does mean that we'll sometimes release individual libraries that didn't actually change, but I'm not very concerned about that. On the plus side, this makes it easier to maintain release notes as well as to build and publish API documentation (which currently combines documentation for all libraries).
We should start a new versioning scheme for neonKUBE, starting at something like 0.1.0-alpha. This version would implicitly specify the versions of Kubernetes and the other system components installed, including Docker, Helm, Dashboard, Istio, Calico, etc., as well as (eventually) the Ubuntu release; we'd include this information in the release notes. neonDESKTOP and neon-cli would also be versioned with this, we'd allow users to specify the neonKUBE version in their cluster definition, and these tools would need to be capable of installing or managing clusters with the current or older neonKUBE versions.
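To make the backward-compatibility rule concrete, here's a minimal sketch (hypothetical, not actual neon-cli code) of the check that a neonDESKTOP/neon-cli build would perform: it can manage clusters at its own neonKUBE version or older, never newer. Pre-release suffixes like `-alpha` are ignored here for simplicity.

```python
# Hypothetical sketch of the proposed rule: a neonDESKTOP/neon-cli build
# can install or manage clusters with the current or an older neonKUBE
# version, but not a newer one.
def parse_version(v: str) -> tuple:
    """Parse '0.1.0-alpha' into a comparable tuple, dropping any
    pre-release suffix (a simplification for this sketch)."""
    core = v.split("-", 1)[0]
    return tuple(int(part) for part in core.split("."))

def can_manage(client_version: str, cluster_version: str) -> bool:
    """Return True if this client build may manage the given cluster."""
    return parse_version(cluster_version) <= parse_version(client_version)
```

For example, a 0.2.0 client could manage a 0.1.0-alpha cluster, but a 0.1.0 client would refuse a 0.2.0 cluster.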
neonDESKTOP/neon-cli will need to maintain tables mapping each neonKUBE version to the individual sub-component versions (like Docker, Kubernetes, Istio, ...). We were doing something like this in the faked `HeadendClient`, but this needs to be explicitly baked into these apps. We also need to embed the Helm charts for each release in the apps. We're currently pulling these from the master branch, but that's actually pretty broken. We could reference release branches instead, but we're not keeping those long term and it's just weird to have a dependency on GitHub just to deploy a cluster.
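One way to bake such a mapping into the apps (a hypothetical sketch, not the actual `HeadendClient` API, and the component version numbers below are illustrative rather than real tested combinations) is a static table keyed by neonKUBE version:

```python
# Hypothetical sketch: a baked-in table mapping each neonKUBE release to
# the sub-component versions it was tested against.  All version numbers
# here are made up for illustration.
COMPONENT_VERSIONS = {
    "0.1.0-alpha": {
        "kubernetes": "1.16.0",
        "docker": "19.03.5",
        "istio": "1.4.2",
        "helm": "3.0.2",
    },
}

def component_version(neonkube_version: str, component: str) -> str:
    """Return the pinned sub-component version for a neonKUBE release,
    raising for unknown version/component combinations."""
    try:
        return COMPONENT_VERSIONS[neonkube_version][component]
    except KeyError:
        raise ValueError(
            f"unknown neonKUBE version or component: "
            f"{neonkube_version}/{component}")
```

Because the table ships inside the app, deploying a cluster needs no network call to resolve component versions, which addresses the "dependency on GitHub just to deploy a cluster" problem above.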
We need to think through how we version any Docker images we use in the cluster. This would include things like our logging services, our cluster operators, as well as other folks' infrastructure images like Kibana, Elasticsearch, Cortex, YugabyteDB, etc. I think we should probably tag our internal cluster images using the neonKUBE version number with clusters built from a release branch obtaining images from the nkubeio Docker organization and builds from non-release branches using nkubedev.
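The tagging rule proposed above can be sketched as follows (hypothetical helper, assuming only the `nkubeio`/`nkubedev` organizations and neonKUBE-version tags described in this comment):

```python
# Hypothetical sketch of the proposed image tagging scheme: internal
# cluster images are tagged with the neonKUBE version, pulled from the
# nkubeio organization for release-branch builds and from nkubedev for
# all other branches.
def image_ref(image: str, neonkube_version: str, release_branch: bool) -> str:
    """Build a fully qualified Docker image reference for an internal
    cluster image such as a logging service or cluster operator."""
    org = "nkubeio" if release_branch else "nkubedev"
    return f"{org}/{image}:{neonkube_version}"
```

So a release build would reference something like `nkubeio/neon-log-collector:0.1.0-alpha`, while a feature-branch build of the same component would come from `nkubedev`.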
We'll eventually be building Hyper-V, XenServer, as well as Azure, AWS, and Google Cloud VM base images with all of these things preinstalled and preconfigured as much as possible. These will also be versioned using the neonKUBE version.
We're currently publishing all of the `Neon.Kube.*` libraries to nuget. This made more sense in the neonHIVE days when we started the pattern, but it doesn't really make sense any more. Other than `Neon.Kube.Service`, none of these libraries are really intended for end-users, and `Neon.Kube.Service` is essentially independent of neonKUBE, with only one unimportant dependency on `Neon.Kube`. I think we should rename `Neon.Kube.Service` --> `Neon.Service` and cease publishing the `Neon.Kube.*` libraries.
What this Means
Only packages useful to end users will be published. We'll version these separately from neonKUBE clusters and components.
neonKUBE clusters and build artifacts including client setup will be versioned and released together.
neonDESKTOP/neon-cli will:
If or when the time comes to publish one or more neonKUBE related packages for end users, we'll version those with the neonKUBE version, and they'll presumably also be backwards compatible with older clusters.