Closed clubanderson closed 9 months ago
@clubanderson It'd be great if you could present at a TAG Runtime meeting. I've created https://github.com/kcp-dev/edge-mc/issues/283 to coordinate this.
on your calendar for May 4th. See you then
Thank you for the opportunity to present at the May 18th TAG Runtime community meeting. We had lots of great feedback. We are looking forward to hearing your decision on our inclusion in the CNCF Sandbox. Thank you.
updated the application based on current information
Does this contribution include KCP itself?
No. We can use KCP as a provider of logical-cluster functionality. In the weeks ahead we will announce support for other logical-cluster providers such as kind, k3s, k0s, and HyperShift. I believe KCP has a separate Sandbox application and is unrelated to KubeStellar.
@amye is there more information you need?
CNCF has another edge computing project, KubeEdge. Is there any integration of this project with KubeEdge?
After review in the CNCF Sandbox/TAG call we would request that you reapply after the repository has been reorged. The project seems to be in an early stage and we want to understand the broader contribution/interest when it's in a state consumable outside the sponsoring organization. Thank you!
@erinaboyd thank you for your review and decision. Are there any other factors that we can address? If I am reading your comments correctly, I see that the KubeStellar CNCF sandbox application was denied at this time due to:
1) our source code was located at https://github.com/kcp-dev/edge-mc
2) we have not yet relocated to https://github.com/kubestellar/kubestellar, as we plan to complete in July
3) the level of contribution and interest in KubeStellar is not broad enough at this time
Are those the only factors that contributed to our denial?
We’ve completed the move of our repo to https://kubestellar.io/code (https://github.com/kubestellar/kubestellar)
@amye would it be possible to postpone our application one more cycle to your November vote? Due to summer vacations we have slower throughput on our kcp decoupling efforts. We would like a completed decoupling before your next vote on our status. Sept 12th would not afford us enough time to polish our documentation. We are hoping you have another vote in November. I feel confident that we can make that vote based on our progress.
@clubanderson - the next review is December 12th. Ready to be included in the group?
Hi Amye, yes. KubeStellar would be honored to have your review. We had a great show at KubeCon NA this week. Lots of interest and new use cases. We believe there is lots of potential to work with the landscape members to help them become multicluster enabled without having to invent it themselves. Thank you for keeping us on track.
Would it be possible to update the TAG with our updates since our last presentation? I'd be happy to pay a visit to do so.
You'd have to check with them on their schedules, December 12th is our next sandbox review meeting.
Ok. Then I will just leave my comments here in the hopes they are reviewed.
1) we have separated completely from KCP's GitHub repo
2) as I write this, we are putting the final touches on decoupling from KCP as our exclusive logical-cluster backend; we now offer options for kind and KubeFlex, as well as others that will arise from SIG API Machinery enhancements coming in the near future
3) we have streamlined our messaging and landing page (kubestellar.io)
4) we have grown in adoption, in terms of stars and forks
5) we are continuing to work with projects in the existing landscape
6) our lightning talk at KubeCon was well received
We believe there is a bright future for, and need for, our project. Because it is modular, it is a great fit for consistently extending single-cluster projects into multicluster-capable solutions.
/vote-sandbox
@amye has called for a vote on [Sandbox] KubeStellar (#32).
The members of the following teams have binding votes:

| Team |
|---|
| @cncf/cncf-toc |
Non-binding votes are also appreciated as a sign of support!
You can cast your vote by reacting to this comment. The following reactions are supported:
| In favor | Against | Abstain |
|---|---|---|
| 👍 | 👎 | 👀 |
Please note that voting for multiple options is not allowed and those votes won't be counted.
The vote will be open for 7 days. It will pass if at least 66% of the users with binding votes vote In favor 👍. Once it's closed, results will be published here as a new comment.
/check-vote
So far, 18.18% of the users with binding vote are in favor (passing threshold: 66%).
| In favor | Against | Abstain | Not voted |
|---|---|---|---|
| 2 | 0 | 0 | 9 |
| User | Vote | Timestamp |
|---|---|---|
| TheFoxAtWork | In favor | 2023-12-12 18:40:14.0 +00:00:00 |
| justincormack | In favor | 2023-12-13 15:55:41.0 +00:00:00 |
| @rochaporto | Pending | |
| @mattfarina | Pending | |
| @RichiH | Pending | |
| @mauilion | Pending | |
| @dzolotusky | Pending | |
| @cathyhongzhang | Pending | |
| @nikhita | Pending | |
| @kgamanji | Pending | |
| @erinaboyd | Pending | |
/check-vote
So far, 63.64% of the users with binding vote are in favor (passing threshold: 66%).
| In favor | Against | Abstain | Not voted |
|---|---|---|---|
| 7 | 0 | 0 | 4 |
| User | Vote | Timestamp |
|---|---|---|
| dzolotusky | In favor | 2023-12-15 14:58:37.0 +00:00:00 |
| kgamanji | In favor | 2023-12-17 13:14:20.0 +00:00:00 |
| TheFoxAtWork | In favor | 2023-12-12 18:40:14.0 +00:00:00 |
| justincormack | In favor | 2023-12-13 15:55:41.0 +00:00:00 |
| rochaporto | In favor | 2023-12-17 22:14:26.0 +00:00:00 |
| mauilion | In favor | 2023-12-16 3:21:43.0 +00:00:00 |
| mattfarina | In favor | 2023-12-18 13:49:24.0 +00:00:00 |
| @RichiH | Pending | |
| @cathyhongzhang | Pending | |
| @nikhita | Pending | |
| @erinaboyd | Pending | |
The vote passed! 🎉
72.73% of the users with binding vote were in favor (passing threshold: 66%).
| In favor | Against | Abstain | Not voted |
|---|---|---|---|
| 8 | 0 | 0 | 3 |
| User | Vote | Timestamp |
|---|---|---|
| @TheFoxAtWork | In favor | 2023-12-12 18:40:14.0 +00:00:00 |
| @mauilion | In favor | 2023-12-16 3:21:43.0 +00:00:00 |
| @kgamanji | In favor | 2023-12-17 13:14:20.0 +00:00:00 |
| @rochaporto | In favor | 2023-12-17 22:14:26.0 +00:00:00 |
| @mattfarina | In favor | 2023-12-18 13:49:24.0 +00:00:00 |
| @justincormack | In favor | 2023-12-13 15:55:41.0 +00:00:00 |
| @dzolotusky | In favor | 2023-12-15 14:58:37.0 +00:00:00 |
| @erinaboyd | In favor | 2023-12-19 14:04:37.0 +00:00:00 |
Thank you all for accepting our project into the CNCF Sandbox program!!!
Hi @clubanderson ! Welcome aboard! We're very excited to get you onboarded as a CNCF sandbox project! Here's the link to your onboarding checklist: https://github.com/cncf/sandbox/issues/192
Here you can communicate any questions or concerns you might have. Please don't hesitate to reach out, I am always happy to help!
Application contact emails
andy@clubanderson.com, dettori@us.ibm.com
Project Summary
multicluster configuration management for edge, multi-cloud, and hybrid cloud
Project Description
What is KubeStellar?
We are an open-source community focused on creating a flexible solution for the challenges associated with multicluster configuration management for edge, multi-cloud, and hybrid cloud:

- Disconnected operation: clusters don't always have connectivity; we get it
- Large-scale deployments: the world is a big place and clusters can exist anywhere: farms, cruise ships, oil rigs, even space
- Small locations: clusters come in all shapes and sizes; MicroShift, K3s, and kind can help small locations take part in a Kubernetes environment
- Different types of clouds: edge, sovereign, regulated, high-performance, on-prem; we've got you covered
So what are we working on to solve these challenges?
- Desired placement expression: a way for one center object to express a large number of desired copies
- Scheduling/syncing interface: something that scales to a large number of destinations
- Rollout control: programmatic control of rollouts, possibly including domain-specific logic
- Customization: a way for one pattern in the center to express how to customize for all the desired destinations
- Status from many destinations: a way to access status from individual edge copies
- Status summarization: a way to say how statuses from edge copies are processed/reduced along the way from edge to center
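The status-summarization goal above can be sketched in a few lines: many per-cluster statuses flow toward the center and are reduced into one summary object. This is an illustrative sketch only; the field names and shapes below are hypothetical, not KubeStellar's actual API.

```python
from collections import Counter

def summarize_statuses(statuses):
    """Reduce many per-cluster workload statuses into one summary.

    `statuses` maps cluster name -> condition string (e.g. "Ready").
    Returns per-condition counts plus the clusters that are not Ready,
    so the center sees one small object instead of N raw statuses.
    """
    counts = Counter(statuses.values())
    not_ready = sorted(c for c, s in statuses.items() if s != "Ready")
    return {"counts": dict(counts), "notReady": not_ready}

# Example: three edge clusters reporting back to the center.
summary = summarize_statuses({
    "store-001": "Ready",
    "store-002": "Ready",
    "ship-042": "Degraded",
})
print(summary)
# {'counts': {'Ready': 2, 'Degraded': 1}, 'notReady': ['ship-042']}
```

In an n-tier topology, an intermediary could apply the same reduction to summaries from its children, so the data volume shrinks at each hop from edge to center.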
Org repo URL
https://github.com/kubestellar
Project repo URL
https://github.com/kubestellar/kubestellar
Additional repos
https://github.com/kubestellar/helm
https://github.com/kubestellar/homebrew-kubestellar
Website URL
https://kubestellar.io
Roadmap
https://docs.kubestellar.io/main/Coding%20Milestones/PoC2023q4/outline/
Roadmap context
We are focused on simplifying our user experience and scalability at this time.
Contributing Guide
https://docs.kubestellar.io/main/Contribution%20guidelines/CONTRIBUTING/
Code of Conduct (CoC)
https://github.com/kubestellar/kubestellar/blob/main/CODE_OF_CONDUCT.md
Adopters
Turbonomic, MAS/MVI, OpenZiti (in-progress), ArgoCD, Kyverno, Cornell University, MCAD
Contributing or Sponsoring Org
IBM
Maintainers file
https://github.com/kubestellar/kubestellar/blob/main/OWNERS
IP Policy
Trademark and accounts
Why CNCF?
Our project is born from, and is built on, the technical and governing principles of the CNCF. We are fervent believers that cloud native open-source communities are the best way to a) develop projects that are instantly compatible with one another, b) make projects easy for community members and clients to adopt and absorb, and c) foster the democratization of technology on an equitable platform. There are opportunities at all levels of our stack to contribute and be part of a movement that is bigger than anything any of us could achieve individually. We are blessed to work for an organization and research institution that allow us to work full time in an open-source capacity.
Benefit to the Landscape
We believe our community will contribute significant advancements in the scalability of object storage and the configuration of large numbers of heterogeneous edge locations back to Kubernetes, benefiting a large number of existing CNCF projects. We have already been asked to help ArgoCD overcome some of its scalability challenges as a result of our work. We are working with the European Space Agency to contribute a vision and architecture for deep-space exploration using a resilient platform capable of withstanding lengthy windows of disconnected operation. We are working with the IoT Edge Working Group and its members (from Cisco and VMware) to create an Edge Native Application Manifesto to help standardize the way organizations and individuals write and refactor edge applications. We are also working with Chick-fil-A to understand how their Restaurant Compute Platform (RCP) operates and how it might benefit from the gaps our community has identified and is working to fill. We have a very refined focus and a clear mission. We have learned a lot from the current CNCF members, and we would be honored to be considered as a sandbox member.
Cloud Native 'Fit'
We are a state-based-management (Kubernetes) community working toward providing API extensions and operators/controllers that will dramatically improve the consistency, compatibility, and scalability of existing configuration management for heterogeneous multicluster, edge, and multicloud locations.
Cloud Native 'Integration'
We are dependent on Kubernetes. We have decoupled from KCP and have made our project more generic, able to work with many Kubernetes distributions.
Cloud Native Overlap
We do not think there is direct overlap at this time with other CNCF projects. However, we do touch on some of the areas other projects are investigating in the space of configuration management, scalable Kubernetes, and alternative object storage (kine).
Similar projects
Some have compared us with OCM, but KubeStellar takes a different approach to workload lifecycle management and adds some interesting capabilities. First, we do not use a manifest to bundle resources that you intend to have delivered to your multicluster/multicloud/edge environment: OCM uses ManifestWork, while we allow unbundled usage of any Kubernetes resource, namespaced or cluster-scoped. Second, we have the notion of upsyncing resources. This comes in handy when a spoke/edge needs to 'communicate' back to the hub in some way other than just a status update in the original CR. Third, we have the capability to use clusters as transport rather than target: we denature resources so that they are not unpacked/applied on clusters where they are not intended to execute. This allows us to go from a 2-tier hub-and-spoke environment to an n-tier hub-intermediary-spoke environment. Finally, add the ability to customize workloads in groups and summarize statuses to reduce the cognitive load on your DevOps teams. This is really useful when you want to reach far-edge clusters that are not directly connected to your hub.
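The 'denaturing' idea mentioned above can be illustrated with a minimal round-trip sketch: the original resource is carried through intermediary clusters as opaque data inside an inert object, then restored only at its destination. The carrier format below is purely illustrative, not KubeStellar's actual encoding.

```python
import base64
import json

def denature(resource: dict) -> dict:
    """Wrap a Kubernetes resource so transport clusters store it inertly
    (as opaque data) instead of unpacking and running it."""
    payload = base64.b64encode(json.dumps(resource).encode()).decode()
    return {
        "apiVersion": "v1",
        "kind": "ConfigMap",  # inert carrier object on intermediary clusters
        "metadata": {"name": resource["metadata"]["name"] + "-carrier"},
        "data": {"payload": payload},
    }

def renature(carrier: dict) -> dict:
    """Unwrap on the destination cluster, restoring the original resource."""
    return json.loads(base64.b64decode(carrier["data"]["payload"]))

deploy = {"apiVersion": "apps/v1", "kind": "Deployment",
          "metadata": {"name": "web"}, "spec": {"replicas": 3}}
assert renature(denature(deploy)) == deploy  # lossless round trip
```

Because the intermediary sees only a ConfigMap-shaped blob, it never schedules the Deployment itself, which is what makes the n-tier hub-intermediary-spoke topology safe.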
Product or Service to Project separation
I don't know what information this section is expected to contain based on the heading alone. If you can expand on what is meant by separation, I might be able to better answer this request.
Project presentations
We have presented to the IoT Edge Working Group: we were on the agenda for Feb 22, 2023, and presented KubeStellar to the IoT Edge community, but I see no recording link. Agenda: https://docs.google.com/document/d/1Yuwy9IO4X6XKq2wLW0pVZn5yHQxlyK7wdYBZBXRWiKI/edit#
We have presented to the Kubernetes SIG MultiCluster group on May 30, 2023 https://docs.google.com/document/d/18mk62nOXE_MCSSnb4yJD_8UadtzJrYyJxFwbrgabHe8/edit
We presented at GitOpsCon 2023 in Vancouver A Quantitative Study on Argo Scalability - Andrew Anderson & Jun Duan, IBM https://www.youtube.com/watch?v=PB3OTXDjFjg&t=4s
Project champions
- Steve Wong - IoT Edge WG
- Stefan Schimanski - Upbound.io (KubeStellar Governance Board member)
- Sebastian Scheele - Kubermatic (KubeStellar Governance Board member)
- Clint Dovholuk - OpenZiti
- Hakim Weatherspoon - Professor; Associate Director for the Cornell Institute for Digital Agriculture (CIDA); Chief Executive Officer (CEO) and Co-Founder of Exotanium, Inc.
- Gosia Steinder - IBM Fellow, Hybrid Cloud Research (KubeStellar Governance Board member)
- Endre Sara - Founding Engineer, Causely
Additional information
Here are some urls that link to important information about our project:
First, join our mailing list to get access to some of these URLs: kubestellar.io/join_us

- kubestellar.io/agenda - our community meeting agenda Google doc
- kubestellar.io/blog - our Medium reading list
- kubestellar.io/code - our current GitHub repo (wherever that is)
- kubestellar.io/community - our stable docs community page
- kubestellar.io/drive - our Google Drive
- kubestellar.io/joinus - our dev mailing list, where you join and get our invites
- kubestellar.io/join_us - also our dev mailing list
- kubestellar.io/linkedin - our LinkedIn filter (soon, our page)
- kubestellar.io/tv - our YouTube channel
- kubestellar.io/youtube - also our YouTube channel
- kubestellar.io/infomercial - our infomercial, which premieres on June 12th at 9am

and, very importantly, kubestellar.io/quickstart - our stable quickstart
Thank you for the opportunity to apply for membership. Our community would be honored to join your organization. Please do not hesitate to reach out with questions or a need for further explanation or documentation. Happy to oblige in whatever way we can.
Thanks, Andy Anderson KubeStellar.io