kubewharf / kubebrain

A High Performance Metadata System for Kubernetes
Apache License 2.0

Question: A little confusion about statelessness, availability, and scalability. #8

Closed: STRRL closed this issue 2 years ago

STRRL commented 2 years ago

I must say that KubeBrain is definitely a GREAT project! And thank you for open-sourcing it! 🎉🎉🎉

I have also thought about how to run Kubernetes on TiKV and how to make the adapter layer efficient and flexible. I have some questions about the design and implementation of KubeBrain. Please let me know what I missed! ❤️

It seems KubeBrain uses a “master-slave” architecture, and the “watch routine” only runs on the leader. So when using KubeBrain with the official kube-apiserver, I can only configure the leader’s IP as the etcd server. But leadership can change, while the configuration on kube-apiserver is not easy to change. So I think there is currently no easy way to use multi-instance KubeBrain with the official kube-apiserver.
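For example, with the plain etcd-style configuration the endpoint is pinned at startup (the address below is a placeholder):

```sh
# kube-apiserver only knows this one fixed address; if the KubeBrain leader
# moves to another node, the flag must be updated and the apiserver restarted.
kube-apiserver \
  --etcd-servers=http://<kubebrain-leader-ip>:2379 \
  [other flags]
```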

I also noticed that a proxy for KubeBrain is on the ROADMAP. Once the proxy behaves the way etcd does, I think this problem will be resolved. :)

Or is there another suggested way to set up multi-instance KubeBrain with the official kube-apiserver?

Another issue with the “single master” design is scalability. The performance of a master-slave architecture is bounded by the capacity of the single master node. Is there a plan or idea for migrating to a “multi-master” design?

divanodestiny commented 2 years ago
  1. If you enable the flag --compatible-with-etcd, slaves will automatically forward txn and watch requests to the leader. You don't need to make any changes to the official kube-apiserver if you just want to have a try (see the example after this list). But used this way, KubeBrain's API is not exploited to its fullest. We may later open-source a patch for the apiserver that implements another Storage, different from etcd3, based on KubeBrain-Client.
  2. Our planned proxy aims to solve the problem that only the single leader can write today; we want to add a proxy layer to explore whether write processing can be made scalable. It may involve changes to the apiserver and other Kubernetes components, so we will share and discuss it once the design and demo are complete.
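As a concrete illustration of point 1 (the binary name, addresses, and ports below are placeholders, not the documented invocation):

```sh
# Start each KubeBrain instance with etcd compatibility enabled; the binary
# name and remaining flags are illustrative placeholders.
kubebrain --compatible-with-etcd [other flags]

# Point kube-apiserver at every instance. Followers forward txn and watch
# requests to the current leader, so this list never has to change on failover.
kube-apiserver \
  --etcd-servers=http://kubebrain-0:2379,http://kubebrain-1:2379,http://kubebrain-2:2379 \
  [other flags]
```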
STRRL commented 2 years ago

If you enable the flag --compatible-with-etcd, slaves will automatically forward txn and watch requests to the leader. You don't need to make any changes to the official kube-apiserver if you just want to have a try.

Oh yeah! I found the related code in etcdproxy. That makes sense!
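To restate my understanding, here is a minimal sketch of that forwarding pattern (my own illustration, not the actual etcdproxy code; the leader address and the empty transaction are placeholders):

```go
package main

import (
	"context"
	"log"

	pb "go.etcd.io/etcd/api/v3/etcdserverpb"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// leaderForwarder holds a gRPC client for the leader's etcd-compatible endpoint.
type leaderForwarder struct {
	leaderKV pb.KVClient
}

// Txn relays an etcd-style transaction unchanged and returns the leader's
// response, so kube-apiserver never needs to know which node is the leader.
func (f *leaderForwarder) Txn(ctx context.Context, req *pb.TxnRequest) (*pb.TxnResponse, error) {
	return f.leaderKV.Txn(ctx, req)
}

func main() {
	// "kubebrain-leader:2379" is a hypothetical address for illustration.
	conn, err := grpc.Dial("kubebrain-leader:2379",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	f := &leaderForwarder{leaderKV: pb.NewKVClient(conn)}
	resp, err := f.Txn(context.Background(), &pb.TxnRequest{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("forwarded txn, leader revision: %d", resp.Header.Revision)
}
```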

But used this way, KubeBrain's API is not exploited to its fullest. We may later open-source a patch for the apiserver that implements another Storage, different from etcd3, based on KubeBrain-Client.

I also hope that one day KubeBrain can be one of the Storage implementations in upstream Kubernetes! ❤️
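If I understand the Storage idea correctly, such a patch would satisfy the storage.Interface from k8s.io/apiserver with a KubeBrain client in place of etcd3. A heavily abbreviated sketch under that assumption (brainClient and every body here are hypothetical, and the exact interface signatures vary across Kubernetes releases):

```go
package brainstorage

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apiserver/pkg/storage"
)

// brainClient is a hypothetical KubeBrain client, named here for illustration only.
type brainClient interface {
	Put(ctx context.Context, key string, value []byte) (revision int64, err error)
}

// brainStorage would back kube-apiserver's generic registry with KubeBrain.
type brainStorage struct {
	client  brainClient
	codec   runtime.Codec
	version storage.Versioner
}

// Versioner exposes how resource versions map onto KubeBrain revisions.
func (s *brainStorage) Versioner() storage.Versioner { return s.version }

// Create serializes obj and writes it at key, mirroring what the etcd3
// storage does but through the KubeBrain client.
func (s *brainStorage) Create(ctx context.Context, key string, obj, out runtime.Object, ttl uint64) error {
	data, err := runtime.Encode(s.codec, obj)
	if err != nil {
		return err
	}
	rev, err := s.client.Put(ctx, key, data)
	if err != nil {
		return err
	}
	// Hand the stored state back to the caller with its new resource version.
	if err := runtime.DecodeInto(s.codec, data, out); err != nil {
		return err
	}
	return s.version.UpdateObject(out, uint64(rev))
}

// The remaining storage.Interface methods (Get, Delete, Watch,
// GuaranteedUpdate, Count, ...) are omitted from this sketch.
```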

Our planned proxy aims to solve the problem that only the single leader can write today; we want to add a proxy layer to explore whether write processing can be made scalable. It may involve changes to the apiserver and other Kubernetes components, so we will share and discuss it once the design and demo are complete.

Looking forward to the solution for write scalability! ❤️❤️

STRRL commented 2 years ago

All of my questions have been explained very clearly. I am going to close this issue.