jwcesign / federation-metrics-adaptor

Metrics adaptor for federation clusters.

tmp #1

Open jwcesign opened 1 year ago

jwcesign commented 1 year ago

# Total lines added and deleted by the author since 2022-01-01,
# excluding vendor, generated, binary, YAML, and HTML files.
git log --no-merges --since 2022-01-01 --author "changzhen" --numstat |
  grep -v "vendor" |
  grep -Pv "Date:|insertion|deletion|file|Bin|generated|yaml|html" |
  grep -P "^\d+\t\d+" |
  awk 'BEGIN{total=0}{total+=$1+$2}END{print total}'

jwcesign commented 1 year ago

Title

Multi-cluster autoscaling that satisfies sub-cluster autonomy with rich policies

Abstract

With the development of cloud computing, multi-cloud and multi-cluster deployment and management have become the new norm. Existing single-cluster autoscaling policies cannot meet the complex scaling requirements of multi-cloud and multi-cluster environments. How to efficiently autoscale services across multiple clouds, so that end users can deploy services freely and flexibly, is a critical challenge that needs to be resolved.

In this session, Wei will introduce how we achieve multi-cluster autoscaling coordination in Karmada. Karmada (Kubernetes Armada) is a Kubernetes management system that enables you to run your cloud-native applications across multiple Kubernetes clusters and clouds. With various autoscaling policies and a unified API, Karmada can autoscale across multiple clusters while satisfying sub-cluster autonomy, staying compatible with the K8s native HPA, tuning dynamically in real time, and minimizing control plane load. Finally, Wei will share some classic multi-cloud autoscaling scenarios and how Karmada solves them.
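
As a rough sketch of what the unified API looks like in practice (assuming Karmada's FederatedHPA resource in the autoscaling.karmada.io/v1alpha1 API group; the deployment name, replica bounds, and kubeconfig path are placeholders):

kubectl --kubeconfig karmada-apiserver.config apply -f - <<EOF
# A FederatedHPA mirrors the native HPA v2 spec, but the Karmada control
# plane scales the target workload across all member clusters.
apiVersion: autoscaling.karmada.io/v1alpha1
kind: FederatedHPA
metadata:
  name: nginx-fhpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
EOF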

Benefit for ecosystem

According to numerous surveys, more and more companies are deploying their business across multiple clouds. We hope this session will give developer and end-user audiences some inspiration for multi-cluster autoscaling. After this session, attendees will:
1. Get to know the pain points of multi-cluster autoscaling.
2. Have a good understanding of how Karmada solves these pain points.
3. Have an overview of the scenarios that multi-cluster autoscaling can handle.

chaunceyjiang commented 1 year ago

Single-cluster HPA can be used seamlessly in multi-cluster!

In today's cloud-native era, where multi-cloud and multi-cluster deployments have become the new norm, existing single-cluster elasticity technologies are no longer sufficient to meet the complex elasticity requirements of multi-cloud and multi-cluster environments. Currently, Kubernetes' single-cluster elasticity technology is mainly based on HPA. However, in a multi-cluster scenario, a single-cluster HPA cannot meet the growing demands for cross-cluster scaling, unified configuration management of HPAs across multiple clusters, and failover of scaling, and it is limited by the resource capacity of a single cluster.

In Karmada, the seamless migration of single-cluster HPA to multi-cluster HPA allows you to leverage your experience with single-cluster HPAs while breaking through the various limitations they impose.

No intrusive modification of existing HPAs is required, and existing business operations are not affected, so you can enjoy all the benefits that come with a multi-cluster HPA!
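
One way to read this "seamless, no-modification" path, sketched under the assumption that the existing native HPA is simply propagated to member clusters with a Karmada PropagationPolicy (policy.karmada.io/v1alpha1); the deployment, HPA, and cluster names below are placeholders:

kubectl --kubeconfig karmada-apiserver.config apply -f - <<EOF
# Propagate the workload and its existing, unmodified native HPA to the
# member clusters; each member cluster then runs the HPA locally.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
    - apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      name: nginx-hpa
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
EOF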

jwcesign commented 1 year ago

Title:
Break through resource boundaries and scale workloads across clusters on a large scale

Abstract:

demo:
At Tencent, the cloud-native infrastructure platform supports thousands of businesses and manages various heterogeneous hardware and massive resources to provide services for these businesses. Kubernetes is great, but considering factors such as scalability, availability, multi-regional deployment, and infrastructure upgrades, a multi-cluster architecture has become a key part of the whole platform. How to efficiently manage the resources of these clusters, so that different business teams can deploy services freely and flexibly while ensuring sufficiently high resource utilization, is the key challenge that needs to be resolved. As a new multi-cluster orchestration framework, Karmada is designed to be Kubernetes-native-API oriented, which makes life much easier.

In this sharing, Lei Xue and Kevin will go over:
1. Key challenges of hyperscale infrastructure at Tencent
2. Evaluation of Kubernetes-based multi-cluster solutions and considerations
3. Achievements, problems met, and how they were resolved

Nowadays, multi-cluster deployment and management of workloads are becoming increasingly common, and workloads usually use HPA to scale with changing demand. However, the resources of these clusters are isolated from each other, and when scaling with multi-cluster resources there is no unified coordination to support larger business scales and more diverse use of multi-cluster resources, for example, prioritizing scaling workloads in the local cluster and only scaling out to the public cloud when the local cluster cannot handle the load. Therefore, how to break through the resource isolation between clusters and achieve unified, coordinated cross-cluster autoscaling is a hard problem for the industry.

As a multi-cluster orchestration framework, Karmada aims to address the challenges of multi-cluster management. To solve this problem, Karmada implements two types of cross-cluster HPA: Centralized and Distributed. The former provides a native experience similar to that of a single cluster and high-precision scaling, while the latter offers greater elasticity for the business and lower metric-query traffic, making it suitable for larger-scale operations.

In this session, Wei and XinYan will go over:

  1. The benefits of autoscaling across clusters.
  2. The key challenges of autoscaling across clusters.
  3. How Karmada solves the key challenges.
  4. How to select the appropriate type for autoscaling across clusters.
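
As a hedged note on choosing between the two types: the Centralized type assumes a metrics adapter is registered with the Karmada control plane, so a quick way to check what that path can see is to query the aggregated metrics APIs through the karmada-apiserver (the kubeconfig path is a placeholder, and the exact API groups served depend on the adapter deployed):

# Pod metrics the control plane can aggregate from member clusters.
kubectl --kubeconfig karmada-apiserver.config get --raw \
  "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods"

# Custom metrics (if any) exposed by the adapter.
kubectl --kubeconfig karmada-apiserver.config get --raw \
  "/apis/custom.metrics.k8s.io/v1beta2"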

Benefits to the Ecosystem:

demo:
The Karmada project was jointly initiated by end users from the finance, manufacturing, telecom, and internet industries, and has attracted a lot of attention and contributions since its launch, including from Tencent, Huawei, ICBC, SPD Bank, FAW, T3go, xiaohongshu.com, qutoutiao.net, VIPKid, Youzan, China Mobile, etc.

Karmada is now in the process of being donated to the CNCF, and this case study would give audiences confidence that the community is open and active, with companies such as Tencent and Huawei joining efforts. Please consider this talk as a keynote.

Additionally, audiences would learn:
1. Experience of managing hyperscale cloud-native infrastructure at Tencent, with millions of cores and tens of thousands of GPU cards.
2. Exploration and practice of multi-cluster offline task scheduling, pre-scheduling, resource rebalancing, and other capability enhancements based on Kubernetes.
3. How companies like Tencent and Huawei can collaborate through open source.

The Karmada project was jointly initiated by end users from the finance, manufacturing, telecom, and internet industries, and has attracted a lot of attention and contributions since its launch, including from Tencent, Huawei, ICBC, SPD Bank, FAW, T3go, xiaohongshu.com, qutoutiao.net, VIPKid, Youzan, China Mobile, etc.

This session will provide the audience with an overview of the benefits of multi-clusters and autoscaling across clusters. It will also demonstrate how easy it is to migrate your business to multi-clusters, allowing for greater scalability in workload management.

cc @chaunceyjiang

jwcesign commented 1 year ago

https://docs.google.com/document/d/19dCOog6FK9atIsYmbQyXYFmq3giIsVEpdf_8-b_AL6A/edit?usp=sharing

jwcesign commented 1 year ago

Wei Jiang serves as a software engineer for Huawei Cloud. He is now working 100% on open source, focusing on acceleration for Knative and autoscaling for Karmada. Before that, Wei worked on Huawei Cloud Container Serverless Service and participated in multiple serverless projects. Wei graduated from the Communication College of Hangzhou Dianzi University, majoring in communication engineering.

chaunceyjiang commented 1 year ago

Chauncey Jiang is a software engineer at DaoCloud, with a passion for cloud-native technologies and expertise in Kubernetes. He specializes in multi-cloud and multi-cluster environments, and actively contributes to the open source community as a Karmada reviewer, Istio member, and OpenELB approver. Chauncey frequently attends meetups to exchange knowledge on cloud-native technologies with other enthusiasts.