Open wxbty opened 11 months ago
To generate the report automatically, we can provide a project that contains everything required (source code, scripts, etc.) and then find some cloud machines to run it.
I don’t know whether GitHub workflows support the JMeter tool. If they do, we can use it directly; this needs some research.
We can use the code from the benchmark project together with JMH, the microbenchmark harness from the OpenJDK project.
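The JMH side could look roughly like the sketch below. This is a minimal illustration only: `DemoService` and its `sayHello` method are hypothetical stand-ins for a real Dubbo client proxy, and an actual run would wire in the dubbo-benchmark project's client/server setup instead of the local stub.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.OptionsBuilder;

// Minimal JMH sketch for the "small request, small response" unary case.
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
@State(Scope.Benchmark)
public class UnaryCallBenchmark {

    // Hypothetical service interface; in a real benchmark this would be
    // a Dubbo client proxy obtained from a ReferenceConfig.
    interface DemoService {
        String sayHello(String request);
    }

    private DemoService service;

    @Setup
    public void setup() {
        // Local stub stands in for the remote RPC call.
        service = request -> "echo:" + request;
    }

    @Benchmark
    public String smallRequestSmallResponse() {
        return service.sayHello("ping");
    }

    public static void main(String[] args) throws Exception {
        new Runner(new OptionsBuilder()
                .include(UnaryCallBenchmark.class.getSimpleName())
                .forks(1)
                .warmupIterations(3)
                .measurementIterations(5)
                .build()).run();
    }
}
```

Keeping warmup/measurement iteration counts identical across scenarios is what makes the bar charts comparable.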
As mentioned in roadmap #13065, the benchmark structure as I understand it is as follows. Do you have any other opinions?
Dubbo performance benchmark test
Goal: Reflect Dubbo’s benchmark performance indicators in all aspects
Format: Use the following column charts to compare performance within a given scenario (keep all other parameters the same and vary only the parameter of interest), as in the example below
Scenario 1: request/response
Different RPC frameworks' performance in the core unary request scenario, using each framework's commonly used (recommended) configuration
Parameters/return values use the most common case: small requests and small responses
Scenario 2: Serialization method
Dubbo 3.2.8, small request, small response, using different serialization methods (fastjson2, hessian2, json)
Scenario 3: Communication protocol
Dubbo 3.2.8, small request, small response, using different communication protocols (triple, dubbo, http)
Dubbo 3.1.8, small request, small response, using different communication protocols (triple, dubbo, http)
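For scenarios 2 and 3 the variations can likely be driven purely by configuration, so one benchmark codebase can cover every bar in the chart. A rough `dubbo.properties` sketch (the property keys follow standard Dubbo configuration; the concrete values are just the ones under test):

```properties
# Scenario 2: fix the protocol, vary serialization per run
dubbo.protocol.name=dubbo
dubbo.protocol.serialization=hessian2
# dubbo.protocol.serialization=fastjson2

# Scenario 3: fix serialization, vary the protocol per run
# dubbo.protocol.name=tri
```

Swapping a single property between runs keeps the "other parameters the same" constraint easy to audit.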
Scenario 4: Registration Center
Focus on Dubbo's performance under different registries, mainly registration RT and push RT (response time)
Zookeeper, Nacos, Redis
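Scenario 4 can likewise be switched by configuration alone, via the registry address scheme (a sketch; addresses and ports are placeholders):

```properties
# Switch the registry under test by changing the address scheme
dubbo.registry.address=zookeeper://127.0.0.1:2181
# dubbo.registry.address=nacos://127.0.0.1:8848
```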
Dynamic operation
Similar to gRPC's setup: run against the main branch every few hours, with a dashboard that updates dynamically, to monitor the performance impact of recently merged code.
How to implement this needs discussion. Running a separate service would certainly work; is it feasible through CI?
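If CI turns out to be feasible, a scheduled GitHub Actions workflow could be a starting point. This is only a sketch: the benchmark invocation is a hypothetical script name, and shared runners may be too noisy for stable numbers, which is part of what needs discussing.

```yaml
name: scheduled-benchmark
on:
  schedule:
    - cron: '0 */6 * * *'    # every 6 hours against the default branch
  workflow_dispatch: {}       # allow manual runs for debugging
jobs:
  benchmark:
    runs-on: ubuntu-latest    # shared runner; results may be noisy
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      # Hypothetical invocation: the benchmark project's own entry
      # script would go here, emitting machine-readable results.
      - run: ./benchmark.sh --output results.json
      - uses: actions/upload-artifact@v4
        with:
          name: benchmark-results
          path: results.json
```

The uploaded artifact (or a results branch) is what the dashboard would read to plot the trend over merges.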