grafana / k6

A modern load testing tool, using Go and JavaScript - https://k6.io
GNU Affero General Public License v3.0

gRPC performing at nearly 50% of HTTP equivalent #1846

Closed: thecodejunkie closed this issue 1 week ago

thecodejunkie commented 3 years ago

Hi,

I was chatting with @simskij, on the Gophers Slack about some performance observations I was having when using k6 to compare gRPC and HTTP endpoints (in the same service), and he suggested I post here as well to get some insights from the performance wizards :)

Both endpoints have a similar setup (for testing purposes): for gRPC I get about 32k req/s, and for HTTP I am seeing about 55k req/s. I was expecting gRPC to be a bit higher, but I might be wrong?

I run my service and k6 tests on the same machine, a 2.2 GHz 6-core i7 MBP with 32 GB RAM. Below are the test results:

          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: http.js
     output: -

  scenarios: (100.00%) 1 scenario, 50 max VUs, 40s max duration (incl. graceful stop):
           * default: 50 looping VUs for 10s (gracefulStop: 30s)

running (10.0s), 00/50 VUs, 552787 complete and 0 interrupted iterations
default ✓ [======================================] 50 VUs  10s

     ✓ is status 200

     checks.....................: 100.00% ✓ 552787 ✗ 0
     data_received..............: 81 MB   8.1 MB/s
     data_sent..................: 44 MB   4.4 MB/s
     http_req_blocked...........: avg=3.09µs   min=0s      med=2µs      max=35.3ms  p(90)=3µs    p(95)=3µs
     http_req_connecting........: avg=88ns     min=0s      med=0s       max=1.58ms  p(90)=0s     p(95)=0s
     http_req_duration..........: avg=772.04µs min=60µs    med=510µs    max=75.5ms  p(90)=1.6ms  p(95)=2.23ms
     http_req_receiving.........: avg=31.77µs  min=8µs     med=17µs     max=74.58ms p(90)=30µs   p(95)=36µs
     http_req_sending...........: avg=13.45µs  min=4µs     med=8µs      max=37.86ms p(90)=15µs   p(95)=18µs
     http_req_tls_handshaking...: avg=0s       min=0s      med=0s       max=0s      p(90)=0s     p(95)=0s
     http_req_waiting...........: avg=726.81µs min=39µs    med=475µs    max=60.53ms p(90)=1.54ms p(95)=2.15ms
     http_reqs..................: 552787  55151.751698/s
     iteration_duration.........: avg=897.79µs min=92.22µs med=595.81µs max=75.59ms p(90)=1.78ms p(95)=2.5ms
     iterations.................: 552787  55151.751698/s
     vus........................: 50      min=50   max=50
     vus_max....................: 50      min=50   max=50

          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: grpc.js
     output: -

  scenarios: (100.00%) 1 scenario, 50 max VUs, 40s max duration (incl. graceful stop):
           * default: 50 looping VUs for 10s (gracefulStop: 30s)

running (10.0s), 00/50 VUs, 322100 complete and 0 interrupted iterations
default ✓ [======================================] 50 VUs  10s

     ✓ status is OK

     checks...............: 100.00% ✓ 322100 ✗ 0
     data_received........: 33 MB   3.3 MB/s
     data_sent............: 30 MB   3.0 MB/s
     grpc_req_duration....: avg=1.39ms min=116.98µs med=1.1ms  max=83.39ms p(90)=2.69ms p(95)=3.41ms
     iteration_duration...: avg=1.54ms min=177.19µs med=1.24ms max=83.52ms p(90)=2.9ms  p(95)=3.65ms
     iterations...........: 322100  32149.866889/s
     vus..................: 50      min=50   max=50
     vus_max..............: 50      min=50   max=50

Environment

Expected Behavior

I was expecting gRPC to be on par with (or even faster than) my HTTP endpoint. @simskij was seeing similar results on his MBP, but wasn't 100% sure about the difference in results (i.e. whether it was to be expected and why, or if something else was at play).

Actual Behavior

gRPC performs at (almost) 50% of the throughput of the HTTP endpoint.

Steps to Reproduce the Problem

I've included all the code needed to reproduce this.

main.go

```go
package main

import (
	"context"
	"encoding/json"
	"log"
	"net"

	"google.golang.org/grpc"

	"github.com/gofiber/fiber/v2"
	"github.com/soheilhy/cmux"
	"go-grpc-http-muxer.com/chat"
)

func main() {
	l, err := net.Listen("tcp", ":3000")
	if err != nil {
		log.Panic(err)
	}

	m := cmux.New(l)

	// Create a grpc listener first
	grpcListener := m.MatchWithWriters(cmux.HTTP2MatchHeaderFieldSendSettings("content-type", "application/grpc"))

	// All the rest is assumed to be HTTP
	httpListener := m.Match(cmux.Any())

	go serveHTTP(httpListener)
	go serveGRPC(grpcListener)

	_ = m.Serve()
}

type ChatServer struct {
}

func (c *ChatServer) SayHello(context.Context, *chat.Message) (*chat.Message, error) {
	m := &chat.Message{
		Body: "Hello, World 👋!",
	}

	return m, nil
}

func serveGRPC(l net.Listener) {
	s := &ChatServer{}
	grpcServer := grpc.NewServer()

	chat.RegisterChatServiceServer(grpcServer, s)

	if err := grpcServer.Serve(l); err != nil {
		log.Fatalf("failed to serve: %s", err)
	}
}

func serveHTTP(l net.Listener) {
	app := fiber.New()

	app.Get("/", func(c *fiber.Ctx) error {
		m := &chat.Message{
			Body: "Hello, World 👋!",
		}

		b, _ := json.Marshal(m)

		return c.Send(b)
	})

	app.Listener(l)
}
```

chat.proto

```proto
syntax = "proto3";

package go.grpc.http.muxer.com.chat.v1;
option go_package = ".;chat";

message Message {
  string body = 1;
}

service ChatService {
  rpc SayHello(Message) returns (Message) {}
}
```

chat.pb.go

```go
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// 	protoc-gen-go v1.25.0-devel
// 	protoc        v3.14.0
// source: chat.proto

package chat

import (
	context "context"
	grpc "google.golang.org/grpc"
	codes "google.golang.org/grpc/codes"
	status "google.golang.org/grpc/status"
	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
	reflect "reflect"
	sync "sync"
)

const (
	// Verify that this generated code is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
	// Verify that runtime/protoimpl is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)

type Message struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	Body string `protobuf:"bytes,1,opt,name=body,proto3" json:"body,omitempty"`
}

func (x *Message) Reset() {
	*x = Message{}
	if protoimpl.UnsafeEnabled {
		mi := &file_chat_proto_msgTypes[0]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *Message) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*Message) ProtoMessage() {}

func (x *Message) ProtoReflect() protoreflect.Message {
	mi := &file_chat_proto_msgTypes[0]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use Message.ProtoReflect.Descriptor instead.
func (*Message) Descriptor() ([]byte, []int) {
	return file_chat_proto_rawDescGZIP(), []int{0}
}

func (x *Message) GetBody() string {
	if x != nil {
		return x.Body
	}
	return ""
}

var File_chat_proto protoreflect.FileDescriptor

var file_chat_proto_rawDesc = []byte{
	0x0a, 0x0a, 0x63, 0x68, 0x61, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x1e, 0x67, 0x6f,
	0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x68, 0x74, 0x74, 0x70, 0x2e, 0x6d, 0x75, 0x78, 0x65, 0x72,
	0x2e, 0x63, 0x6f, 0x6d, 0x2e, 0x63, 0x68, 0x61, 0x74, 0x2e, 0x76, 0x31, 0x22, 0x1d, 0x0a, 0x07,
	0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x62, 0x6f, 0x64, 0x79, 0x18,
	0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x62, 0x6f, 0x64, 0x79, 0x32, 0x6d, 0x0a, 0x0b, 0x43,
	0x68, 0x61, 0x74, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x12, 0x5e, 0x0a, 0x08, 0x53, 0x61,
	0x79, 0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x12, 0x27, 0x2e, 0x67, 0x6f, 0x2e, 0x67, 0x72, 0x70, 0x63,
	0x2e, 0x68, 0x74, 0x74, 0x70, 0x2e, 0x6d, 0x75, 0x78, 0x65, 0x72, 0x2e, 0x63, 0x6f, 0x6d, 0x2e,
	0x63, 0x68, 0x61, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x1a,
	0x27, 0x2e, 0x67, 0x6f, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x68, 0x74, 0x74, 0x70, 0x2e, 0x6d,
	0x75, 0x78, 0x65, 0x72, 0x2e, 0x63, 0x6f, 0x6d, 0x2e, 0x63, 0x68, 0x61, 0x74, 0x2e, 0x76, 0x31,
	0x2e, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x22, 0x00, 0x42, 0x08, 0x5a, 0x06, 0x2e, 0x3b,
	0x63, 0x68, 0x61, 0x74, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}

var (
	file_chat_proto_rawDescOnce sync.Once
	file_chat_proto_rawDescData = file_chat_proto_rawDesc
)

func file_chat_proto_rawDescGZIP() []byte {
	file_chat_proto_rawDescOnce.Do(func() {
		file_chat_proto_rawDescData = protoimpl.X.CompressGZIP(file_chat_proto_rawDescData)
	})
	return file_chat_proto_rawDescData
}

var file_chat_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
var file_chat_proto_goTypes = []interface{}{
	(*Message)(nil), // 0: go.grpc.http.muxer.com.chat.v1.Message
}
var file_chat_proto_depIdxs = []int32{
	0, // 0: go.grpc.http.muxer.com.chat.v1.ChatService.SayHello:input_type -> go.grpc.http.muxer.com.chat.v1.Message
	0, // 1: go.grpc.http.muxer.com.chat.v1.ChatService.SayHello:output_type -> go.grpc.http.muxer.com.chat.v1.Message
	1, // [1:2] is the sub-list for method output_type
	0, // [0:1] is the sub-list for method input_type
	0, // [0:0] is the sub-list for extension type_name
	0, // [0:0] is the sub-list for extension extendee
	0, // [0:0] is the sub-list for field type_name
}

func init() { file_chat_proto_init() }
func file_chat_proto_init() {
	if File_chat_proto != nil {
		return
	}
	if !protoimpl.UnsafeEnabled {
		file_chat_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*Message); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
	}
	type x struct{}
	out := protoimpl.TypeBuilder{
		File: protoimpl.DescBuilder{
			GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
			RawDescriptor: file_chat_proto_rawDesc,
			NumEnums:      0,
			NumMessages:   1,
			NumExtensions: 0,
			NumServices:   1,
		},
		GoTypes:           file_chat_proto_goTypes,
		DependencyIndexes: file_chat_proto_depIdxs,
		MessageInfos:      file_chat_proto_msgTypes,
	}.Build()
	File_chat_proto = out.File
	file_chat_proto_rawDesc = nil
	file_chat_proto_goTypes = nil
	file_chat_proto_depIdxs = nil
}

// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConnInterface

// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion6

// ChatServiceClient is the client API for ChatService service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
type ChatServiceClient interface {
	SayHello(ctx context.Context, in *Message, opts ...grpc.CallOption) (*Message, error)
}

type chatServiceClient struct {
	cc grpc.ClientConnInterface
}

func NewChatServiceClient(cc grpc.ClientConnInterface) ChatServiceClient {
	return &chatServiceClient{cc}
}

func (c *chatServiceClient) SayHello(ctx context.Context, in *Message, opts ...grpc.CallOption) (*Message, error) {
	out := new(Message)
	err := c.cc.Invoke(ctx, "/go.grpc.http.muxer.com.chat.v1.ChatService/SayHello", in, out, opts...)
	if err != nil {
		return nil, err
	}
	return out, nil
}

// ChatServiceServer is the server API for ChatService service.
type ChatServiceServer interface {
	SayHello(context.Context, *Message) (*Message, error)
}

// UnimplementedChatServiceServer can be embedded to have forward compatible implementations.
type UnimplementedChatServiceServer struct {
}

func (*UnimplementedChatServiceServer) SayHello(context.Context, *Message) (*Message, error) {
	return nil, status.Errorf(codes.Unimplemented, "method SayHello not implemented")
}

func RegisterChatServiceServer(s *grpc.Server, srv ChatServiceServer) {
	s.RegisterService(&_ChatService_serviceDesc, srv)
}

func _ChatService_SayHello_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
	in := new(Message)
	if err := dec(in); err != nil {
		return nil, err
	}
	if interceptor == nil {
		return srv.(ChatServiceServer).SayHello(ctx, in)
	}
	info := &grpc.UnaryServerInfo{
		Server:     srv,
		FullMethod: "/go.grpc.http.muxer.com.chat.v1.ChatService/SayHello",
	}
	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
		return srv.(ChatServiceServer).SayHello(ctx, req.(*Message))
	}
	return interceptor(ctx, in, info, handler)
}

var _ChatService_serviceDesc = grpc.ServiceDesc{
	ServiceName: "go.grpc.http.muxer.com.chat.v1.ChatService",
	HandlerType: (*ChatServiceServer)(nil),
	Methods: []grpc.MethodDesc{
		{
			MethodName: "SayHello",
			Handler:    _ChatService_SayHello_Handler,
		},
	},
	Streams:  []grpc.StreamDesc{},
	Metadata: "chat.proto",
}
```

grpc.js

```js
import grpc from 'k6/net/grpc';
import { check, sleep } from "k6";

export let options = {
    vus: 50,
    duration: '10s'
}

let client = new grpc.Client();
client.load([], "chat.proto")

export default () => {
    // Connect once per VU, on its first iteration, and reuse the connection
    if (__ITER == 0) {
        client.connect("127.0.0.1:3000", { plaintext: true })
    }

    const response = client.invoke("go.grpc.http.muxer.com.chat.v1.ChatService/SayHello", { body: 'hi' })

    check(response, {
        "status is OK": (r) => r && r.status === grpc.StatusOK
    });
}
```

http.js

```js
import http from 'k6/http';
import { check } from "k6";

export let options = {
    vus: 50,
    duration: '10s'
}

export default () => {
    let res = http.get('http://127.0.0.1:3000/');
    check(res, {
        'is status 200': (r) => r.status === 200,
    });
}
```

imiric commented 3 years ago

Hi @thecodejunkie, thanks for reporting this and for the comprehensive code to reproduce it.

You're right, this is a surprising difference and we'd also expect better performance from the gRPC test, especially since you're reusing the TCP connection.

I took a look at it and didn't find anything immediately obvious. Some notes:

So we would need more time to dig into this and determine the root cause. Please give us a few days to investigate and discuss this internally.

thecodejunkie commented 3 years ago

Hi @imiric

Thank you for a great reply!

> You're right, this is a surprising difference and we'd also expect better performance from the gRPC test, especially since you're reusing the TCP connection.

Yep. Unfortunately, the TCP connection reuse is a (temporary) workaround for a limitation on macOS that causes it to throw `cannot assign requested address` errors when the k6 gRPC client attempts to open too many connections at the same time 😉

> There is some JSON marshaling overhead in the gRPC implementation, but even after removing it, it accounted for about ~10% of the difference, so that's not the main issue.

Yes, I tried to offset that slightly by introducing JSON serialization, of the same data structure, in the HTTP endpoint

> The Fiber framework uses fasthttp, which is a bit faster than stdlib's net/http and would show a bigger difference, but even when using a net/http-based server the difference remains around 40%. I similarly tried removing cmux and the Content-Type matching from the example, but didn't notice any improvements.

Noted. Fiber was chosen because of its increased performance over net/http, and I also tried eliminating cmux without any major improvements.
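
For anyone re-running that comparison, a minimal net/http-based variant of the endpoint could look like the sketch below; the plain `message` struct is a stand-in for the generated `chat.Message`:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// message stands in for the generated chat.Message type.
type message struct {
	Body string `json:"body,omitempty"`
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		b, _ := json.Marshal(&message{Body: "Hello, World 👋!"})
		w.Header().Set("Content-Type", "application/json")
		_, _ = w.Write(b)
	})
	log.Fatal(http.ListenAndServe(":3000", nil))
}
```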

> So we would need more time to dig into this and determine the root cause. Please give us a few days to investigate and discuss this internally.

Much appreciated! Please let me know if I can provide additional help or information

imiric commented 3 years ago

Hi again! I spent some time looking deeper into this, and while I can't say I managed to find the root cause, there are some improvements we could make to minimize this difference.

More notes:

So in conclusion, I'm not sure what we can improve on the k6 side besides exposing those gRPC/HTTP2 options. It's difficult to compare both directly with synthetic tests as the underlying transport is so different.
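
For context, the sketch below shows the kind of client-side gRPC/HTTP2 knobs that grpc-go itself exposes and a k6 script currently can't reach, e.g. flow-control window and buffer sizes. Whether exposing exactly these options would close the gap here is untested, so treat the values as placeholders:

```go
package main

import (
	"log"

	"google.golang.org/grpc"
)

func main() {
	// Dial with transport-level tuning options that a k6 gRPC client does
	// not currently expose; the values below are illustrative, not tuned.
	conn, err := grpc.Dial("127.0.0.1:3000",
		grpc.WithInsecure(),                   // plaintext, matching the test setup
		grpc.WithInitialWindowSize(1<<20),     // per-stream HTTP/2 flow-control window
		grpc.WithInitialConnWindowSize(1<<20), // per-connection flow-control window
		grpc.WithWriteBufferSize(64<<10),
		grpc.WithReadBufferSize(64<<10),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```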

Hope this is helpful and if anyone else has more gRPC/HTTP2 experience, please chime in. :)

thecodejunkie commented 3 years ago

@imiric Hi! It just dawned on me that I did read your reply, but I never actually answered you 😱 I wanted to stop by and thank you for your very thorough investigation of this issue! So THANK YOU! 😄

imiric commented 3 years ago

No worries @thecodejunkie, hope it was useful.

Since we'd like to address this issue eventually, and to allow someone with more gRPC/HTTP2 experience to comment, let's leave this open for now. We'll consider exposing these and other options to the client, but that's likely going to be in a separate issue, and is currently not on the short-term roadmap.

LaserPhaser commented 2 years ago

Hello! Do you have any progress or ideas here?

na-- commented 2 years ago

@LaserPhaser, do you experience the same problems? If so, can you please share some details and benchmarks?

Skimming over this issue, it was mostly left open for further discussion, not necessarily with the expectation that anything needed to be fixed here. The previous discussion and investigation also happened a long time ago, and there have been substantial changes in k6, Go, and the underlying gRPC and HTTP libraries since, so some fresh data would definitely be required before we actually do anything.

LaserPhaser commented 2 years ago

Basically I have the same problem. When I run about 400 RPS from a 16-core, 32 GB RAM machine, all my cores start to throttle.

ENV: Linux, 16 cores, 32 GB RAM

K6 master (b60fe887f035adcfb59cd7fd0869f69c5442b5b8)

Scenario:

```js
import grpc from 'k6/net/grpc';
import { check, group } from 'k6';

// Assumed from the rest of the original script: the `ammo` item-id array and
// the `pod_urls` target list are defined elsewhere, and the client has the
// service's proto definitions loaded.
const grpcClient = new grpc.Client();

export function query_scenario() {

    // Take 20 random elements from the ammo array
    var element = [];
    for (var i = 0; i < 20; i++) {
        element.push({
            "item_id": ammo[Math.floor(Math.random() * ammo.length)]
        });
    }

    // Note: `element` is an array, so `element.namespace` resolves to undefined
    group(element.namespace, function () {

        const grpcPayload = {
            "item_ids": element
        };

        const grpcMethod = "METHOD"; // gRPC service and method here

        let pod_url = pod_urls[Math.floor(Math.random() * pod_urls.length)]

        grpcClient.connect(pod_url, {
            plaintext: true, // means insecure connection - no TLS
            timeout: "60s" // default value but it's here for customization and documentation purposes
        });

        const grpcResponse = grpcClient.invoke(grpcMethod, grpcPayload, {
            timeout: "60s" // default value but it's here for customization and documentation purposes
        });

        check(grpcResponse, {
            'status is OK': (r) => r && r.status === grpc.StatusOK,
        });

        grpcClient.close();

    });
}
```

I've tried to profile k6 (profiler screenshot attached).
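
For comparison, a connection-reuse variant of the scenario above, connecting once per VU as the grpc.js script earlier in this thread does, would avoid the per-iteration dial and close that tends to dominate such profiles. A sketch, assuming the same `pod_urls` fixture and proto definitions as the original script:

```js
import grpc from 'k6/net/grpc';
import { check } from 'k6';

const grpcClient = new grpc.Client();
// grpcClient.load(...) with the service's proto definitions is assumed here

export function query_scenario() {
    if (__ITER == 0) {
        // Pin each VU to one pod and keep the connection for the whole test
        grpcClient.connect(pod_urls[__VU % pod_urls.length], { plaintext: true });
    }

    const grpcResponse = grpcClient.invoke("METHOD", { "item_ids": [] }, { timeout: "60s" });

    check(grpcResponse, {
        'status is OK': (r) => r && r.status === grpc.StatusOK,
    });
}
```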

LaserPhaser commented 2 years ago

Basically I think it would be good enough to create an xk6 extension that could read preprocessed binary data without any marshalling.

Will try to implement it by the end of the week.
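
A rough skeleton of what such an extension could register, using k6's Go module API; the `k6/x/grpcbin` name and the `InvokeRaw` method are illustrative assumptions, not an existing API:

```go
package grpcbin

import "go.k6.io/k6/js/modules"

func init() {
	modules.Register("k6/x/grpcbin", new(RootModule))
}

type RootModule struct{}

type Instance struct {
	vu modules.VU
}

func (r *RootModule) NewModuleInstance(vu modules.VU) modules.Instance {
	return &Instance{vu: vu}
}

func (i *Instance) Exports() modules.Exports {
	return modules.Exports{Default: i}
}

// InvokeRaw sketches the idea: accept pre-marshalled protobuf bytes and pass
// them through to the wire, skipping the per-iteration JS-object-to-proto
// marshalling. Hypothetical; a real implementation would wire this to a
// grpc.ClientConn with a passthrough codec.
func (i *Instance) InvokeRaw(method string, payload []byte) ([]byte, error) {
	return nil, nil
}
```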

codebien commented 2 years ago

Hi @LaserPhaser, we already have an issue #2497 for refactoring that part, would you like to contribute directly there instead of creating a dedicated extension?

LaserPhaser commented 2 years ago

@codebien Sure, thank you. Will try to do my best.

kiwi-waquar commented 12 months ago

Hi @imiric @thecodejunkie

Context: I am running a simple benchmark comparing REST and gRPC. I am running the test with 20 VUs for 4 minutes total, with some timing conditions (k6 scripts attached).

Test scenario: k6, network throttled to 100 Kbps (using wondershaper); server and client are hosted in AWS.

The results are very astonishing to me. The REST API is performing as expected, with an iteration count of ~16/s and a latency of ~1700s (p95). But gRPC has an iteration count of ~1.5/s, whereas the latency is around ~60ms.

I have tested the same in a normal environment (with no network throttling) and I am having the same issue: the iteration count of REST is 3-4 times more than that of gRPC. Also, the latency (p95) of REST (~19ms) is slightly better than that of gRPC (~21ms).

I am unable to figure out why I am getting these results:

i. REST is outperforming gRPC
ii. throughput is so bad in gRPC

Is this a k6 limitation, or is there something wrong in my scripts? I have searched a lot on the internet with no luck. Can you please guide me here? Thanks in advance!

benchmark_rest-VS-grpc.zip

codebien commented 12 months ago

Hey @kiwi-waquar, please post your question to the Community Forum; that is the place for support.

> benchmark_rest-VS-grpc.zip

Please share your code using a repository, a gist, or directly in the comment using the collapsible feature.

kiwi-waquar commented 12 months ago

Hey @codebien, thanks for replying. I will surely post my question there. Below is the code for reference. Thanks a lot!

gRPC server code

```java
import com.google.protobuf.Timestamp;
import io.grpc.Grpc;
import io.grpc.InsecureServerCredentials;
import io.grpc.Server;
import io.grpc.stub.StreamObserver;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;

import java.io.IOException;
import java.util.concurrent.TimeUnit;

@Component
@Slf4j
public class TestServer {
    private Server server;

    private void start() throws IOException {
        int port = 7861;
        server = Grpc.newServerBuilderForPort(port, InsecureServerCredentials.create())
                .addService(new LargeObjectTextCallImpl())
                .build()
                .start();
        log.info("TestServer started, listening on " + port);
        Runtime.getRuntime().addShutdownHook(
                new Thread() {
                    @Override
                    public void run() {
                        System.err.println("*** shutting down gRPC server since JVM is shutting down");
                        try {
                            TestServer.this.stop();
                        } catch (InterruptedException e) {
                            e.printStackTrace(System.err);
                        }
                        System.err.println("*** server shut down");
                    }
                });
    }

    private void stop() throws InterruptedException {
        if (server != null) {
            server.shutdown().awaitTermination(30, TimeUnit.SECONDS);
        }
    }

    private void blockUntilShutdown() throws InterruptedException {
        if (server != null) {
            server.awaitTermination();
        }
    }

    public void serverRunner() throws IOException, InterruptedException {
        final TestServer server = new TestServer();
        server.start();
        server.blockUntilShutdown();
    }

    static class LargeObjectTextCallImpl extends MeteoriteLandingsServiceGrpc.MeteoriteLandingsServiceImplBase {
        @Override
        public void getLargePayloadAsList(EmptyRequest emptyRequest, StreamObserver<MeteoriteLanding> responseObserver) {
            MeteoriteLanding meteoriteLanding = MeteoriteLanding.newBuilder()
                    .setFall("fell")
                    .setName("Adhi Kot")
                    .setId(379)
                    .setNametype("Valid")
                    .setRecclass("EH4")
                    .setMass(4239)
                    .setYear(Timestamp.newBuilder().build())
                    .setReclat(32.100000)
                    .setReclong(71.800000)
                    .setGeolocation(generateGeoLocation())
                    .build();
            responseObserver.onNext(meteoriteLanding);
            responseObserver.onCompleted();
        }

        private GeoLocation generateGeoLocation() {
            return GeoLocation.newBuilder().setType("Point").build();
        }
    }
}
```

REST healthcheck code

```java
@GetMapping("/performance")
public ResponseEntity performanceCheck() {
    TestResponsePayload testResponsePayload = new TestResponsePayload().setFall("Fell").setMass(4239)
            .setName("Adhi Kot").setRecclass("EH4").setId(379).setNametype("Valid")
            .setTimestamp(new Date(1700301927)).setReclat(32.100000).setReclong(71.800000)
            .setGeolocation(new GeoLocationDto().setCoordinate1(71.8).setCoordinate2(71.8).setType("Point"));
    return new ResponseEntity<>(testResponsePayload, HttpStatus.OK);
}
```

script for REST

```javascript
import http from 'k6/http';
import {check} from 'k6';
import {Rate, Trend} from 'k6/metrics';

let apiSuccessRate = new Rate('API Success Rate');
let apiLatency = new Trend('API Latency');

export let options = {
    stages: [
        {duration: '1m', target: 20}, // Ramp up to 20 virtual users over 1 minute
        {duration: '2m', target: 20},
        {duration: '1m', target: 10},
    ],
    systemTags: ['status', 'method', 'url', 'name'],
};

export default function () {
    let url = 'http://aws-env/abc/healthcheck/performance';
    let res = http.get(url, {
        headers: {
            'Content-Type': 'application/json',
            'accept': 'application/json',
        },
        name: "API - Rest performance",
    });
    if (res.status !== 200) {
        console.log(url)
        console.log(res.body)
    }
    check(res, {
        'is status 200 for API': (r) => r.status === 200
    });
    apiSuccessRate.add(res.status === 200);
    apiLatency.add(res.timings.duration);
}
```

script for gRPC

```javascript
import grpc from 'k6/net/grpc';
import {check, sleep} from 'k6';
import {Rate, Trend} from 'k6/metrics';

let apiSuccessRate = new Rate('API Success Rate');
let apiLatency = new Trend('API Latency');

export let options = {
    stages: [
        {duration: '1m', target: 20}, // Ramp up to 20 virtual users over 1 minute
        {duration: '2m', target: 100},
        {duration: '2m', target: 100},
    ],
    systemTags: ['status', 'method', 'url', 'name'],
};

const client = new grpc.Client();
client.load(['definitions'], 'test.proto');

export default () => {
    client.connect('aws-env:443', {});

    let response = client.invoke('test.MeteoriteLandingsService/GetLargePayloadAsList', {});

    check(response, {
        'status is OK': (r) => r && r.status === grpc.StatusOK,
    });
    apiSuccessRate.add(response.status === grpc.StatusOK);

    client.close();
};
```

proto file

```proto
syntax = "proto3";

import "google/protobuf/timestamp.proto";

option java_multiple_files = true;
option java_package = "testGRPC";
option java_outer_classname = "TestLargeProto";
option objc_class_prefix = "KWT";

package testLarge;

service MeteoriteLandingsService {
  rpc GetLargePayloadAsList(EmptyRequest) returns (MeteoriteLanding) {}
}

message EmptyRequest {
}

message StatusResponse {
  string status = 1;
}

message Version {
  string api_version = 1;
}

message GeoLocation {
  string type = 1;
  repeated double coordinates = 2;
}

message MeteoriteLanding {
  uint32 id = 1;
  string name = 2;
  string fall = 3;
  GeoLocation geolocation = 4;
  double mass = 5;
  string nametype = 6;
  string recclass = 7;
  double reclat = 8;
  double reclong = 9;
  google.protobuf.Timestamp year = 10;
}
```

Benchmarking results

![grpc_100kbps](https://github.com/grafana/k6/assets/145996266/b85fabd1-4bd7-4bc8-95ee-b3b8712261b7)
![grpc_good_network](https://github.com/grafana/k6/assets/145996266/06e25ed1-6dd6-4802-ac78-e69e3b13aaa2)
![rest_100kbps](https://github.com/grafana/k6/assets/145996266/89783956-0ae6-4679-8ed5-3685121a1f3d)
![rest_good_network](https://github.com/grafana/k6/assets/145996266/ee4201b4-9592-4ea8-9065-7c11a3c197ef)

joanlopez commented 1 week ago

> Basically I think it would be good enough to create an xk6 extension that could read preprocessed binary data without any marshalling.

> Hi @LaserPhaser, we already have an issue #2497 for refactoring that part, would you like to contribute directly there instead of creating a dedicated extension?

As stated by these two comments, I think most of the significant improvements possible here are around that, plus potentially adding the equivalent of `discardResponseBodies` (or similar) for gRPC. So, for now, let's close this issue in favor of #2497, so that we have a more clearly stated task for whenever we have capacity to work on improving gRPC performance.
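
For reference, the existing HTTP-side option looks like this in a script; a gRPC counterpart, as suggested above, would be analogous but is hypothetical at this point:

```js
import http from 'k6/http';

export let options = {
    // k6 drops HTTP response bodies instead of storing them, saving memory
    // and some CPU per request
    discardResponseBodies: true,
};

export default function () {
    http.get('http://127.0.0.1:3000/');
}
```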

If you, future reader, land on this closed issue and think you're experiencing gRPC performance issues that are unrelated to what's discussed here, that are unexpected, or where the potential improvement would come from a different part of the code, feel free to open a new issue with the specific details.

Thanks!