tag1consulting / goose

Load testing framework, inspired by Locust
https://tag1.com/goose
Apache License 2.0

gRPC support #9

Open xd009642 opened 4 years ago

xd009642 commented 4 years ago

Might want to wait until async support drops, just because the de facto library for gRPC in Rust is tonic, which is built with tokio. This request is to add explicit gRPC support, such as logging the status codes returned for unary and streaming requests.

For reference https://github.com/grpc/grpc/blob/master/doc/statuscodes.md
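
For illustration, a unary tonic call surfaces those status codes like this (a minimal sketch following tonic's helloworld example; `GreeterClient` and `HelloRequest` assume code generated by tonic-build from a `helloworld.proto`):

```rust
// Generated by tonic-build from helloworld.proto (assumed for this sketch).
pub mod hello_world {
    tonic::include_proto!("helloworld");
}
use hello_world::{greeter_client::GreeterClient, HelloRequest};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut client = GreeterClient::connect("http://[::1]:50051").await?;
    let request = tonic::Request::new(HelloRequest { name: "goose".into() });

    // A unary call resolves to Ok(Response<T>) or Err(Status); Status carries
    // the gRPC status code from the document linked above.
    match client.say_hello(request).await {
        Ok(response) => println!("OK: {:?}", response.into_inner()),
        Err(status) => println!("{:?}: {}", status.code(), status.message()),
    }
    Ok(())
}
```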

jeremyandrews commented 4 years ago

Adding a link to the Rust library: https://github.com/hyperium/tonic

xd009642 commented 4 years ago

so gRPC is done over HTTP/2, with request and response payloads being encoded protobufs, and there are essentially 4 types of requests:

- unary: one request message, one response message
- server streaming: one request, a stream of responses
- client streaming: a stream of requests, one response
- bidirectional streaming: streams of messages in both directions
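
For concreteness, here is roughly how those four shapes surface on a tonic server trait (a sketch only; `Echo`, the method names, and the `Msg` type are assumptions — the real trait is generated from the .proto file by tonic-build):

```rust
use tonic::{Request, Response, Status, Streaming};

pub struct Msg; // stand-in for a prost-generated protobuf message

#[tonic::async_trait]
pub trait Echo {
    // 1. unary: one request in, one response out
    async fn unary(&self, req: Request<Msg>) -> Result<Response<Msg>, Status>;

    // 2. server streaming: one request in, a stream of responses out
    type ServeStream: futures_core::Stream<Item = Result<Msg, Status>> + Send + 'static;
    async fn serve(&self, req: Request<Msg>) -> Result<Response<Self::ServeStream>, Status>;

    // 3. client streaming: a stream of requests in, one response out
    async fn collect(&self, req: Request<Streaming<Msg>>) -> Result<Response<Msg>, Status>;

    // 4. bidirectional streaming: streams in both directions
    type BidiStream: futures_core::Stream<Item = Result<Msg, Status>> + Send + 'static;
    async fn bidi(&self, req: Request<Streaming<Msg>>) -> Result<Response<Self::BidiStream>, Status>;
}
```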

Since reqwest and tonic are both built on top of hyper, you get HTTP/2 for free, which is nice :+1:

There currently exists https://github.com/bojand/ghz for gRPC load testing. The thing that makes ghz hard to use for me personally is that I do a lot of gRPC that involves uploading large files in chunks, which makes it impractical to write their JSON config file specifying what data goes in each field. I'd rather be able to use it as a library and create the request packets myself, so I can use it to tune things like chunk size, etc. (which is a reason why I'm interested in goose :grin:).
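
A minimal sketch of that chunked-upload pattern, assuming a hypothetical tonic-generated `UploaderClient` with a client-streaming `upload` RPC taking `Chunk { data: Vec<u8> }` messages (all of these names are illustrative, not goose or ghz API):

```rust
use tonic::transport::Channel;

// Hypothetical generated types assumed for this sketch:
//   UploaderClient, Chunk { data: Vec<u8> }, UploadReply
async fn upload_in_chunks(
    client: &mut UploaderClient<Channel>,
    data: Vec<u8>,
    chunk_size: usize, // the tunable parameter mentioned above
) -> Result<tonic::Response<UploadReply>, tonic::Status> {
    let chunks: Vec<Chunk> = data
        .chunks(chunk_size)
        .map(|c| Chunk { data: c.to_vec() })
        .collect();
    // tonic client-streaming RPCs accept any Stream of request messages,
    // so building the whole request up front makes chunk size easy to tune.
    client.upload(tokio_stream::iter(chunks)).await
}
```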

jeremyandrews commented 4 years ago

This is very helpful feedback, thanks!

Do you have any thoughts about how Goose might handle these streams? Unary is simple enough, but I'm wondering whether the other types of requests are a good fit.

xd009642 commented 4 years ago

Hmmm, well, personally most of my services have the client streaming and getting a single response, and I'm interested in how the delay between the last message sent from the client and the final response changes under load. I've also worked with some bidirectional ones that are able to do partial processing of subchunks, where once again I care about the time between the last message and the final response. The chunking largely handles the fact that not all the data is available when the request starts (or there's a lot of data). For load testing I'd have all the data available upfront and remove that latency for the purposes of testing.
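
Under those assumptions (all data available up front, stream sent immediately), the whole-call time approximates the last-message-to-final-response gap; a sketch reusing the hypothetical `UploaderClient` and `Chunk` types from above:

```rust
use std::time::Instant;

// Reuses the hypothetical UploaderClient/Chunk types from the earlier sketch.
async fn timed_upload(
    client: &mut UploaderClient<tonic::transport::Channel>,
    chunks: Vec<Chunk>,
) -> Result<std::time::Duration, tonic::Status> {
    // The full request stream is ready before the call starts, so elapsed
    // time around the await approximates last-message-to-response latency.
    let started = Instant::now();
    let _reply = client.upload(tokio_stream::iter(chunks)).await?;
    Ok(started.elapsed())
}
```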

But these aren't meant to be long-lived connections, more high-performance processing of large amounts of data. I know some Cisco routers do a streaming gRPC where they send (typically small amounts of) telemetry data at a set update rate (e.g. once per second) to a client until it ends the connection. I wouldn't see a need for load testing with that sort of application.

lcmgh commented 1 year ago

Is this something still being planned by the maintainers? Would love to use the same load testing tool for REST and gRPC.