artilleryio / artillery

The complete load testing platform. Everything you need for production-grade load tests. Serverless & distributed. Load test with Playwright. Load test HTTP APIs, GraphQL, WebSocket, and more. Use any Node.js module.
https://www.artillery.io
Mozilla Public License 2.0

Better visibility into running scenarios and better debugging facilities #127

Open hassy opened 8 years ago

hassy commented 8 years ago

A virtual user / request tracing feature would be useful for real-time visibility into what (a subset of) virtual users are doing: requests being sent, responses coming back, etc.

This would make it easier both to write new scripts and to figure out why a particular scenario isn't working anymore.

colceagus commented 8 years ago

I'm following this issue and can commit to working on a solution.

jdarling commented 8 years ago

My initial thought was that core should emit events related to scenarios; the runner or other tools could then subscribe to those events and do whatever they need with them. Plugins could be allowed to extend the functionality, e.g. pushing all events to Mongo, Cassandra, Redis, etc.

That would allow users to go in from a report and basically see: "Ah, I got a 500 from xxx; let me go query for that and see what happened."

I think the basic format should be similar to Bunyan's log format, giving hostname, pid, eventName, and data.

I know it isn't supported yet, but that way, when Artillery is running in something like a containerized environment, better information is available about the host that saw the errors (maybe it's a host issue) as well as the errors themselves.

hassy commented 8 years ago

@jdarling There are two related but different use cases to support. In one, you want visibility into every action a virtual user has taken (requests sent, what was captured, etc.) to help debug complex scenarios or verify they work as intended. The other is seeing the details of a request that caused the server to return an error response. Both would be solved by logging absolutely everything that happens, but that would hurt performance when running large-scale tests.
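One way to reconcile the two use cases is to pay the full-tracing cost only for a sampled subset of virtual users. The sketch below is illustrative only; `makeTracer` and `sampleRate` are made-up names, not Artillery options.

```javascript
// Hypothetical per-VU sampling: trace every Nth virtual user so that large
// tests only pay the logging cost for a small, deterministic subset.
function makeTracer(sampleRate, sink) {
  const every = Math.round(1 / sampleRate); // e.g. 0.01 -> every 100th VU
  return function maybeTrace(vuId, eventName, data) {
    if (vuId % every !== 0) return false;   // not a sampled VU: skip logging
    sink({ vuId, eventName, data });
    return true;
  };
}

const events = [];
const trace = makeTracer(0.01, (e) => events.push(e)); // trace ~1% of VUs

for (let vuId = 1; vuId <= 200; vuId++) {
  trace(vuId, 'request', { url: '/search' });
}
// Only VUs 100 and 200 were traced out of 200.
```

Sampling by VU id (rather than per event) keeps a traced user's whole scenario intact, which matters for the first use case; the error-reporting use case would still need a separate always-on path for failed requests.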