open-telemetry / opentelemetry-js

OpenTelemetry JavaScript Client
https://opentelemetry.io
Apache License 2.0
2.66k stars 765 forks

In-Memory Exporter for unit testing #4969

Open EduardoSimon opened 2 weeks ago

EduardoSimon commented 2 weeks ago

Hi folks,

We are building a wrapper on top of OTel's JS SDK to standardize its usage within our company. As a feature, we would like to provide an easy mechanism for testing manually instrumented code. In our backend code we use SDKs from other ecosystems that support an in-memory exporter. We configure it when running our tests and give developers custom test matchers to assert on the telemetry data. So far it has been working like a charm and our developers are pretty happy with the approach, which is why we would like to do something along the same lines in our JavaScript wrapper.
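The "custom test matcher" idea can be sketched roughly like this. Everything here is hypothetical and illustrative, not part of any OTel package; the exporter-like shape only mimics the `getFinishedSpans()` method of the SDK's `InMemorySpanExporter`:

```typescript
// Hypothetical shape of a finished span, reduced to what the matcher needs.
interface FinishedSpan {
  name: string;
  attributes: Record<string, string | number | boolean>;
}

// Anything that can hand back recorded spans, e.g. an in-memory exporter.
interface SpanSource {
  getFinishedSpans(): FinishedSpan[];
}

// Hypothetical matcher: throws unless a span with the given name
// (and optional attribute values) was recorded.
function expectSpanNamed(
  source: SpanSource,
  name: string,
  attributes: Record<string, string | number | boolean> = {}
): FinishedSpan {
  const match = source.getFinishedSpans().find(
    (s) =>
      s.name === name &&
      Object.entries(attributes).every(([k, v]) => s.attributes[k] === v)
  );
  if (!match) {
    throw new Error(
      `expected a span named "${name}" matching ${JSON.stringify(attributes)}`
    );
  }
  return match;
}

// Usage against a fake source standing in for the real exporter.
const fakeSource: SpanSource = {
  getFinishedSpans: () => [
    { name: "checkout", attributes: { "cart.items": 3 } },
  ],
};
const span = expectSpanNamed(fakeSource, "checkout", { "cart.items": 3 });
console.log(span.name); // checkout
```

In a test-framework setup the same helper would typically be wrapped into a framework-specific matcher (e.g. via Jest's `expect.extend`).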

Would you accept a PR to add it to the library?

Regards, Eduardo

pichlermarc commented 2 weeks ago

We actually do already export an InMemorySpanExporter from @opentelemetry/sdk-trace-base. Is it missing features?

EduardoSimon commented 2 weeks ago

Hi @pichlermarc, I tried that export and it worked like a charm. The combination of the SimpleSpanProcessor and the InMemorySpanExporter was key to setting up the testing scaffolding. Do you think it makes sense to add any of this information to the docs?

My idea on how to test the telemetry emitted by the app/library is the following: as it's an in-process dependency and you may have a "contract" with the alerting infrastructure set up for your production environment, I'd like to avoid breaking that contract. I could just mock everything out, but then I would not be sure I'm emitting the spans correctly.
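The scaffolding described above can be sketched in a self-contained way. The real classes live in `@opentelemetry/sdk-trace-base`; the stand-ins below only illustrate the shape of the pattern (synchronous export on span end, so tests can assert immediately), and all names besides the ones mentioned in the comments are made up for illustration:

```typescript
// Minimal stand-in for a finished span.
interface FinishedSpan {
  name: string;
  attributes: Record<string, string | number | boolean>;
}

// Stand-in for InMemorySpanExporter: collects every exported span in memory.
class InMemoryExporterSketch {
  private finished: FinishedSpan[] = [];
  export(spans: FinishedSpan[]): void {
    this.finished.push(...spans);
  }
  getFinishedSpans(): FinishedSpan[] {
    return this.finished;
  }
  reset(): void {
    this.finished = [];
  }
}

// Stand-in for SimpleSpanProcessor: forwards each ended span to the
// exporter synchronously, which is what makes assertions in tests easy.
class SimpleProcessorSketch {
  constructor(private exporter: InMemoryExporterSketch) {}
  onEnd(span: FinishedSpan): void {
    this.exporter.export([span]);
  }
}

// Simulates instrumented application code ending a span.
function handleCheckout(processor: SimpleProcessorSketch): void {
  processor.onEnd({ name: "checkout", attributes: { "cart.items": 3 } });
}

const exporter = new InMemoryExporterSketch();
const processor = new SimpleProcessorSketch(exporter);
handleCheckout(processor);

const spans = exporter.getFinishedSpans();
console.log(spans.length, spans[0].name); // 1 checkout
```

With the real SDK, the exporter and processor would be registered on the tracer provider during test setup, and `reset()` would run between test cases.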

pichlermarc commented 2 weeks ago

> Do you think it makes sense to add any of this information to the docs?

I'm not sure. Actually, a SpanProcessor (either batch or simple) is always needed to register an exporter when doing a manual OTel setup. Are you using the NodeSDK to set up tracing in your test app? :thinking:

EduardoSimon commented 1 week ago

Nope, I'm using a wrapper created by Honeycomb that eases the initial setup. However, their API gives direct access to the processor, which is why the process is the same. What I was missing, tbh, was a testing section in the docs; I found that other ecosystems, such as Elixir's, have a testing page.

pichlermarc commented 2 days ago

> My idea on how to test the telemetry emitted by the app/library is the following: as it's an in-process dependency and you may have a "contract" with the alerting infrastructure set up for your production environment, I'd like to avoid breaking that contract. I could just mock everything out, but then I would not be sure I'm emitting the spans correctly.

For application/library authors, I actually think it makes sense to test telemetry generation by mocking/spying on `@opentelemetry/api` in unit tests. The reason is that the `@opentelemetry/sdk-*` packages may receive more breaking changes than the API. So even if a new major version of the SDK is released, you won't have to update all your tests, which will save you headaches in the long run.
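A self-contained sketch of that unit-test style follows. The `TracerLike`/`SpanLike` shapes below are simplified stand-ins for the interfaces in `@opentelemetry/api` (in a real test you would spy on the actual API, e.g. with your test framework's mocking tools); `fetchUser` and all other names are hypothetical:

```typescript
// Simplified stand-ins for the API-level interfaces.
interface SpanLike {
  setAttribute(key: string, value: string | number): void;
  end(): void;
}

interface TracerLike {
  startSpan(name: string): SpanLike;
}

// Code under test: talks only to the API-shaped interface,
// never to the SDK.
function fetchUser(tracer: TracerLike, id: number): void {
  const span = tracer.startSpan("fetchUser");
  span.setAttribute("user.id", id);
  span.end();
}

// Test double: records what the instrumentation did.
class SpyTracer implements TracerLike {
  calls: { name: string; attributes: Record<string, unknown>; ended: boolean }[] = [];
  startSpan(name: string): SpanLike {
    const record = { name, attributes: {} as Record<string, unknown>, ended: false };
    this.calls.push(record);
    return {
      setAttribute: (k, v) => { record.attributes[k] = v; },
      end: () => { record.ended = true; },
    };
  }
}

const spy = new SpyTracer();
fetchUser(spy, 42);
console.log(spy.calls[0].name, spy.calls[0].ended); // fetchUser true
```

Because the assertions only touch the API surface, a breaking change in the SDK would not require touching these tests.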

The reason the SDK and API are separate packages is that if something in the SDK gets a breaking change in a major version, users won't have to update their instrumentation code, which only uses the API and may be mixed in with business logic; they just have to switch out the SDK setup code (which should be just a single file in their project). If the unit tests largely depend on the SDK anyway (via InMemory*Exporter and the data model it exposes), that would hold back the update to the next major version more than it needs to.

For integration tests, though - I think it absolutely makes sense to test output with an InMemory*Exporter from the SDK.

For E2E tests, I'd say the full SDK setup with the same exporter you'd use in prod is the best choice.

This will give you the most amount of coverage, while minimizing the impact of possible breaking changes in new major versions of the SDK.

> Nope, I'm using a wrapper created by Honeycomb that eases the initial setup. However, their API gives direct access to the processor, which is why the process is the same. What I was missing, tbh, was a testing section in the docs; I found that other ecosystems, such as Elixir's, have a testing page.

Interesting - IMO a short section on testing (and the concepts I wrote about above) could be very helpful. I think it needs a lot of thought before we can publish it, though. We have a weekly SIG meeting; I'll put this on the agenda to see what others think about doing it. It may take a while until we get started on this, though, as we're first working through other topics that we've already identified as priorities before starting new things.