octonato opened this issue 1 year ago
If we are comfortable enough that the docker-compose solution works for running the app, I don't see any reason not to use it for integration tests as well. Option 2 looks nice to me.
The drawback is that we will be offering yet another way to test Kalix.
True, but at the same time, in real life you will end up with different kinds of tests. Some integration tests might use only the proxy, and some might require the proxy, Kafka, and X other images to cover the use case end-to-end. The latter will be slow, so I would prefer to have the freedom to choose which level of integration test I want to run. It could be as simple as using the existing Testkit for X and DockerComposeIntegrationTest for Y, or DockerComposeIntegrationTest with different settings (different docker-compose files).
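A minimal sketch of how that choice could look in practice, assuming a hypothetical DockerComposeIntegrationTest base class that takes the compose file to start (class names and file names are made up for illustration; one possible shape for the base class itself is sketched further below):

```java
// Fast tests: a compose file that only starts the proxy.
class CounterProxyOnlyIT extends DockerComposeIntegrationTest {
  CounterProxyOnlyIT() {
    super("docker-compose.proxy.yml");
  }
  // ... tests that only need the proxy ...
}

// Slower end-to-end tests: proxy + Kafka (+ whatever else the use case needs).
class CounterWithKafkaIT extends DockerComposeIntegrationTest {
  CounterWithKafkaIT() {
    super("docker-compose.kafka.yml");
  }
  // ... tests that exercise the real broker ...
}
```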
Testcontainers recommends using a separate compose file without defined ports.
The problem with this idea is the same as with runAll: for the JavaScript SDK we can't use Testcontainers.
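For the JVM SDKs, a minimal sketch of that Testcontainers route, assuming a test-only compose file with no fixed host ports (the service names, ports and file path are assumptions, not the actual quickstart setup):

```java
import java.io.File;
import java.time.Duration;

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.DockerComposeContainer;
import org.testcontainers.containers.wait.strategy.Wait;

class ComposeBasedIT {

  // Testcontainers maps each exposed service to a random free host port,
  // which is why the compose file itself should not pin host ports.
  static final DockerComposeContainer ENVIRONMENT =
      new DockerComposeContainer(new File("src/it/docker/docker-compose.it.yml"))
          .withExposedService("kalix-proxy", 9000,
              Wait.forListeningPort().withStartupTimeout(Duration.ofMinutes(2)))
          .withExposedService("kafka", 9092);

  @Test
  void serviceIsReachableThroughTheProxy() {
    ENVIRONMENT.start();
    try {
      String host = ENVIRONMENT.getServiceHost("kalix-proxy", 9000);
      int port = ENVIRONMENT.getServicePort("kalix-proxy", 9000);
      // ... point an HTTP/gRPC client at host:port and exercise the service ...
    } finally {
      ENVIRONMENT.stop();
    }
  }
}
```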
we might want to re-think our strategy with the upcoming Testkit API for Topic and Streaming.
Yes, personally I would prefer to use Kafka/PubSub rather than a mocked topic for my integration tests if this doesn't require too much extra work.
I agree with this. I think such flexibility will be helpful.
Looking at the current implementation of the eventing testkit (using a gRPC backend): I understand how in certain scenarios you will want to have a real Kafka instance plugged in to run some tests, but in others you just want to make sure that something of format X is being published, and for that a real Kafka instance seems overkill. If we were to use real instances, we would probably want to offer a high-level API for those as well, hiding the real clients from the user, which would require some additional effort.

In my view, the eventing testkit should give you the ability to easily and loosely test the eventing integration boundaries with the focus on your business logic (roughly, which messages get produced, read from where, sent where, and in which format). However, you might have other boundaries (e.g. an action writing to Elasticsearch), or you might need to be more certain about the semantics of the broker you're using and thus prefer a real instance; in that case, DockerComposeIntegrationTest would be the way to go, and I think it's reasonable for that to require the use of a real client as well (e.g. if I'm testing that two messages from the same entity end up in the same Kafka partition).
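To make the last example concrete, this is the kind of assertion that only makes sense against a real broker with a real client; the bootstrap address, topic name and key scheme below are assumptions:

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PartitionAffinityCheck {

  // Verifies that all messages sharing the same key (e.g. the entity id)
  // ended up in the same Kafka partition. A mocked topic has no notion of
  // partitions, so this check needs the real broker started by docker-compose.
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed port mapping
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "partition-affinity-check");
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

    Map<String, Integer> partitionByKey = new HashMap<>();
    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      consumer.subscribe(List.of("counter-events")); // assumed topic name
      for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(10))) {
        Integer previous = partitionByKey.putIfAbsent(record.key(), record.partition());
        if (previous != null && previous != record.partition()) {
          throw new AssertionError("Key " + record.key() + " seen on partitions "
              + previous + " and " + record.partition());
        }
      }
    }
  }
}
```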
We have now some integration tests in our samples (service-to-service subscribers) that don't rely on the official Testkit. Instead, we have a hand-made test that starts a few containers using our own DockerComposeUtils and then starts the service under test. This code is rather raw and not intended for general consumption. It turns out that we also need a similar test for the Kafka quickstart, but the quickstart is intended to be used directly by users.
We need such a docker-compose based test for the Kafka quickstart because we want to verify that the docker-compose.yml we provide is correctly configured. So the question is: should we add a hand-made test to a quickstart project?
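For context, a stripped-down sketch of what such a hand-made test amounts to (this is not the actual DockerComposeUtils code; the compose invocation and timeouts are assumptions):

```java
import java.io.File;
import java.util.concurrent.TimeUnit;

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

class KafkaQuickstartIT {

  // Bring up the same docker-compose.yml we ship with the quickstart,
  // so the test fails if that file is misconfigured.
  @BeforeAll
  static void startCompose() throws Exception {
    run("docker", "compose", "-f", "docker-compose.yml", "up", "-d", "--wait");
  }

  @AfterAll
  static void stopCompose() throws Exception {
    run("docker", "compose", "-f", "docker-compose.yml", "down", "-v");
  }

  @Test
  void eventsFlowThroughKafka() {
    // ... start the service under test, send a command, assert the expected
    // message shows up on the topic / in the subscribing service ...
  }

  private static void run(String... command) throws Exception {
    Process process = new ProcessBuilder(command)
        .directory(new File("."))
        .inheritIO()
        .start();
    if (!process.waitFor(3, TimeUnit.MINUTES) || process.exitValue() != 0) {
      throw new IllegalStateException("Command failed: " + String.join(" ", command));
    }
  }
}
```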
We can think of a few ideas. For example, we could offer a DockerComposeIntegrationTest abstract class that provides the hooks, so we don't offer something as raw as what we have now. The drawback is that we would be offering yet another way to test Kalix. On top of that, if we introduce something like DockerComposeIntegrationTest, we might want to re-think our strategy with the upcoming Testkit API for Topic and Streaming. For example, if the docker-compose file defines the eventing as gRPC, the API would behave as it is designed now. However, if it's Kafka, we would use the API to produce or consume messages directly against the Kafka instance run by docker-compose. Same for PubSub.
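A rough sketch of the shape such an abstract class could take; the hook names and the use of Testcontainers underneath are assumptions for illustration, not a decided design:

```java
import java.io.File;

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.TestInstance;
import org.testcontainers.containers.DockerComposeContainer;

// Hypothetical base class: a subclass picks a compose file, optionally
// registers the services it wants exposed/waited for, and then just writes tests.
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public abstract class DockerComposeIntegrationTest {

  private final DockerComposeContainer environment;

  protected DockerComposeIntegrationTest(String composeFile) {
    this.environment = new DockerComposeContainer(new File(composeFile));
  }

  // Hook: register withExposedService(...) calls for the services the test needs.
  protected void configure(DockerComposeContainer environment) {}

  // Hook: extra setup once the containers are up (create topics, seed another
  // Kalix service with data, ...).
  protected void afterContainersStarted() throws Exception {}

  protected String serviceHost(String service, int port) {
    return environment.getServiceHost(service, port);
  }

  protected int servicePort(String service, int port) {
    return environment.getServicePort(service, port);
  }

  @BeforeAll
  void startAll() throws Exception {
    configure(environment);
    environment.start();
    afterContainersStarted();
  }

  @AfterAll
  void stopAll() {
    environment.stop();
  }
}
```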
Service-to-service streaming is a special case. If the docker-compose file includes some other Kalix service, users will be able to interact with it directly, feed it some data, and verify that the events are streamed correctly to the service under test (exactly the same way we are doing it in the subscriber tests). However, if we configure the proxy to use gRPC as the eventing backend, then the proxy will not listen to that other service; it will listen to the Testkit's gRPC eventing instead.
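Assuming the docker-compose file does include that other Kalix service, a sketch of such a check could look like the snippet below; the ports, routes and payloads are invented for illustration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Illustration only: endpoint paths, ports and payloads are made up; the real
// samples expose different routes.
public class ServiceToServiceStreamingCheck {

  public static void main(String[] args) throws Exception {
    HttpClient client = HttpClient.newHttpClient();

    // 1. Feed the other Kalix service (the event producer) started by docker-compose.
    HttpRequest produce = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:9000/counter/abc/increase/5"))
        .POST(HttpRequest.BodyPublishers.noBody())
        .build();
    client.send(produce, HttpResponse.BodyHandlers.ofString());

    // 2. Poll the service under test (the subscriber) until the event has been
    //    streamed across and is visible, or give up after ~30 seconds.
    HttpRequest query = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:9001/counter-view/abc"))
        .timeout(Duration.ofSeconds(5))
        .GET()
        .build();

    long deadline = System.currentTimeMillis() + 30_000;
    while (true) {
      HttpResponse<String> response = client.send(query, HttpResponse.BodyHandlers.ofString());
      if (response.statusCode() == 200 && response.body().contains("5")) {
        System.out.println("Event streamed to the service under test: " + response.body());
        return;
      }
      if (System.currentTimeMillis() > deadline) {
        throw new AssertionError("Event did not arrive, last response: " + response.body());
      }
      Thread.sleep(500);
    }
  }
}
```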