We're missing one important bullet-point in this task:
Set of tests proposal (see the sketch after this list):
1. Create cluster #1, 1M/1W
2. Add 1W to cluster #1
3. Add 1M and 1W to cluster #1
4. Create cluster #2 2M/2W
5. Add 1M and delete 1W in cluster #2, delete 1M and 2W in cluster #1
6. Destroy the clusters
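A minimal sketch of how this sequence could be encoded as a table-driven Go test. The manifest paths and the `applyManifest` helper are hypothetical placeholders, not part of any actual Claudie test framework:

```go
// Hypothetical encoding of the proposed test sequence as a table-driven Go test.
package e2e

import "testing"

type step struct {
	name     string
	manifest string // inputConfig applied in this step (paths are assumptions)
}

var proposedSteps = []step{
	{"create cluster #1 (1M/1W)", "testdata/01-create-cluster1.yaml"},
	{"add 1W to cluster #1", "testdata/02-add-worker.yaml"},
	{"add 1M and 1W to cluster #1", "testdata/03-add-master-and-worker.yaml"},
	{"create cluster #2 (2M/2W)", "testdata/04-create-cluster2.yaml"},
	{"add/delete nodes across both clusters", "testdata/05-mixed-changes.yaml"},
	{"destroy the clusters", "testdata/06-destroy.yaml"},
}

// applyManifest stands in for whatever mechanism ends up feeding an
// inputConfig into Claudie (kubectl apply, a gRPC call, ...).
func applyManifest(path string) error { return nil }

func TestProposedSequence(t *testing.T) {
	for _, s := range proposedSteps {
		t.Logf("applying step: %s", s.name)
		if err := applyManifest(s.manifest); err != nil {
			t.Fatalf("step %q failed: %v", s.name, err)
		}
	}
}
```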
I like the above-mentioned suggestion. It covers:
I have just one question. Should we bundle/chain steps together? E.g. in step 3, how about adding just the master node alone? The advantage would be that the scope is tighter. The disadvantage would be that the test needs more steps.
I was considering the idea of splitting step 3 into two separate steps. My conclusion was that if we know step 2 was successful, we have either a problem with the addition of the master node or a problem with the more complicated addition (1M and 1W). Identifying the problem would be obvious from the logs. Therefore, I decided to keep step 3 as it is.
Regarding other chained steps, my idea was to check the basic functionality in the first four steps and be more "harsh" on the platform in the rest of the tests.
But of course, we can change that :slightly_smiling_face:
What do you think @samuelstolicny?
@bernardhalas afaik we all have permissions to edit each other's issues - I've added your suggested bullet-point to the description of the task
Thank you to @MiroslavRepka for suggesting a good starting set of E2E tests; I like it.
I just want to point out that this set of tests will be modified quite a lot, and we don't need to make it perfect from the very beginning. We're very likely to end up revising the set of E2E tests after a few weeks/months of them being in operation, and we should also add more tests whenever we add a new major feature into the platform.
At this point in time, it's probably enough to implement the solution for carrying out these tests at all, so that we can begin building some confidence in what the platform can do, even as the devs keep committing to the codebase.
@MiroslavRepka I agree with your definition of the test points.
Shouldn't we consider adding hybrid-cloud test points?
After today's discussion with @bernardhalas, we have come up with concepts of how E2E tests could be run:
- In the `test` directory there would be a directory `test-set1` containing manifests to apply.
- A `test-set2` would be added to the `test` directory that would contain new manifests to test (see the sketch below).

Any feedback will be highly appreciated :slightly_smiling_face: @bernardhalas @MarioUhrik @samuelstolicny
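As a rough illustration of that layout, a minimal Go sketch that discovers `test-set*` directories under `test/` and lists their manifests in apply order. The `test/test-set*/*.yaml` layout is an assumption based on the comment above:

```go
package main

import (
	"fmt"
	"log"
	"path/filepath"
	"sort"
)

// List test-set directories under test/ and the manifests inside each,
// in the order they would be applied.
func main() {
	sets, err := filepath.Glob("test/test-set*")
	if err != nil {
		log.Fatal(err)
	}
	sort.Strings(sets)
	for _, set := range sets {
		manifests, err := filepath.Glob(filepath.Join(set, "*.yaml"))
		if err != nil {
			log.Fatal(err)
		}
		sort.Strings(manifests)
		fmt.Println(set, "->", manifests)
	}
}
```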
You summed it up well. I'll just add that:
- `test-set1` will contain a set of configs that would be applied against a single instance of a target cluster sequentially.
- `test-set1` ... `test-setN` would run in parallel, as sketched below.
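A minimal sketch of that execution model, assuming a hypothetical `runConfig` helper: configs within one test-set run sequentially, while the test-sets themselves run in parallel.

```go
package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

// runConfig stands in for "apply one inputConfig and wait for the result".
func runConfig(set, cfg string) error {
	fmt.Printf("applying %s/%s\n", set, cfg)
	return nil
}

func main() {
	// Hypothetical test sets; in practice these would be discovered on disk.
	testSets := map[string][]string{
		"test-set1": {"1.yaml", "2.yaml"},
		"test-set2": {"1.yaml", "2.yaml", "3.yaml"},
	}
	var g errgroup.Group
	for set, cfgs := range testSets {
		set, cfgs := set, cfgs // capture range variables (pre-Go 1.22)
		g.Go(func() error { // each test-set runs in parallel
			for _, cfg := range cfgs { // sequential within a set
				if err := runConfig(set, cfg); err != nil {
					return err
				}
			}
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		fmt.Println("E2E run failed:", err)
	}
}
```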
Hi guys, I'm glad this topic is moving forward and I agree with pretty much everything said above.
I am assuming that when @MiroslavRepka wrote "manifests", he meant "inputConfigs" (user input config for Claudie). So really the E2E tests will be just a bunch of inputConfigs, for which we'll have a framework, which will:
I'll just add a couple of things from the design point of view of E2E testing in general:
- `claudie-${PR-number}-${commit-hash}` or something like that, with its own completely isolated environment, rather than a single shared `claudie` (see the sketch below).
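For illustration only, deriving such an instance name in Go; the `PR_NUMBER` and `COMMIT_HASH` environment variables are hypothetical, and real CI systems expose equivalents under different names:

```go
package main

import (
	"fmt"
	"os"
)

// Build an isolated instance name such as claudie-42-a1b2c3d.
func main() {
	pr := os.Getenv("PR_NUMBER")    // hypothetical CI variable
	sha := os.Getenv("COMMIT_HASH") // hypothetical CI variable
	if len(sha) > 7 {
		sha = sha[:7] // short hash keeps the name within K8s name limits
	}
	fmt.Printf("claudie-%s-%s\n", pr, sha)
}
```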
Let me just record a discussion from yesterday's daily here.
Essentially we opened up the possibility of using a gRPC-native testing framework for injecting the `userConfig` into Claudie and assessing the results. How the injection gets done is an open topic (e.g. a K8s Job, a service).
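A hedged sketch of what such gRPC-native injection could look like from a Go test. The generated package path, the client constructor, the message shape, and the `context-box:50055` address are all assumptions based on this thread, not a confirmed Claudie API:

```go
package e2e

import (
	"context"
	"os"
	"testing"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "example.com/claudie/proto" // hypothetical generated package
)

// TestInjectConfig dials Context-box and injects a userConfig via the
// SaveConfigFrontEnd RPC discussed above.
func TestInjectConfig(t *testing.T) {
	conn, err := grpc.Dial("context-box:50055",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		t.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	manifest, err := os.ReadFile("test/test-set1/1.yaml") // hypothetical path
	if err != nil {
		t.Fatalf("read manifest: %v", err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	client := pb.NewContextBoxClient(conn)
	if _, err := client.SaveConfigFrontEnd(ctx, &pb.SaveConfigRequest{
		Config: &pb.Config{Manifest: string(manifest)},
	}); err != nil {
		t.Fatalf("SaveConfigFrontEnd: %v", err)
	}
	// Assessing the results (polling until the cluster converges) would follow.
}
```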
All conditions were met, closing this issue.
Task review: All subtasks in this issue were implemented in PR #60 together with PR #59 by @MiroslavRepka and @samuelstolicny. Parts of the discussion related to test parallelism etc. are addressed in later issues, e.g. #97 'Feature: Support for parallel test framework pipelines'.
End-to-end tests
Motivation: We need to be able to test the platform's functionality in a production-like environment.
Description: Deploy the platform on a Kubernetes cluster and figure out how to run end-to-end tests on the current functionality of the platform.
Exit Criteria:
- `SaveConfigFrontEnd` message sent to Context-box