dejanb opened 6 days ago
I don't see an easy way to initialize trustify with ds3 once and then run multiple tests against it. The solutions I found so far suggest doing it manually with ctor, lazy_static, or once_cell.
Do we have a preferred way of doing something like this? I would like to see if we can do this all in cargo test, instead of spawning multiple processes/containers (if we can get away with it). @ctron @jcrossley3 @helio-frota
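For the in-process route, std's OnceLock (stable since Rust 1.70) can handle the one-time setup without pulling in ctor, lazy_static, or once_cell. A minimal sketch, where TestContext, the setup body, and the URL are hypothetical placeholders, not trustify APIs:

```rust
use std::sync::OnceLock;

/// Hypothetical handle to a trustify instance seeded with ds3.
struct TestContext {
    base_url: String,
}

/// Runs the expensive setup exactly once, no matter which test
/// calls it first; every later call gets the same &'static context.
fn ctx() -> &'static TestContext {
    static CTX: OnceLock<TestContext> = OnceLock::new();
    CTX.get_or_init(|| {
        // The real version would start trustify and ingest ds3 here.
        TestContext {
            base_url: "http://localhost:8080".into(),
        }
    })
}

#[test]
fn first_query() {
    assert!(ctx().base_url.starts_with("http"));
}

#[test]
fn second_query() {
    // Reuses the already-initialized context; setup does not rerun.
    assert_eq!(ctx().base_url, "http://localhost:8080");
}
```

One caveat: each integration-test file under `tests/` compiles to its own binary, so the OnceLock is shared within one binary but re-initialized per binary. Keeping all dataset tests in one file (or one module) sidesteps that.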
We do have something similar, maybe a bit more heavyweight: https://github.com/trustification/trustify-load-test-runs/
It's orchestrated through the compose.yaml
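For flavor, an orchestration in that spirit could look roughly like the following compose.yaml sketch; the service names, images, and dump path here are assumptions for illustration, not copied from trustify-load-test-runs:

```yaml
# Sketch only: populate the DB once, then run the suite against it.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: trustify
      POSTGRES_PASSWORD: trustify
  populate:
    image: postgres:16
    depends_on: [db]
    volumes:
      - ./dump.sql:/dump.sql:ro   # local dump file, per the comment below
    command: psql -h db -U trustify -f /dump.sql
  tests:
    build: .
    depends_on: [populate]
    command: cargo test --workspace
```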
I don't see an easy way to initialize trustify with ds3 once and then run multiple tests against it.
The DB populate is done here (I had problems downloading the dump, but we can use a local dump file).
Yeah, I have to say that once we have things working with trustify-load-tests it turns out to be really handy: compose run, compose down, etc.
Thanks for the input. We discussed it a bit, and the idea is that we should try to reuse the existing dataset test and work in iterations toward a fully automated e2e suite.
Hi everyone, if it helps: the links that @jcrossley3 shared are the strategy the UI is currently using for executing e2e tests. What it does is:
Reusing the current GH workflows is as easy as this PR: https://github.com/trustification/trustify/pull/857/files (I think Jim made it even simpler with https://github.com/trustification/trustify/blob/7292e9be21c95a33b068f464f7f6f07c7c73c40f/.github/workflows/latest-release.yaml#L71-L77).
I am not sure exactly what kind of tests you have in mind, but perhaps we could write them at https://github.com/trustification/trustify-api-tests and let the current workflows do the rest. We would only need to focus on writing the tests, because the whole infrastructure and the task of providing an instance of Trustify should already be taken care of.
IMO, to do proper e2e tests we need a real instance of Trustify, instantiated the same way the final user would instantiate it (which in the UI case is done by the operator).
We need to replicate the GUAC end-to-end tests that cover the correlation bugs found in v1.
In my mind, it would be ideal to start with the dataset test and make it easy to add new test cases.
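One way to keep new cases cheap is to make the dataset test table-driven, so adding a case is a single line. In this sketch the endpoints, expected counts, and the query_count helper are all hypothetical stand-ins, not the trustify API:

```rust
/// One row per dataset expectation; adding a case is one line in CASES.
struct Case {
    endpoint: &'static str,
    expected_min: usize,
}

const CASES: &[Case] = &[
    Case { endpoint: "/api/v1/advisory", expected_min: 1 },
    Case { endpoint: "/api/v1/sbom", expected_min: 1 },
];

/// Stand-in: a real test would query the shared, ds3-seeded instance
/// and return the number of results. Here we return the endpoint
/// length just so the sketch compiles and runs.
fn query_count(endpoint: &str) -> usize {
    endpoint.len()
}

#[test]
fn dataset_cases() {
    for case in CASES {
        assert!(
            query_count(case.endpoint) >= case.expected_min,
            "too few results from {}",
            case.endpoint
        );
    }
}
```

The same table shape would work whether the cases hit HTTP endpoints or call the service layer directly.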