rtnpro closed this issue 6 years ago.
Wouldn't `kubectl create --validate` be enough for this?
@runseb I guess `kubectl create --validate` will check that the artifacts conform to the Kubernetes spec, but it does not ensure that the app will run and respond.
Yes, the issue name, "Validating artifacts generated by kompose", is a bit misleading. Validation is only one part of my proposal, and if `kubectl create --validate` does that, well and good. However, it does not replace the need for a functional test suite that helps developers test their applications end to end. There could be bugs in the apps, the container images, or the compose spec files which lead to the application not running as expected on the target platform.

This is why I think we should provide developers with tools to test their applications end to end, which, in turn, could be run on a CI platform.
But how can you ensure that the app runs properly? k8s clusters may have many different setups (a typical difference is DNS not being installed). An app could run in one cluster but fail in another.

People may also use kompose but start from a failing docker-compose file.

There is a lot happening in that space with Helm and charts. Charts (app packages) are being linted and go through CI. I think it would make sense to expand the chart conversion to be in line with Helm packaging.
On Wed, Aug 31, 2016 at 3:12 PM, runseb notifications@github.com wrote:

> but how can you ensure that app runs properly. k8s clusters may have many different setup (typical difference is DNS not installed). One app could run in one cluster but fail in another.

True!

> people may also use kompose, but start from a failing docker-compose.

> There is a lot happening in that space with helm and charts. Charts (apps package), are being linted and are going through CI.

:+1:

> I think it would make sense to expand the chart conversion to be inline with the helm packaging.

Yeah! It makes sense.
I did look at your code in Atomic App; I see what you are doing now. As far as I can tell, you are not really doing functional testing of the app itself, in the sense of verifying that the app functions; you are checking a few assertions about which pods, services, and RCs need to be running.
I suppose this could be implemented in kompose via a `kompose test` subcommand, but we would need to automatically detect which resources are supposed to have been created and be in a running state.

Ideally, for the user, a single `kompose up` should do the conversion and the creation, and then report on the status, much like `docker-compose up` in the foreground actually creates the containers and prints the logs of each container to stdout.
On Wed, Aug 31, 2016 at 5:58 PM, runseb notifications@github.com wrote:

> I did look at your code in atomic app, I see what you are doing now.
> As far as I know you are not really doing functional testing of the app itself in the sense of the app itself functioning, you are checking a few assertions around what pods, services and rc need to be running.

Yeah! Those were basic assertions that I had implemented. I was also planning to extend them in the future with helper assertions for endpoints and their responses.

> I suppose this could be implemented in kompose via a kompose test subcommand. But we would need to automatically detect what resources are supposed to have been created and be in running state.

I do not fancy doing much magic, as in detecting RCs, pods, and services for apps automatically. I'd rather the code be dumb but empower the application developer to use our helper assertions to write test cases for their apps in a few lines. We can write tests for some sample applications for testing kompose itself, and show and encourage developers to do the same.
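To illustrate what such "dumb" helper assertions could look like, here is a minimal sketch. The function names and the caller-supplied `get_phase` callable are hypothetical, not kompose or Atomic App code; the point is that the helper only retries, while the developer supplies the actual check:

```python
import time

def wait_until(predicate, timeout=60, interval=2):
    """Poll `predicate` until it returns truthy or `timeout` seconds pass.

    A generic building block for "dumb" helper assertions: the developer
    supplies the check (e.g. a kubectl lookup); the helper only retries.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

def assert_running(get_phase, name, timeout=60, interval=2):
    """Assert that a resource reaches the 'Running' phase within `timeout`.

    `get_phase` is a caller-supplied callable (hypothetical here) that
    returns the current phase string, e.g. by shelling out to kubectl.
    """
    if not wait_until(lambda: get_phase() == "Running", timeout, interval):
        raise AssertionError("%s did not reach Running state" % name)
```

An application developer would then pass in a callable that queries the cluster (for example via kubectl) and write a test case for their app in a couple of lines.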
> Ideally, for the user a single kompose up should do the conversion, the creation and then report on the status. like docker-compose up in the foreground actually creates the containers and prints the logs of each container to stdout.

Mhm, however I am not able to get the context of this with respect to tests.
So, from a UX perspective, what will a user have to do, and how will they use kompose to run those tests?
On Fri, Sep 2, 2016 at 3:12 PM, runseb notifications@github.com wrote:

> so from a UX perspective, what will a user have to do and how will they use kompose to run those tests ?

To run the test cases for kompose itself:

```sh
$ kompose test
```

To run tests on any compose application:

```sh
$ kompose test /path/to/application
```
And where are the tests themselves?
On Fri, Sep 2, 2016 at 3:32 PM, runseb notifications@github.com wrote:

> and where are the tests themselves ?

I want the test cases for downstream applications to be in their code base itself, in a tests subdir. In the upstream kompose source code, they could be in a tests subdir as well.

However, on second thought, do we even need a `kompose test` command? AFAIK, people usually write functional tests for Go applications in shell scripts, and those could be run without using the kompose command directly. So application developers could run their tests locally by executing a local test runner script.
I think we need to work on a set of scripts that users of kompose can extend to validate their converted applications against real providers (Kubernetes, OpenShift). This will help developers deploy their converted applications with confidence.
Tests for users of kompose? I'm not sure what you mean by that :-( or even if I understand it correctly :-( To me it seems that you are both talking about different things :)

We need a lot more tests, tests for ourselves, to make sure that we are not breaking features that already work: unit tests, functional tests, and so on.

Then I think we should focus on improving conversion rather than adding completely new features to kompose. We still don't support a lot of directives from docker-compose.yml (this should be our todo list, with build in first place).
I am pretty sure I understand what @kadel says. We definitely need more tests for kompose itself; I agree with that 100%, and we need to increase the functionality.

But what @rtnpro is proposing are tests for the actual containerized application. So I fail to understand how we would package those (application-specific) tests in a kompose release, and even how we would implement it in Go in the kompose source, since @rtnpro is now talking about independent shell scripts...
> but what @rtnpro is proposing are tests for the actual containerized application.

I think what he meant by this is that it would be nice to have the whole workflow as part of our test suite. One of the tests would do an actual deployment of some sample application to a testing cluster (run in a container or using minikube) and verify that the deployed app is running and responding.

Or at least this is what he did for Atomic App.
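The "running and responding" half of that check could be sketched with nothing but the standard library. This is only an illustration: the function name is made up, and in a real run the URL would point at the Service exposed by the converted app (e.g. via minikube), not a parameter chosen by the test harness:

```python
import time
import urllib.error
import urllib.request

def app_responds(url, expected_status=200, timeout=120, interval=3):
    """Return True once the deployed app answers at `url` with the
    expected HTTP status, or False if `timeout` seconds elapse.

    Connection errors are treated as "not up yet" and retried, since a
    freshly deployed pod may take a while to start serving.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == expected_status:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not up yet; keep polling
        time.sleep(interval)
    return False
```

A CI job could then deploy the converted manifests and simply fail the build if `app_responds(...)` returns False.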
Unless we are talking about end-to-end tests for CI.
On Fri, Sep 2, 2016 at 5:31 PM, Tomas Kral notifications@github.com wrote:

> > but what @rtnpro is proposing are tests for the actual containerized application.
>
> I think that what he meant by this is that it would be nice to have whole workflow as part of our test suite. One of the test would do actual deployment of some sample application to testing cluster (run in container or using minikube) and verify that deployed app is running and responding.

Yeah :)
@surajssd, @kadel, I think we have a good set of tests for kompose now; can we close this issue?
We can validate what kompose generates to check that it matches the k8s version, using JSON Schema or the tool called kubeval.
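For illustration, here is a toy stand-in for that idea. This is not kubeval itself, which checks manifests against the real upstream Kubernetes schemas; this stdlib-only sketch with made-up function names only checks structural basics on deserialized JSON manifests:

```python
import json

# Top-level fields every Kubernetes object must carry.
REQUIRED_FIELDS = ("apiVersion", "kind", "metadata")

def validate_manifest(obj):
    """Return a list of problems found in one deserialized manifest.

    Only structural basics are checked; a real validator would verify
    the whole object against the schema for its apiVersion/kind.
    """
    problems = []
    for field in REQUIRED_FIELDS:
        if field not in obj:
            problems.append("missing field: %s" % field)
    if "metadata" in obj and not obj["metadata"].get("name"):
        problems.append("metadata.name is empty")
    return problems

def validate_file(path):
    """Validate a JSON manifest file, unwrapping a List's `items`."""
    with open(path) as f:
        doc = json.load(f)
    items = doc.get("items", [doc]) if isinstance(doc, dict) else [doc]
    return {i: validate_manifest(obj) for i, obj in enumerate(items)}
```

In practice, pointing kubeval at the generated files gives the same kind of per-object problem report, but against the actual schemas.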
@surajnarwade just found this issue as well. kubeval is just about set up to be usable as a Go library in https://github.com/garethr/kubeval/pull/15. Happy to help integrate that if it is of interest.
@garethr :+1:
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with a `/lifecycle frozen` comment.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
/remove-lifecycle rotten
I think we need to work on a set of scripts that users of kompose can extend to validate their converted applications against real providers (Kubernetes, OpenShift). This will help developers deploy their converted applications with confidence.
Proposed features
I have done some related work for Atomic App here: https://github.com/projectatomic/atomicapp/pull/655, which makes writing test cases for individual apps very easy, in a few lines: https://github.com/rtnpro/atomicapp/blob/2e6e72c221856abbca34f7779af70de5f490de25/tests/system/test_openshift_provider.py

This is thanks to a comprehensive base test suite that allows for high-level assertions: https://github.com/rtnpro/atomicapp/blob/2e6e72c221856abbca34f7779af70de5f490de25/tests/system/base.py#L425

I hope to translate my experience into implementing something similar for kompose, to help developers using kompose ship their converted apps with confidence.