Closed. fmoral2 closed this 1 year ago.
Also I see some lint warnings from GH Actions, do we need to address those?
> Also I see some lint warnings from GH Actions, do we need to address those?
Warnings, no; only errors. Fixing warnings is not a must; only if it's feasible and makes real sense.
This looks overall fine. Slight concern that we're adding a maintenance burden in that now we can run this via:
* `go test`
* `make`
* `docker build...` (local)
* `docker build...` (jenkins)

where there can be different files that go into each of them. So now each time a new test is added, there need to be four separate run configurations to attempt just for one test, let alone any regression checks.
We should standardize on a common way to run the code. It requires docker for Jenkins, so that should definitely be one way, but if we want to move forward using environment vars/files then we should also do that in Jenkins so that we maintain fewer files and less change complexity overall.
> So now each time a new test is added, there need to be four separate run configurations to attempt just for one test, let alone any regression checks.
I totally get your point, but you are mixing up a few different subjects and also making some not-quite-correct assumptions.

1. Related to how to run the code: we changed nothing. We still have the same 3 ways of running as before: locally, via Docker, or via Jenkins.

2. The changes were basically 2: a) simplifying the env config; b) adding a new Dockerfile to avoid ending up with a single Dockerfile full of conditionals.

b) This is a normal approach when you have different build targets. Think of it as having an application that you need to build with different params and configs (which is totally normal and necessary), so it's a common way to organize things better, having the Docker and config files separated, like:

- build dev env: docker compose with different params due to the different needs and environment configurations
- build production: same stuff, different args, and also sometimes different strategies (which is also our case for the Dockerfiles; one Dockerfile takes a different approach and uses an entrypoint because locally it's needed)

We need to start understanding that the idea here is to have a framework, not scripting automation. The framework will act as an app, service, or lib (whichever path is better), so this "app" needs features and functionality in order to grow and do lots of nice and handy stuff. It's impossible for an app to grow and add features without adding any burden or complexity. The point is to always keep things as simple as possible; sometimes we can't run away from some complexity, some burden, or even some duplication (things we are tired of hearing are bad and should be avoided). Yes, of course we should avoid them when possible and not add extra complexity or over-engineer stuff, but that is different from adding new features or functionality.

Again, we need to change the perspective here: this is not scripting automation, it's a framework being built.
> So now each time a new test is added, there need to be four separate run configurations to attempt just for one test, let alone any regression checks.
I can't see this happening, because when you add a new test case you just do whatever you were already doing: add the make command for it and create the job on Jenkins. No need for any of this.
> We should standardize on a common way to run the code
That's exactly what we are doing here: standardizing, separating what needs to be separate, and creating a common way to run, via go test, make, or Jenkins.
> but if we want to move forward using environment vars/files then we should also do that in Jenkins so that we maintain fewer files and less change complexity overall.
This, for instance, is an over-engineering decision; we don't need that complexity in Jenkins for now, and probably never. You should handle this in the framework config; Jenkins doesn't need to know those internal details.
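To make that concrete, here is a minimal sketch of what I mean by handling it in the framework config (hypothetical names, not the actual framework code): the framework itself falls back from a real environment variable to a `.env` entry to a default, so a Jenkins stage only has to build and run the image and never touches these internals.

```go
// Hypothetical sketch, not the actual framework code: the framework resolves
// its own settings from the environment or a .env file shipped with the image,
// so the Jenkins job only runs the container and passes nothing extra.
package config

import (
	"bufio"
	"os"
	"strings"
)

// Get returns the value for key, preferring a real environment variable,
// then a matching entry in the given .env file, then the provided default.
func Get(key, envFile, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	if f, err := os.Open(envFile); err == nil {
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, key+"=") {
				return strings.TrimPrefix(line, key+"=")
			}
		}
	}
	return def
}
```

With something along these lines, adding a new setting is a change inside the framework only; the Jenkins file stays the same.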
> 1. Related to how to run the code: we changed nothing. We still have the same 3 ways of running as before: locally, via Docker, or via Jenkins.
So "via local" here can mean running either `go test`, `make`, or `docker build... && docker run...`. Jenkins just runs via the docker flow. I wasn't super clear in my point there though. I meant to say just that now there are two Dockerfiles to maintain -- one for when we run `docker build... && docker run...` locally and the other for when that is run in Jenkins.
You did touch on that a bit later on:
> b) This is a normal approach when you have different build targets. Think of it as having an application that you need to build with different params and configs (which is totally normal and necessary), so it's a common way to organize things better, having the Docker and config files separated, like:
>
> - build dev env: docker compose with different params due to the different needs and environment configurations
> - build production: same stuff, different args, and also sometimes different strategies (which is also our case for the Dockerfiles; one Dockerfile takes a different approach and uses an entrypoint because locally it's needed)
>
> Summarizing, this is not bringing additional burden in a way that's bad; it's doing the opposite, it's starting to organize the framework better.
I'd argue here that we shouldn't have different build targets other than different operating systems where this might run, which should be accounted for by having the docker flow already.
It seems to me the biggest point of this change is to simplify how the tests are run, which I do approve of. I think we can allow that simplicity to be present in our local docker runs as well as the Jenkins docker runs, because technically, by having two Dockerfiles, I can already run it locally via docker using today's flow just by building from the Jenkins Dockerfile.
> but if we want to move forward using environment vars/files then we should also do that in Jenkins so that we maintain fewer files and less change complexity overall.

> This, for instance, is an over-engineering decision; we don't need that complexity in Jenkins for now, and probably never. You should handle this in the framework config; Jenkins doesn't need to know those internal details.
Jenkins is simply a CI tool that runs things, so if it doesn't need additional complexity, then we really shouldn't need that locally either. As I see it though, this PR actually makes things less complex, so I'd argue it should be present in Jenkins the same as locally.
Gotcha, yeah.
But the point is that to run locally we now have an extra feature that I don't see a need for in Jenkins: the override stuff. Meaning we need an entrypoint locally that is not needed in Jenkins.
Also, yes, it's a CI env, but it runs on an agent machine with a lot of configuration differences from running locally. So yeah, we can consider it a different target, meaning that to have a Jenkins agent up and running with all the stuff ready before the tests start, there is a lot of config to do in there.
Besides that feature, all of the changes were applied to the Jenkins file as well.
Yeah, I'm not too concerned with how it is; it just needs to be taken into consideration that there are subtle differences, so post-development, when testing new changes, we just need to ensure that it builds properly in the separate targets is all. We don't have to make additional changes.
> Yeah, I'm not too concerned with how it is; it just needs to be taken into consideration that there are subtle differences, so post-development, when testing new changes, we just need to ensure that it builds properly in the separate targets is all. We don't have to make additional changes.
Hey, actually the difference between the Dockerfiles is not in the building, it's just in how it is run. So your worry doesn't need to be taken into account: the tests we create have no reason to change based on the Dockerfile differences, and the run itself being different doesn't interfere with anything. So there's no need to double-check that, and either way we have as a PR requirement that the tests run OK both locally and on Jenkins.
Proposed Changes
- feat: add test runner and makefile adjustments
The proposal is to add a test runner to run the make file locally and in Docker.
Below you can find the related tasks.
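For a rough illustration only (the env var name and flags below are assumptions, not the real implementation), the test runner is essentially a thin wrapper around `go test`, so the Makefile, the local docker run, and Jenkins can all invoke the same entry point:

```go
// Minimal, illustrative sketch of a test-runner entry point: it just wraps
// `go test`, so the same command works from the Makefile locally and from
// inside the Docker image.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// TEST_PACKAGE is an illustrative env var name, not necessarily the real one.
	pkg := os.Getenv("TEST_PACKAGE")
	if pkg == "" {
		pkg = "./..."
	}
	cmd := exec.Command("go", "test", "-v", "-timeout", "60m", pkg)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("tests failed: %v", err)
	}
}
```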
Types of Changes
Testing
Checklist:
YES
UpgradeSUC - https://jenkins.int.rancher.io/job/rke2-tests/view/rke2-patch-validation/job/rke2_validate_cluster/383/console
Windows - https://jenkins.int.rancher.io/job/rke2-tests/view/rke2-patch-validation/job/rke2_validate_cluster/385/console
Verify code lint; we should not have errors.
Update the documentation if needed.
Update the makefile and docker run command if adding new tests.
Run your tests at least 4 times with all configurations needed and possible.
If needed, test with different OS types.
Linked Issues
Further Comments
Variables precedence order:
First: code
Second: CLI
Third: .env
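As a hedged sketch of that order (the names and the flag below are illustrative, not the framework's real API): a value set in code is used first, then one passed on the CLI, then the entry from the .env file.

```go
// Illustrative sketch of the precedence order above: code first, then CLI,
// then .env. The flag name and values are hypothetical.
package main

import (
	"flag"
	"fmt"
)

// resolve returns the first non-empty value following the documented order.
func resolve(fromCode, fromCLI, fromDotEnv string) string {
	if fromCode != "" {
		return fromCode // 1st: value set in code
	}
	if fromCLI != "" {
		return fromCLI // 2nd: value passed on the CLI
	}
	return fromDotEnv // 3rd: value read from the .env file
}

func main() {
	cliValue := flag.String("install-version", "", "illustrative CLI flag")
	flag.Parse()

	fromDotEnv := "value-from-dotenv" // would normally be parsed from the .env file
	fmt.Println(resolve("", *cliValue, fromDotEnv))
}
```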