Closed: mezarin closed this 2 years ago
I am not sure why it takes so long on your system; on mine it takes at most a second to run. As I mentioned, this serves two purposes: catching errors during "development" (updates to the scripts, customization files, Dockerfile, devfile, etc.), and catching errors in cases where only the customization files are updated. Again, the purpose of running validation at build time is for development: erroneous updates to the scripts, Dockerfiles, etc. can leave some customization unchanged, and that is easy to miss during manual inspection. It would be nice to have an automated way to catch those errors at build time, rather than pushing a commit and waiting for the tests to run before the problem surfaces.
Thanks for taking a look. If I am understanding the cases correctly, I believe the concerns you raised are handled by the last update. The purpose of doing it this way is to detect script update issues both before merging the code (when running build.sh, i.e. as a "unit" test) and after merging the code (if someone forgot to run build.sh); the latter is the case you are most concerned about. I agree that the validation being introduced does not check that the merged output files are exactly the same as the build.sh output (comments, or anything not associated with customization parameters in the customization files, is ignored). So perhaps we can address the part you are more concerned about with a separate test that simply diffs the files.
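The "separate test that simply diffs the files" could look something like the sketch below: rebuild the artifacts into a scratch directory and compare them against the committed copies. This is only an illustration; `build.sh` taking an output directory and `generated/` being the committed artifact location are assumptions, not the project's actual interface.

```shell
# Hypothetical staleness check: compare freshly built artifacts against
# the copies committed to the repo. Fails if they differ in any way.
check_stale() {
    built_dir="$1"      # directory holding freshly built artifacts
    committed_dir="$2"  # directory holding the committed artifacts
    if diff -r "$built_dir" "$committed_dir" >/dev/null; then
        echo "artifacts up to date"
        return 0
    else
        echo "ERROR: committed artifacts differ from build.sh output" >&2
        return 1
    fi
}

# Intended use in a PR test (paths are assumptions):
#   scratch=$(mktemp -d)
#   ./build.sh "$scratch" && check_stale "$scratch" generated/
```

Unlike the value-level validation in this PR, a raw diff would also flag differences in comments and formatting, which is exactly the stricter behavior being discussed.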
This PR adds validation to ensure that the generated stack artifacts contain the customization values specified in the customization files (customize-ol.env and customize-wl.env). Validation runs both as part of the build process (for development) and as a PR test, covering cases where a user forgets to rebuild the stack artifacts after customizing the stack.
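In spirit, the validation described above amounts to: for each KEY=VALUE pair in a customization file, confirm the value shows up in the generated artifacts. A minimal sketch follows; the function name, the env-file parsing, and the artifact directory layout are assumptions for illustration, not the PR's actual implementation.

```shell
# Hypothetical value-level validation: every value defined in the
# customization file (e.g. customize-ol.env) must appear somewhere in
# the generated stack artifacts.
validate_customizations() {
    env_file="$1"     # customization file with KEY=VALUE lines
    artifact_dir="$2" # directory holding the generated artifacts
    rc=0
    while IFS='=' read -r key value; do
        # skip blank lines and comment lines
        case "$key" in ''|'#'*) continue ;; esac
        if ! grep -rq -- "$value" "$artifact_dir"; then
            echo "ERROR: value '$value' for '$key' not found in $artifact_dir" >&2
            rc=1
        fi
    done < "$env_file"
    return $rc
}
```

A check like this catches a broken merge script (a value that never made it into the output) but, as discussed above, deliberately ignores comments and anything else not tied to a customization parameter.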
Sample error message: