Closed: bdlink closed this issue 1 year ago.
@bdlink, when using s2i you can enable incremental builds. The local Maven cache from the previous build is injected into the next build: https://docs.openshift.com/container-platform/4.7/cicd/builds/build-strategies.html#builds-strategy-s2i-incremental-builds_build-strategies Let me know if that would help you. Thank you.
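Enabling incremental builds is a one-line change to the BuildConfig. A minimal sketch, assuming an existing BuildConfig named `my-app` (the name is a placeholder):

```shell
# Enable S2I incremental builds on an existing BuildConfig
# ("my-app" is a placeholder for your BuildConfig name).
oc patch bc/my-app --type=merge \
  -p '{"spec":{"strategy":{"sourceStrategy":{"incremental":true}}}}'

# Trigger a new build; artifacts saved by the previous build
# (e.g. the local Maven repository) are reused.
oc start-build my-app --follow
```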
Jean-François,

Thank you for that suggestion. It would help the use case of a single application being repeatedly deployed; I will test this case. My main concern is a use case where 25 developers with individual development namespaces are each deploying 10 separate applications (so around 250 applications), all with a few Galleon layers on one or two base WildFly versions (such as 26.1.2.Final and 27.0.0.Final).

Thanks,
Bruce
@bdlink, I am thinking of a two-level s2i build (raw thinking): a first s2i build provisions the WildFly server (with its Galleon layers) into a builder image, and a second s2i build adds the application on top of it.
Doing so, we would only retrieve the dependencies of the app, not the WildFly ones.
This would imply some changes to the current wildfly-builder image (to detect the case of an existing server in the builder). As I said, that is raw thinking; would that work in your very realistic use case?
@bdlink, I wrote a POC for this two-phase s2i.
The application source code contains an app project and a server project: https://github.com/jfdenise/2-phases-s2i. The wildfly-s2i-jdk11 image has been (lightly) adjusted to handle the two phases: quay.io/jfdenise/wildfly-s2i-2-phases-jdk11:latest
You could do the same using OpenShift; to try it locally with the s2i command:
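A sketch of the two local `s2i build` invocations (the context directories and output image names are assumptions; the repo's README has the actual commands):

```shell
# Phase 1: build an intermediate builder image containing only the
# provisioned WildFly server (context dir "server" is an assumption).
s2i build https://github.com/jfdenise/2-phases-s2i \
  --context-dir=server \
  quay.io/jfdenise/wildfly-s2i-2-phases-jdk11:latest \
  wildfly-server-builder

# Phase 2: build the application on top of the server builder image;
# only the app's own Maven dependencies are downloaded here.
s2i build https://github.com/jfdenise/2-phases-s2i \
  --context-dir=app \
  wildfly-server-builder \
  my-app-image
```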
So the second image uses the builder generated in the first step. You could create any number of application images from this builder image.
I am pretty sure we could use two WildFly Helm charts to achieve that on OpenShift. That is what I will add to the POC next.
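On OpenShift, this could look like two releases of the WildFly Helm chart, the first building the server-only builder image and the second consuming it. A sketch (the release names and values files are assumptions; the repo's Helm yaml files hold the real configuration):

```shell
# Add the WildFly Helm chart repository.
helm repo add wildfly https://docs.wildfly.org/wildfly-charts/

# Release 1: build the intermediate server builder image
# ("server-helm.yaml" is a placeholder for the server-side values file).
helm install server-builder wildfly/wildfly -f server-helm.yaml

# Release 2: build the application image from the server builder
# ("app-helm.yaml" is a placeholder for the app-side values file).
helm install my-app wildfly/wildfly -f app-helm.yaml
```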
Jean-François,

This might work with a labeling scheme that allows deployment of the app to the separately built server. Preferably, the automatic labeling would allow the base server and Galleon layers to be picked up by the app deployment. The server images could live in the internal image registry and be garbage collected if unused for a while. I'll try your suggestion (next post) tomorrow.

Thanks,
Bruce
@bdlink, FYI: I have updated the demo project https://github.com/jfdenise/2-phases-s2i with a README and 2 Helm yaml files.
Some enhancements were made to make the intermediate builder image generation more efficient when using Helm charts.
Fixed in the 1.2.0 image.
Since the v2 s2i process builds the WildFly server from scratch (and possibly for other reasons), each deployment pulls down massive Maven repositories. These downloads are the same over and over, resulting in potentially unneeded time and network traffic. In return, there is flexibility in tailoring the WildFly server with the desired Galleon layers. I assess this as a net positive.
Unfortunately, the build products are not cached on the cluster, so they have to be rebuilt for each application build. What can be done to reduce the duplicated effort between rebuilds of the same application, between builds of applications using the same Galleon and base layers, or between deployments using the same Maven dependencies?
One possibility would be to have the Galleon layers prebuilt as container image layers. This would be an extension to the v1 way of doing things.
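As a rough illustration of prebuilding, the Galleon CLI can provision a trimmed server into a directory that could then be baked into a reusable image layer. A sketch, assuming the Galleon CLI is on the PATH (the layer names and output directory are illustrative):

```shell
# Provision a trimmed WildFly 26.1.2.Final server with selected Galleon
# layers into ./server (layer names here are illustrative, not prescriptive).
galleon.sh install wildfly#26.1.2.Final \
  --dir=server \
  --layers=cloud-server,web-clustering
```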
Another possibility would be a shared Maven repository that is built up on the cluster (this reduces the network load on a development machine). Using a local Maven Central mirror nearer to the cluster would help a little; having a shared mount would (probably) help more.
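For the mirror idea, the WildFly s2i builder images honor a `MAVEN_MIRROR_URL` environment variable. A sketch pointing a build at an in-cluster repository manager (the Nexus service URL and BuildConfig name are assumptions):

```shell
# Point the s2i build at an in-cluster repository manager such as Nexus
# (the service URL and BuildConfig name "my-app" are placeholders).
oc set env bc/my-app \
  MAVEN_MIRROR_URL=http://nexus.ci.svc.cluster.local:8081/repository/maven-public/
```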
Using something like
(from a 3.x-era blog post) might help. What would be most useful would be a .m2-like shared folder that is updated as the applications are built. Of course, there would be occasional problems requiring maintenance of the shared repo.
In any case, the current architecture has this issue (I am not at the point of proposing a solution). Suggestions welcome!