microsoft / AL-Go

The plug-and-play DevOps solution for Business Central app development on GitHub
MIT License

[Enhancement]: BC image for Github Enterprise #1209

Open gntpet opened 2 months ago

gntpet commented 2 months ago

Feature description

GitHub Enterprise supports custom/partner images for the large runners.

Would it be possible to create one with all the BC tooling needed for AL-Go?

Having those tools preinstalled/predownloaded would greatly help reduce the time needed to spin up a BC container. Yes, we still spin up containers, because we want to test the code.

See more info about runner-images in their GitHub repo.


Best Regards, Gintautas

freddydk commented 1 month ago

I like the idea. The problem is that sandbox artifacts change very, very frequently, so having the latest pre-downloaded would mean rebuilding the images many times a day, which is not going to happen. The generic image is updated once a month, and it would certainly be possible to rebuild an Azure VM image with it every time. Currently, a few things take time when using GitHub-hosted runners:

  1. Downloading the generic image takes ~100 seconds
  2. Determining artifact url
    • If people are using latest artifacts with a specific country, determining the artifact URL takes a long time (~60 seconds) because of the way we query the artifacts. If people use an artifact setting like e.g. '//24.0//first', it takes only a few seconds.
  3. Downloading the used artifact
    • This is a killer - it takes 4-5 minutes. If people are using latest artifacts, we cannot pre-download the right one because they change very frequently. If people are using a specific artifact, we really cannot pre-download all these artifacts into pre-built images anyway. A better mechanism could be to utilize the GitHub cache (like we do for the compiler folder) - then downloading artifacts would go down to a few seconds.
  4. Creating a container or container image
    • This is the second killer - around 6 minutes. Having pre-built Docker images for all artifacts isn't possible for the same reason as above, and caching Docker images also isn't possible unless you have a self-hosted runner.
  5. Downloading BcContainerHelper
    • Takes ~30 seconds in total; the download itself is only ~5 seconds, the rest is import time. We could definitely cache the latest version of BcContainerHelper, but we would still have to import it - we couldn't pre-install it.
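The caching idea in item 3 above could be sketched as a workflow step using actions/cache. This is only an illustration under assumptions: the cache path and key naming below are hypothetical, not AL-Go's actual implementation, and the artifact version shown is made up.

```yaml
# Hypothetical sketch: cache a pinned BC artifact between runs,
# analogous to how AL-Go caches the compiler folder.
# Path and key are assumptions; 24.0.12345.0 is a made-up version.
- name: Cache BC artifact
  uses: actions/cache@v4
  with:
    path: C:\bcartifacts\sandbox\24.0.12345.0\w1
    key: bcartifact-sandbox-24.0.12345.0-w1
```

Note that this only pays off for pinned artifact versions; with latest artifacts the key would change whenever a new build ships, so the cache would rarely hit.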

So creating an image like this in a generic way would probably only save us 100 seconds (which still is a lot). We will investigate more...

gntpet commented 1 month ago
  1. Downloading the generic image takes ~100 seconds

For us it takes even longer


  2. Determining artifact url

Will give it a go. I assume it gives the same result.

  3. Downloading the used artifact

I second that. It's quite frequent that the CDN fails to download it. We often see a big time difference between two projects (no containers, no tests, just simple compilation with different versions).

One downloads from the CDN quickly, the second chokes.
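A common mitigation for flaky CDN downloads is retry with exponential backoff. A minimal sketch, assuming an injectable downloader callable (this is illustrative only, not BcContainerHelper's or AL-Go's actual download code):

```python
import time

def download_with_retry(download, url, retries=3, base_delay=1.0, sleep=time.sleep):
    """Call `download(url)` and retry on failure with exponential backoff.

    `download` is any callable that raises on a failed transfer.
    Illustrative sketch only; not taken from AL-Go.
    """
    for attempt in range(retries):
        try:
            return download(url)
        except Exception:
            if attempt == retries - 1:
                raise  # give up after the final attempt
            sleep(base_delay * (2 ** attempt))  # back off 1s, 2s, 4s, ...

# Example with a fake downloader that fails twice, then succeeds:
calls = {"n": 0}
def flaky(url):
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("CDN choked")
    return b"artifact-bytes"

result = download_with_retry(flaky, "https://example/artifact", sleep=lambda s: None)
# result is b"artifact-bytes" after 3 attempts
```

Backoff smooths over transient CDN hiccups but obviously doesn't help when the endpoint is consistently slow, which is the case described above.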

I agree that it is hard to pre-cache everything. But perhaps you can see very clear patterns in your storage statistics. For example, we compile using sandbox artifacts for 24.0; its latest version does not change that frequently anymore.
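Pinning to a slow-moving line like 24.0 can be expressed with the artifact setting freddydk quoted above. A sketch of what that might look like in an AL-Go settings file (the setting value is taken from his example; the file layout shown is simplified and assumed):

```json
{
  "artifact": "//24.0//first"
}
```

Empty segments fall back to defaults, so this pins the version to 24.0 while selecting the first matching build, which makes the artifact stable enough to cache.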

freddydk commented 1 month ago

We actually have a meeting tomorrow where we need to discuss the future of artifact storage - hopefully we can solve this problem and address the performance problem as well.

freddydk commented 2 weeks ago

On artifact storage: we will probably shift to OCI artifacts and also refactor the artifacts into a different layer structure to better match how they are going to be used. The timeline is still unknown.