Closed RonKoppelaar closed 5 months ago
@RonKoppelaar please share a standalone script with which I can repro. I will try my standard script with -useNewDatabase - but it helps a lot if you showcase something you can run, which fails.
@RonKoppelaar
I tried this script:
New-BCContainer -containerName $containerName `
-accept_eula -accept_insiderEula `
-Auth $auth `
-artifactUrl $artifactUrl `
-multitenant:$false `
-Credential $credential -useNewDatabase -licenseFile $LicenseFileUrl
Which works fine
@RonKoppelaar - That test was with a sandbox artifact - but it also works with an onprem artifact.
So, please please please always include a script - even though it might seem obvious, I am wasting a lot of time trying to figure out which script causes the issue.
@freddydk thanks for updating the insider, I will do some tests and keep you updated.
By the way: I noticed that when we use PS5 in BC24, some commands are not only slow but absurdly slow...
For Example Get-NavContainerAppInfo
takes about 1-2 Minutes to complete on one of our servers compared to seconds when run in BC23 or BC24 with pwsh (with bccontainerhelper 6.0.15)
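One way to quantify this is to time the call from both hosts (a sketch; the container name is a placeholder):

```powershell
# Run once from Windows PowerShell 5.1 and once from pwsh 7 and compare
$t = Measure-Command { Get-NavContainerAppInfo -containerName 'bcserver' }
Write-Host "Get-NavContainerAppInfo took $([math]::Round($t.TotalSeconds,1))s"
```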
Really looking forward to the update
@freddydk will log on to the build machine and test manually there. I'll post if the script is still failing. I couldn't repro on my PC either.
@marknitek in the latest containerHelper I have removed all -usepwsh:$false - and there is a pscoreoverrides.ps1 in c:\run with these lines:
function Invoke-SqlCmd { SqlServer\Invoke-Sqlcmd @args -Encrypt Optional }
function Backup-SqlDatabase { SqlServer\Backup-SqlDatabase @args -Encrypt Optional }
function Restore-SqlDatabase { SqlServer\Restore-SqlDatabase @args -Encrypt Optional }
You can override this file by adding one with the same name in the "my" folder if you don't want these or if you want to add more. Remember to invoke c:\run\pscoreoverrides.ps1 if you want to include the MS overrides.
These pscoreoverrides are loaded in the prompt.ps1 together with the ps7 BC modules.
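For example, a pscoreoverrides.ps1 placed in the "my" folder could first include Microsoft's overrides and then add its own. A sketch (the Get-SqlDatabase override is a hypothetical addition and assumes your SqlServer module version supports -Encrypt on that cmdlet):

```powershell
# c:\run\my\pscoreoverrides.ps1
# Include the MS overrides first (Invoke-SqlCmd, Backup-SqlDatabase, Restore-SqlDatabase)
. 'c:\run\pscoreoverrides.ps1'

# Hypothetical extra override - also forces -Encrypt Optional on Get-SqlDatabase
function Get-SqlDatabase { SqlServer\Get-SqlDatabase @args -Encrypt Optional }
```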
@freddydk Once I get everything together I'll post a new issue. I don't think it's anything for BcContainerHelper so far. The size of the compiler folder seems to have increased by +/-30% to over 1GB, and ALTool is significantly slower when enumerating through the artifacts compared to the code analysis that was being run before. I think Microsoft has dumped every app in there as well.
Thanks @MattTraxinger - I saw that myself. The code analysis cannot be used anymore, though. I will check whether we can maybe add the cache to the artifacts in the first place.
@freddydk Fair enough. Those are really the two big differences I saw. Compiler Folder is now 1.34GB instead of 984MB. The increase in copy time is proportional to the increase in size. So be it.
The enumerating apps part is really bad, though. It's gone from 3s to 38s in my tests.
It's all just a bunch of small things. 30s here. 30s there. 15s over there. Not that these were long builds to begin with, but suddenly they take twice as long. I won't put it in a separate issue unless you want me to since there's not a lot to be done.
@freddydk Below is the script to create my container. The license file needs to be changed to your own...
$BCArtifactURL = Get-BCArtifactUrl -type OnPrem -country nl -version "24.0" -Verbose
$BCArtifactURL
$OnpremContainer="demo"
$DockerImageName="local"
$UseNewDatabase=$True
$ContainerName="demo"
$BuildCredential = New-Object System.Management.Automation.PSCredential ('devadmin', (ConvertTo-SecureString 'Welkom01!' -AsPlainText -Force))
$MultiTenant=$True
$LicenseFileName="C:\agent2\_work\1\s\ERP AL\licenses\dev.bclicense"
New-NavContainer `
-imageName $DockerImageName `
-artifactUrl $BCArtifactURL `
-containerName $ContainerName `
-auth 'NavUserPassword' `
-Credential $BuildCredential `
-memoryLimit '25G' `
-licenseFile $LicenseFileName `
-accept_eula `
-accept_outdated `
-updateHosts `
-alwaysPull `
-multitenant:$MultiTenant `
-useNewDatabase:$UseNewDatabase `
-dns 8.8.8.8 `
-doNotCheckHealth `
-isolation process
Which results in the error mentioned earlier. Server is W2019.
My guess is that the ImageName parameter causes the issue - that you have an old image, which it is reusing. If you add -alwaysPull - does it work then?
-alwaysPull is already part of the parameter list. But I can remove the old local image to be sure.
Remaining local images...
PS C:\Windows\system32> docker images
REPOSITORY                          TAG                                   IMAGE ID       CREATED        SIZE
local                               onprem-23.5.16502.16757-nl            b4df7925a1a0   22 hours ago   14.8GB
local                               sandbox-23.5.16502.16887-nl-nodb-mt   9c6596361f0c   4 days ago     14.2GB
local                               sandbox-23.5.16502.16887-nl-mt        9c199594d95a   5 days ago     17.3GB
local                               sandbox-23.5.16502.16887-nl           d5a51ba03403   5 days ago     15.8GB
local                               sandbox-23.5.16502.16887-nl-nodb      65a6ab305c02   5 days ago     14.2GB
mcr.microsoft.com/businesscentral   ltsc2019                              6cb4e8ccc602   5 days ago     10.2GB
Recreating the container using BcContainerHelper version 6.0.16-preview1184. I also installed the .NET 8 SDK on the host (not sure if relevant). Unfortunately, still the same error.
After creating container these are the images
PS C:\Windows\system32> docker images
REPOSITORY                          TAG                                   IMAGE ID       CREATED          SIZE
local                               onprem-24.0.16410.18056-nl            fe1d3e4c3afb   6 minutes ago    15.6GB
<none>                              <none>                                823fb407215b   16 minutes ago   12.7GB
mcr.microsoft.com/businesscentral   ltsc2019-dev                          d65d38f7fb9f   22 hours ago     10.5GB
local                               onprem-23.5.16502.16757-nl            b4df7925a1a0   23 hours ago     14.8GB
local                               sandbox-23.5.16502.16887-nl-nodb-mt   9c6596361f0c   4 days ago       14.2GB
local                               sandbox-23.5.16502.16887-nl-mt        9c199594d95a   5 days ago       17.3GB
local                               sandbox-23.5.16502.16887-nl           d5a51ba03403   5 days ago       15.8GB
local                               sandbox-23.5.16502.16887-nl-nodb      65a6ab305c02   5 days ago       14.2GB
mcr.microsoft.com/businesscentral   ltsc2019                              6cb4e8ccc602   5 days ago       10.2GB
On my laptop (W11) the above script just works... :-( But not on the build server (W2019)...
Did you try without the image name on the build server?
Are you using the ContainerHelper preview on the build machine? It looks like it doesn't pull the ltsc2019-dev image???
BcContainerHelper version 6.0.16-preview1184 is being used. I'm now trying without the image.
Without a local Docker image it passed. Should I try to remove all images?
Worth a try -
FYI we are running multiple agents on our build machine, each agent with its own local (admin) user. We install the PS modules in the user's Documents folder, meaning each agent can have its own PS module versions. Could there be an impact on the created images? Because those are shared across the agents.
For now I've removed ALL local images on the machine.
After cleaning up all images and using -imageName "local" it failed with the same error.
So, on Windows Server 2019, it cannot build an image on the fly. Will test that.
For now I put a workaround in my build to not use an image when using -useNewDatabase and an artifactUrl like /24 - just to bypass this error and at least see if the other improvements work.
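A minimal sketch of such a workaround, using splatting so -imageName can be dropped conditionally (variable names as in the script earlier in this thread; the version check is an assumption about how the artifact URL is matched):

```powershell
# Skip -imageName when combining -useNewDatabase with a 24.x artifact
$parameters = @{
    containerName  = $ContainerName
    artifactUrl    = $BCArtifactURL
    auth           = 'NavUserPassword'
    Credential     = $BuildCredential
    licenseFile    = $LicenseFileName
    accept_eula    = $true
    multitenant    = $MultiTenant
    useNewDatabase = $UseNewDatabase
}
# Only reuse a local image in combinations known to work
if (-not ($UseNewDatabase -and $BCArtifactURL -match '/24\.')) {
    $parameters.imageName = $DockerImageName
}
New-NavContainer @parameters
```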
@RonKoppelaar I have a repro of the problem and know what is happening - will see if I can figure out why.
I was able to run my build end-to-end. Performance-wise I still see two diffs:
There still seems to be an issue there. If you want I can create a local testset with all the apps to publish and to export.
The reason why things failed when creating an image was due to session caching - which probably also means that your build was slow due to this. A new prerelease has just been shipped, which should fix this - please retry (also with -imagename)
Can confirm the "local" image is working again. Performance is a bit harder to confirm, as both builds are running on the same machine. All in all it is a bit slower. Will run a new build when these are ready.
Finished testing with latest pre-release 6.0.16-preview1186:
I'm setting up a local test case with BC23.5 and BC24. Keep you posted.
I created the test case locally. Based on the results (using HyperV) all is kind of comparable - no big performance diff. Testing on the build server is kind of unpredictable due to other influences (other builds / Azure in general).
Local 23.5
Create container: 245.6628468 (~4 min)
Publish time: 501.6631784 (~8 min)
Export runtime pkg: 125.6097803 (~2 min)
Local 24.0
Create container: 299.1621385 (~5 min)
Publish time: 764.543689 (~12.5 min)
Export runtime pkg: 112.3128834 (~1.5 min)
Not sure if process isolation makes a big difference, but I'll check.
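Timings like the ones above can be collected with Measure-Command; a sketch (container name, credential and app file are placeholders):

```powershell
$artifactUrl = Get-BCArtifactUrl -type OnPrem -country nl -version '24.0'
$create = Measure-Command {
    New-BcContainer -accept_eula -containerName 'test' -artifactUrl $artifactUrl `
        -auth NavUserPassword -Credential $credential -useNewDatabase
}
$publish = Measure-Command {
    Publish-BcContainerApp -containerName 'test' -appFile $appFile `
        -skipVerification -sync -install
}
Write-Host "Create: $($create.TotalSeconds)s Publish: $($publish.TotalSeconds)s"
```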
I would like to investigate the publish time - that one is the only thing that feels wrong. If you can provide me with your local test scripts, I will have a look at where this is off.
Thanks for this investigation @RonKoppelaar
The latest preview of ContainerHelper should fix the last issues... Also found that session caching didn't really work for PS7 - fixed now.
Builds are running on my end again. Will also do the local check with the test case I provided yesterday.
Builds are kind of comparable with 23.5 again. Still a bit slower, but as you mentioned, the publish cmdlet itself in BC24 is a bit slower. Also ran the local test case again; compared to yesterday's preview build I can see good improvements:
Local 24.0 - HyperV - with preview build 24/04
Create container: 299.1621385
Publish time: 764.543689
Export time: 112.3128834
Local 24.0 - HyperV - with preview build 25/04
Create container: 282.792769
Publish time: 575.700574
Export time: 75.2510381
For me it's case closed. Thanks Freddy and all other contributors for solving this topic!
BcContainerHelper 6.0.16 has shipped together with generic images 1.0.2.20
@freddydk is the issue regarding the enumeration of the apps solved as well?
I was not able to see that in the history of this issue
I noticed a performance degradation in the build pipeline moving to BC24. Looking at the compile and publish steps, these are the times used in minutes and seconds.
BC24 - https://bcartifacts.azureedge.net/sandbox/24.0.16410.18040/nl Compile: 12:13 (mm:ss) Publish: 8:34 (mm:ss)
BC23.5 - https://bcartifacts.azureedge.net/sandbox/23.5.16502.16887/nl Compile 8:18 (mm:ss) Publish: 3:58 (mm:ss)
Both pipelines compile and publish the same number of apps using the same build scripts.
Build servers are running W2019 and use process isolation. Do you know if there are known issues? From what I read, the new platform is based on .NET 8, which should actually be faster than .NET 6.
For compile and publish I use the standard cmdlets from BcContainerHelper: Compile-AppInNavContainer and Publish-NavContainerApp.
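For reference, the compile/publish part of such a pipeline typically looks something like this (a sketch; the folder variables are placeholders):

```powershell
foreach ($appFolder in $appFolders) {
    # Compile the app inside the container and get the resulting .app file
    $appFile = Compile-AppInNavContainer -containerName $containerName `
        -credential $credential -appProjectFolder $appFolder
    # Publish, sync and install it
    Publish-NavContainerApp -containerName $containerName `
        -appFile $appFile -skipVerification -sync -install
}
```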
BcContainerHelper is version 6.0.15