verschaevesiebe opened this issue 2 years ago
There is no immediate workaround other than to break things out into separate root deployments. Are you making extensive use of loops? That is where we see the limit being hit most often.
> Separating or changing deployments can't be done through an incremental mode.
I'd be interested in exploring this one more. If the project is broken into modules, then there should be some ways to break up the single deployment into smaller components. Do you have a way to easily share the structure of the project (how modules are being called, etc.)?
Hi Alex,
Thanks for your quick reply.
I can't immediately share anything since I'm under an NDA. We're deploying all of the API Management services, version sets, operations, subscriptions, and backends through Bicep as a complete deployment for multiple API Management instances in the same resource group. (We also have some OpenAPI specs and policies added through `loadTextContent()`, which are definitely contributing to the 4 MB limit.)
The reason we're doing a complete deployment is that when versions are deleted or changes happen, we need that reflected on all environments. It would be a serious task to go through the portal and remove all the version sets one by one on each API Management instance. That's the main reason for us not to go with incremental mode.
Besides that, complete deployments also offer assurance when comparing TEST and PROD environments, because that way you can be sure that what you've deployed is reproduced 1:1.
We can definitely look into moving everything into separate incremental deployments and deployment pipelines, but it will be a very painful task, both in migrating and in maintaining the API Management instances to delete versions or operations when they are removed.
We're definitely using modules a lot, since we designed our own framework, so a nice feature would be to break them up when compiled. (Maybe that should be a feature, so that they could be uploaded to storage for a linked-template URI deployment instead.)
Would you have any other proposal? Is there any particular reason for the 4 MB limit?
Thank you!
@verschaevesiebe - This was one use case where I saw using template specs to be more flexible, with an overall controller/orchestrator deployment ARM template (which can also shell out to other, smaller sectional orchestrator deployment templates). I chose this approach because it ensures separate deployments happen, and I believed it would likely mitigate that 4 MB limit, but I never found a scenario where I needed or could test it. This may also be possible using the similar Bicep registry method, but that's not something I have yet had a chance to test either.
Content from the registry will end up injected as nested templates, but template specs are a good alternative that will remain a linked reference, not a nested one. Modules can call template specs using the `ts:` scheme. Good suggestion @kilasuit
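For anyone landing here later, a minimal sketch of what that looks like (the subscription ID, resource group, spec name, and version below are placeholders):

```bicep
// Hypothetical reference to a template spec named 'apimCoreSpec' (version 1.0),
// assumed to live in resource group 'templateSpecRG'.
module apimCore 'ts:00000000-0000-0000-0000-000000000000/templateSpecRG/apimCoreSpec:1.0' = {
  name: 'apim-core-deploy'
  params: {
    apimName: 'my-apim-instance'
  }
}
```

Unlike a registry module, this compiles to a `Microsoft.Resources/deployments` resource whose `templateLink` points at the spec, so the spec's content is not embedded in the parent template.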
> Is there any particular reason for the 4 MB limit?
That's related to how we store ARM request data. It is a hard limit based on our current technology stack. It may eventually be lifted, but not on any particular schedule.
@alex-frankel @kilasuit Alright, after looking into your suggestions: do you believe it would be possible to separate the API Management service deployment, operations, version sets, … all into one template spec and include that as a module through the registry?
Digging a bit deeper, I found here in the documentation that there is actually a 2 MB size limit on a single template spec, so separating it as a one-off module isn't an option.
Looks like I'm running into a dead end. Hope to hear back from you.
@alex-frankel
> Content from the registry will end up injected as nested templates
Could we add an additional property for this, so that the module could instead be called as a linked reference (or is that going to be a headache)?
@verschaevesiebe - I'd split your overall APIM deployment into smaller specs per part and reference them all together in a unified spec that calls the sub-specs, as that would likely ensure you aren't hitting any limits.
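Roughly like this (a sketch with made-up spec names, following the split described above):

```bicep
// Unified orchestrator: each part of the APIM deployment lives in its own,
// smaller template spec and is pulled in as a linked deployment.
module apimService 'ts:00000000-0000-0000-0000-000000000000/specs-rg/apimServiceSpec:1.0' = {
  name: 'apim-service'
}

module apimApis 'ts:00000000-0000-0000-0000-000000000000/specs-rg/apimApisSpec:1.0' = {
  name: 'apim-apis'
  dependsOn: [ apimService ]
}

module apimPolicies 'ts:00000000-0000-0000-0000-000000000000/specs-rg/apimPoliciesSpec:1.0' = {
  name: 'apim-policies'
  dependsOn: [ apimApis ]
}
```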
> Could we add an additional property for this, so that the module could instead be called as a linked reference (or is that going to be a headache)?
The question is what would that be a linked reference to? Would the idea be to pull the content from the registry at run time instead of build time?
Splitting everything up into separate modules is definitely already applied. The issue is just that everything is compiled into a nested resource. As a workaround (hopefully just for now), we're going to create a pipeline per environment per API (pipeline hell) and deploy them incrementally. I guess we'll have to take care of removing things manually, or use the Azure CLI / API for bulk operations. However, I do think we need a solution for it.
@alex-frankel
> The question is what would that be a linked reference to? Would the idea be to pull the content from the registry at run time instead of build time?
I believe that would be sufficient to get around any limitation in the future, as you could just include all modules through the registry. Either that, or remove the 4 MB limit.
BTW: I do understand that 4 MB is a lot, but when you look at a landscape full of APIs that are provisioned through templating, it's a whole different world. Feel free to reach out personally for help or further investigation on this.
@alex-frankel If you supplied the additional property, then I would expect it to work like template specs: the content would be pulled by ARM at the point of deployment, not during the build. Referencing something that way should be treated as meaning it has already been validated and tested before it was made available, or so one should hope. But that's a process your team shouldn't overly concern yourselves with, other than maybe providing additional guidance docs around it.
There's a way you can serve openapi.json and policy.cshtml files from remote storage. Instead of using `xml` or `rawxml` as the `format` for Microsoft.ApiManagement service/policies, or, in the case of Microsoft.ApiManagement service/apis, `format` values like `openapi` or `swagger-json`, use their equivalents with `link` inside, i.e. `openapi-link` or `rawxml-link`. Then, instead of loading the file content into the `value` parameter, provide a link to a storage account (with a SAS key) where you previously uploaded the files.
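A minimal sketch of what that looks like in Bicep (the storage URI, file names, and API names are placeholders, and the SAS token is assumed to be passed in as a parameter):

```bicep
param apimName string

@secure()
param sasToken string

// Hypothetical storage container where the spec and policy were uploaded beforehand.
var artifactsBaseUri = 'https://mystorageaccount.blob.core.windows.net/apim-artifacts'

resource apim 'Microsoft.ApiManagement/service@2021-08-01' existing = {
  name: apimName
}

resource ordersApi 'Microsoft.ApiManagement/service/apis@2021-08-01' = {
  parent: apim
  name: 'orders-v1'
  properties: {
    displayName: 'Orders'
    path: 'orders'
    format: 'openapi-link' // instead of 'openapi' + loadTextContent()
    value: '${artifactsBaseUri}/orders-openapi.json?${sasToken}'
  }
}

resource ordersPolicy 'Microsoft.ApiManagement/service/apis/policies@2021-08-01' = {
  parent: ordersApi
  name: 'policy'
  properties: {
    format: 'rawxml-link' // instead of 'rawxml' + loadTextContent()
    value: '${artifactsBaseUri}/orders-policy.xml?${sasToken}'
  }
}
```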
Hi @miqm,
Understandable, but I'm not sure whether that gets added as a child resource at runtime or at build time. The hassle of maintaining a storage account to store all the policies and OpenAPI specs, and making sure all of that also stays in line with different feature branches and environment branches, sounds like a very big task; it becomes unmaintainable with such a number of APIs, operations, and version sets.
We've swapped over to incremental deployments and are now deploying API by API (with all their versions and operations). Removing APIs is something we'll do through the CLI (see the sketch at the end of this comment).
It is actually maintainable and still fares well with different branches and multiple teams. PS: it's unlikely, but we could still exceed 4 MB per API; in that case, we'll split off a separate incremental deployment per API per version.
This solution doesn't mean I'm not in favor of removing that 4 MB limit, though.
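For reference, the removal can be scripted roughly like this (resource group, service, and API names are placeholders):

```bash
# Hypothetical cleanup: delete an API that was removed from the templates
# but still exists on the APIM instance.
az apim api delete \
  --resource-group rg-apim-test \
  --service-name my-apim-instance \
  --api-id orders-v1 \
  --yes
```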
It's added (read by the API Management RP) during the deployment, so you only need to keep it available during that time.
Also chiming in on this thread - I agree this is a limit that large organizations will likely hit when it comes to APIM. Here's my own example CI/CD repo with Bicep to manage and update APIM in a DevOps approach:
https://github.com/haithamshahin333/apim-devops-bicep
If you focus on the APIs folder, this is likely what would get you over the limit (the 'service' and 'shared' folders are also relevant to APIM, but as other folks mentioned on this thread, the APIs are going to contribute the most, especially if you add the OpenAPI spec right in the folder as I do here).
It sounds like a good approach, @verschaevesiebe: what you're doing is having your pipeline iterate through every API under the APIs folder and run those as separate deployments, instead of linking the modules in something like 'main.bicep' as I have it.
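A minimal sketch of that iteration (the folder layout and names follow my repo's conventions, not anything built into Bicep):

```bash
# Hypothetical pipeline step: deploy every API folder as its own incremental
# deployment instead of nesting them all into one giant compiled main.json.
for api in apis/*/; do
  az deployment group create \
    --resource-group rg-apim \
    --template-file "${api}main.bicep" \
    --parameters apimName=my-apim-instance
done
```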
@miqm any thoughts on other potential approaches?
Linking to another issue that's about other ARM limits: #5100
We are also experiencing this limit with our management deployment, as it also holds our Sentinel deployment.
Are there any plans to increase this?
Bicep version v0.4.1124
Describe the bug
When compiling our main.bicep (which refers to multiple modules, which in turn have sub-modules), we are hitting a 4 MB limit when doing a what-if or a deploy.
It's clear to us that there's a limitation on the compiled ARM file; however, the question is how we can mitigate this. We've been looking around for solutions, but Bicep keeps including the modules as nested children inside the compiled main.json (the compiled main.bicep).
Are there ways to tackle this issue/limitation? Our Bicep includes infrastructure, configuration, and API Management services and operations. That is the main reason why we're hitting the 4 MB limit.
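For illustration, even a tiny module reference like this one (names made up) ends up fully embedded in the compiled output:

```bicep
// main.bicep -- each module call compiles into an inline
// Microsoft.Resources/deployments resource whose entire template is embedded
// in main.json, so every module's size counts toward the 4 MB limit.
module apis './modules/apis.bicep' = {
  name: 'apis'
  params: {
    apimName: 'my-apim-instance'
  }
}
```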
We have an obligation to do complete deployments to keep multiple environments perfectly aligned with each other. Separating or changing deployments can't be done through an incremental mode.
We've put months of work into transitioning to Bicep, just to finally run into this issue in our growing architecture (300+ APIs). We really can't move to ARM linked-URI templates.
To Reproduce
Compile a Bicep template with modules that comes out at more than 4 MB, then try to run a what-if or a deployment against Azure, for example:
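(Assuming a resource group named rg-example:)

```bash
bicep build main.bicep    # the compiled main.json embeds all modules inline
az deployment group what-if \
  --resource-group rg-example \
  --template-file main.bicep
# The request fails once the compiled template exceeds the 4 MB limit.
```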