davidkarlsen opened 2 years ago
I have this issue too. The issue is clearly with the registry_username and registry_password in the docker block.
To get it working I have to put the username and password in app_settings:
```hcl
  site_config {
    application_stack {
      docker {
        registry_url = "${azurerm_container_registry.MYREGISTRY.login_server}"
        image_name   = "functions"
        image_tag    = "dev"
      }
    }
  }

  app_settings = {
    "WEBSITES_ENABLE_APP_SERVICE_STORAGE" = false
    "DOCKER_REGISTRY_SERVER_USERNAME"     = azurerm_container_registry.MYREGISTRY.admin_username
    "DOCKER_REGISTRY_SERVER_PASSWORD"     = azurerm_container_registry.MYREGISTRY.admin_password
  }
```
Then every time I run terraform after that, I get "changes needed":
```
  ~ app_settings = {
      + "DOCKER_REGISTRY_SERVER_PASSWORD" = (sensitive)
      + "DOCKER_REGISTRY_SERVER_USERNAME" = "MYUSERNAME"
    }

  ~ site_config {
        # (33 unchanged attributes hidden)

      ~ application_stack {
            # (2 unchanged attributes hidden)

          ~ docker {
              - registry_password = (sensitive value)
              - registry_username = (sensitive value)
                # (3 unchanged attributes hidden)
            }
        }
    }
```
For now, I've ignored those changes in the lifecycle.
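Something along these lines works as the ignore (a sketch only; it assumes the `docker` block schema shown above, so adjust the paths to whatever your plan flags):

```hcl
lifecycle {
  ignore_changes = [
    # Paths are illustrative; match them to the attributes flagged in your plan output.
    site_config[0].application_stack[0].docker[0].registry_username,
    site_config[0].application_stack[0].docker[0].registry_password
  ]
}
```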
Further, I tried to modify the docker settings in the GUI and save. That also caused a pull failure.
@davidkarlsen Thanks for raising this issue. Can you confirm that the docker image referenced in the TF config exists and can be accessed without any issues? I used the config below, and everything works fine.
resource "azurerm_linux_function_app" "test" {
name = "xiaxintest-LFA824"
location = data.azurerm_resource_group.test.location
resource_group_name = data.azurerm_resource_group.test.name
service_plan_id = azurerm_service_plan.test.id
storage_account_name = azurerm_storage_account.test.name
storage_account_access_key = azurerm_storage_account.test.primary_access_key
site_config {
application_stack {
docker {
registry_url = "https://mcr.microsoft.com"
image_name = "dotnet/samples"
image_tag = "aspnetapp"
}
}
}
We didn't add any restrictions to the usage of the docker-related properties (such as requiring `registry_url` to be used together with `registry_username`), but the service will try to pull the image and throw errors if there is any access issue.
A correct `application_stack` for docker should look like this (using a registry server configured in terraform):
```hcl
application_stack {
  docker_image_name        = "somepath/myimage:latest"
  docker_registry_url      = "https://${azurerm_container_registry.app-container.login_server}"
  docker_registry_username = azurerm_container_registry.app-container.admin_username
  docker_registry_password = azurerm_container_registry.app-container.admin_password
}
```
Please note that the `docker_registry_url` must have the `https://` scheme prefix (`http` also works). The settings should not appear in `app_settings` any more.
You also have to add the following `lifecycle` block, because the required prefix will trigger change detection on every run:
```hcl
lifecycle {
  ignore_changes = [
    site_config.0.application_stack.0.docker_registry_url
  ]
}
```
If anyone comes across this, I did a bunch of testing and here's what I've figured out. I have updated this comment about 6 times after testing many edge cases, and I'm still not sure it's 100% correct.
First, this applies to all Azure App Services, not just Linux Function Apps.
There's a hidden app setting called `DOCKER_CUSTOM_IMAGE_NAME` that controls which runtime stack is loaded. It has two parts separated by a pipe '|' character. With "standard" stacks like .NET, Node, Python, etc. this will be the stack identifier and the version (e.g. `DOTNETCORE|7.0`, `PYTHON|3.10`, etc.). With custom docker containers it will be `DOCKER|<server-host-name-without-protocol>/<image_name>:<image_tag>` (e.g. `DOCKER|example.azurecr.io/myimage:latest`).
In addition, if the three settings `DOCKER_REGISTRY_SERVER_URL`, `DOCKER_REGISTRY_SERVER_USERNAME`, and `DOCKER_REGISTRY_SERVER_PASSWORD` exist, they are passed to a `docker login` command before it attempts a `docker pull`. These three settings (and their values) are visible in the UI under Settings > Configuration > Application Settings. They only need to be set if you're using a custom docker image in a private registry (and you're not using managed identities with ACR).
The portal attempts to abstract the `DOCKER_CUSTOM_IMAGE_NAME` value and make it look like different settings, and you will see different UIs depending on its value. The exact UI will vary, but none of the settings you see in the UI are stored as they appear. They are parsed out of the `DOCKER_CUSTOM_IMAGE_NAME` on load, and recombined to make the `DOCKER_CUSTOM_IMAGE_NAME` value on save.
You can see this play out: if you set the docker repository to `https://registry.gitlab.com/<user>/<project>` and the image to `<imagename>:<tag>`, it will work. But when you come back to the UI, you'll see that the repository is `https://registry.gitlab.com` and the image becomes `<user>/<project>/<imagename>:<tag>`, because it parses up to the first slash as the repository.
The UI does a few other things as well. It enforces that the registry name starts with `https://`. Under the hood it doesn't matter, since the protocol part is ignored by `docker login` and doesn't exist in the `DOCKER_CUSTOM_IMAGE_NAME`.
It is possible to confuse the UI, because it assumes that the `DOCKER_REGISTRY_SERVER_URL` will always have the same hostname as the one found in `DOCKER_CUSTOM_IMAGE_NAME`. If they don't match and you open the UI, change nothing, and resave, these values will change.
Also, the UI looks at the `DOCKER_REGISTRY_SERVER_URL` app setting to determine whether it should show the UI for a custom docker image. If it's not set (which it doesn't need to be for things to work under the hood, as long as a docker login isn't needed), the UI won't be correct but the container will still load.
The AZ CLI works at a lower level than the UI. The command to use for a custom docker container is `az webapp config container set`, and it does less abstraction than the portal UI. When using the CLI with the `-i` (aka `-c`, aka `--docker-custom-image-name`) parameter, it sets the application settings to what you pass in pretty much verbatim. The only exception is that the value passed in as `--docker-custom-image-name` is prefixed with `DOCKER|` before setting the `DOCKER_CUSTOM_IMAGE_NAME` app setting.
The CLI doesn't care if you put `https://` at the start of the `--docker-registry-server-url` (aka `-r`) parameter. But you do need to do that for the portal UI and for terraform to correctly understand what you're doing.
Terraform attempts to mimic what the portal UI does for abstraction in the `application_stack` block. In the provider it's also just creating a `DOCKER_CUSTOM_IMAGE_NAME` app setting and parsing it out again. Terraform also enforces that `docker_registry_url` starts with `https://`, just like the UI, even though App Services doesn't actually care under the hood.
There are some differences in behaviour though, notably when you're using managed identities for docker login.
If you set any of the custom docker image parameters in TF, TF will set ALL of the app settings. So even if you're not using them, TF will always set the `DOCKER_REGISTRY_SERVER_PASSWORD` and `DOCKER_REGISTRY_SERVER_USERNAME` app settings to a null value and an empty string respectively.
If however you then clear these settings from the CLI with `az webapp config container delete` and then do another terraform apply, the `DOCKER_REGISTRY_SERVER_URL` setting will not be applied. Terraform detects that it's not applied on subsequent terraform apply calls, but won't attempt to actually set the value without a username and password as well.
Knowing all this, to set up TF in a way that doesn't trigger a change detection on each apply, you need to do the following (see the full sketch at the end of this comment):

- `docker_registry_url` needs to be set to the hostname of the container registry, with a `https://` in front and without any path components.
- `docker_image_name` should be set to the image name and tag, including any path parts.
- If you need to log in to the registry, set `docker_registry_username` and `docker_registry_password` appropriately.
- If you're using managed identity for the registry login, don't set `docker_registry_username` and `docker_registry_password`, and instead set `container_registry_use_managed_identity = true` in your `site_config` block. You may need to manually set the `DOCKER_REGISTRY_SERVER_URL` value with the CLI, an API call, or through the portal.

I'm not sure if things have changed since @pgampe's comment above.
But at the time of this writing, there's still a defect in the provider when parsing the `DOCKER_CUSTOM_IMAGE_NAME`. TF will correctly set the `DOCKER_CUSTOM_IMAGE_NAME` based on the `docker_image_name` and `docker_registry_url` values of the `application_stack` block, but when refreshing state, it will parse the `DOCKER_CUSTOM_IMAGE_NAME` it just set incorrectly. Under the hood, the `DOCKER_CUSTOM_IMAGE_NAME` needs to contain the registry's hostname as a prefix. The UI will correctly remove the hostname from `DOCKER_CUSTOM_IMAGE_NAME` when parsing it to get the correct image name and tag. TF does not do this correctly when refreshing state and will include the hostname in the value.
As a workaround, you can put:
```hcl
lifecycle {
  ignore_changes = [
    site_config[0].application_stack[0].docker_registry_url,
    site_config[0].application_stack[0].docker_image_name
  ]
}
```
Note that if you're using managed identity for login, you need to ignore the `docker_image_name` in addition to the `docker_registry_url`. If you only ignore the image name, it will incorrectly read the `docker_registry_url` as missing; if you only ignore the registry URL, it doesn't get added to the `DOCKER_CUSTOM_IMAGE_NAME` consistently.
Also, since TF won't set the `DOCKER_REGISTRY_SERVER_URL` if you don't also have `docker_registry_username` and `docker_registry_password` set, you need to manually set `DOCKER_REGISTRY_SERVER_URL` with the CLI (only once) for the portal UI to render correctly.
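Putting that together, here's a minimal sketch of the username/password path (the registry URL, image name, and resource names are placeholders, and it assumes an admin-enabled ACR; swap the credentials for `container_registry_use_managed_identity = true` in `site_config` if you're using managed identity):

```hcl
# Sketch only: resource names and the registry/image values are placeholders.
resource "azurerm_linux_function_app" "example" {
  name                       = "example-function-app"
  location                   = azurerm_resource_group.example.location
  resource_group_name        = azurerm_resource_group.example.name
  service_plan_id            = azurerm_service_plan.example.id
  storage_account_name       = azurerm_storage_account.example.name
  storage_account_access_key = azurerm_storage_account.example.primary_access_key

  site_config {
    application_stack {
      # Hostname only, with the https:// prefix and without any path components.
      docker_registry_url = "https://myregistry.azurecr.io"

      # Image name and tag, including any path parts.
      docker_image_name = "myteam/myimage:latest"

      # Username/password login; omit these and set
      # container_registry_use_managed_identity = true (at the site_config level)
      # if you use managed identity instead.
      docker_registry_username = azurerm_container_registry.example.admin_username
      docker_registry_password = azurerm_container_registry.example.admin_password
    }
  }

  lifecycle {
    ignore_changes = [
      site_config[0].application_stack[0].docker_registry_url,
      site_config[0].application_stack[0].docker_image_name
    ]
  }
}
```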
We are also experiencing the same issue mentioned above. It started happening intermittently when we upgraded `hashicorp/azurerm` from version `3.39.1` to `3.65.0`.
The issue is that `DOCKER_CUSTOM_IMAGE_NAME` is missing the registry URL, even though `DOCKER_REGISTRY_SERVER_URL` is set, resulting in a bad request error when the docker pull occurs within the function app. However, when we remove the `DOCKER_REGISTRY_SERVER_URL` from the function app configuration and re-run our terraform to re-add it, the docker pull is successful.
Please keep us updated when this is fixed in a later version. We went with the above workaround for now and it seems to have done the trick.
Hi guys,
I was recently having this specific issue (similar at its root to the Azure Functions one), but in my case it was with `linux_web_app`. Same story under the app: it was impossible to get it right (it was always complaining about multiple definitions of runtimes).
```hcl
application_stack {
  docker_image_name        = string
  docker_registry_url      = string
  docker_registry_username = string
  docker_registry_password = string
  dotnet_version           = string
  go_version               = string
  java_server              = string
  java_server_version      = string
  java_version             = string
  node_version             = string
  php_version              = string
  python_version           = string
  ruby_version             = string
}
```
> [!IMPORTANT]
> If you are trying (for the specific case of web app services) to define a Linux web app service of the container type, you should only define the docker-related properties/attributes (meaning the first 4). If you define those plus any of the runtime versions (node, python, etc.), then the error described shows up.
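For illustration, a container-only stack would look something like this (the image name and registry URL are just placeholders):

```hcl
site_config {
  application_stack {
    # Only the docker-related attributes; no dotnet_version, node_version, etc.
    docker_image_name   = "myorg/myapp:latest"            # placeholder image:tag
    docker_registry_url = "https://myregistry.azurecr.io" # placeholder registry, https:// prefix required

    # Only needed if the registry requires a login:
    # docker_registry_username = var.registry_username
    # docker_registry_password = var.registry_password
  }
}
```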
### Is there an existing issue for this?

### Community Note

### Terraform Version

1.1.9

### AzureRM Provider Version

3.3.0

### Affected Resource(s)/Data Source(s)

azurerm_linux_function_app

### Terraform Configuration Files