Azure / Azurite

A lightweight server clone of Azure Storage that simulates most of the commands supported by it with minimal dependencies
MIT License

The Azurite instance inside a docker image becomes unreachable when deploying the image to k8s #2368

Closed Arash-Sabet closed 4 months ago

Arash-Sabet commented 4 months ago

Which service(blob, file, queue, table) does this issue concern?

All of them

Which version of the Azurite was used?

The recent one

Where do you get Azurite? (npm, DockerHub, NuGet, Visual Studio Code Extension)

npm

What's the Node.js version?

The latest

What problem was encountered?

Steps to reproduce the issue?

Please review this issue first for background. Also, many thanks to @blueww, who helped make the Azurite instance reachable per her reply. The issue we encountered after deploying the built Docker image to a k8s cluster is that the .NET application (an xUnit test project) inside the Docker image is unable to communicate with the Azurite instance within the same image. However, this image works perfectly on a development machine, i.e. a laptop.

The error message reads as:

(Name or service not known (host.docker.internal:10000))

Have you found a mitigation/solution?

No

@blueww Are you able to suggest a solution?

blueww commented 4 months ago

@Arash-Sabet

It looks like your test project is trying to access Azurite via "host.docker.internal:10000" instead of "127.0.0.1:10000".

"host.docker.internal" is a special DNS name provided by Docker Desktop to allow services running in a container to connect to services running on the host. Since both your test project and Azurite are running in the same container, your test project should NOT connect to Azurite via "host.docker.internal". Could you check your test project to see why it connects to "host.docker.internal"?

Besides that, you might try:

  1. Change the hosts file to map host.docker.internal to 127.0.0.1.
  2. Start Azurite with the parameter "--disableProductStyleUrl". (Otherwise Azurite will take "host.docker.internal" as a product-style URI and treat "host" as the account name.)

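The two suggestions above could look like this inside the container (a sketch only; it assumes root access for editing /etc/hosts, and keeps the bind-host flags from the thread's own startup line):

```shell
# 1. Map host.docker.internal to loopback inside the container
#    (requires root; some orchestrators remount /etc/hosts, in which
#    case this mapping belongs in the pod/container spec instead)
echo "127.0.0.1 host.docker.internal" >> /etc/hosts

# 2. Start Azurite so "host.docker.internal" is parsed as a hostname,
#    not as a product-style account name
azurite --blobHost 0.0.0.0 --queueHost 0.0.0.0 --tableHost 0.0.0.0 \
        --disableProductStyleUrl
```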
Arash-Sabet commented 4 months ago

Thanks @blueww

I changed the connection string and it now looks like the following line but the connection is still refused:

 "ConnectionString": "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;TableEndpoint=http://127.0.0.1:10002/devstoreaccount1;"

Do I still need to add --disableProductStyleUrl to the following docker line?

RUN sudo azurite --blobHost 0.0.0.0 --queueHost 0.0.0.0 --tableHost 0.0.0.0 -s -l /usr/local/lib/node_modules/azurite -d /usr/local/lib/node_modules/azurite/debug.log &

Please note that the test projects inside the container connected to host.docker.internal:10000 successfully on the development laptop.
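One quick way to check whether the refusal comes from nothing listening on the port, as opposed to a name-resolution problem, is to probe the endpoint directly from a shell inside the container (a sketch; any HTTP status in the reply, even 400 or 403, proves a listener is there, while "Connection refused" means Azurite is not running or not bound to that address):

```shell
# Probe the blob endpoint from inside the container; -v shows whether
# the TCP connection is established before any HTTP error is returned
curl -v "http://127.0.0.1:10000/devstoreaccount1?comp=list"
```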

blueww commented 4 months ago

@Arash-Sabet

You might need to check the client side (your test project) to see why it tries to connect to "host.docker.internal" when your connection string uses "127.0.0.1".

Arash-Sabet commented 4 months ago

@blueww The client code (Xunit) inside the container connects to azurite on "host.docker.internal" successfully when the docker container is running on a development laptop. The Xunit inside the container uses 127.0.0.1 when the docker container is running in a k8s cluster serving as an Azure DevOps build agent. Communication over 127.0.0.1 fails.

Arash-Sabet commented 4 months ago

@blueww Is it possible that docker never allows 127.0.0.1 to be accessible inside itself to the other internal apps in the same container?! This behavior is weird. Should we enable something when creating the docker image? Or is it possible that the azurite instance does not launch at all?
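One way to answer this question directly is to check, from a shell inside the running container, whether Azurite is listening at all (a diagnostic sketch; `ss` and `ps` may or may not be present in the image, so the exact tools are assumptions):

```shell
# Is anything listening on the Azurite ports 10000-10002?
ss -ltn | grep -E ':1000[0-2]' || echo "nothing listening on 10000-10002"

# Is an azurite process running at all?
# The [a] bracket trick stops grep from matching its own command line.
ps aux | grep '[a]zurite' || echo "no azurite process found"
```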

blueww commented 4 months ago

> @blueww The client code (Xunit) inside the container connects to azurite on "host.docker.internal" successfully when the docker container is running on a development laptop. The Xunit inside the container uses 127.0.0.1 when the docker container is running in a k8s cluster serving as an Azure DevOps build agent. Communication over 127.0.0.1 fails.

I am really confused. "host.docker.internal" is a special DNS name provided by Docker Desktop to allow services running in a container to connect to services running on the host. But as you indicated, your test code and Azurite are in the same Docker container, so "host.docker.internal" should not be used. And do you mean that the error when communication over 127.0.0.1 fails is: (Name or service not known (host.docker.internal:10000))?

> @blueww Is it possible that docker never allows 127.0.0.1 to be accessible inside itself to the other internal apps in the same container?! This behavior is weird. Should we enable something when creating the docker image?

Our team owns Azurite, but not Docker. As this is a Docker question, we might not be the best people to answer it.

Arash-Sabet commented 4 months ago

@blueww I am confused too. I think the root cause of the confusion is the following line:

RUN sudo azurite --blobHost 0.0.0.0 --queueHost 0.0.0.0 --tableHost 0.0.0.0 -s -l /usr/local/lib/node_modules/azurite -d /usr/local/lib/node_modules/azurite/debug.log &

This line is likely a no-op at container runtime: RUN executes while the image is being built, so a process backgrounded with & does not survive into the running container, and Azurite never launches. I think Azurite must instead be launched from the Azure DevOps build pipeline in a bash command. I will try that.
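For reference, the RUN-versus-CMD distinction could be sketched in a Dockerfile like the one below (the base image tag is an assumption; the paths and flags are taken from the RUN line quoted above):

```dockerfile
FROM node:18
RUN npm install -g azurite

# RUN executes at image build time, so a backgrounded process started
# there dies when the build step's shell exits. CMD runs when the
# container actually starts, which is what keeps Azurite alive.
CMD ["azurite", \
     "--blobHost", "0.0.0.0", "--queueHost", "0.0.0.0", "--tableHost", "0.0.0.0", \
     "--disableProductStyleUrl", \
     "-l", "/usr/local/lib/node_modules/azurite", \
     "-d", "/usr/local/lib/node_modules/azurite/debug.log"]
```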

Arash-Sabet commented 4 months ago

We can close this issue. The root cause of the problem was that Azurite was never launched as a process, because it was started with RUN rather than CMD. I launched it from the Azure DevOps YAML pipeline instead, and it now works.