I really don't know enough about Windows containers to answer your question - IF a Docker Engine on Windows and a Docker Engine on Linux can be part of the same Swarm, then this could potentially work.
cc @friism Maybe you can answer that better?
@augi this would be done with labels and scheduling constraints which are already part of Docker and Docker Compose. It's demo'ed as a prototype here: https://www.youtube.com/watch?v=GCRbH4aa7VI
Since Swarm-mode with overlay networking between Linux and Windows is not yet supported, Compose can't yet run this.
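For reference, the labels + constraints mechanism mentioned above looks roughly like this (a minimal sketch; the node names and the `microsoft/iis` image are placeholders, not taken from the demo):

```sh
# Tag each swarm node with its operating system
docker node update --label-add os=windows win-worker-1
docker node update --label-add os=linux linux-worker-1

# Pin a Windows-only service to the Windows node via a placement constraint
docker service create \
  --name iis-demo \
  --constraint 'node.labels.os == windows' \
  microsoft/iis
```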
@friism I am interested as well; so, to make sure I understand: does that mean no, or only no if you want the services to talk to each other? In my case I don't need cross-platform communication, but I do need to have both Windows and Linux containers.
@debo If you just want to run separate apps, one using only Linux containers and another using only Windows containers, then you can switch back and forth: stefanscherer.github.io/run-linux-and-windows-containers-on-windows-10/
Having one app with one docker-compose.yml that's meant to start both Windows and Linux containers won't work.
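If I remember correctly, that back-and-forth switching can also be done from the command line (assuming the default Docker for Windows install path; in PowerShell, prefix the call with `&`):

```
"C:\Program Files\Docker\Docker\DockerCli.exe" -SwitchDaemon
```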
@friism so I admittedly know very little about the whole Docker ecosystem yet. The use case is having an orchestrator or scheduler that can be instructed to launch different apps in different environments. Something like: "ok, this is a Windows app, so it should be created/added there, and this is a Linux app, so I will create/add it elsewhere where supported."
Basically, to clarify better, imagine what you would do on AWS, where you can independently launch a Windows-based EC2 instance or a Linux one. Is there an equivalent in the Docker world? Or do I need two different control nodes?
@debo check out the youtube link above. We're working hard on making this reality: https://blog.docker.com/2016/09/dockerforws2016/
@friism I will thanks
Have you guys made any progress on this? My use case involves the fact that my company uses a windows-only tool (RedGate's SQL Server Source Control) to manage our database schemas, but we have a number of Ruby applications that talk to this database.
It'd be great if I could (in a single docker-compose.yml) spin up a fresh MSSQL database (using sqlserver-linux) and then start a windows container to run redgate and populate the new database.
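A rough sketch of what such a compose file might look like once mixed-OS stacks are supported (the Redgate image name is hypothetical, and the SQL Server settings are just illustrative):

```yaml
version: '3.2'
services:
  mssql:
    image: microsoft/mssql-server-linux
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "YourStrong!Passw0rd"
    deploy:
      placement:
        constraints:
          - node.labels.os == linux
  redgate:
    image: mycompany/redgate-sql-source-control   # hypothetical Windows image
    depends_on:
      - mssql
    deploy:
      placement:
        constraints:
          - node.labels.os == windows
```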
@l8nite with the release of Windows Server 2016 and its full support for Docker, my team and I managed to run Linux and Windows containers in parallel without particular issues.
@debo how did you manage that?
"Simply" using two workers node one running linux and dealing with linux container and one running Win 2016 server taking care of the windows containers all orchestrated via Nomad. I will maybe create a blog post when we have some more concrete to work with.
Can the containers work together, or do I need to manage it as if it were two different servers with Docker on each? Can I create it with one docker-compose command?
@gal-novus not sure if I get the question. You can't run Windows containers on Linux because of the way containers work. You can do the opposite via Hyper-V, as far as I understand.
I just stumbled over this issue. My use case also involves running a Linux and a Windows container in parallel on my Windows 10 host system.
As you mentioned above, this should not be a problem anymore. However, I get an error message when running compose.
Should I create a new issue for this or is this a general misunderstanding and the use case I described is not possible?
In the video referenced above, the "run both Linux and Windows containers in the same Swarm" demo starts at 34:37.
You can do this now by creating a Docker swarm that includes both Linux and Windows workers. Then use `docker stack deploy --compose-file` (don't use `docker-compose`) to deploy your mixed Linux/Windows app.
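For example, from a swarm manager node (the stack and file names here are just placeholders):

```sh
# Deploy the mixed Linux/Windows app described in the compose file to the swarm
docker stack deploy --compose-file docker-compose.yml myapp
```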
> My use case also involves running a Linux and a Windows container in parallel on my Windows 10 host system.
This is also what I am attempting to achieve. I would like to be able to spin up the whole environment from one compose file, running on my windows 10 machine. As opposed to having to spin up the linux container(s), manually switch to windows containers, and then spin up my windows container(s).
Apologies if I'm missing something here, it's still not clear to me if this is possible, or how to achieve it if it is?
@jaymickey it's not possible to do that with the default Docker for Windows right now. You'll have to set up a swarm with both Linux and Windows nodes and use `docker stack deploy`.
@friism No worries, thanks for the clarification!
@friism @jaymickey This is exactly what I've been trying to get to work the whole week. This issue is the first time I've read that this is not possible after all. Can you please explain briefly why a Windows container running on a Windows host cannot talk to Linux containers running in a VM? I don't understand why it is supposed to work when the Windows containers are running in a Windows VM.
tl;dr Docker Windows containers running on a Windows 10 host cannot talk to Docker Linux containers running in a Hyper-V VM, but they can when running in a Windows 10 VM.
Here's a picture of what I want to achieve:
@riker09 can you share the compose file and/or `docker service` commands you use to launch these services?
Here's my compose file:
```yaml
version: '3'
services:
  db:
    image: mongo:3.4
    hostname: db
    networks:
      - mynet
    volumes:
      - db:/data/db
    deploy:
      placement:
        constraints:
          - node.labels.os == linux
  api:
    image: myprivateregistry.com/docker/image:windows
    ports:
      - "1234:1234"
    depends_on:
      - db
    networks:
      - mynet
    deploy:
      placement:
        constraints:
          - node.labels.os == windows
volumes:
  db:
networks:
  mynet:
    external: true
```
The `api` service accesses the `db` service. I have created two versions of my `api` service (it is a simple NodeJS application), one for Windows, one for Linux. When I use the Linux image and place both services on different nodes it is working fine (as expected). But when I use the Windows image and place the `api` service on my host node, it cannot reach the `db` service.
I've created an attachable external overlay network beforehand:
docker network create -d overlay --attachable --subnet 192.168.50.0/24 mynet
I deploy the services with
docker stack deploy -c stack.yml MYSTACKNAME
I can "manually" create a service and attach it to the overlay network, but that doesn't seem to make any difference.
docker service create --name api --network mynet --constraint 'node.labels.os == windows' --publish 9000:9000 myprivateregistry.com/docker/image:windows
Currently I'm checking out the proposed solution, placing the Windows service on a VM Windows node. But I'm running into trouble there as well (`HNS failed with error : {Object already exists ...`), but that's another story. 😄
I'm also exactly trying to do this: have Windows Containers (Hyper-V) communicate with Linux Containers (MobyLinux Hyper-V VM) on a Windows 10 host. Even if it's a dirty temporary solution until LinuxKit is generally available. Is there some way to have them communicate within a single physical host?
> I'm also exactly trying to do this: have Windows Containers (Hyper-V) communicate with Linux Containers (MobyLinux Hyper-V VM) on a Windows 10 host.

I do have exactly the same use-case.
I'm guessing we'll be able to do this easily with Docker for Windows soon, after this PR was merged: https://github.com/moby/moby/pull/34859
@stebet Is there any way to get it working currently?
I would also very much like to know if there is a way to get it working currently. I have a use case where I need Linux and Windows containers running on a windows 10 box to communicate with each other.
I tried the following from a Windows Bash shell without success. The first service (running a Linux container) starts and the container comes up successfully. The second service (a Windows container) gets stuck in starting and never comes up :-(
What I did was:

Switch to Windows containers:

```sh
docker swarm init --advertise-addr=10.100.1.244 --listen-addr 10.100.1.244:2377
```

Switch to Linux containers:

```sh
docker swarm join --token SWMTKN-1-65rdxopimmk1orkssv8u0yladax3ggs6dc0ep5uurr45z0g1tk-aivhg7w3ywtrxk7zxfwdri9g2 10.100.1.244:2377
```

Switch to Windows containers and get the node names:

```sh
$ docker node ls
ID                            HOSTNAME                STATUS   AVAILABILITY   MANAGER STATUS
k9kx8skklv3djvmax5b4x9mmw *   W10-LT-TD-NEW           Ready    Active         Leader
ab6nxsj545ikylifptnd72jke     linuxkit-00155d013b12   Ready    Active
```

Add labels to the nodes:

```sh
docker node update --label-add os=windows W10-LT-TD-NEW
docker node update --label-add os=linux linuxkit-00155d013b12
```

Create an overlay network:

```sh
docker network create --driver=overlay mylocal
```

Create the services:

```sh
docker service create --name bamboo --endpoint-mode dnsrr --network mylocal --publish mode=host,target=8085 --publish mode=host,target=54663 --constraint 'node.labels.os==linux' nemlig/bamboo:0.0.1
docker service create --name agent --endpoint-mode dnsrr --network mylocal --constraint 'node.labels.os==windows' nemlig/base-bamboo-agent:0.0.1
```
I can successfully start the windows container with a docker run. Just not when running it as a service. Pretty sure this is a network issue.
@cblackuk I don't know. Haven't really tried it. That's a question for @friism and co, I'd guess. I can manually do a docker pull with the `--platform` flag to fetch Linux images and run them side by side with Windows images in Windows container mode, but ideally we could put the platform property in the docker-compose file and Docker would handle fetching the correct images. Not sure if that's on the drawing board somewhere.
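For anyone trying to reproduce that manual workflow, it looks roughly like this (a sketch only; it assumes a daemon with experimental LCOW support, and `nginx:alpine` is just an example image):

```sh
# While the daemon is in Windows-container (LCOW) mode, pull a Linux image explicitly
docker pull --platform linux nginx:alpine

# Run it alongside Windows images on the same host
docker run -d --platform linux nginx:alpine
```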
Cc @patricklang @moscowrage @carlfischer1
Is it really impossible to get this to work with the simplicity of Docker Compose? Having to set up Docker Swarm just to be able to run containers on different hosts seems overly complicated.
All I really want is to be able to set a different `DOCKER_HOST` for one of the containers in my `docker-compose.yml` file. Something like this would do just fine:
```yaml
octopus:
  image: "octopusdeploy/octopusdeploy"
  docker_host: "tcp://172.28.128.3:2375"
  ports:
    - "81:81"
  tty: true
```
All of the other containers run on Linux, but Octopus Deploy only runs on Windows, so I need to assign it to another host (Windows Server 2016 running inside VirtualBox). Is adding support for `docker_host` to `docker-compose.yml` impossible?
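A possible interim workaround (not a single compose file; the file name here is hypothetical) is to keep the Windows-only service in its own compose file and drive it against the other engine via the `DOCKER_HOST` environment variable, which docker-compose honours:

```sh
# Run the Windows-only service against the Windows Server 2016 engine
DOCKER_HOST=tcp://172.28.128.3:2375 docker-compose -f octopus-windows.yml up -d
```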
With the edge channel, the Docker daemon can simultaneously run Linux and Windows containers "natively" (with Hyper-V isolation but without a Linux Hyper-V VM). But how do I specify this for Docker swarm? As a simple first step, I just want to run a Docker swarm with the Docker daemon in LCOW mode ("Windows containers" in the system tray) with only Linux containers. This is the degenerate case of count=0 for one of the container types. I can do this by switching to Linux containers mode and running the MobyLinux VM, but this is an LCOW question. Here's the simple docker-compose.yml I borrowed from another thread:
```yaml
networks:
  mynet:
    driver: overlay
services:
  nginx:
    image: nginx:alpine
    deploy:
      replicas: 2
    networks:
      mynet:
        aliases:
          - nginx.finomial.io
  curler:
    image: nathanleclaire/curl
    command: sh -c 'while true; do curl -si nginx.finomial.io| grep HTTP; sleep 1; done'
    networks:
      - mynet
```
Windows 10 Fall Creators Update, Version 18.03.0-ce-win58 (16761), Channel: edge, in "Windows containers" (LCOW) mode.
@friism
Has this progressed at all? Am I correct in understanding that the underlying issue is a limitation in the Docker engine for Windows, or in the way it does networking between container platforms, rather than in Docker Compose (which keeps coming up as the culprit, though I'm not sure why)?
My use case is driven by the fact that MS offers its best SQL Server instances on Linux, but I want to have an ASP.NET application connect to it on the same host while I'm containerising it. Surely I'm not alone in this requirement, and yet I see very little on the internet about it!
Am I missing a key bit of information here, or am I really on the 'bleeding edge'?
ping @carlfischer1
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity during the stale period.
Our use case is a .NET application running in a Windows container that accesses various services which are available in Linux containers only (Postgres, Cassandra).

It would be very nice if we could describe this application in one Compose file. This would require the ability to specify the `platform` of each `service` (`linux` or `windows`). Is this even doable? It would require connections to more than one Docker host (now there is one Docker connection for the whole Compose project), networking between them (a must-have), and volume linking (not so important, IMHO).
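To illustrate the idea, a Compose file with a per-service `platform` key might look something like this (a sketch only; `platform` as used here is the proposed/hypothetical key, and the Windows image name is made up):

```yaml
version: '3'
services:
  api:
    image: mycompany/dotnet-api:windows   # hypothetical Windows image
    platform: windows                     # proposed key, not supported syntax here
    depends_on:
      - postgres
  postgres:
    image: postgres:10
    platform: linux                       # proposed key, not supported syntax here
```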