docker / for-linux

Docker Engine for Linux
https://docs.docker.com/engine/installation/

Two instances of WordPress randomly result in status 301 #467

Closed: raven-wing closed this issue 5 years ago

raven-wing commented 5 years ago

Expected behavior

When I go to WP1_IP I get the app on the first WP container. When I go to WP2_IP I get the app on the second WP container.

Actual behavior

When I go to WP1_IP I get the app on the first WordPress container, but after refreshing 3 times I get the contents of the second container: a 301 response that redirects to the first container. The same goes for the WP2_IP address.

Steps to reproduce the behavior

wordpress.yml

version: '3.5'

services:
  db:
    image: mysql:5.7
    networks:
      - proxynet
    volumes:
      - db-data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wp
      MYSQL_PASSWORD: changemetoo
    deploy:
      placement:
        constraints:
          - node.role == manager

  word:
    depends_on:
      - db
    image: wordpress
    networks:
      - proxynet
    volumes:
      - wp-content:/var/www/html
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: wp
      WORDPRESS_DB_PASSWORD: changemetoo

volumes:
  db-data:
  wp-content:

network.yml:

networks:
  proxynet:
    name: proxynet

docker stack deploy --compose-file wordpress.yml WP1
docker stack deploy --compose-file wordpress.yml WP2

curl -I ${WP1_IP}
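
To make the alternation easier to see, the request can be repeated in a loop and only the HTTP status codes printed; a minimal sketch, using the same WP1_IP variable as above:

# Print only the HTTP status code of each response;
# the returned code changes between requests.
for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" "http://${WP1_IP}/"
done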

Output of docker version:

Client:
 Version:           18.06.1-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        e68fc7a
 Built:             Tue Aug 21 17:25:02 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:28:38 2018
  OS/Arch:          linux/amd64
  Experimental:     false

Output of docker info:

Containers: 7
 Running: 5
 Paused: 0
 Stopped: 2
Images: 4
Server Version: 18.06.1-ce
Storage Driver: aufs
 Root Dir: /mnt/sda1/var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 54
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: lpedryo4kttlldigbi9slxcub
 Is Manager: true
 ClusterID: 65d5pn8jvt8ipcuno66kmrnd9
 Managers: 1
 Nodes: 1
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 10
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: 192.168.99.100
 Manager Addresses:
  192.168.99.100:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.93-boot2docker
Operating System: Boot2Docker 18.06.1-ce (TCL 8.2.1); HEAD : c7e5c3e - Wed Aug 22 16:27:42 UTC 2018
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 995.6MiB
Name: test
ID: JN5X:PQDT:5HZQ:IN2R:LXGO:JTAO:IY2N:YYED:JPQM:R3S6:IFI6:6HHI
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
 provider=virtualbox
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.)

running on docker-machine version 0.13.0, build 9ba6da9

thaJeztah commented 5 years ago

I see you have both stacks connected to the same (proxynet) network. What I expect is happening is;

The db and word containers are connected to the network, and will be reachable by any container connected to that network.

Services (and their containers) will be accessible through a number of hostnames. For example, the db service of the WP1 stack will be accessible through the following hostnames;

- db (the service name)
- db.proxynet (the service name, with the network name as suffix)
- WP1_db (the service name, prefixed with the stack name)

And the db service for the second stack (WP2);

- db
- db.proxynet
- WP2_db

This will be a problem, because in both stacks, you have the word container configured to be on the proxynet network and to connect to the database, using db as hostname.

Because the databases of both WP1 and WP2 are connected to the same network, db could be either one; in fact, Docker's embedded DNS uses "round robin" load balancing for those IPs, and will thus randomly return the IP of db from the WP1 stack or db from the WP2 stack.

Reproduction of the problem

Here is a simple stack to reproduce this (I'm deploying on a single-node swarm, so all containers will be on my local node so that I can easily docker exec into each container);

Create the "proxynet" network

docker network create -d=overlay --scope=swarm proxynet

Deploy stack WP1 (I use "heredoc" so that I don't have to create a file first :sweat_smile:)

docker stack deploy -c- WP1 <<EOF
version: '3.5'
services:
  db:
    image: emilevauge/whoami
    hostname: WP1-db
    networks:
      - proxynet
  word:
    image: nginx:alpine
    networks:
      - proxynet
networks:
  proxynet:
    name: proxynet
    external: true
EOF

Deploy stack WP2:

docker stack deploy -c- WP2 <<EOF
version: '3.5'
services:
  db:
    image: emilevauge/whoami
    hostname: WP2-db
    networks:
      - proxynet
  word:
    image: nginx:alpine
    networks:
      - proxynet
networks:
  proxynet:
    name: proxynet
    external: true
EOF

Wait for the services to be up; docker ps shows the running containers:

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES
de5f0ec74aa0        nginx:alpine               "nginx -g 'daemon of…"   34 seconds ago      Up 33 seconds       80/tcp              WP2_word.1.r4f73rmyte0tx17wh157a9px1
bb4a03cea164        emilevauge/whoami:latest   "/whoamI"                37 seconds ago      Up 35 seconds       80/tcp              WP2_db.1.9z4m1lknokoz9bw5unt2leaud
9683e95dd143        nginx:alpine               "nginx -g 'daemon of…"   47 seconds ago      Up 44 seconds       80/tcp              WP1_word.1.ldr16g26n2k3kxlygdzsx6fzj
61320520dd4f        emilevauge/whoami:latest   "/whoamI"                49 seconds ago      Up 44 seconds       80/tcp              WP1_db.1.x65h40wyihg0ru8vmtra3b3wm

Now open a shell in the container for WP1_word:

docker exec -it WP1_word.1.ldr16g26n2k3kxlygdzsx6fzj sh

Install some tools (dig, curl) in the container;

apk add --no-cache bind-tools curl

Observe that all the hosts I listed work, but also that IP addresses for both "db" services are returned (I removed some of the dig output to make it a bit more readable);

/ # dig db

;; QUESTION SECTION:
;db.                IN  A

;; ANSWER SECTION:
db.         600 IN  A   10.0.1.23
db.         600 IN  A   10.0.1.19

db.proxynet (so, with the network-name suffix)

/ # dig db.proxynet

;; QUESTION SECTION:
;db.proxynet.           IN  A

;; ANSWER SECTION:
db.proxynet.        600 IN  A   10.0.1.23
db.proxynet.        600 IN  A   10.0.1.19

Using the WP1_ prefix will only return the db service for WP1

/ # dig WP1_db

;; QUESTION SECTION:
;WP1_db.                IN  A

;; ANSWER SECTION:
WP1_db.         600 IN  A   10.0.1.19

However, the WP2 db service is accessible, even though we're in a container of the word service for the WP1 stack;

/ # dig WP2_db

;; QUESTION SECTION:
;WP2_db.                IN  A

;; ANSWER SECTION:
WP2_db.         600 IN  A   10.0.1.23

So, what happens if I try to connect to db? Let's try a few times:

/ # curl db
Hostname: WP1-db
IP: 127.0.0.1
IP: 10.0.1.20
IP: 172.18.0.4
GET / HTTP/1.1
Host: db
User-Agent: curl/7.61.1
Accept: */*

/ # curl db
Hostname: WP2-db
IP: 127.0.0.1
IP: 10.0.1.24
IP: 172.18.0.6
GET / HTTP/1.1
Host: db
User-Agent: curl/7.61.1
Accept: */*

/ # curl db
Hostname: WP1-db
IP: 127.0.0.1
IP: 10.0.1.20
IP: 172.18.0.4
GET / HTTP/1.1
Host: db
User-Agent: curl/7.61.1
Accept: */*

/ # curl db
Hostname: WP1-db
IP: 127.0.0.1
IP: 10.0.1.20
IP: 172.18.0.4
GET / HTTP/1.1
Host: db
User-Agent: curl/7.61.1
Accept: */*

I'm randomly connected to the db service for WP1 or WP2.

As a result, your WordPress service will randomly connect to one database or the other, and (given that WordPress stores the site's full URL in the database) will thus send a redirect whenever the database it happens to reach doesn't match the site being requested.
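
If you want to verify this in the original setup, WordPress keeps its canonical URL in the wp_options table, so checking each database shows which site it belongs to. A minimal sketch; the container name is a placeholder, and the credentials are the ones from the compose file above:

# Placeholder container name: use the actual db task name from `docker ps`
docker exec -it <WP1_db_container> \
  mysql -u wp -pchangemetoo wordpress \
  -e "SELECT option_name, option_value FROM wp_options WHERE option_name IN ('siteurl', 'home')"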

How to resolve this?

You need to use multiple networks. Don't connect an internal service (in this case; a database) to a network that is shared with other stacks.

Connecting internal services to a shared network is a security risk; other stacks can now connect to your database (and those stacks may be managed by "someone else", and now have access to your database :scream:).

In your example, the db services are connected to the proxynet network; from the name of that network, I suspect this network is used by a front-end proxy (e.g. nginx-proxy or https://traefik.io), which means that the database may even become publicly accessible.

Docker networks are a "sandbox" mechanism, and allow you to isolate services, only giving other services access if they are connected to the same network.

In this example, you want;

- the db service of each stack to be reachable only by services in the same stack
- the word (WordPress) service of each stack to be reachable through the shared proxynet network

So, to realize this;

- give each stack its own private network, and connect the db service only to that network
- connect the word service to both the stack's private network and the shared proxynet network

Updated example

Here's what the example looks like with those changes:

Create the "proxynet" network. This network is shared with the proxy server, and other stacks.

docker network create -d=overlay --scope=swarm proxynet

Just to illustrate; the "proxy" stack, which publishes its ports, so will be publicly accessible:

docker stack deploy -c- proxystack <<EOF
version: '3.5'
services:
  proxy:
    image: nginx:alpine
    ports:
      # publish port 80 so the proxy is reachable from outside the swarm
      - "80:80"
    networks:
      - proxynet
networks:
  proxynet:
    name: proxynet
    external: true
EOF

Deploy stack WP1

docker stack deploy -c- WP1 <<EOF
version: '3.5'
services:
  db:
    image: emilevauge/whoami
    hostname: WP1-db
    networks:
      - private
  word:
    image: nginx:alpine
    networks:
      - private
      - proxynet
networks:
  private:
  proxynet:
    name: proxynet
    external: true
EOF

Deploy stack WP2:

docker stack deploy -c- WP2 <<EOF
version: '3.5'
services:
  db:
    image: emilevauge/whoami
    hostname: WP2-db
    networks:
      - private
  word:
    image: nginx:alpine
    networks:
      - private
      - proxynet
networks:
  private:
  proxynet:
    name: proxynet
    external: true
EOF

After the stacks have been deployed, you'll see that three networks have been created: proxynet, WP1_private, and WP2_private;

docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
x87foyu2y37q        WP1_private         overlay             swarm
jwsym6k7rr0u        WP2_private         overlay             swarm
7042da47ef5f        bridge              bridge              local
3e32a906b55f        docker_gwbridge     bridge              local
ac25d483c79e        host                host                local
9ebmyr0vctzj        ingress             overlay             swarm
69771e2906b2        none                null                local
1598r4x4j4e4        proxynet            overlay             swarm

Now, let's check connectivity again;

Open a shell in the container for WP1_word:

docker exec -it WP1_word.1.pwt0ukt8lz8v5ews5duwfeu9k sh

Install some tools (dig, curl) in the container;

apk add --no-cache bind-tools curl

Get the IP address for the db host; notice that there's now only 1 IP address returned

dig db

;; QUESTION SECTION:
;db.                IN  A

;; ANSWER SECTION:
db.         600 IN  A   10.0.1.5

The db service for this stack is connected to the "private" network, so can also be accessed with that suffix;

dig db.WP1_private

;; QUESTION SECTION:
;db.WP1_private.            IN  A

;; ANSWER SECTION:
db.WP1_private.     600 IN  A   10.0.1.5

The WordPress container is connected to the proxynet network, so is also able to connect to the WordPress container of the other stack;

curl WP2_word

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

As well as the proxy service:

curl proxystack_proxy

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

But cannot connect to the other stack's database;

curl WP2_db

curl: (6) Could not resolve host: WP2_db

And, as expected: connecting to the db service will always connect with the db service in the same stack;

/ # curl db
Hostname: WP1-db
IP: 127.0.0.1
IP: 10.0.1.6
IP: 172.18.0.5
GET / HTTP/1.1
Host: db
User-Agent: curl/7.61.1
Accept: */*

/ # curl db
Hostname: WP1-db
IP: 127.0.0.1
IP: 10.0.1.6
IP: 172.18.0.5
GET / HTTP/1.1
Host: db
User-Agent: curl/7.61.1
Accept: */*

/ # curl db
Hostname: WP1-db
IP: 127.0.0.1
IP: 10.0.1.6
IP: 172.18.0.5
GET / HTTP/1.1
Host: db
User-Agent: curl/7.61.1
Accept: */*

Success! :tada:

So, what's left?

In the above example, both the WP1 and WP2 stacks are connected to the proxynet network. This does mean that each stack can connect to the "WordPress" service of the other stack (WP1_word can connect to WP2_word, and vice-versa). This is likely "ok" (assuming the WordPress services will be publicly accessible anyway). However, if you don't want to allow this, and only want the proxy to be able to connect to both, you need to give each stack its own "public" network, and connect the proxy to those networks.

Something like this;

compose file for WP1 and WP2:

version: '3.5'
services:
  db:
    image: emilevauge/whoami
    networks:
      - private
  word:
    image: nginx:alpine
    networks:
      - private
      - public
networks:
  private:
  public:

compose file for the proxy; the proxy service is connected to the "public" networks of both the WP1 and WP2 stacks, which are declared as external networks because they are created by those stacks:

version: '3.5'
services:
  proxy:
    image: nginx:alpine
    networks:
      - WP1_public
      - WP2_public
networks:
  WP1_public:
    external: true
  WP2_public:
    external: true
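
For completeness, a minimal sketch of how these could be deployed (wp.yml and proxy.yml are assumed filenames); the WP1 and WP2 stacks have to be deployed first, so that the WP1_public and WP2_public networks exist before the proxy stack references them:

# Deploy the two WordPress stacks first; each creates its own <stack>_public network
docker stack deploy --compose-file wp.yml WP1
docker stack deploy --compose-file wp.yml WP2

# Then deploy the proxy, which attaches to both external "public" networks
docker stack deploy --compose-file proxy.yml proxystack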
raven-wing commented 5 years ago

Oh my... This is what I call "the right man in the right place". The problem with having both the db and wordpress containers on the public network is a serious one; I tried very hard to find out why I was getting different answers - it turns out I had made everything public, and this is crucial. Thanks for pointing that out.

I had no idea that there are two ways to connect to db - I was sure I could connect only by using the prefix, e.g. "WP1_db". The fact that "db" is also available is a game changer (is there some option, like a "docker network dns" command, to check this?).

Once again, @thaJeztah - you are the man. Thank you a lot. And I don't know how much they pay you, but you should definitely get a raise! Great job. Thank you.

thaJeztah commented 5 years ago

The problem with having both the db and wordpress containers on the public network is a serious one; I tried very hard to find out why I was getting different answers - it turns out I had made everything public, and this is crucial. Thanks for pointing that out.

Yes, it's definitely something to take into account. Note that "public" is not really "public" (as you'd still have to make it accessible from outside the host), but in your case it was "shared" between stacks, so services from other stacks could access it. However, yes, if the "proxy" service were to (automatically) make any service it knows about accessible, then it would definitely be an issue.

I had no idea that there are two ways to connect to db - I was sure I could connect only by using the prefix, e.g. "WP1_db". The fact that "db" is also available is a game changer

The reason for both being available is that services within a stack can access other services without having to know what the stack was named when it was deployed; the prefixed version exists so that there is a unique, unambiguous hostname for connecting to a service (in case services from different stacks have to connect to each other).

And I don't know how much they pay you, but you should definitely get a raise!

Ha! I'll mention that 👯 perhaps it works 🤞🤞🤞

I sometimes spend a bit more time on specific issues, also to keep myself "sharp": go through the whole issue, reproduce it, try different scenarios. People landing on GitHub through a Google search may find it useful, and perhaps some of this should be added to the docs (but... I never have enough time to do that, haha)