lancachenet / sniproxy

SNI Proxy for HTTPS Pass-through
https://hub.docker.com/r/lancachenet/sniproxy/
MIT License

Allow sniproxy to run on separate IP(macvlan) #18

Open doino-gretchenliev opened 4 years ago

doino-gretchenliev commented 4 years ago

Describe the issue you are having

Running monolithic and sniproxy containers with macvlan network.

How are you running the container(s)

version: '3'
services:
  game-cache:
    image: lancachenet/monolithic:latest
    container_name: 0700_game_cache
    volumes:
    - /volume2/Games:/data/cache:rw
    networks:
      bridge:
        ipv4_address: 172.29.0.2
    labels:
    - SERVICE_80_NAME=game-cache
    environment:
    - CACHE_DISK_SIZE=500000m
    - CACHE_SLICE_SIZE=8m
    - TZ=Europe/Sofia
    - UPSTREAM_DNS=1.1.1.1 1.0.0.1
    hostname: game_cache
    restart: always
    dns:
    - 1.0.0.1
    - 1.1.1.1

  game-https-proxy:
    build:
      context: /volume1/docker/sniproxy
    container_name: 0701_game_https_proxy
    expose:
    - 443
    - 80
    networks:
      bridge:
      macvlan:
        ipv4_address: 192.168.1.253
    labels:
    - SERVICE_80_NAME=game-https-proxy
    environment:
    - TZ=Europe/Sofia
    - UPSTREAM_DNS=1.1.1.1 1.0.0.1
    hostname: game_https_proxy
    restart: always
    dns:
    - 1.0.0.1
    - 1.1.1.1

networks:
  macvlan:
    driver: macvlan
    driver_opts:
      parent: ovs_bond0
    ipam:
      config:
      - subnet: 192.168.1.0/24
  bridge:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.enable_icc: 'true'
      com.docker.network.bridge.enable_ip_masquerade: 'true'
      com.docker.network.driver.mtu: '1500'
    ipam:
      config:
      - subnet: 172.29.0.0/16

In my docker-compose setup, I build my own image on top of yours like so:

FROM lancachenet/sniproxy:latest

COPY sniproxy.conf /etc/sniproxy.conf

The sniproxy.conf is shown below.

Solution

Introduce a new env variable, GAME_CACHE_IP, and refactor sniproxy.conf to:

user nobody

pidfile /var/run/sniproxy.pid

resolver {
    nameserver UPSTREAM_DNS
    mode ipv4_only
}

access_log {
    filename /dev/stdout
    priority notice
}

error_log {
    filename /dev/stderr
}

listener 0.0.0.0:443 {
    protocol tls
    table https_hosts
}

listener 0.0.0.0:80 {
    protocol http
    table http_hosts
}

table https_hosts {
    .* *:443
}

table http_hosts {
    .* GAME_CACHE_IP
}

In my case GAME_CACHE_IP=172.29.0.2
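The substitution itself could follow the same pattern the image already uses for UPSTREAM_DNS: a sed pass over the config template at container start. A minimal sketch of that idea (the template path and defaults here are illustrative, not the image's actual entrypoint):

```shell
# Hypothetical entrypoint-style substitution for GAME_CACHE_IP, modelled on
# how UPSTREAM_DNS is templated into sniproxy.conf. Paths are illustrative.
GAME_CACHE_IP="${GAME_CACHE_IP:-172.29.0.2}"
UPSTREAM_DNS="${UPSTREAM_DNS:-1.1.1.1}"

# Stand-in for the /etc/sniproxy.conf template shipped in the image
cat > /tmp/sniproxy.conf.tpl <<'EOF'
resolver {
    nameserver UPSTREAM_DNS
    mode ipv4_only
}
table http_hosts {
    .* GAME_CACHE_IP
}
EOF

# Replace both placeholders before starting sniproxy
sed -e "s/UPSTREAM_DNS/${UPSTREAM_DNS}/" \
    -e "s/GAME_CACHE_IP/${GAME_CACHE_IP}/" \
    /tmp/sniproxy.conf.tpl > /tmp/sniproxy.conf

cat /tmp/sniproxy.conf
```

With this in place, setting GAME_CACHE_IP in the compose environment would be enough; no custom image overlay would be needed.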

Rodman101 commented 4 years ago

I've been searching high and low for this. It seems that no one is running monolithic through a macvlan network. I run it this way to view traffic stats along with my other computers.

This would be a wonderful addition to merge into sniproxy.

GotenXiao commented 4 years ago

If you're using macvlan as your network driver, another way to achieve this (and probably the better way) is to have the services share a network namespace.

For example: docker-compose.yml diff from latest commit:

diff --git a/docker-compose.yml b/docker-compose.yml
index 38cb8aa..eaa4c72 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -1,23 +1,37 @@
 version: '2'
 services:
+  lancache-ip:
+    image: alpine:latest
+    command: ['tail', '-f', '/dev/null']
+    networks:
+      routed:
+        ipv4_address: ${LANCACHE_IP}
+
   dns:
     image: lancachenet/lancache-dns:latest
     env_file: .env
 #    restart: unless-stopped
-    ports:
-      - ${DNS_BIND_IP}:53:53/udp
+    depends_on:
+      - lancache-ip
+    network_mode: service:lancache-ip
   sniproxy:
     image: lancachenet/sniproxy:latest
     env_file: .env
 #    restart: unless-stopped
-    ports:
-      - 443:443/tcp
+    depends_on:
+      - lancache-ip
+    network_mode: service:lancache-ip
   monolithic:
     image: lancachenet/monolithic:latest
     env_file: .env
 #    restart: unless-stopped
-    ports:
-      - 80:80/tcp
+    depends_on:
+      - lancache-ip
+    network_mode: service:lancache-ip
     volumes:
       - ${CACHE_ROOT}/cache:/data/cache
       - ${CACHE_ROOT}/logs:/data/logs
+
+networks:
+  routed:
+    external: true

With DNS_BIND_IP and LANCACHE_IP set to a valid IP address in your configured macvlan range, and where routed is the name of your macvlan docker network.
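For completeness, the external routed network referenced in the diff has to exist before compose is run. Something along these lines, assuming the subnet from this thread; the parent interface and gateway are examples you would adjust to your host:

```shell
# Create the external macvlan network the compose file refers to as "routed".
# Parent interface and addressing are examples; match them to your LAN.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  routed
```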

Making sniproxy the first port of call definitely "solves" one problem (multiple services on a single IP), but could result in multiple others.

doino-gretchenliev commented 4 years ago

I agree, that is a better solution. However, the best solution would be to remove the need for sniproxy altogether. Nginx can stream HTTPS traffic with ngx_stream_ssl_preread_module, and it only takes a few lines of configuration:

stream {
    server {
        resolver UPSTREAM_DNS;
        listen      443;
        proxy_pass  $ssl_preread_server_name:443;
        ssl_preread on;
    }
}

http {
...
}

monolithic could do both tasks: https://github.com/lancachenet/generic/issues/106

MathewBurnett commented 4 years ago

Our sniproxy has been kept separate so far because that has allowed us the freedom to replace and scale it for different setups. That's not to say we won't build 443 handling into the nginx container in the future.