need4swede / Portall

Port Management Interface
MIT License

bug: importing from docker-compose files fails on YAML anchors #20

Closed · sammcj closed this 4 weeks ago

sammcj commented 1 month ago

I went to try out Portall but found that it fails to import any of my docker-compose files.

Digging into the logs, it looks like the parser fails when it hits a (completely valid) YAML anchor (e.g. &name mycontainer:, which might later be referenced with something like hostname: *name).

      ^
second occurrence
  in "<unicode string>", line 312, column 3:
      &name unifi:
      ^

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1473, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 882, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 880, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 865, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "/app/utils/routes/imports.py", line 46, in import_data
    imported_data = import_docker_compose(file_content)
  File "/app/utils/routes/imports.py", line 165, in import_docker_compose
    raise ValueError(f"Invalid Docker-Compose YAML format: {str(e)}")
ValueError: Invalid Docker-Compose YAML format: found duplicate anchor 'name'; first occurrence
  in "<unicode string>", line 244, column 3:
      &name plex:
      ^
second occurrence
  in "<unicode string>", line 312, column 3:
      &name unifi:
      ^

Relevant part of docker-compose file:

services:
  &name unifi:
    <<: [*autoupdate, *restart, *secopts, *limits-mem-1536]
    image: lscr.io/linuxserver/unifi-network-application:latest
    container_name: *name
    hostname: *name
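
For what it's worth, redefining an anchor is legal YAML (the later definition just shadows the earlier one), but PyYAML's composer raises exactly that "found duplicate anchor" error. In case it helps, something along these lines lets SafeLoader tolerate redefined anchors — an untested sketch on my part, not your actual code, with compose_text standing in for the uploaded file contents:

import yaml

class AnchorReuseLoader(yaml.SafeLoader):
    # Like SafeLoader, but a redefined anchor shadows the earlier
    # definition instead of raising "found duplicate anchor".
    def compose_node(self, parent, index):
        event = self.peek_event()
        # An AliasEvent (*name) only *uses* an anchor; any other event
        # carrying an anchor *defines* one, so drop any earlier
        # definition before the duplicate check runs.
        if not isinstance(event, yaml.AliasEvent) and event.anchor:
            self.anchors.pop(event.anchor, None)
        return super().compose_node(parent, index)

data = yaml.load(compose_text, Loader=AnchorReuseLoader)
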
need4swede commented 1 month ago

Thank you for submitting this issue.

I've updated the import logic to handle anchors, and support will be included in the next release. It should be going live shortly.

need4swede commented 1 month ago

Please try the latest release and let me know if you experience any issues.

sammcj commented 1 month ago

Thanks @need4swede, I just pulled the latest container and it appears to have the same issue:

name  |
name  | During handling of the above exception, another exception occurred:
name  |
name  | Traceback (most recent call last):
name  |   File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1473, in wsgi_app
name  |     response = self.full_dispatch_request()
name  |   File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 882, in full_dispatch_request
name  |     rv = self.handle_user_exception(e)
name  |   File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 880, in full_dispatch_request
name  |     rv = self.dispatch_request()
name  |   File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 865, in dispatch_request
name  |     return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
name  |   File "/app/utils/routes/imports.py", line 52, in import_data
name  |     imported_data = import_docker_compose(file_content)
name  |   File "/app/utils/routes/imports.py", line 177, in import_docker_compose
name  |     raise ValueError(f"Invalid Docker-Compose YAML format: {str(e)}")
name  | ValueError: Invalid Docker-Compose YAML format: found duplicate anchor 'name'; first occurrence
name  |   in "<unicode string>", line 149, column 3:
name  |       &name textgen:
name  |       ^
name  | second occurrence
name  |   in "<unicode string>", line 287, column 3:
name  |       &name invokeai:
name  |       ^
name  | INFO:werkzeug:172.22.0.10 - - [14/Jul/2024 04:25:07] "POST /import HTTP/1.1" 500 -
need4swede commented 1 month ago

Could you please share your compose file with me?

sammcj commented 1 month ago

Sure thing. There are a number of compose files for various projects, but here's a good random sampling with a few things redacted:

---
name: nas

include:
  - path: ./docker-compose-traefik.yaml
    env_file:
      - .env
      - env/.traefik.env
  - path: ./docker-compose-authentik.yaml
    env_file:
      - .env
      - env/.authentik.env
  - path: ./docker-compose-ai.yaml
    env_file:
      - .env
      # - env/.ai.env
  - path: ./docker-compose-adhoc.yaml
    env_file:
      - .env
      # - env/.adhoc.env

### YAML Anchors ###
# https://hotio.dev/pullio/
x-autoupdate: &autoupdate
  labels:
    org.hotio.pullio.update: true
    traefik.docker.network: traefik-servicenet

x-restart: &restart
  restart: unless-stopped

x-secopts: &secopts
  security_opt:
    - no-new-privileges:true

x-limits-mem-512: &limits-mem-512
  deploy:
    resources:
      limits:
        memory: 512M

x-limits-mem-1024: &limits-mem-1024
  deploy:
    resources:
      limits:
        memory: 1024M

x-limits-mem-1536: &limits-mem-1536
  deploy:
    resources:
      limits:
        memory: 1536M

##################################################################
### Secrets ###
##################################################################
secrets:
  REDACTED:
    file: /path/to/redacted

##################################################################
### Networks ###
##################################################################
networks:
  eth0: # unmanaged
    name: eth0
    external: true
    enable_ipv6: false

  default: # Internal and (outbound) internet access
    enable_ipv6: false
    ipam:
      config:
        - subnet: fd00:1:0:0::/64

  public: # Potentially exposed to the internet (requires other configuration)
    name: public
    enable_ipv6: false

  internal: # No internet/network access if only connected to this network
    name: internal
    internal: true
    enable_ipv6: false

  traefik-servicenet: # Network for containers with internal services (proxy)
    external:
      true # created with:
    name: traefik-servicenet

  traefik-authentik: # Network for traefik/authentik communication
    external: false
    internal: true
    enable_ipv6: false
    name: traefik-authentik
    ipam:
      config:
        - subnet: 172.19.0.0/16

  authentik-internal: # Network for internal authentik communication
    name: authentik-internal
    internal: true

  docker-proxynet: # Used for securely exposing docker.sock to traefik (only)
    name: docker-proxynet
    external: true # externally created with:

  unifi: # Used for unifi
    name: unifi

services:
  &name pip:
    container_name: *name
    hostname: *name
    <<: [*autoupdate, *restart, *secopts, *limits-mem-1536]
    image: epicwink/proxpi:latest
    environment:
      PROXPI_CACHE_DIR: /var/cache/proxpi
      PROXPI_CACHE_SIZE: 21474836480 # 20GB
      GUNICORN_CMD_ARGS: "--log-level error"
      PROXPI_EXTRA_INDEX_URLS: "https://pypi.ngc.nvidia.com/,https://pypi.python.org/simple/"
      PIP_TRUSTED_HOST: "pypi.ngc.nvidia.com,download.pytorch.org,pypi.python.org,pip.my.internal.domain"
    volumes:
      - ${MOUNT_DOCKER_DATA}/proxpi/cache:/var/cache/proxpi:rw
    networks:
      - traefik-servicenet # internal services
    labels:
      org.hotio.pullio.update: true
      traefik.enable: true
      traefik.http.routers.pip.rule: Host(`pip.my.internal.domain`)
      traefik.http.routers.pip.tls.certresolver: le
      traefik.http.routers.pip.entrypoints: websecure
      traefik.http.routers.pip.tls.domains[0].main: "*.my.internal.domain"
      traefik.http.services.pip-service.loadbalancer.server.port: 5000
      whalewall.enabled: true
      whalewall.rules: |

        mapped_ports: ## inbound traffic
          external:
            allow: true
            ips:
              - "172.16.0.0/13" # all internal services
          localhost:
            allow: true
        output: ## outbound traffic
          - proto: tcp
            dst_ports:
              - 443
              - 80
          - proto: udp
            dst_ports:
              - 53 # allow DNS requests out

  ### Dozzle for container logs ###
  &name dozzle:
    <<: [*autoupdate, *restart, *secopts, *limits-mem-512]
    container_name: *name
    hostname: *name
    profiles:
      - *name
    image: amir20/dozzle:latest
    environment:
      - DOZZLE_NO_ANALYTICS=true
      - DOZZLE_LEVEL=warn
      - DOZZLE_HOSTNAME=logs.my.internal.domain
      - DOZZLE_ENABLE_ACTIONS=true
      - DOZZLE_REMOTE_HOST=tcp://dockerproxy:2375
    secrets:
      - dozzle_user
    networks:
      - traefik-servicenet # internal services
      - docker-proxynet # secure docker access
    labels:
      org.hotio.pullio.update: true
      traefik.enable: true
      traefik.http.routers.logs.rule: "Host(`logs.my.internal.domain`) || Host(`dozzle.my.internal.domain`)"
      traefik.http.routers.logs.tls.certresolver: le
      traefik.http.routers.logs.entrypoints: websecure
      traefik.http.routers.logs.tls.domains[0].main: "*.my.internal.domain"
      traefik.http.services.logs-service.loadbalancer.server.port: 8080
      traefik.http.routers.logs.middlewares: authentik

      whalewall.enabled: true
      whalewall.rules: |

        mapped_ports: ## inbound traffic
          external:
            allow: true
            ips:
              - "172.16.0.0/13"
          localhost:
            allow: true
        output: ## outbound traffic
          - network: docker-proxynet
            proto: tcp
            dst_ports:
              - 2375
          - proto: tcp
            dst_ports:
              - 443
              - 80
          - proto: udp
            dst_ports:
              - 53

  &name apt:
    hostname: *name
    container_name: *name
    <<: [*autoupdate, *restart, *secopts, *limits-mem-1024]
    image: sameersbn/apt-cacher-ng
    environment:
      HOSTNAME: apt
      DOMAIN: my.internal.domain
    ports:
      - "3142:3142"
    expose:
      - "3142"
    volumes:
      - ${MOUNT_DOCKER_DATA}/apt:/var/cache/apt-cacher-ng
    networks:
      - traefik-servicenet # internal services
      - default
    labels:
      traefik.enable: true
      org.hotio.pullio.update: true
      traefik.docker.network: traefik-servicenet
      traefik.http.routers.apt.rule: "Host(`apt.my.internal.domain`)"
      traefik.http.routers.apt.tls.certresolver: le
      traefik.http.routers.apt.entrypoints: websecure
      traefik.http.routers.apt.tls.domains[0].main: "*.my.internal.domain"
      traefik.http.services.apt-service.loadbalancer.server.port: 3142

  &name homarr:
    <<: [*autoupdate, *restart, *secopts, *limits-mem-512]
    container_name: *name
    image: ghcr.io/ajnart/homarr:latest
    environment:
      - BASE_URL=https://home.my.internal.domain
    env_file:
      - .env
      - env/.homarr.env
    volumes:
      - ${MOUNT_DOCKER_DATA}/homarr/configs:/app/data/configs
      - ${MOUNT_DOCKER_DATA}/homarr/data:/app/data/data
      - ${MOUNT_DOCKER_DATA}/homarr/data:/data
      - ${MOUNT_DOCKER_DATA}/homarr/icons:/app/public/icons
      - ${MOUNT_DOCKER_DATA}/homarr/backgrounds:/app/public/imgs/backgrounds
    networks:
      traefik-servicenet: # internal services
    labels:
      traefik.enable: true
      org.hotio.pullio.update: true
      traefik.docker.network: traefik-servicenet
      traefik.http.routers.home.rule: "Host(`home.my.internal.domain`) || Host(`homarr.my.internal.domain`) || Host(`dash.my.internal.domain`)"
      traefik.http.routers.home.tls.certresolver: le
      traefik.http.routers.home.entrypoints: websecure
      traefik.http.routers.home.tls.domains[0].main: "*.my.internal.domain"
      traefik.http.services.home-service.loadbalancer.server.port: 7575
      traefik.http.routers.home.middlewares: authentik
      traefik.http.middlewares.home.headers.customrequestheaders.X-authentik-email: "true"

  ### Unifi Wireless Management ###
  &name unifi:
    <<: [*autoupdate, *restart, *secopts, *limits-mem-1536]
    image: lscr.io/linuxserver/unifi-network-application:latest
    depends_on:
      unifi-db:
        restart: false
        condition: service_started
    links:
      - unifi-db
    container_name: *name
    hostname: *name
    env_file:
      - .env
      - env/.unifi.env
    environment:
      MEM_LIMIT: 1024
      MEM_STARTUP: 384
      DB_URI: mongodb://mongo/unifi
      STATDB_URI: mongodb://mongo/unifi_stat
      DB_NAME: unifi
    volumes:
      - ${MOUNT_DOCKER_DATA}/unifi:/config:rw #,Z #,Z
      - ${MOUNT_DOCKER_DATA}/unifi/container/cert:/usr/lib/unifi/cert:rw
      - ${MOUNT_DOCKER_DATA}/unifi/backup:/unifi/data/backup
    networks:
      - traefik-servicenet # internal services
      - default
      - unifi
    ports:
      - 3478:3478/udp # STUN
      - 10001:10001/udp # AP discovery
      - 8480:8480
      - 8080:8080 # Device/ controller comm.
      - 3748:3748
      - 8880:8880 # HTTP portal redirection
      - 8843:8843 # HTTPS portal redirection
      - 8443:8443 # Controller GUI/API as seen in a web browser
    labels:
      org.hotio.pullio.update: true
      traefik.enable: true
      traefik.http.routers.unifi.rule: Host(`unifi.my.internal.domain`)
      traefik.http.routers.unifi.tls.certresolver: le
      traefik.http.routers.unifi.entrypoints: websecure
      traefik.http.routers.unifi.tls.domains[0].main: "*.my.internal.domain"
      traefik.http.routers.unifi.middlewares: authentik
      traefik.http.services.unifi-service.loadbalancer.server.port: 8443
      traefik.http.routers.unifi.service: unifi-service
      traefik.http.services.unifi.loadbalancer.server.scheme: https

  &name unifi-db:
    <<: [*autoupdate, *restart, *secopts, *limits-mem-1024]
    image: mongo:7
    container_name: *name
    hostname: *name
    env_file:
      - env/.unifi.env
      - .env
    networks:
      - unifi
    ports:
      - 27017
    environment:
      MEM_LIMIT: 1024
      MEM_STARTUP: 512
    command: ["timeout", "43200", "mongod", "--bind_ip", "0.0.0.0"] #, "--auth"
    volumes:
      - ${MOUNT_DOCKER_DATA}/unifi-db/data:/data
need4swede commented 1 month ago

I see. The YAML parser has issues with the duplicate anchors in your compose file.

I actually ended up rewriting the entire import logic of Portall. I tried your compose file and it worked for me. Please re-install Portall with the latest release and let me know if this solved the issue for you.

sammcj commented 1 month ago

That got a lot closer! It now seems to bork out when parsing lines that contain comments:

ERROR:app:Exception on /import [POST]
Traceback (most recent call last):
  File "/app/utils/routes/imports.py", line 230, in import_docker_compose
    "port": int(port),
ValueError: invalid literal for int() with base 10: '127.0.0.1'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1473, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 882, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 880, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 865, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "/app/utils/routes/imports.py", line 50, in import_data
    imported_data = import_docker_compose(file_content)
  File "/app/utils/routes/imports.py", line 240, in import_docker_compose
    raise ValueError(f"Error parsing Docker-Compose file: {str(e)}")
ValueError: Error parsing Docker-Compose file: invalid literal for int() with base 10: '127.0.0.1'
INFO:werkzeug:172.22.0.10 - - [14/Jul/2024 21:33:00] "POST /import HTTP/1.1" 500 -

Which I think comes from:

  playwright:
    <<: [*autoupdate, *restart, *limits-mem-1536] #chrome being chrome doesn't like *secopts
    hostname: playwright
    container_name: playwright
    image: browserless/chrome:latest
    # user: ${UID:-1001}
    # ports:
    #   - 127.0.0.1:23000:3000

and:

INFO:werkzeug:172.22.0.10 - - [14/Jul/2024 21:33:00] "POST /import HTTP/1.1" 500 -
ERROR:app:Exception on /import [POST]
Traceback (most recent call last):
  File "/app/utils/routes/imports.py", line 230, in import_docker_compose
    "port": int(port),
ValueError: invalid literal for int() with base 10: '6379 #'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1473, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 882, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 880, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 865, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "/app/utils/routes/imports.py", line 50, in import_data
    imported_data = import_docker_compose(file_content)
  File "/app/utils/routes/imports.py", line 240, in import_docker_compose
    raise ValueError(f"Error parsing Docker-Compose file: {str(e)}")
ValueError: Error parsing Docker-Compose file: invalid literal for int() with base 10: '6379 #'
INFO:werkzeug:172.22.0.10 - - [14/Jul/2024 21:36:19] "POST /import HTTP/1.1" 500 -

where I think '6379 #' comes from:

  &name missingstudioredis:
    <<: [*ai-common]
    profiles:
      - missingstudio
    image: redis:7.2-alpine
    container_name: *name
    hostname: *name
    ports:
      - 6379 #:6379 
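
In case it's useful: the short ports syntax is roughly [host_ip:][host_port:]container_port[/protocol], so the importer needs to split those pieces apart (and strip inline comments if it ever scans raw line text). A rough sketch of the kind of normalisation I mean — just an illustration, obviously not your code:

import re

PORT_RE = re.compile(r"""^
    (?:(?P<ip>\d{1,3}(?:\.\d{1,3}){3}):)?    # optional host IP
    (?:(?P<host>\d+)(?:-\d+)?:)?             # optional host port (or range)
    (?P<container>\d+)(?:-\d+)?              # container port (or range)
    (?:/(?P<proto>tcp|udp))?$                # optional protocol
""", re.VERBOSE)

def parse_port(entry):
    # entry is one item from a ports: list, e.g. "127.0.0.1:23000:3000",
    # 6379, or "3478:3478/udp"; inline comments are stripped defensively.
    text = str(entry).split("#", 1)[0].strip()
    m = PORT_RE.match(text)
    if not m:
        raise ValueError(f"unrecognised port spec: {entry!r}")
    host = int(m["host"]) if m["host"] else None
    return host, int(m["container"]), m["proto"] or "tcp"
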

I'd share all my compose files with you but there's a lot and it requires some redaction, so I've been trying to test it with a good sample of different definitions. Hopefully the outcome of this will be that you end up with quite a battle-tested compose parser!

Thanks again for looking into this. Please don't feel any expectation of support or priority from me - it's an interesting-looking tool - but I know how painful parsing YAML can be 😅

need4swede commented 1 month ago

Thanks for getting back to me.

Yes, it's a bit of a pain to parse given the variety of options. What I ended up doing was translating the YAML into JSON and importing the data that way.
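
At its core it's something like this (heavily simplified, with compose_text being the uploaded file contents):

import json, yaml

data = yaml.safe_load(compose_text)                  # anchors/aliases are resolved during the YAML load
as_json = json.loads(json.dumps(data, default=str))  # plain dicts/lists from here on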

The more edge cases I account for, the more robust the import logic becomes - as you mentioned! I thought I had something for comments, but maybe I forgot to integrate it. I'll have to get back to it.

Can you confirm whether it works otherwise? That is, just remove the lines with comments and try again.

sammcj commented 1 month ago

Unfortunately not. It no longer errors, but it doesn't seem to import anything:

[screenshot: the import completes but nothing is listed]

Here's another docker-compose example for you:

---
x-gpu: &gpu
  group_add:
    - "39"
  devices:
    - /dev/nvidia0:/dev/nvidia0
    - /dev/nvidia1:/dev/nvidia1
    - /dev/nvidia2:/dev/nvidia2

  runtime: nvidia
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: all
            capabilities: ["compute", "utility", "graphics"]
          - driver: cdi
            device_ids:
              - nvidia.com/gpu=all
            capabilities: ["compute", "utility", "graphics"]

x-autoupdate: &autoupdate
  labels:
    org.hotio.pullio.update: true

x-restart: &restart
  restart: on-failure

x-secopts: &secopts
  security_opt:
    - no-new-privileges=true

x-apt: &apt
  volumes:
    - ./apt-proxy.conf:/etc/apt/apt.conf.d/01proxy:ro

x-ai-common: &ai-common
  <<: [*autoupdate, *restart, *secopts, *apt]
  environment:
    - PUID=${PUID:-1001}
    - PGID=${PGID:-1001}
  volumes:
    - ${LOCAL_TIME_FILE}:${LOCAL_TIME_FILE}:ro
    - ./apt-proxy.conf:/etc/apt/apt.conf.d/01proxy:ro
    - /usr/local/cuda-11.8:/usr/local/cuda-host-11.8:ro
    - /usr/local/cuda-12.2:/usr/local/cuda-12.2:ro
    - /usr/local/cuda-12.3:/usr/local/cuda-12.3:ro
    - /usr/local/cuda-12.4:/usr/local/cuda-12.4:ro
    - /usr/local/cuda-12.5:/usr/local/cuda-12.5:ro
    - /usr/local/cuda:/usr/local/cuda-host:ro 
    - ${MOUNT_DOCKER_DATA}/_utils:/utils:ro
  env_file:
    - env/.ai.env
  restart: on-failure
  labels:
    org.hotio.pullio.update: true
  security_opt:
    - no-new-privileges=true

configs:
  perplexica-config:
    file: ./perplexica/config.toml

secrets:
  SLACK_APP_TOKEN:
    file: /opt/docker-secrets/SLACK_APP_TOKEN
  SLACK_BOT_TOKEN:
    file: /opt/docker-secrets/SLACK_BOT_TOKEN
  HUGGINGFACE_TOKEN:
    file: /opt/docker-secrets/HUGGINGFACE_TOKEN

name: ai

services:
  &name piper:
    container_name: *name
    hostname: *name

    <<: [*autoupdate, *restart, *secopts]
    image: rhasspy/wyoming-piper:latest
    ports:
      - 10200 
    volumes:
      - /mnt/llm/piper/models:/data
    command: ["--voice", "en_GB-northern_english_male-medium"]
    networks:
      - traefik-servicenet
    labels:
      traefik.enable: true
      org.hotio.pullio.update: true
      traefik.tcp.routers.piper.rule: HostSNI(`*`)
      traefik.tcp.routers.piper.entrypoints: piper
      traefik.tcp.routers.piper.service: piper
      traefik.tcp.services.piper.loadbalancer.server.port: 10200
      traefik.ucp.routers.piper.rule: HostSNI(`*`)
      traefik.ucp.routers.piper.entrypoints: piper
      traefik.ucp.routers.piper.service: piper
      traefik.ucp.services.piper.loadbalancer.server.port: 10200

  &name llamafactory:
    <<: [*ai-common, *restart, *secopts, *gpu]
    hostname: *name
    container_name: *name
    build:
      context: ./llamafactory
      dockerfile: Dockerfile

    ipc: host
    command: ["llamafactory-cli", "webui"]
    profiles:
      - *name
    ports:
      - 7860
    volumes:
      - /mnt/llm/llamafactory/hf_cache:/root/.cache/huggingface/
      - /mnt/llm/llamafactory/data:/app/data
      - /mnt/llm/llamafactory/output:/app/output
      - /mnt/llm/llamafactory/config:/app/config
      - /mnt/llm/llamafactory/saves:/app/saves
      - /mnt/llm/llamafactory/cache:/app/cache
      - /usr/lib64/libcuda.so.1:/usr/lib64/libcuda.so:ro
      - /usr/local/cuda-12.2:/usr/local/cuda-12.2:ro
      - /usr/local/cuda-12.3:/usr/local/cuda-12.3:ro
      - /usr/local/cuda-12.4:/usr/local/cuda-12.4:ro
      - /usr/local/cuda-12.5:/usr/local/cuda-12.5:ro
    networks:
      - traefik-servicenet
    labels:
      traefik.enable: true
      org.hotio.pullio.update: true
      traefik.http.routers.llamafactory.rule: Host(`llamafactory.my.internal.domain`)
      traefik.http.routers.llamafactory.tls.certresolver: le
      traefik.http.routers.llamafactory.entrypoints: websecure
      traefik.http.routers.llamafactory.tls.domains[0].main: "*.my.internal.domain"
      traefik.http.routers.llamafactory.service: llamafactory-service
      traefik.http.services.llamafactory-service.loadbalancer.server.port: 7860
      traefik.http.routers.llamafactory.middlewares: authentik

  &name agi:
    <<: [*ai-common, *secopts, *restart, *autoupdate]
    image: ghcr.io/enricoros/big-agi:development
    container_name: *name
    hostname: *name
    ports:
      - 3000
    env_file:
      - ./env/.ai.env
      - ./env/.agi.env
    command: ["next", "start", "-p", "3000"]
    links:
      - ollama
    stop_grace_period: 2s
    depends_on:
      ollama:
        condition: service_started
        restart: false
    extra_hosts:
      - host.docker.internal:host-gateway
    networks:
      - traefik-servicenet
      - default
    labels:
      org.hotio.pullio.update: true
      traefik.enable: true
      traefik.http.routers.agi.rule: Host(`agi.my.internal.domain`)
      traefik.http.routers.agi.tls.certresolver: le
      traefik.http.routers.agi.entrypoints: websecure
      traefik.http.routers.agi.tls.domains[0].main: "*.my.internal.domain"
      traefik.http.routers.agi.service: agi-service
      traefik.http.services.agi-service.loadbalancer.server.port: 3000
      traefik.http.routers.agi.middlewares: authentik

  &name agidb:
    <<: [*secopts, *restart, *ai-common]
    profiles:
      - agi
    image: postgres:16
    container_name: *name
    hostname: *name
    ports:
      - 5432
    env_file:
      - ./env/.agi.env
    links:
      - agi
    networks:
      - default
    labels:
      traefik.enable: false
      org.hotio.pullio.update: true

  &name nvapi:
    <<: [*ai-common, *restart, *secopts, *gpu, *autoupdate]
    build:
      context: ../NVApi
      dockerfile: Dockerfile
      tags:
        - localhost:5000/sammcj/nvapi:latest
    container_name: *name
    hostname: *name
    pid: host

    ports:
      - 9999

    networks:
      - traefik-servicenet
    labels:
      traefik.enable: true
      org.hotio.pullio.update: true
      traefik.http.routers.nvapi.rule: Host(`nvapi.my.internal.domain`)
      traefik.http.routers.nvapi.tls.certresolver: le
      traefik.http.routers.nvapi.entrypoints: websecure
      traefik.http.routers.nvapi.tls.domains[0].main: "*.my.internal.domain"
      traefik.http.routers.nvapi.service: nvapi-service
      traefik.http.services.nvapi-service.loadbalancer.server.port: 9999

  &name jan:
    <<: [*ai-common, *restart, *secopts, *gpu, *autoupdate]
    image: ghcr.io/janhq/jan-server:dev-cuda-12.2-latest

    container_name: *name
    hostname: *name
    profiles:
      - *name
    environment:
      JAN_API_CORS: false
      JAN_SERVER_CORS: false

    ports:
      - 3000
      - 1337
      - 3928
    networks:
      - traefik-servicenet
    labels:
      traefik.enable: true
      org.hotio.pullio.update: true
      traefik.http.routers.jan.rule: Host(`jan.my.internal.domain`)
      traefik.http.routers.jan.tls.certresolver: le
      traefik.http.routers.jan.entrypoints: websecure
      traefik.http.routers.jan.tls.domains[0].main: "*.my.internal.domain"
      traefik.http.routers.jan.service: jan-service
      traefik.http.services.jan-service.loadbalancer.server.port: 3000
      traefik.http.middlewares.jan-cors.headers.customresponseheaders.Access-Control-Allow-Origin: "*"

  &name searxng:
    <<: [*ai-common, *secopts, *restart]
    container_name: *name
    hostname: *name

    image: docker.io/searxng/searxng:latest
    networks:
      traefik-servicenet:
        aliases:
          - searxng
          - search
    ports:
      - 8080 
    volumes:
      - ./searxng/config:/etc/searxng:rw
    environment:
      - SEARXNG_BASE_URL=https:
      - SEARXNG_HOSTNAME=searxng.my.internal.domain
      - SEARXNG_DOMAIN=http:
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    labels:
      traefik.enable: true
      org.hotio.pullio.update: true
      traefik.http.routers.searxng.rule: Host(`search.my.internal.domain`) || Host(`searxng.my.internal.domain`)
      traefik.http.routers.searxng.tls.certresolver: le
      traefik.http.routers.searxng.entrypoints: websecure
      traefik.http.routers.searxng.tls.domains[0].main: "*.my.internal.domain"
      traefik.http.routers.searxng.service: searxng-service
      traefik.http.services.searxng-service.loadbalancer.server.port: 8080
      traefik.http.routers.searxng.middlewares: authentik

  &name tabby:
    <<: [*ai-common, *restart, *gpu, *secopts]
    container_name: *name
    env_file:
      - env/.ai.env
    hostname: *name
    ipc: host
    build:
      context:
      dockerfile: docker/Dockerfile
      args:
        - DO_PULL=true
        - APT_PROXY=https:
        - APT_PROXY_HTTP=http:
        - APT_PROXY_HTTPS=https:
        - PIP_INDEX_URL=https:
        - PIP_TRUSTED_HOST=pip.my.internal.domain
        - NPM_CONFIG_REGISTRY=https:
        - TORCH_CUDA_ARCH_LIST=8.6
      tags:
        - localhost:5000/tabbyapi:latest
        - tabbyapi:latest
        - tabbyapi
    ports:
      - 5000
    environment:
      - NAME=tabby
      - NVIDIA_VISIBLE_DEVICES=all
    volumes:
      - /mnt/llm/models/exl2:/usr/src/app/models
      - /mnt/llm/models/exl2:/app/models
      - /mnt/llm/models/exl2/loras:/app/loras
      - ./tabbyapi/templates:/app/templates
      - ./tabbyapi/templates:/usr/src/app/templates
      - ./tabbyapi/config.yml:/app/config.yml
      - ./tabbyapi/api_tokens.yml:/app/api_tokens.yml
      - ./tabbyapi/config.yml:/usr/src/app/config.yml
    networks:
      - traefik-servicenet
    labels:
      traefik.enable: true
      org.hotio.pullio.update: true
      traefik.http.routers.tabby.rule: Host(`tabby.my.internal.domain`)
      traefik.http.routers.tabby.tls.certresolver: le
      traefik.http.routers.tabby.entrypoints: websecure
      traefik.http.routers.tabby.tls.domains[0].main: "*.my.internal.domain"
      traefik.http.routers.tabby.service: tabby-service
      traefik.http.services.tabby-service.loadbalancer.server.port: 5000

  &name tabbyloader:
    <<: [*ai-common, *restart, *secopts]
    container_name: *name
    env_file:
      - ./env/.tabby.env
    hostname: *name
    build:
      context: ./tabbyloader
      dockerfile: Dockerfile
      tags:
        - localhost:5000/tabbyloader:latest
    ports:
      - 7860
    volumes:
      - /mnt/llm/tabbyloader/presets:/app/presets
    networks:
      - traefik-servicenet
    labels:
      traefik.enable: true
      org.hotio.pullio.update: true
      traefik.http.routers.tabbyloader.rule: Host(`tabbyloader.my.internal.domain`)
      traefik.http.routers.tabbyloader.tls.certresolver: le
      traefik.http.routers.tabbyloader.entrypoints: websecure
      traefik.http.routers.tabbyloader.tls.domains[0].main: "*.my.internal.domain"
      traefik.http.routers.tabbyloader.service: tabbyloader-service
      traefik.http.services.tabbyloader-service.loadbalancer.server.port: 7860
      traefik.http.routers.tabbyloader.middlewares: authentik

  &name flowise:
    container_name: *name
    hostname: *name
    profiles:
      - *name
    <<: [*ai-common, *secopts, *secopts, *restart]
    image: flowiseai/flowise
    volumes:
      - ${MOUNT_DOCKER_DATA}/flowise/flowise:/root/.flowise
      - ${MOUNT_DOCKER_DATA}/flowise/db:/db
    env_file:
      - ./env/.ai.env
      - ./env/.flowise.env
    networks:
      - traefik-servicenet
    extra_hosts:
      - host.docker.internal:host-gateway
    links:
      - chromadb
      - ollama
      - ollama-concurrent
    labels:
      traefik.enable: true
      org.hotio.pullio.update: true
      traefik.http.routers.flowise.rule: Host(`flowise.my.internal.domain`)
      traefik.http.routers.flowise.tls.certresolver: le
      traefik.http.routers.flowise.entrypoints: websecure
      traefik.http.routers.flowise.tls.domains[0].main: "*.my.internal.domain"
      traefik.http.routers.flowise.service: flowise-service
      traefik.http.services.flowise-service.loadbalancer.server.port: 3000
      traefik.http.routers.flowise.middlewares: authentik

  &name chromadb:
    container_name: *name
    hostname: *name
    profiles:
      - flowise
    <<: [*ai-common, *gpu, *secopts, *restart, *autoupdate]
    image: ghcr.io/chroma-core/chroma:latest
    environment:
      - IS_PERSISTENT=TRUE
    volumes:
      - /mnt/llm/chromadb:/chroma/chroma/
    ports:
      - 8000
    networks:
      - traefik-servicenet
    extra_hosts:
      - host.docker.internal:host-gateway
    labels:
      traefik.enable: true
      org.hotio.pullio.update: true
      traefik.http.routers.chromadb.rule: "Host(`chromadb.my.internal.domain`)"
      traefik.http.routers.chromadb.tls.certresolver: le
      traefik.http.routers.chromadb.entrypoints: websecure
      traefik.http.routers.chromadb.tls.domains[0].main: "*.my.internal.domain"
      traefik.http.services.chromadb-service.loadbalancer.server.port: 8000

  &name omniparse:
    container_name: *name
    hostname: *name
    profiles:
      - omniparse
    <<: [*ai-common, *gpu, *secopts, *restart, *autoupdate]
    image: savatar101/omniparse:0.1
    ports:
      - 8000
    networks:
      - traefik-servicenet
    extra_hosts:
      - host.docker.internal:host-gateway
    labels:
      traefik.enable: true
      org.hotio.pullio.update: true
      traefik.http.routers.omniparse.rule: "Host(`omniparse.my.internal.domain`)"
      traefik.http.routers.omniparse.tls.certresolver: le
      traefik.http.routers.omniparse.entrypoints: websecure
      traefik.http.routers.omniparse.tls.domains[0].main: "*.my.internal.domain"
      traefik.http.services.omniparse-service.loadbalancer.server.port: 8000
      traefik.http.routers.omniparse.middlewares: authentik

  &name plandex-db:
    <<: [*ai-common, *secopts, *restart]
    profiles:
      - plandex
    image: postgres:latest
    container_name: *name
    hostname: *name
    env_file:
      - ./env/.plandex.env
    ports:
      - 5432
    volumes:
      - /mnt/llm/plandex/git/scripts/init-db:/docker-entrypoint-initdb.d:ro
      - /mnt/llm/plandex/db:/var/lib/postgresql/data
    networks:
      - traefik-servicenet
    labels:
      traefik.enable: false
      org.hotio.pullio.update: true

  &name plandex:
    <<: [*ai-common, *secopts, *restart, *autoupdate, *gpu]
    profiles:
      - *name
    container_name: *name
    hostname: *name
    build:
      context: ./plandex
      dockerfile: Dockerfile
    volumes:
      - /mnt/llm/plandex/server:/plandex-server
      - /mnt/llm/plandex/server-root:/root
    depends_on:
      - plandex-db
    links:
      - plandex-db
    env_file:
      - ./env/.ai.env
      - ./env/.plandex.env
    networks:
      - traefik-servicenet
    command: ["/bin/sh", "-c", "/scripts/wait-for-it.sh plandex-db:5432 -- ./plandex-server"]
    labels:
      traefik.enable: true
      org.hotio.pullio.update: true
      traefik.http.routers.plandex.rule: Host(`plandex.my.internal.domain`)
      traefik.http.routers.plandex.tls.certresolver: le
      traefik.http.routers.plandex.entrypoints: websecure
      traefik.http.routers.plandex.tls.domains[0].main: "*.my.internal.domain"
      traefik.http.routers.plandex.service: plandex-service
      traefik.http.services.plandex-service.loadbalancer.server.port: 8080

  &name mistralrs:
    <<: [*ai-common, *restart, *secopts, *gpu]
    build:
      context: https:
      dockerfile: Dockerfile.cuda-all
      args:
        - FEATURES=cuda,flash-attn,cudnn
        - CUDA_COMPUTE_CAP=86
        - THREADS=6
        - RUSTFLAGS="-Z threads=6"
    container_name: *name
    hostname: *name
    profiles:
      - *name
    ports:
      - 80
    volumes:
      - /mnt/llm/mistralrs/data:/data
      - /mnt/llm/models:/models
    command: gguf -m . -f /models/DeepSeek-Coder-V2-Instruct.IQ2_XXS.gguf
    environment:
      HUGGING_FACE_HUB_TOKEN: ${HUGGINGFACE_TOKEN}
      KEEP_ALIVE_INTERVAL: 100
    networks:
      - traefik-servicenet
    labels:
      traefik.enable: true
      org.hotio.pullio.update: true
      traefik.http.routers.mistralrs.rule: Host(`mistralrs.my.internal.domain`)
      traefik.http.routers.mistralrs.tls.certresolver: le
      traefik.http.routers.mistralrs.entrypoints: websecure
      traefik.http.routers.mistralrs.tls.domains[0].main: "*.my.internal.domain"
      traefik.http.routers.mistralrs.service: mistralrs-service
      traefik.http.services.mistralrs-service.loadbalancer.server.port: 80
need4swede commented 1 month ago

So the way it currently looks for stuff is roughly something like this:

Find the ‘services’ section. Use the ‘image’ as the descriptor for the port. Use ‘ports’ as the port.

For the ports, it relies on the mapping resembling something like this:

ports:
  - "8080:80/tcp"

The /protocol is optional, but it looks for host:container mappings to understand what port Portall should register to your host. So it's very possible that Portall can't find anything if the port mapping isn't in this format, or if images aren't included.
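
In simplified pseudocode terms (not the actual implementation), the lookup is roughly:

def extract_ports(compose):
    # compose: the parsed docker-compose data as a dict
    for name, service in (compose.get("services") or {}).items():
        image = service.get("image")
        if not image:
            continue                  # no image, nothing to use as the descriptor
        for entry in service.get("ports", []):
            spec = str(entry).split("/", 1)      # split off an optional /protocol
            proto = spec[1] if len(spec) > 1 else "tcp"
            parts = spec[0].split(":")
            if len(parts) < 2:
                continue              # no host mapping, nothing to register
            yield {"description": image, "port": int(parts[-2]), "proto": proto}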

This is the most conventional way of handling docker-compose files, if I'm not mistaken, but it's perhaps less useful for files that define many services at once. The next major release will include Portainer support and wider Docker support, so I'll try to catch more of these edge cases in future releases. For now, I think sticking to the more conventional docker-compose setup is enough to get by.

Thanks again for sharing more of your setup. I’ll try my best to fit these into the logic for the updated import tool. Cheers!