ahembree / ansible-hms-docker

Ansible playbook for automated home media server setup
GNU General Public License v3.0
389 stars 45 forks

403 when accessing containers #68

Open ahembree opened 3 weeks ago

ahembree commented 3 weeks ago
So it finally went through. However, when trying to access plex.domain it states Forbidden. The overseerr.domain does resolve, but because plex.domain is forbidden there is no way of continuing setup.

Aren't the host/subdomains defined automatically from the config? I saw under my Cloudflare account that the overseerr record was created, then made a CNAME record for plex pointing to overseerr.domain. Ports are forwarded properly in the router, but even locally I cannot access the Plex server.

All other subdomains, e.g. uptime-kuma, radarr, sonarr, lead to a Forbidden page; nothing is resolving. Some guidance on what could be causing this would be helpful.

Originally posted by @ryu777mtg in https://github.com/ahembree/ansible-hms-docker/issues/63#issuecomment-2106035170

ahembree commented 3 weeks ago

Aren't the host/subdomains defined automatically from the config?

Yes, this is configured per-container, which can be controlled in the container_map.yml file (if using the "advanced" config). This only impacts how Traefik routes requests to the correct container though.

Ports are forwarded properly in router but even locally cannot access the plex server.

Port-forwarding really only matters for exposing services to the public internet, you should still be able to access it internally by going to <internal host IP>:32400/web since the plex container will always expose port 32400. If you're familiar with running port scans on an internal network, you can verify if the plex port is open by running nmap -p 32400 <host IP> -Pn (assuming you have nmap installed or install it). I'd also recommend ensuring the plex container is actually running by checking with docker logs -f plex. If you're accessing Plex from a remote location, you might be able to continue setup by going to <public IP>:32400/web.
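For reference, a rough sketch of those checks (10.0.0.200 below is just a placeholder for the host's internal IP):

# from any machine on the LAN: is the Plex port answering?
nmap -p 32400 10.0.0.200 -Pn

# on the host itself: is the plex container up, and what do its logs say?
docker ps --filter name=plex
docker logs -f plex

# quick sanity check of the Plex web UI without a browser
curl -I http://10.0.0.200:32400/web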

All other subdomains, e.g. uptime-kuma, radarr, sonarr, lead to a Forbidden page; nothing is resolving. Some guidance on what could be causing this would be helpful.

This is semi-conflicting info, since if the hostnames were not resolving then you shouldn't be seeing a Forbidden page; you'd probably be getting an ERR_NAME_NOT_RESOLVED error in the browser.

However, since you're getting the Forbidden page I'm going to assume DNS is working correctly. This may be due to the Traefik allow-list rule. By default, the Traefik config will allow all RFC-1918 (internal) address space. If you're getting a Forbidden page, you may be trying to access the container(s) through a non-internal address.

On the host running the containers, you can verify Traefik is working correctly by running curl localhost -L -k -H "Host: sonarr.domain". You should receive the HTML output of the target container (in this case, sonarr.domain).

ryu777mtg commented 3 weeks ago

All containers are running and I can validate that <internal host IP>:32400/web is working correctly (I forgot about the /web). Also, when looking at the Plex server settings after setting it up, it states it's not available outside my network either.

When running the Traefik command curl localhost -L -k -H "Host: sonarr.domain" (of course replacing domain with the correct domain), it reports Forbidden.

I guess I'm trying to figure out how to make the containers browsable by the right names using the TLD that was specified for the hms_docker_domain variable in main_custom.yml.

The only resolvable one is overseerr.domain

I'm aware the port forwarding is to allow for external use; I wanted to use my phone to watch on the go, but it's unreachable.

If you're getting a Forbidden page, you may be trying to access the container(s) through a non-internal address.

How do I correct this, if I may ask? The Plex server and this script are installed on a headless server on the same LAN subnet as the machine I SSH into to run the scripts.

ahembree commented 3 weeks ago

Also, when looking at the Plex server settings after setting it up, it states it's not available outside my network either

I'd recommend using a port check tool (such as https://portchecker.co/check-it) to verify that the plex port you have specified is open to the public. If it reports as open, then it should be fine and plex may fix itself eventually.

Looks like I messed up the curl command, let me know if this works instead: curl --resolve sonarr.domain:443:127.0.0.1 https://sonarr.domain/ --header "Host: sonarr.domain" -k -L
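If the HTML output is hard to read, a variation that just prints the HTTP status code may be easier to interpret (403 means the allow-list blocked the request, 200 means Traefik routed it):

curl --resolve sonarr.domain:443:127.0.0.1 https://sonarr.domain/ -H "Host: sonarr.domain" -k -L -s -o /dev/null -w "%{http_code}\n"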

I guess I'm trying to figure out how to make the containers browsable by the right names using the TLD that was specified for the hms_docker_domain variable in main_custom.yml.

For this the following needs to be in place:

  1. Correct configuration of this project
  2. Deploy this project
  3. Containers running
  4. DNS records pointing to private IP of host
  5. Validate

I now realize that you may have configured your DNS records to point to the public IP that the host uses. It's a bit insecure, but can you try updating one of the records to point to the internal IP address of the host and see if that fixes it?

I think I should also add some clarification around the suggested DNS server(s) to the readme file. For added security, you should be running an internal DNS server that resolves the hostnames to internal IP addresses. You can configure public DNS records (such as within Cloudflare) and point them to internal IPs, but then anyone can query those same records and see which IP space you use internally, and it may cause issues if you try to query those hostnames from a different location. Not a big deal, but something to be aware of.
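As a quick way to see what a record currently resolves to (assuming dig is available; nslookup works too, and sonarr.domain / 10.0.0.200 are placeholders):

# what does your local resolver return?
dig +short sonarr.domain
# what do the public Cloudflare records return?
dig +short sonarr.domain @1.1.1.1
# once updated, you'd expect the host's internal IP, e.g. 10.0.0.200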

How do I correct this, if I may ask? The Plex server and this script are installed on a headless server on the same LAN subnet as the machine I SSH into to run the scripts.

Let's see if pointing the DNS records at the private IP fixes this first, as I suspect it may be due to some NAT stuff if the records are currently pointing to the public IP.

ryu777mtg commented 3 weeks ago

I now realize that you may have configured your DNS records to point to the public IP that the host uses. It's a bit insecure, but can you try updating one of the records to point to the internal IP address of the host and see if that fixes it?

DNS is pointed to the public IP of the network with the Plex server, using an A record generated by the API (set to overseerr) and a second wildcard CNAME record to hopefully resolve the other subdomains. (Also tried CNAMEs for plex, sonarr, radarr, etc.)

I'm not well versed with DNS; how can I even point to a private/internal IP using DNS? Say the internal IP of the Plex server is 10.0.0.200, how would you enter that into a record?

Unfortunately I do not have an internal DNS. Any suggestions?

ryu777mtg commented 3 weeks ago

Using an A record and pointing it to the private IP of the host seems to resolve the issue, though I'd prefer a better way of going about this.

ahembree commented 3 weeks ago

You can modify your local system hosts file (the computer you'll use to access the services, not the host running plex) to be a "local DNS", but this will only work for that specific system. This is a pretty good guide: https://docs.rackspace.com/docs/modify-your-hosts-file
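For example, on Linux/macOS you could append entries like these (10.0.0.200 and the hostnames are placeholders, match them to your hms_docker_domain); on Windows the file is C:\Windows\System32\drivers\etc\hosts:

echo "10.0.0.200 plex.domain sonarr.domain radarr.domain overseerr.domain uptime-kuma.domain" | sudo tee -a /etc/hosts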

Unfortunately that's the only other solution to get it to work without deploying your own internal DNS server, such as AdGuard Home or a Pi-hole; both also block ads, so I recommend deploying one if you can.

ryu777mtg commented 3 weeks ago

Unfortunately that's the only other solution to get it to work without deploying your own internal DNS server, such as AdGuard Home or a Pi-hole; both also block ads, so I recommend deploying one if you can.

Can the DNS be run off the same machine as the Plex host? I see Pi-hole and AdGuard have Docker images, so I was wondering if it could be merged into the docker-compose.yml, etc.?

ahembree commented 3 weeks ago

Yeah, it should be able to, depending on the ports used by the container, but any changes you make to this project's compose file will be overwritten on the next make apply.

You can create another docker-compose.yml file in another directory somewhere with just that container and run it that way
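Roughly something like this (the directory name is arbitrary):

mkdir -p ~/pihole && cd ~/pihole
# put the pihole service definition in ./docker-compose.yml, then:
docker compose up -d      # or `docker-compose up -d` on older installs
docker compose logs -f pihole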

ryu777mtg commented 3 weeks ago

So this is what I have for the Pi-hole docker-compose.yml:

version: "3"

# More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    # For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp" # Only required if you are using Pi-hole as your DHCP server
      - "8008:80/tcp"
    networks:
      - "traefik_net"
    environment:
      TZ: 'America/New_York'
      WEBPASSWORD: 'pihole'
    # Volumes store your data between container upgrades
    volumes:
      - './etc-pihole:/etc/pihole'
      - './etc-dnsmasq.d:/etc/dnsmasq.d'
    #   https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    cap_add:
      - NET_ADMIN # Required if you are using Pi-hole as your DHCP server, else not needed
    restart: unless-stopped

networks:
  "traefik_net":
    driver: bridge
    attachable: true

Pretty much the default; I changed port 80 to 8008 and added the network, hoping it would join the same network if it needed to.

I disabled port 53 on Ubuntu, since Pi-hole needs it for DNS, by editing /etc/systemd/resolved.conf, uncommenting and setting DNSStubListener=no, then running:

sudo systemctl restart systemd-resolved.service
sudo rm /etc/resolv.conf
sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
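To double-check that port 53 is actually free before starting the container, something like this works:

sudo ss -lntup | grep ':53 '
# no output means nothing is listening on port 53 anymore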

I'm sure there is something else that needs to be done, but that gets past the port 80 and 53 conflicts I was having. Then, going to the public IP on port 8008, I get error 403.

ahembree commented 3 weeks ago

Unfortunately I can't really help anymore here, since this is starting to go outside the scope of this repo.