ahembree / ansible-hms-docker

Ansible playbook for automated home media server setup
GNU General Public License v3.0
402 stars 51 forks

hmsdocker : Get public IP from Transmission VPN container. / /root/.docker/config.json: is a directory #25

Closed ghost closed 1 year ago

ghost commented 1 year ago

Hi, so I've edited my config to use Cloudflare, and after re-running the playbook I get this error:

TASK [hmsdocker : Get public IP from Transmission VPN container.] **************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": ["docker", "exec", "transmission", "curl", "-s", "icanhazip.com"], "delta": "0:00:12.489838", "end": "2023-06-29 21:02:35.376014", "msg": "non-zero return code", "rc": 52, "start": "2023-06-29 21:02:22.886176", "stderr": "WARNING: Error loading config file: /root/.docker/config.json: read /root/.docker/config.json: is a directory", "stderr_lines": ["WARNING: Error loading config file: /root/.docker/config.json: read /root/.docker/config.json: is a directory"], "stdout": "", "stdout_lines": []}

I never had this problem before, even though I tweak the config pretty often. Reverting my changes didn't help. I hope you can help me, thanks!

ahembree commented 1 year ago

Thanks for the bug report and included error message!

Based on a quick glance while on mobile, this appears to be an issue with the Docker installation/configuration.

The error says that /root/.docker/config.json is a directory rather than a file.

I don't believe this repo touches that config file, but I'll investigate while at a computer shortly to confirm.

If you're running other containers outside of this project on the same host, try checking those containers' volume mounts to see if any of them reference that config path. I believe Docker creates a missing host path as a directory when bind-mounting it, so that may be where the issue is.

ahembree commented 1 year ago

I've confirmed that this repo does not touch the /root/.docker/config.json file by running a search with find . -type f -exec grep config.json {} \; against my local copy of the repo and by using the GitHub code search: https://github.com/search?q=repo%3Aahembree%2Fansible-hms-docker%20config.json&type=code

Based on this, I believe this /root/.docker/config.json was created outside of this repo.

If you're unsure about what to do, I would do the following to troubleshoot:

  1. Stop all containers
  2. Stop the Docker service (sudo systemctl stop docker)
  3. Create a backup of the current file/directory by running mv /root/.docker/config.json /root/.docker/config.json.bak (this assumes you are currently the root user; if not, prefix the commands with sudo)
  4. Create a new, empty file by running touch /root/.docker/config.json
  5. Start the Docker service again (sudo systemctl start docker) and see if this config file is modified
  6. If not, that narrows it down to being a container that is modifying/creating it.
  7. Start the containers one by one, checking to see if the config file has been updated.

Hopefully this helps. Let me know if you need any more assistance, if you encounter any other issues, or if you're able to resolve it!

ghost commented 1 year ago

Hi, thanks for the quick response. Now I get this error:

TASK [hmsdocker : Obtain public IP.] **********************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: socket.timeout: The read operation timed out
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1688125365.1823823-205883-209492233730181/AnsiballZ_ipify_facts.py\", line 107, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1688125365.1823823-205883-209492233730181/AnsiballZ_ipify_facts.py\", line 99, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1688125365.1823823-205883-209492233730181/AnsiballZ_ipify_facts.py\", line 47, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.community.general.plugins.modules.net_tools.ipify_facts', init_globals=dict(_module_fqn='ansible_collections.community.general.plugins.modules.net_tools.ipify_facts', _modlib_path=modlib_path),\n  File \"/usr/lib/python3.8/runpy.py\", line 207, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib/python3.8/runpy.py\", line 97, in _run_module_code\n    _run_code(code, mod_globals, init_globals,\n  File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_ipify_facts_payload_obrbu1we/ansible_ipify_facts_payload.zip/ansible_collections/community/general/plugins/modules/net_tools/ipify_facts.py\", line 105, in <module>\n  File \"/tmp/ansible_ipify_facts_payload_obrbu1we/ansible_ipify_facts_payload.zip/ansible_collections/community/general/plugins/modules/net_tools/ipify_facts.py\", line 99, in main\n  File \"/tmp/ansible_ipify_facts_payload_obrbu1we/ansible_ipify_facts_payload.zip/ansible_collections/community/general/plugins/modules/net_tools/ipify_facts.py\", line 83, in run\n  File \"/usr/lib/python3.8/http/client.py\", line 472, in read\n    s = self._safe_read(self.length)\n  File \"/usr/lib/python3.8/http/client.py\", line 613, in _safe_read\n    data = self.fp.read(amt)\n  File \"/usr/lib/python3.8/socket.py\", line 669, in 
readinto\n    return self._sock.recv_into(b)\n  File \"/usr/lib/python3.8/ssl.py\", line 1241, in recv_into\n    return self.read(nbytes, buffer)\n  File \"/usr/lib/python3.8/ssl.py\", line 1099, in read\n    return self._sslobj.read(len, buffer)\nsocket.timeout: The read operation timed out\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

Kinda scary..

ghost commented 1 year ago

So after a second attempt it worked! I don't know if you can help me with this, but I have another problem: when using SSL I get the error "ERR_TOO_MANY_REDIRECTS", and when I disable SSL I can only access Overseerr. If there's an easy way to get SSL working and also be able to access everything from outside my network, that would be really appreciated. Also, I want to apologize for my poor English; I hope you'll understand this message. Wish you a nice day!

ahembree commented 1 year ago

The other error you got about the socket timeout is, I believe, due to the ipify Ansible module attempting to get your public IP: the server it queries didn't respond within 10 seconds, so the request timed out. I had this issue previously too. There's nothing you can do on your end, since it's an issue with the backend servers ipify uses, so just keep retrying and it will eventually work.
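For reference, Ansible can automate that retrying with a `retries`/`until` loop. This is a hypothetical sketch, not how the task is necessarily written in this repo; the task name and registered variable are illustrative:

```yaml
# Hypothetical retry wrapper around the ipify lookup.
- name: Obtain public IP (with retries)
  community.general.ipify_facts:
    timeout: 20            # give the backend more than the default 10s
  register: ipify_result
  retries: 5               # re-attempt the lookup up to 5 times
  delay: 10                # wait 10 seconds between attempts
  until: ipify_result is succeeded
```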

For the SSL redirect issue, there may be a "loop" somewhere in the configuration; I'd need to see the Traefik debug logs and trace logs to find where that loop is. When SSL is disabled, do you get an error when attempting to access the other containers besides Overseerr?

I would personally recommend against opening all services (other than overseerr and Plex) directly to the internet due to their lack of authentication by default. If you really want to do it, I'd recommend using the Authentik container to handle authentication at the proxy level, but that setup is fairly involved. You could also use the Tailscale container to have a VPN back into the network to access these if you're not home without exposing them to the internet.

ghost commented 1 year ago

No error, just the fact that access is forbidden. If you can guide me on how to get those Traefik logs, that would be appreciated. Maybe there is a way to only expose Radarr, Sonarr, and Overseerr? By the way, I can't access my containers on my LAN since I activated Cloudflare.

ahembree commented 1 year ago

You can increase the log level by setting the traefik_log_level variable to DEBUG; this will print more detailed output from the Traefik service.

To enable the access logs, set the traefik_enable_access_logs variable to yes.
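Putting both settings together, the override might look like this in your vars file (the exact file and location depend on your setup; the variable names are the ones mentioned above):

```yaml
# Example Traefik logging overrides.
traefik_log_level: DEBUG          # more verbose service logs
traefik_enable_access_logs: yes   # log each request hitting the proxy
```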

ahembree commented 1 year ago

Closing due to inactivity.