friendica / docker

Docker image for Friendica
https://friendi.ca
GNU Affero General Public License v3.0

localhost is not working for db host #175

Closed · ne20002 closed this issue 2 years ago

ne20002 commented 2 years ago

Bug Description

I'm setting up a Friendica instance for private use with Podman on Debian Bullseye. Using 'localhost' as MYSQL_HOST is not working; using '127.0.0.1' works.

Steps to Reproduce

  1. Create a new Friendica instance.
  2. Use 'localhost' as MYSQL_HOST.
  3. The friendica fpm container can't connect to the database host.

The same applies if I don't set MYSQL_HOST in the container setup. In that case the installation page comes up with an empty database field. Entering 'localhost' gives 'can't connect to database'; entering 127.0.0.1 works.

Actual Result:

'localhost' does not work as the db host; '127.0.0.1' works.

Expected Result:

'localhost' should work.
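A plausible explanation for this behaviour (an editor's note, not stated in the report): MariaDB/MySQL client libraries special-case the literal host name `localhost` and connect over a Unix socket instead of TCP. That socket only exists in the mariadb container, so the fpm container fails; `127.0.0.1` forces a TCP connection, which works because all containers in a Podman pod share one network namespace. Name resolution itself is not the problem, as a quick check shows:

```shell
# 'localhost' resolves normally inside most containers; the failure is not DNS.
getent hosts localhost
# MariaDB clients nevertheless treat the literal string "localhost" as
# "connect via Unix socket", so no TCP connection is attempted at all.
```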


Platform Info

Debian Bullseye, Podman.

Friendica Version: 2021.09 docker image

Installation steps for Podman used:

```shell
$ podman pod create --name friendica --hostname friendica -p 8012:80

$ podman run -d --name friendica-mariadb \
    --pod friendica \
    -e PUID=1001 \
    -e PGID=1001 \
    --mount type=bind,src=/mnt/pods/friendica/db,dst=/var/lib/mysql \
    --restart=unless-stopped \
    --env MYSQL_HOST=localhost \
    --env MYSQL_PORT=3306 \
    --env MYSQL_DATABASE=friendica \
    --env MYSQL_USER=friendica \
    --env MYSQL_PASSWORD=xxxxxxx \
    --env MYSQL_RANDOM_ROOT_PASSWORD=yes \
    mariadb:latest

$ podman run -d --name friendica-fpm \
    --pod friendica \
    --env TZ=Europe/Berlin \
    -e PUID=1001 \
    -e PGID=1001 \
    --mount type=bind,src=/mnt/pods/friendica/html,dst=/var/www/html \
    --restart=unless-stopped \
    --env MYSQL_USER=friendica \
    --env MYSQL_PASSWORD=xxxxxxx \
    --env MYSQL_DATABASE=friendica \
    --env FRIENDICA_ADMIN_MAIL=admin@at.home \
    --env FRIENDICA_SITENAME=friendica.at.home \
    --env FRIENDICA_TZ='Europe/Zurich' \
    --env FRIENDICA_URL='https://friendica.at.home' \
    friendica:fpm

$ podman run -d --name friendica-web \
    --pod friendica \
    -e PUID=1001 \
    -e PGID=1001 \
    --restart=unless-stopped \
    --mount type=bind,src=/mnt/pods/friendica/html,dst=/var/www/html \
    --mount type=bind,src=/mnt/pods/friendica/nginx/nginx.conf,dst=/etc/nginx/nginx.conf,ro=true \
    nginx:latest
```

BrokenGabe commented 2 years ago

You would have to edit your /etc/hosts file and add the following line to it for this to work: `127.0.0.1 localhost`

```shell
echo '127.0.0.1 localhost' | sudo tee -a /etc/hosts
```

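The one-liner above appends unconditionally, so repeated runs duplicate the entry. A hedged sketch of an idempotent variant, demonstrated on a temp file rather than the real /etc/hosts:

```shell
HOSTS_FILE=$(mktemp)                      # stand-in for /etc/hosts
ENTRY='127.0.0.1 localhost'
# Append only when the exact line is not already present:
grep -qxF "$ENTRY" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"
grep -qxF "$ENTRY" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"  # second run is a no-op
cat "$HOSTS_FILE"   # the entry appears exactly once
```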
nupplaphil commented 2 years ago

@BrokenGabe - I transferred it to the docker repo, because it's a Docker topic, not an issue in the codebase itself :-)

nupplaphil commented 2 years ago

Networking inside containers is really a pain. Why do you want to run everything on the same node? This doesn't sound right to me.

If you use

```shell
$ podman run -d --name friendica-mariadb \
    --pod friendica-db \
    ...
```

you can set MYSQL_HOST to `friendica-mariadb.dns.podman` and this would work out of the box :-)
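A minimal sketch of that layout, assuming the database gets its own pod and Podman's container DNS (the dnsname plugin or aardvark-dns) is active on the network; credentials are illustrative, taken from the reporter's setup:

```shell
# Database in its own pod, reachable from other pods via Podman's DNS name
podman pod create --name friendica-db
podman run -d --name friendica-mariadb \
    --pod friendica-db \
    --env MYSQL_DATABASE=friendica \
    --env MYSQL_USER=friendica \
    --env MYSQL_PASSWORD=xxxxxxx \
    --env MYSQL_RANDOM_ROOT_PASSWORD=yes \
    mariadb:latest
# The application container then points at the database with:
#   --env MYSQL_HOST=friendica-mariadb.dns.podman
```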

ne20002 commented 2 years ago

Networking in containers is a pain. ;) That's why I use Podman and pods. I've set up a number of pods, where each pod includes all necessary containers. It works for Nextcloud, Synapse + Admin, and Tvheadend. Using pods I get a dedicated network per pod where all services are accessible on localhost; to me that looks like the easiest way to get rid of the networking pain. I also run each pod under its own limited user, and keep a single virtual disk per pod, so I can move and back up pods in one piece. Once a pod is set up the way I want it, I use podman kube to generate a kube file, which I then use for starting the pod with systemd and also for upgrades.
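The kube-file workflow described above can be sketched roughly like this (file and pod names are illustrative; this assumes a Podman version that ships `podman generate kube` and `podman play kube`):

```shell
# Snapshot a configured pod into a Kubernetes-style YAML file
podman generate kube friendica > friendica.yaml

# Recreate the pod later (e.g. after an image upgrade) from that file
podman play kube friendica.yaml
```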

nupplaphil commented 2 years ago

If you can wait until https://github.com/containers/podman/issues/12003 is fixed, this config would work as well:

```shell
$ podman run -d --name friendica-fpm \
    --pod friendica \
    [...]
    --env MYSQL_HOST=friendica-mariadb \
    [...]
```

because "normally" the hosts file contains an entry for each container in the pod.

ne20002 commented 2 years ago

It works the way recommended by nupplaphil.

nupplaphil commented 2 years ago

@ne20002 the root cause seems to be fixed now, so you could try it again without my workaround, jfyi :)