Closed haiwu closed 1 year ago
It seems the Postgres container was missing the following env var on master: https://github.com/asciinema/asciinema-server/blob/55eae3c8d8baa9948b2542810f0a53f090125502/docker-compose.yml#L11
This is fixed now. Try these:

```
docker pull asciinema/asciinema-server
git pull
```

on your master branch checkout to get a compatible docker-compose.yml file (or download it manually from the master branch on GH).

Thanks, but I already manually verified that line in the master branch docker-compose.yml. It still failed the same way. A few days ago when I tried, after manually ensuring this line was present, it worked, but now it is no longer working.
@sickill: Any suggestions? I just tried all the steps from the installation guide, and it is still failing in exactly the same way.
It seems this is due to IPv6 being disabled on the host. Previously I could just comment out the IPv6 line in `docker/nginx/asciinema.conf` and it would work after that; since the recent updates, this workaround no longer helps.
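For context, the IPv6 line in question is presumably an IPv6 `listen` directive. A sketch of the workaround (the exact contents of `docker/nginx/asciinema.conf` may differ; the lines below are an assumption based on a typical nginx server block):

```nginx
server {
    listen 80;
    # listen [::]:80;   # assumption: this is the IPv6 listen line being commented out
    # ... rest of the server block unchanged
}
```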
I'll try to reproduce this with ipv6 disabled when I get a chance.
Hi @haiwu, I'm facing the same problem as you when upgrading my self-hosted asciinema server. Check your postgres container logs:

```
DETAIL: The data directory was initialized by PostgreSQL version xx, which is not compatible with this version xx
```
In my case, the root problem is likely that the data files in the volume of the old PostgreSQL 12 container are incompatible with the latest upstream version (v14). This makes my `postgres` and `phoenix` containers keep restarting.
I was able to fix the issue. TL;DR:
If you follow the official upgrade process, stop after you pull and merge into your branch. Then edit your `docker-compose.yml` and use the old PostgreSQL version that was working (in my case: v12):
```yaml
version: '2'
services:
  postgres:
    image: postgres:12-alpine
    container_name: asciinema_postgres
    ### blah blah blah
```
Start only the postgres container (`docker-compose up -d postgres`) and do a database backup:
```
docker exec -it <DOCKER_CONTAINER_ID> pg_dump postgres -U postgres > asciinemadump.sql
```
Then:

- Edit `docker-compose.yml` again and revert the config to the upstream version (PostgreSQL 14).
- Remove `./volumes/postgres` (or even better: back it up by moving it to another directory).
- Start the postgres container again (`docker-compose up -d postgres`), copy your dumped SQL file into the newly created `./volumes/postgres` directory, and restore the backup:
```
docker exec -it <DOCKER_CONTAINER_ID> psql -d postgres -U postgres -f /var/lib/postgresql/data/asciinemadump.sql
```
```
docker-compose up -d
```
The `phoenix` container should be able to communicate with the `postgres` container now. Wait until the database migration run by the phoenix container is complete, then try to access your instance again.
Dunno if this is the "right" way to fix it; correct me if I'm wrong, @sickill.
@ditatompel this is great, thanks for investigating. Your instructions will for sure help others.
@haiwu have you checked postgres container logs? Any errors there?
If you still believe it's because of IPv6, then please pull the latest `ghcr.io/asciinema/asciinema-server:latest` and add the `ECTO_IPV6=1` env var for the `phoenix` container. This should help.
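For reference, setting that env var in `docker-compose.yml` might look like the sketch below. The service name `phoenix` and the surrounding entries are assumptions based on this thread, not the exact upstream file:

```yaml
services:
  phoenix:
    image: ghcr.io/asciinema/asciinema-server:latest
    environment:
      # assumption: ECTO_IPV6=1 makes the Ecto DB connection use IPv6,
      # per the suggestion above
      - ECTO_IPV6=1
```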
@sickill: It seems you made quite a few updates recently, and the installation is now broken. Have you tested your recent updates against the installation process? It was working a few days ago. I assume this happens because there's no release number and installation is only via `git clone`. Do you know what we need to do to get the installation working again?
It kept failing with the following message for the `asciinema_phoenix` container (shown below). All other containers are OK, with no errors.

ERROR MESSAGES:
```
Running db migrations... 00:05:57.398 [error] Could not create schema migrations table. This error usually happens due to the following:

To fix the first issue, run "mix ecto.create".
To address the second, you can run "mix ecto.drop" followed by "mix ecto.create". Alternatively you may configure Ecto to use another table for managing migrations:

The full error report is shown below.

** (DBConnection.ConnectionError) connection not available and request was dropped from queue after 2986ms. This means requests are coming in and your connection pool cannot serve them fast enough. You can address this by:

See DBConnection.start_link/2 for more information
```