Closed pylebecq closed 3 months ago
We encountered the same/similar issue with a remote schema when trying to bump the Hasura GraphQL Engine (HGE) from v1.3.3 to v2.0.6.

With v1.3.3, HGE connects to the remote schema even when the remote schema starts listening after HGE has started. With v2.0.6, HGE does NOT connect to the remote schema when the remote schema starts listening after HGE has started. It says "Inconsistent Metadata!".
The following is the full error log:

```json
{"type":"metadata","timestamp":"2021-08-13T13:48:46.153+0000","level":"warn","detail":{"message":"Inconsistent Metadata!","info":{"objects":[{"definition":{"definition":{"timeout_seconds":60,"url_from_env":"HASURA_GRAPHQL_REMOTE_SCHEMA_TO_API","forward_client_headers":true},"name":"api","permissions":[],"comment":""},"reason":"Inconsistent object: HTTP exception occurred while sending the request to http://host.docker.internal:3000/graphql","name":"remote_schema api","type":"remote_schema","message":{"message":"ConnectionFailure Network.Socket.connect: <socket: 24>: does not exist (Connection refused)","request":{"proxy":null,"secure":false,"path":"/graphql","responseTimeout":"ResponseTimeoutMicro 60000000","method":"POST","host":"host.docker.internal","requestVersion":"HTTP/1.1","redirectCount":"10","port":"3000"}}}]}}}
```
After `hasura scripts update-project-v3`, `hasura console` is not available. `hasura metadata reload` (or the equivalent API call) can solve the issue.
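For reference, the equivalent API call is a POST to the metadata endpoint. A minimal sketch, assuming HGE is reachable at `localhost:8080` and an admin secret is set (adjust host, port, and secret to your setup):

```sh
# Reload metadata, including remote schemas, via the Hasura metadata API.
curl -s -X POST http://localhost:8080/v1/metadata \
  -H "Content-Type: application/json" \
  -H "X-Hasura-Admin-Secret: $HASURA_GRAPHQL_ADMIN_SECRET" \
  -d '{"type": "reload_metadata", "args": {"reload_remote_schemas": true}}'
```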
The way I've handled this is via service health checks in Docker Compose. This way, the `db` and `server` services must be healthy before Hasura can start. Example compose file:
```yaml
services:
  db:
    image: postgres:13.2-alpine
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    ports:
      - ${POSTGRES_PORT_HOST}:${POSTGRES_PORT_CONTAINER}
    volumes:
      - db:/var/lib/postgres/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - app
  server:
    build: ./server
    environment:
      PORT: 4000
      HASURA_GRAPHQL_URL: ${HASURA_GRAPHQL_URL}
      HASURA_GRAPHQL_ADMIN_SECRET: ${HASURA_GRAPHQL_ADMIN_SECRET}
    volumes:
      - ./server:/usr/src/app
    ports:
      - ${SERVER_PORT_HOST}:${SERVER_PORT_CONTAINER}
    healthcheck:
      test: ["CMD-SHELL", "netstat -tulnp | grep 4000"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - app
  graphql-engine:
    image: hasura/graphql-engine:v2.0.8.cli-migrations-v3
    ports:
      - ${HASURA_PORT_HOST}:${HASURA_PORT_CONTAINER}
    depends_on:
      db:
        condition: service_healthy
      server:
        condition: service_healthy
    restart: always
    environment:
      HASURA_GRAPHQL_LOG_LEVEL: warn
      HASURA_GRAPHQL_DATABASE_URL: ${HASURA_GRAPHQL_DATABASE_URL}
      HASURA_GRAPHQL_UNAUTHORIZED_ROLE: ${HASURA_GRAPHQL_UNAUTHORIZED_ROLE}
      HASURA_GRAPHQL_ENABLE_REMOTE_SCHEMA_PERMISSIONS: "true"
      HASURA_GRAPHQL_ADMIN_SECRET: ${HASURA_GRAPHQL_ADMIN_SECRET}
      HASURA_GRAPHQL_JWT_SECRET: ${HASURA_GRAPHQL_JWT_SECRET}
    networks:
      - app

networks:
  app:
    driver: bridge

volumes:
  db:
    external: true
```
@AesSedai Thank you for sharing your solution. I used https://github.com/roerohan/wait-for-it to achieve a similar result, but I had to change the entrypoints. I will probably try to use the health checks instead, I find it cleaner.
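For the wait-for-it approach, the entrypoint override looks roughly like this. This is a sketch, not the exact setup from the thread: it assumes the script is available in the Hasura image at `/wait-for-it.sh` and that the remote schema listens on `server:4000`; the flags follow the common wait-for-it.sh interface and may differ for the linked Go tool.

```yaml
# Hypothetical compose override: wrap the entrypoint so Hasura only starts
# once the remote schema's port accepts connections.
graphql-engine:
  image: hasura/graphql-engine:v2.0.8.cli-migrations-v3
  entrypoint: ["/wait-for-it.sh", "server:4000", "--timeout=60", "--"]
  command: ["graphql-engine", "serve"]
```

The downside, as noted above, is having to override the image's default entrypoint, which the health-check approach avoids.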
Closing this, as the health check solution proposed by @AesSedai works for my use-case. Thanks!
Hello,
I'm currently working on migrating a project to Hasura 2.0, and I noticed some differences between 2.0 in "backward compatible mode" and 2.0 fully migrated to the multiple-database setup. I could not find any documentation about this difference.
So basically, we have a project with a remote schema (which I will call the API), which is a Node.js GraphQL API (written in TypeScript). In development, we use a tool that runs a Procfile and makes sure everything we need is running. The Procfile runs two processes at the same time:

- `docker`: runs `docker-compose up` to start the postgres and hasura containers
- `api`: runs `yarn start:dev` to build and run the API

The thing is, the API needs Postgres to run, and Hasura needs both Postgres and the API to run, because the API is used as a remote schema. But the API takes some time to build and be up and running.
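The Procfile described above would look roughly like this (a sketch reconstructed from the description; the process names and commands are the ones mentioned in this issue):

```
docker: docker-compose up
api: yarn start:dev
```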
When running the Hasura container with tag `v2.0.3.cli-migrations-v2` (i.e. using the v1.3.3 directory structure), the following error happens in Hasura when everything starts. That's okay, because the container is restarted again and again, and at some point the API is up and running and the Hasura container runs fine.
After upgrading to the new directory structure using `hasura scripts update-project-v3` and starting Hasura again with tag `v2.0.3.cli-migrations-v3`, the following error happens. We can see that the container no longer exits when encountering this error. And the worst part is that even when the API is finally up and running, Hasura stays stuck on this error forever. I have to manually restart the Hasura container after the API is available, and then it runs fine.