Open bgiegel opened 1 year ago

Ask a question

Hello!

I don't know if it's a bug, so I'm just asking a question for now.

We are running 2 Cardano nodes in 2 different OpenShift clusters. One of them is currently unavailable because the graphql server is returning this error upon initialisation:

I tried to restart it, and even wiped the volume and restarted from the latest snapshot, but it failed with the same error.

I don't want to update to the latest version yet, for technical reasons internal to the company I'm working for. Does one of you have potential leads on where this is coming from? We didn't change anything in the config recently... 522 seems to suggest that the node or something in front of it is not answering in a decent amount of time, but I don't know how I can debug that.

Here are the versions that we are using:
Actually, by looking at the graphql code, I can see that it tries to call the metadata server, which takes a huge amount of time to answer:
time curl https://tokens.cardano.org/metadata/healthcheck
real 0m31.178s
user 0m0.016s
sys 0m0.013s
Is there another public URL that I can call for the metadata?
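(Side note for anyone reproducing this: the probe below is plain curl, nothing specific to cardano-graphql. --max-time caps the wait client-side and -w prints the status code, so a Cloudflare 522 is distinguishable from a connection that simply hangs.)

curl -sS -o /dev/null --max-time 10 \
  -w 'status: %{http_code}  total: %{time_total}s\n' \
  https://tokens.cardano.org/metadata/healthcheck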
I suppose my other node will probably fail the same way on the next restart...
I'll try to see if there is a config option to increase the timeout.
OK, so it looks like there is no timeout configured on the graphql side, so this is coming from the server or something in between... weird that I don't get the same error when querying with curl.
Any ideas? Anything I'm missing?
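(One thing worth ruling out here: curl run from outside the cluster may take a different network path than the pod itself. Something like the following runs the same probe from inside the failing container, assuming curl is available in the image; the pod name is a placeholder.)

oc exec -it <cardano-graphql-pod> -- \
  curl -sS -o /dev/null --max-time 10 \
  -w 'status: %{http_code}  total: %{time_total}s\n' \
  https://tokens.cardano.org/metadata/healthcheck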
This is what is configured in my config for cardano-graphql:
CARDANO_NODE_CONFIG_PATH: "/config/cardano-node/config.json"
HASURA_CLI_PATH: "/usr/local/bin/hasura"
HASURA_GRAPHQL_ENABLE_TELEMETRY: "false"
HASURA_URI: "http://127.0.0.1:8181"
METADATA_SERVER_URI: "https://tokens.cardano.org"
NETWORK: "mainnet"
OGMIOS_HOST: "127.0.0.1"
POSTGRES_DB: "cardano"
POSTGRES_HOST: "127.0.0.1"
POSTGRES_PASSWORD_FILE: "/configs/postgresql.pwd"
POSTGRES_PORT: "5432"
POSTGRES_USER: "cardano"
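(For completeness, the local dependencies in this config can be sanity-checked independently: /healthz is Hasura's standard health endpoint, pg_isready ships with the postgres client tools, and Ogmios exposes a /health endpoint. The Ogmios port below assumes the default 1337, since the config above doesn't set one.)

curl -s -o /dev/null -w 'hasura: %{http_code}\n' http://127.0.0.1:8181/healthz
pg_isready -h 127.0.0.1 -p 5432 -U cardano -d cardano
curl -s http://127.0.0.1:1337/health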
OK, so it seems that it's due to Cloudflare. From the other region (Zurich), querying the above endpoint is fast (2 s), but from Geneva it's not... Is there any other public service for the metadata that we can use?
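(A side note on pinning this down: curl's standard write-out variables can break the total time down by phase, which shows from each region whether the stall is DNS, the TCP/TLS handshake to Cloudflare, or the origin behind it.)

curl -sS -o /dev/null https://tokens.cardano.org/metadata/healthcheck \
  -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  tls: %{time_appconnect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n'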
Unfortunately not. Depending on this centralised service is unfortunate, although at the time of the original implementation it was required. Have you been able to start the service?