Closed: jk979 closed this issue 3 years ago.
I'm seeing the same error within a Dockerized Budibase instance. When trying to create the admin user, the POST call to /api/global/users/init fails with 404 Not Found.
Digging through the request log, I noticed this call: /api/global/configs/checklist?tenantId=default, which returns a 404 with the body {"message":"Database does not exist.","status":404}.
Maybe the init script didn't run completely and for some reason the database wasn't set up correctly?
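As an aside (my reading, not stated in the thread): the wording of that error body reads like CouchDB's own missing-database message, which suggests the checklist request did reach a backend rather than dying at the proxy. A small check of the copied body:

```python
import json

# Body copied verbatim from the failing checklist call above.
body = '{"message":"Database does not exist.","status":404}'
err = json.loads(body)

# "Database does not exist." matches the wording CouchDB uses for a
# missing database, so this 404 likely comes from the data layer, not
# from a missing HTTP route.
print(err["status"], err["message"])
```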
@tboschek is this a brand new Budibase installation?
@jk979 I just spun up a new DigitalOcean droplet and it worked as expected on <droplet-url>:10000, using the console to start the builder.
What do you mean? You should not have to use anything in the console to start the builder once you have spun up your one-click droplet.
Hi
yes, I just spun it up via docker-compose. I've also tried deleting the entire installation, including all volumes, and re-creating it. The server is an Ubuntu 20 LTS.
Interestingly enough, I see this in the container logs when trying to create the admin user:
bbapps | INFO [1630570004402] (28 on 8beb26cdefee): request completed
bbapps | res: {
bbapps | "statusCode": 404,
bbapps | "headers": {
bbapps | "content-type": "text/plain; charset=utf-8",
bbapps | "content-length": "9"
bbapps | }
bbapps | }
bbapps | responseTime: 1
bbapps | req: {
bbapps | "id": 13,
bbapps | "method": "POST",
bbapps | "url": "/api/global/users/init",
bbapps | "headers": {
bbapps | "host": "****:10000",
bbapps | "content-length": "75",
bbapps | "pragma": "no-cache",
bbapps | "cache-control": "no-cache",
bbapps | "x-budibase-app-id": "",
bbapps | "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36",
bbapps | "content-type": "application/json",
bbapps | "accept": "*/*",
bbapps | "origin": "http://****.local:10000",
bbapps | "referer": "http://****.local:10000/builder/admin",
bbapps | "accept-encoding": "gzip, deflate",
bbapps | "accept-language": "de,en-US;q=0.9,en;q=0.8",
bbapps | "cookie": "ph_Oe***%22%24initial_referrer%22%3A%20%22%24direct%22%2C%22%24initial_referring_domain%22%3A%20%22%24direct%22%7D; AuthSession=YnVk***SA",
bbapps | "x-forwarded-proto": "http",
bbapps | "x-request-id": "0b8133af-5b6c-4004-926c-35d2e29ad4e5",
bbapps | "x-envoy-expected-rq-timeout-ms": "15000"
bbapps | },
bbapps | "remoteAddress": "::ffff:172.31.0.10",
bbapps | "remotePort": 58904
bbapps | }
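A detail worth noting in that log entry (my reading, not stated in the thread): the 404 response is text/plain with a content length of 9 bytes, i.e. a bare "Not Found". That is the kind of default reply a server sends when no route matches at all, as opposed to a structured JSON application error. A small check of the copied response metadata:

```python
import json

# Response metadata copied from the bbapps log entry above.
res = json.loads("""
{
  "statusCode": 404,
  "headers": {
    "content-type": "text/plain; charset=utf-8",
    "content-length": "9"
  }
}
""")

# A 9-byte text/plain body is exactly "Not Found" -- a bare router-level
# miss, not an error produced by application code.
assert int(res["headers"]["content-length"]) == len("Not Found")
print(res["statusCode"], res["headers"]["content-type"])
```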
Hi @shogunpurple, this worked as expected; I had been using 4002 instead of 10000. Maybe I missed something in the documentation.
Here is my setup. Everything should be off the shelf and standard. I've tried to start an instance on my system, which also shows the same behavior. Please note that I had to use platform: linux/amd64 on the db service since I'm on an M1. Other than that, I didn't modify anything.
docker-compose.yml
version: "3"

# optional ports are specified throughout for more advanced use cases.
services:
  app-service:
    restart: always
    image: budibase/apps
    container_name: bbapps
    ports:
      - "${APP_PORT}:4002"
    environment:
      SELF_HOSTED: 1
      COUCH_DB_URL: http://${COUCH_DB_USER}:${COUCH_DB_PASSWORD}@couchdb-service:5984
      WORKER_URL: http://worker-service:4003
      MINIO_URL: http://minio-service:9000
      MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
      MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}
      INTERNAL_API_KEY: ${INTERNAL_API_KEY}
      BUDIBASE_ENVIRONMENT: ${BUDIBASE_ENVIRONMENT}
      PORT: 4002
      JWT_SECRET: ${JWT_SECRET}
      LOG_LEVEL: info
      SENTRY_DSN: https://a34ae347621946bf8acded18e5b7d4b8@o420233.ingest.sentry.io/5338131
      ENABLE_ANALYTICS: "true"
      REDIS_URL: redis-service:6379
      REDIS_PASSWORD: ${REDIS_PASSWORD}
    volumes:
      - ./logs:/logs
    depends_on:
      - worker-service
      - redis-service

  worker-service:
    restart: always
    image: budibase/worker
    container_name: bbworker
    ports:
      - "${WORKER_PORT}:4003"
    environment:
      SELF_HOSTED: 1
      PORT: 4003
      CLUSTER_PORT: ${MAIN_PORT}
      JWT_SECRET: ${JWT_SECRET}
      MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
      MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}
      MINIO_URL: http://minio-service:9000
      COUCH_DB_USERNAME: ${COUCH_DB_USER}
      COUCH_DB_PASSWORD: ${COUCH_DB_PASSWORD}
      COUCH_DB_URL: http://${COUCH_DB_USER}:${COUCH_DB_PASSWORD}@couchdb-service:5984
      INTERNAL_API_KEY: ${INTERNAL_API_KEY}
      REDIS_URL: redis-service:6379
      REDIS_PASSWORD: ${REDIS_PASSWORD}
    volumes:
      - ./logs:/logs
    depends_on:
      - redis-service
      - minio-service
      - couch-init

  minio-service:
    restart: always
    image: minio/minio
    volumes:
      - minio_data:/data
    ports:
      - "${MINIO_PORT}:9000"
    environment:
      MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
      MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}
      MINIO_BROWSER: "off"
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  proxy-service:
    restart: always
    image: envoyproxy/envoy:v1.16-latest
    volumes:
      - ./envoy.yaml:/etc/envoy/envoy.yaml
    ports:
      - "${MAIN_PORT}:10000"
    depends_on:
      - minio-service
      - worker-service
      - app-service
      - couchdb-service

  couchdb-service:
    restart: always
    image: ibmcom/couchdb3
    platform: linux/amd64
    environment:
      - COUCHDB_PASSWORD=${COUCH_DB_PASSWORD}
      - COUCHDB_USER=${COUCH_DB_USER}
    ports:
      - "${COUCH_DB_PORT}:5984"
    volumes:
      - couchdb3_data:/opt/couchdb/data

  couch-init:
    image: curlimages/curl
    environment:
      PUT_CALL: "curl -u ${COUCH_DB_USER}:${COUCH_DB_PASSWORD} -X PUT couchdb-service:5984"
    depends_on:
      - couchdb-service
    command: ["sh","-c","sleep 10 && $${PUT_CALL}/_users && $${PUT_CALL}/_replicator; fg;"]

  redis-service:
    restart: always
    image: redis
    command: redis-server --requirepass ${REDIS_PASSWORD}
    ports:
      - "${REDIS_PORT}:6379"
    volumes:
      - redis_data:/data

  watchtower-service:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --debug --http-api-update bbapps bbworker
    environment:
      - WATCHTOWER_HTTP_API=true
      - WATCHTOWER_HTTP_API_TOKEN=budibase
      - WATCHTOWER_CLEANUP=true
    labels:
      - "com.centurylinklabs.watchtower.enable=false"
    ports:
      - 6161:8080

volumes:
  couchdb3_data:
    driver: local
  minio_data:
    driver: local
  redis_data:
    driver: local
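One detail of the couch-init service above that is easy to misread: in a docker-compose command, "$$" escapes to a literal "$", so $${PUT_CALL} is expanded by the container's shell rather than by Compose. A minimal sketch of what the container ends up running (user:pass is a placeholder; the real credentials come from the environment variables):

```shell
# Inside the container, the shell sees ${PUT_CALL} with the $ restored,
# so the environment variable set by Compose is expanded at run time.
PUT_CALL="curl -u user:pass -X PUT couchdb-service:5984"  # placeholder creds

# The init command then appends the database names, effectively calling:
echo "$PUT_CALL/_users"
echo "$PUT_CALL/_replicator"
```

These two PUT calls create CouchDB's _users and _replicator system databases, which is why the worker container waits on couch-init before starting.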
envoy.yaml
static_resources:
  listeners:
  - name: main_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress
          codec_type: auto
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_services
              domains: ["*"]
              routes:
              - match: { prefix: "/app/" }
                route:
                  cluster: app-service
                  prefix_rewrite: "/"
              - match: { path: "/v1/update" }
                route:
                  cluster: watchtower-service
              - match: { prefix: "/builder/" }
                route:
                  cluster: app-service
              - match: { prefix: "/builder" }
                route:
                  cluster: app-service
              - match: { prefix: "/app_" }
                route:
                  cluster: app-service
              # special case for worker admin API
              - match: { prefix: "/api/admin/" }
                route:
                  cluster: worker-service
              - match: { path: "/" }
                route:
                  cluster: app-service
              # special case for when API requests are made, can just forward, not to minio
              - match: { prefix: "/api/" }
                route:
                  cluster: app-service
              - match: { prefix: "/worker/" }
                route:
                  cluster: worker-service
                  prefix_rewrite: "/"
              - match: { prefix: "/db/" }
                route:
                  cluster: couchdb-service
                  prefix_rewrite: "/"
              # minio is on the default route because this works
              # best, minio + AWS SDK doesn't handle path proxy
              - match: { prefix: "/" }
                route:
                  cluster: minio-service
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: app-service
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: app-service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: app-service
                port_value: 4002
  - name: minio-service
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: minio-service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: minio-service
                port_value: 9000
  - name: worker-service
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: worker-service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: worker-service
                port_value: 4003
  - name: couchdb-service
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: couchdb-service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: couchdb-service
                port_value: 5984
  - name: watchtower-service
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: watchtower-service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: watchtower-service
                port_value: 8080
hosting.properties
# Use the main port in the builder for your self hosting URL, e.g. localhost:10000
MAIN_PORT=10000
# This section contains all secrets pertaining to the system
# These should be updated
JWT_SECRET=testsecret
MINIO_ACCESS_KEY=budibase
MINIO_SECRET_KEY=budibase
COUCH_DB_PASSWORD=budibase
COUCH_DB_USER=budibase
REDIS_PASSWORD=budibase
INTERNAL_API_KEY=budibase
# This section contains variables that do not need to be altered under normal circumstances
APP_PORT=4002
WORKER_PORT=4003
MINIO_PORT=4004
COUCH_DB_PORT=4005
REDIS_PORT=6379
BUDIBASE_ENVIRONMENT=PRODUCTION
@tboschek your envoy.yaml is outdated and you are using the old hosting.properties format. This is why I asked if it was a brand new installation, and you said yes. You have an old/existing Budibase installation.
https://raw.githubusercontent.com/Budibase/budibase/master/hosting/envoy.yaml
Comparing against that file, you can see that you are missing some rules, which is causing the 404.
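For context: in the config posted above, every /api/ prefix falls through to app-service, so worker-hosted endpoints such as /api/global/users/init never reach the worker. A hypothetical sketch of the kind of route rule that is missing (illustration only; take the exact rules from the linked master envoy.yaml, not from this fragment):

```yaml
# Hypothetical fragment -- the linked master envoy.yaml is authoritative.
# The old config routes all of /api/ to app-service, so global endpoints
# like /api/global/users/init 404 instead of reaching the worker.
- match: { prefix: "/api/global/" }
  route:
    cluster: worker-service
```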
Please update your envoy config using:
wget https://raw.githubusercontent.com/Budibase/budibase/master/hosting/envoy.yaml
docker-compose restart proxy-service
You can then update your hosting.properties with:
echo "WATCHTOWER_PORT=6161" >> hosting.properties
cp hosting.properties .env
Thanks
Thanks for your explanation! You are right. It seems my compose file was outdated; I took it from a project that I set up a few days ago. Looks like it was too stale.
Thanks for the new routing definitions and the env var. I've now got it running again. The app preview still doesn't seem too happy, but that's something else.
Thanks for your support!
I have started Budibase for the first time using a DigitalOcean droplet, as recommended, and used the console to start the builder. I enter a new email and password for the first time and click "Create super admin user", but it throws the error "failed to create admin user."
Desktop: