Closed sjugraj closed 4 months ago
Hello, can you please provide an example of how you are running Storage? That would help.
I am using the Helm chart with the Docker image supabase/storage-api:v1.3.1
Link to Helm Chart
In storage.environment I am overriding the following:
DB_HOST: selfhosted_postgres_ip
DB_USER: supabase
DB_PORT: 5432
DB_DRIVER: postgres
DB_SSL: disable # disable, allow, prefer, require, verify-ca, verify-full
PGOPTIONS: -c search_path=storage,public
FILE_SIZE_LIMIT: "52428800"
STORAGE_BACKEND: s3 # file, s3
# FILE_STORAGE_BACKEND_PATH: /var/lib/storage commented as we are using S3
TENANT_ID: stub
REGION: us-east-1
GLOBAL_S3_BUCKET: supabase-storage-***
In secret.s3 I am providing the following:
keyId: "****"
accessKey: "***"
I can confirm the credentials have access to read and write to the S3 bucket.
The buckets table is available under the storage schema in Postgres; it is not available in public. So if Storage queries it without the schema qualifier, i.e. without writing storage.buckets, Postgres will say the relation doesn't exist.
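To make the failure mode concrete, here is a minimal SQL sketch (a hypothetical psql session against the same database) showing why the unqualified query fails while the qualified one succeeds:

```sql
-- With the default search_path ("$user", public), the bare name fails:
SELECT id, name FROM buckets;         -- ERROR: relation "buckets" does not exist

-- Qualifying the schema works:
SELECT id, name FROM storage.buckets;

-- As does putting storage on the search_path first:
SET search_path TO storage, public;
SELECT id, name FROM buckets;
```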
Hi @sjugraj the helm chart envs don't seem up to date. Can you please use these as reference: https://github.com/supabase/storage/blob/master/docker-compose.yml#L18-L47
With these settings it will work. Feel free to comment below if that is not the case and I'll be happy to re-open.
I checked the environment variables in the Helm chart, and it builds the DATABASE_URL environment variable on the fly. I am sharing the Deployment manifest as it deploys; if you look at the env section, you can see how DATABASE_URL is constructed.
Am I missing something? It still shows the same "relation does not exist" error.
Error: select "id", "name", "public", "owner", "created_at", "updated_at", "file_size_limit", "allowed_mime_types" from "buckets" - relation "buckets" does not exist
at Object.DatabaseError (/app/dist/storage/errors.js:250:38)
at DBError.fromDBError (/app/dist/storage/database/knex.js:554:40)
at Function.<anonymous> (/app/dist/storage/database/knex.js:491:31)
at Object.onceWrapper (node:events:634:26)
at Function.emit (node:events:519:28)
at Client_PG.<anonymous> (/app/node_modules/knex/lib/knex-builder/make-knex.js:304:10)
at Client_PG.emit (node:events:531:35)
at /app/node_modules/knex/lib/execution/internal/query-executioner.js:46:12
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:318:14)
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  replicas: 1
  selector:
    matchLabels:
      ...
  template:
    metadata:
      labels:
        ...
    spec:
      restartPolicy: Always
      serviceAccountName: "*******"
      securityContext: {}
      initContainers:
        - name: init-db
          image: postgres:15-alpine
          imagePullPolicy: IfNotPresent
          env:
            - name: DB_HOST
              value: "*******"
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: supabase-db
                  key: username
            - name: DB_PORT
              value: "*******"
          command: ["/bin/sh", "-c"]
          args:
            - |
              until pg_isready -h $(DB_HOST) -p $(DB_PORT) -U $(DB_USER); do
                echo "Waiting for database to start..."
                sleep 2
              done
              echo "Database is ready"
      containers:
        - name: supabase-storage
          securityContext: {}
          image: "supabase/storage-api:v1.3.1"
          imagePullPolicy: IfNotPresent
          env:
            - name: AWS_DEFAULT_REGION
              value: "*******"
            - name: DB_DRIVER
              value: "postgres"
            - name: DB_HOST
              value: "*******"
            - name: DB_INSTALL_ROLES
              value: "true"
            - name: DB_PORT
              value: "*******"
            - name: DB_SSL
              value: "disable"
            - name: DB_USER
              value: "supabase"
            - name: FILE_SIZE_LIMIT
              value: "52428800"
            - name: FILE_STORAGE_BACKEND_PATH
              value: "/var/lib/storage"
            - name: GLOBAL_S3_BUCKET
              value: "stylumia-supabase-storage"
            - name: GLOBAL_S3_FORCE_PATH_STYLE
              value: "true"
            - name: PGOPTIONS
              value: "-c search_path=storage,public"
            - name: REGION
              value: "us-east-1"
            - name: STORAGE_BACKEND
              value: "s3"
            - name: TENANT_ID
              value: "stub"
            - name: POSTGREST_URL
              value: http://supabase-supabase-rest:3000
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: supabase-db
                  key: password
            - name: DB_PASSWORD_ENC
              valueFrom:
                secretKeyRef:
                  name: supabase-db
                  key: password_encoded
            - name: DB_NAME
              valueFrom:
                secretKeyRef:
                  name: supabase-db
                  key: database
            - name: DATABASE_URL
              value: $(DB_DRIVER)://$(DB_USER):$(DB_PASSWORD_ENC)@$(DB_HOST):$(DB_PORT)/$(DB_NAME)?search_path=auth,storage,public&sslmode=$(DB_SSL)
            - name: PGRST_JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: supabase-jwt
                  key: secret
            - name: ANON_KEY
              valueFrom:
                secretKeyRef:
                  name: supabase-jwt
                  key: anonKey
            - name: SERVICE_KEY
              valueFrom:
                secretKeyRef:
                  name: supabase-jwt
                  key: serviceKey
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: supabase-s3
                  key: keyId
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: supabase-s3
                  key: accessKey
          livenessProbe:
            httpGet:
              path: /status
              port: 5000
            initialDelaySeconds: 3
          ports:
            - name: http
              containerPort: 5000
              protocol: TCP
          volumeMounts:
            - mountPath: /var/lib/storage
              name: storage-data
      volumes:
        - name: storage-data
          emptyDir:
            medium: ""
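One thing worth checking here: Postgres connection URLs do not generally honor a search_path query parameter (in libpq the equivalent is the options parameter, e.g. options=-csearch_path%3Dstorage), so the search_path=auth,storage,public suffix in DATABASE_URL may simply be ignored by the client. A quick diagnostic sketch, run as the same role the storage-api connects with (assumed here to be supabase), shows what the session actually resolves:

```sql
-- What search_path does a session opened by this role actually get?
SHOW search_path;

-- And where does the buckets table really live?
SELECT table_schema
FROM information_schema.tables
WHERE table_name = 'buckets';
```

If SHOW search_path does not include storage, the setting from the URL (or from PGOPTIONS) is not reaching the connection.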
@fenos Any thoughts on this issue? Is there a workaround?
Bug report
Describe the bug
The Docker image v1.3.1 is throwing this error because it tries to access the buckets table in the storage schema. I ran the query in my terminal after setting search_path to storage, and it worked. My intuition is that Storage is not setting search_path to storage and is instead querying the public schema.
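If the service cannot be made to send the right search_path, one Postgres-side workaround is to set it on the role the service connects as. A sketch, assuming the connecting role is supabase:

```sql
-- Workaround sketch (assumption: storage-api connects as role "supabase").
-- Role-level settings apply to every new session opened by that role.
ALTER ROLE supabase SET search_path TO storage, public;
```

Note that existing connections keep their old search_path; restarting the storage pod would be needed for the setting to take effect.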