VCasecnikovs opened 4 months ago
This is how the store volume is added:
I found the issue (and it will be fixed in the upcoming version).
If you define the top-level volumes like you did:
volumes:
  store:
    driver: local
Coolify does not modify this value (it assumes that since you defined this part yourself, you know what you are doing). This is a "bug" because Docker then derives the volume name from the current directory, which is different for each deployment. That is why you end up with different volumes.
In the next version, Coolify will check if you defined the name property, and if not, it will define it, so it won't be randomly generated.
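A minimal sketch of the behavior described above (not Coolify's actual code): when no explicit name is set, Docker Compose names a volume <project>_<key>, and the project name defaults to the current directory name, so each deployment directory yields a distinct volume.

```python
def compose_volume_name(project, volume_key, explicit_name=None):
    """Sketch of Compose's default volume naming: without an explicit
    'name', the volume becomes '<project>_<key>', where the project
    defaults to the directory name. With an explicit name, that name
    is used verbatim."""
    return explicit_name if explicit_name else f"{project}_{volume_key}"

# Two deployments from different directories get different volumes:
print(compose_volume_name("deploy1", "store"))  # deploy1_store
print(compose_volume_name("deploy2", "store"))  # deploy2_store

# With an explicit name, both deployments share the same volume:
print(compose_volume_name("deploy1", "store", "store"))  # store
```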
The storage view should be empty as everything is hardcoded.
I have a plan to improve this part, to make it similar to the services view.
Thank you. So, to fix it right now, I should set the name value on the store volume?
Yes.
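Applied to the compose file above, the suggested fix would look something like this:

```yaml
volumes:
  store:
    driver: local
    # An explicit name stops Docker from prefixing the project
    # directory, so every deployment reuses the same volume.
    name: store
```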
What if we deploy two docker compose projects with the same volume name? That shouldn't be an issue but it is.
I noticed a similar bug that relates to this, so I won't open a new issue yet.
Issue / bug: Coolify is not adding dynamic name for the volumes. This creates problems when deploying multiple services with identical Docker Compose files on a single server, as they both attempt to use the same volume.
The documentation says that each volume should get a dynamic name, but that is not the case (at least when deploying with docker compose): "To prevent storage overlapping between resources, Coolify automatically adds the resource's UUID to the volume name." (Persistent Storage - Coolify Docs)
Example of a docker-compose.yaml where this happens. When deploying it twice, both pocketbase containers try to use the same volume pb_data:
services:
  sveltekit:
    build:
      context: ./sveltekit
      dockerfile: Dockerfile
    environment:
      NODE_ENV: production
    depends_on:
      - pocketbase
  pocketbase:
    build:
      context: ./pocketbase
      dockerfile: Dockerfile
    environment:
      GO_ENV: production
    volumes:
      - pb_data:/home/nonroot/app/pb_data
When creating a new resource using the previous docker-compose file, Coolify just adds this to it, with no dynamic tag:
volumes:
  pb_data:
    name: pb_data
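Per the documentation quoted above, the expected behavior would instead include the resource's UUID in the name, something along these lines (the exact format and the UUID value here are illustrative, not confirmed):

```yaml
volumes:
  pb_data:
    # Hypothetical UUID-prefixed name, based on the docs' description;
    # the actual prefix Coolify uses may differ.
    name: acks0gs-pb_data
```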
I still can't fix this; the volumes are being lost on each redeploy. My docker-compose.yaml volumes are:
volumes:
  project_db_data:
    name: project_db_data
I have the same problem. Some templates like dragonfly let me set a volume name and the data is persistent across restarts. Other templates or pure docker compose deployments set a random string in front of the volume name. I tried setting volume names in the docker compose, but that did not work.
volumes:
  gitea_data:
    driver: local
  gitea_config:
    driver: local
  postgresql_data:
    driver: local
or
volumes:
  gitea-data:
    name: gitea_data
    driver: local
  gitea-timezone:
    name: gitea_timezone
    driver: local
  gitea-localtime:
    name: gitea_localtime
    driver: local
  postgresql-data:
    name: postgresql_data
    driver: local
When I pull a new image and restart the containers, I get a new random string in front of the volume names and the old volume is lost.
This is a huge problem; docker compose builds cannot be used at all. I hope we find a solution soon. If I find anything, I'll update here.
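If the goal is simply to avoid collisions between two deployments on the same server (the question raised earlier in this thread), one manual approach is to bake a per-deployment identifier into the explicit volume name. A sketch, where STACK_ID is a hypothetical environment variable you would set uniquely per deployment (not a Coolify builtin):

```yaml
volumes:
  gitea-data:
    # STACK_ID is assumed to be set per deployment, e.g. in the
    # environment variables of each resource; it keeps the name
    # stable across redeploys but distinct between deployments.
    name: ${STACK_ID}-gitea_data
    driver: local
```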
Can you please check again with the latest version? I have added a few fixes.
You need to create your application again to test it.
Hey Andras,
This is my database inside the docker-compose file
db:
  image: bitnami/postgresql:latest
  platform: linux/amd64
  restart: always
  volumes:
    - db_data:/bitnami/postgresql
  ports:
    - ${POSTGRESQL_PORT}:5432
  environment:
    - POSTGRESQL_DATABASE=${POSTGRESQL_DATABASE}
    - POSTGRESQL_USERNAME=${POSTGRESQL_USERNAME}
    - POSTGRESQL_PASSWORD=${POSTGRESQL_PASSWORD}
  logging: *default-logging

volumes:
  db_data:
    name: 'db_data'
    driver: local
The problem still persists for me. Can someone else check, just so we're sure it's not on my side?
I cannot replicate the issue on the latest v315 version.
How did you test it?
Can you give me the docker-compose.yaml file that you used for testing?
@OmkoBass did you get any resolution?
No, but I believe Andras fixed it, he probably knows his own software better than me 😅 I'm a bit busy now with other projects. I'll try again soon and get back to you if I find a solution.
I tried deploying a new one again today and the issue persists.
I've moved my project from there for now :)
Would it be possible to persist the volume name if I deploy a repository as docker compose?
Use case: I have an SSD attached to my Raspberry Pi, and I want the volume to be /home/<user>/ssd/services/data, but every time Coolify picks up the docker compose file, it adds some UUID as a prefix.
Example: the docker compose I am testing now, which also has the volume name defined.
Actual (Coolify): 'acks0gs-uptime-kuma:/app/data'
Expected: 'uptime-kuma:/app/data'
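For the fixed-host-path use case above, a bind mount sidesteps named-volume renaming entirely, since no volume name (and thus no UUID prefix) is involved. A sketch, with the image and container path assumed from the uptime-kuma example and <user> left as in the comment:

```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1  # assumed image for this example
    volumes:
      # Bind mount a fixed host path instead of a named volume.
      - /home/<user>/ssd/services/data:/app/data
```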
I have the same problem. I haven't figured out exactly how it reproduces yet, but every day my data is lost after some redeploys. I use postgres and docker-compose with a volume.
Description
For each redeploy of the app, a new volume is created. I use a docker compose app with the volume mentioned below.
It creates a new volume for each redeploy:
The storage does not show up in Storages:
Minimal Reproduction (if possible, example repository)
Create a docker compose service with storage -> redeploy it
Exception or Error
No response
Version
v4.0.0-beta.294 Latest