Closed: simon-fa closed this issue 11 months ago
The problem comes from the Caddy data volume (`/data`).

I added the following logging options to my Compose service:

```yaml
logging:
  options:
    max-size: 50m
```

The problem seems to have stopped for now. But why have the logs been growing like crazy since 2.7.3? Roughly 1 GB every 2-3 minutes...
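For context, here is a fuller sketch of what a Compose service looks like with that log cap in place. The service name, image tag, and volume name below are assumptions for illustration, not my exact config; `max-size` (and the optional `max-file`) are options of Docker's default `json-file` log driver:

```yaml
services:
  caddy:
    image: caddy:2.7.3        # placeholder; in my case this is a custom-built image
    logging:
      driver: json-file       # Docker's default log driver
      options:
        max-size: 50m         # rotate each container log file at 50 MB
        max-file: "3"         # optional: keep at most 3 rotated files
    volumes:
      - caddy_data:/data      # the volume that was filling up

volumes:
  caddy_data:
```

Note that this only caps the container's stdout/stderr log files; it does not shrink data Caddy or its plugins write into `/data` itself.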
Thanks for opening an issue! We'll look into this.
It's not immediately clear to me what is going on, so I'll need your help to understand it better.
Ideally, we need to be able to reproduce the bug in the most minimal way possible using the latest version of Caddy. This allows us to write regression tests to verify the fix is working. If we can't reproduce it, then you'll have to test our changes for us until it's fixed -- and then we can't add test cases, either.
I've attached a template below that will help make this easier and faster! This will require some effort on your part -- please understand that we will be dedicating time to fix the bug you are reporting if you can just help us understand it and reproduce it easily.
This template will ask for some information you've already provided; that's OK, just fill it out the best you can. :+1: I've also included some helpful tips below the template. Feel free to let me know if you have any questions!
Thank you again for your report, we look forward to resolving it!
## 1. Environment
### 1a. Operating system and version
```
paste here
```
### 1b. Caddy version (run `caddy version` or paste commit SHA)
This should be the latest version of Caddy:
```
paste here
```
## 2. Description
### 2a. What happens (briefly explain what is wrong)
### 2b. Why it's a bug (if it's not obvious)
### 2c. Log output
```
paste terminal output or logs here
```
### 2d. Workaround(s)
### 2e. Relevant links
## 3. Tutorial (minimal steps to reproduce the bug)
Environment: Please fill out your OS and Caddy versions, even if you don't think they are relevant. (They are always relevant.) If you built Caddy from source, provide the commit SHA and specify your exact Go version.
Description: Describe at a high level what the bug is. What happens? Why is it a bug? Not all bugs are obvious, so convince readers that it's actually a bug.
Tutorial: What are the minimum required specific steps someone needs to take in order to experience the same bug? Your goal here is to make sure that anyone else can have the same experience with the bug as you do. You are writing a tutorial, so make sure to carry it out yourself before posting it. Please use `curl` where possible.

Example of a tutorial:

Create a config file:

```
{ ... }
```

Open terminal and run Caddy:

```
$ caddy ...
```

Make an HTTP request:

```
$ curl ...
```

Notice that the result is ___ but it should be ___.
I use caddy docker 2.7.3 with Mercure and Vulcain (from API Platform):

```dockerfile
FROM caddy:${CADDY_VERSION}-builder-alpine AS api_platform_caddy_builder

RUN xcaddy build \
    --with github.com/dunglas/mercure \
    --with github.com/dunglas/mercure/caddy \
    --with github.com/dunglas/vulcain \
    --with github.com/dunglas/vulcain/caddy

FROM caddy:${CADDY_VERSION} AS api_platform_caddy

WORKDIR /srv/api

COPY --from=api_platform_caddy_builder /usr/bin/caddy /usr/bin/caddy
COPY --from=api_platform_php /srv/api/public public/
COPY docker/caddy/Caddyfile /etc/caddy/Caddyfile
```
It seems I have a problem with `mercure.db`, which keeps growing...

I also have this kind of error in the logs:

```
{"level":"info","ts":1692304021.0643678,"logger":"http.handlers.mercure","msg":"Unable to flush","subscriber":{"id":"urn:uuid:54501f60-71a5-44a8-9b1e-1c46c8982860","last_event_id":"","remote_addr":"41.213.166.56:59410","topic_selectors":["users/102/chats","users/102/notifications","users/102/device-alive","users/102/device-connected","users/102/device-disconnected","users/102/business/46/notifications"],"topics":["users/102/business/46/notifications"]},"error":"deadline exceeded"}
```
"Unable to flush" and "deadline exceeded" everywhere:
```
{"level":"info","ts":1692305144.9292586,"logger":"http.handlers.mercure","msg":"Unable to flush","subscriber":{"id":"urn:uuid:8ce659f1-3549-40af-8811-0509bd7113db","last_event_id":"","remote_addr":"83.112.237.64:56703","topic_selectors":["users/677/chats","users/677/notifications","users/677/device-alive","users/677/device-connected","users/677/device-disconnected","users/677/business/202/notifications"],"topics":["users/677/notifications"]},"error":"deadline exceeded"}
{"level":"info","ts":1692305155.261554,"logger":"http.handlers.mercure","msg":"Unable to flush","subscriber":{"id":"urn:uuid:79c190dc-53d5-43c2-b774-16b8a827527c","last_event_id":"","remote_addr":"89.157.64.11:61529","topic_selectors":["users/777/chats","users/777/notifications","users/777/device-alive","users/777/device-connected","users/777/device-disconnected","users/777/business/203/notifications"],"topics":["users/777/business/203/notifications"]},"error":"deadline exceeded"}
{"level":"info","ts":1692305170.522563,"logger":"http.handlers.mercure","msg":"Unable to flush","subscriber":{"id":"urn:uuid:f7efeab9-ada8-4673-a2df-709bd5318a8e","last_event_id":"","remote_addr":"82.64.221.11:65104","topic_selectors":["users/60/chats","users/60/notifications","users/60/device-alive","users/60/device-connected","users/60/device-disconnected","users/60/business/16/notifications"],"topics":["users/60/business/16/notifications"]},"error":"deadline exceeded"}
{"level":"info","ts":1692305186.1313157,"logger":"http.handlers.mercure","msg":"Unable to flush","subscriber":{"id":"urn:uuid:2261a2f7-7be7-496c-ad6e-a79a38278cb3","last_event_id":"","remote_addr":"41.213.166.56:59410","topic_selectors":["businesses/46/appointments"],"topics":["businesses/46/appointments"]},"error":"deadline exceeded"}
{"level":"info","ts":1692305207.786153,"logger":"http.handlers.mercure","msg":"Unable to flush","subscriber":{"id":"urn:uuid:b8a45409-156f-44cc-8a8e-162f7e5437a5","last_event_id":"","remote_addr":"176.169.102.207:51851","topic_selectors":["users/236/chats","users/236/notifications","users/236/device-alive","users/236/device-connected","users/236/device-disconnected","users/236/business/110/notifications"],"topics":["users/236/business/110/notifications"]},"error":"deadline exceeded"}
{"level":"info","ts":1692305224.9158037,"logger":"http.handlers.mercure","msg":"Unable to flush","subscriber":{"id":"urn:uuid:ff85391f-7ce6-4a35-914d-b7df98a251ad","last_event_id":"","remote_addr":"41.213.166.56:59410","topic_selectors":["users/102/chats","users/102/notifications","users/102/device-alive","users/102/device-connected","users/102/device-disconnected","users/102/business/46/notifications"],"topics":["users/102/business/46/notifications"]},"error":"deadline exceeded"}
```
Okay - probably best you open an issue with Mercure. This doesn't look like a caddy-docker problem.
It happened after the update to caddy-docker 2.7.3. But I'll file a bug there too.
Understood, but this isn't a problem with the packaging of Caddy in Docker (which is what this repo is for), it's a problem with that plugin misbehaving with the latest version of Caddy.
I updated to caddy docker 2.7.3 this morning, and the volume keeps growing until the disk is full.