Closed jonathandhn closed 6 months ago
Can you try opening a shell inside the container and running `shlink visit:download-db -vvv`?
> Can you try opening a shell inside the container and running `shlink visit:download-db -vvv`?
```
docker exec fierte.pm shlink visit:download-db -vvv
[INFO] GeoLite2 db file is up to date.
```
Actually, the loop stopped today, the last record is from this afternoon
| Filename | Date | IP Address | City | Region | Country | ISP | Org |
|---|---|---|---|---|---|---|---|
| GeoLite2-City_20240220.tar.gz | 2024-02-22 15:07:27 | 20.74.17.xxx | Paris | Paris | France | Microsoft Azure | Microsoft Azure |
Sometimes there are issues with some of their files, where the metadata Shlink uses to determine whether an update is needed is wrong.
That's probably what happened this time: it likely caused your instance to think an update was needed on every new request, re-downloading the same file again and again.
I tried to update the file in my own instance, but it skipped that version, and the most recent one seems to be fine.
I'm going to try to verify if this is the case.
Thank you. They moved us from 1,000 downloads a day to 30 on the free plan, so we must be careful.
> Thank you. They moved us from 1,000 downloads a day to 30 on the free plan, so we must be careful.
Ouch! Do you have some link where this is explained? I would like to reference it from the docs.
If I manage to confirm this was the problem, I'll try to find some way to mitigate it.
Here: https://comms.maxmind.com/daily-download-limit-decreasing-2
Just checked the file from the 9th of February, and the metadata is correct. Shlink should not have tried to download it over and over.
The logic basically reads the GeoLite file's build time and checks if it's more than 35 days old, in which case it tries to download a new copy.
This is done with concurrency in mind, so a lock is set until download ends, to avoid multiple downloads in parallel.
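The check-then-download flow described above can be sketched roughly as follows. This is an illustrative Python sketch, not Shlink's actual PHP implementation; the function and parameter names are hypothetical, and the real code reads the build time from the GeoLite2 file's own metadata and uses a cross-process lock rather than an in-process one.

```python
import time
from pathlib import Path
from threading import Lock

MAX_AGE_DAYS = 35           # threshold mentioned in the thread
download_lock = Lock()      # stand-in for Shlink's cross-process lock

def maybe_download(db_path: Path, read_build_time, download) -> bool:
    """Download a fresh copy only if the db is missing or too old.

    read_build_time(db_path) returns the db build time as a unix
    timestamp, and download(db_path) performs the actual fetch; both
    are injected here so the decision logic stays testable.
    """
    with download_lock:  # serialize: no parallel downloads
        if db_path.exists():
            age_days = (time.time() - read_build_time(db_path)) / 86400
            if age_days <= MAX_AGE_DAYS:
                return False  # fresh enough, keep the current file
        download(db_path)
        return True
```

The point of the design is that the GeoLite file itself is the single source of truth: no extra state is stored, the decision is re-derived from the file's metadata on every check.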
Other potential reasons for this to happen are that there was not enough disk space to decompress the file after downloading it, or perhaps an issue with the system date that made Shlink think it was in the future.
I'll keep this open for now to see if I can think of some way to make the process more resilient.
Got the same bug last week and I also received download limit reached notification from MaxMind.
> Other potential reasons for this to happen are that there was not enough disk space to decompress the file after downloading it
In my case the server has enough disk space to handle the file.
I've restarted the shlink service to see if it will work.
Could any of you check if your instances have some log entry starting with `GeoLite2 database download failed`?
Here's the log related to `GeoLite2 database download failed`
Yeah, that's basically showing that Shlink successfully downloaded new versions of the database on every visit until it reached the API limit; all the `GeoLite2 database download failed` entries after that are due to that limit, with Shlink still attempting to download every time.
Unfortunately, it does not explain why Shlink still thought a new copy needed to be downloaded when it already had a fresh one.
The only solution I can think of is to change how Shlink decides when a new copy is needed. Potential options:
For context, the way it works now is that Shlink reads the database metadata for a value that tells when it was built. If a certain number of days has passed (35, if I remember correctly), or the database does not exist at all, it tries to download it.
It is very straightforward, has very low impact and keeps the GeoLite file as the single source of truth, which is convenient, but it clearly doesn't cover some particular scenario that I'm missing.
There was a new report of this issue, but there it was mentioned that this was happening specifically with orphan visits.
I checked the log provided here again, and noticed there are many attempts to download the database as a result of an orphan visit.
I also see some attempts which do not seem to be linked to a particular request happening just before them, though. @sparanoid, could it be that you have some scheduled task that periodically downloads the GeoLite file, or that the logs were edited to remove sensitive information?
I haven't looked too closely at the code, but it appears that you are downloading the file to a temporary file and then copying it to the final location. This could potentially result in a corrupted file if multiple requests are going at once. To prevent this, you could either write the file atomically or take out appropriate locks (or preferably both).
In order to write the file atomically, you should download it to the same directory as the final file to ensure the file is on the same file system, decompress it, and then rename the file to the final file name. You would either want to take a lock to ensure that no other request is writing to the same temporary files at the same time or you would want to use random names for the temporary files.
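The atomic-write recipe above (download into the destination directory, decompress, then rename into place) can be sketched like this. This is an illustrative Python sketch with hypothetical names, not Shlink's code; `tempfile.mkstemp` covers the "random temporary names" alternative, and `os.replace` is a same-filesystem rename, which is atomic on POSIX, so readers never observe a half-written database.

```python
import gzip
import os
import tempfile
from pathlib import Path

def install_db_atomically(compressed: bytes, final_path: Path) -> None:
    """Decompress into a temp file in the destination directory,
    then atomically rename it over the final file."""
    # Create the temp file next to the final file so the rename
    # below cannot cross filesystems (which would not be atomic).
    fd, tmp_name = tempfile.mkstemp(dir=final_path.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(gzip.decompress(compressed))  # decompress to temp file
        os.replace(tmp_name, final_path)  # atomic rename over the old db
    except BaseException:
        os.unlink(tmp_name)  # clean up the temp file on any failure
        raise
```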
Some other thoughts:

- Comparing the metadata time and the system time could result in excess downloads if the system time is off.
- What happens if the file system is read-only or `open_basedir` is enabled and the database path is outside of it?

Edit: I was looking at the code in shlink-ip-geolocation when I commented above, and missed this code in this repo:

I didn't look into how that locking works, but presumably it prevents multiple downloads at once.
I'm having the same problem with my instance, which just started happening in the last few days.
> I was looking at the code in shlink-ip-geolocation when I commented above, and missed this code in this repo:
> I didn't look into how that locking works, but presumably it prevents multiple downloads at once.
Yes, that's correct. That lock prevents multiple downloads in parallel.
I have a suspicion of what could be the problem. There might be some stateful service somewhere down the dependency tree that's keeping a reference to the old database file's metadata, making every check conclude that the file is too old and resulting in a new download.
@oschwald, answering your comments:
> In order to write the file atomically, you should download it to the same directory as the final file to ensure the file is on the same file system, decompress it, and then rename the file to the final file name. You would either want to take a lock to ensure that no other request is writing to the same temporary files at the same time
This is exactly how it's done.
> Comparing the metadata time and the system time could result in excess downloads if the system time is off.
I thought about this, but the clock would have to be several days off, so I think it's a negligible risk.
If someone really has a system with such a messed-up clock, I think it's reasonable to expect the admins to fix that rather than expect Shlink to work around the problem.
Ultimately, any solution that does not make a lot of MaxMind API requests would be time-based one way or another, so there's not much that can be done here.
> What happens if the file system is read-only or `open_basedir` is enabled and the database path is outside of it?
Then nothing can be done and GeoLite files won't be downloaded. It's an unfortunate limitation due to how GeoLite db files work.
In any case, this already happened not long ago. The solution involved making sure Shlink only tries to write in its own `data` directory and, incidentally, in the `tmp` dir due to some external dirs.
> I have a suspicion of what could be the problem. There might be some stateful service somewhere down the dependency tree, that's keeping a reference to the old database file metadata, making every check resolve that the file is too old, resulting in a new download.
I can confirm this is the problem. There's an unintentional stateful service that's reading the GeoLite file metadata when created, and holding it in memory, making every check think the database is too old.
This is affecting all versions of Shlink, so I will try to backport it to v3.x if it's not too complex.
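The confirmed bug can be illustrated with a minimal sketch. This is not Shlink's code; the class and function names are invented to show the failure mode: a long-lived service captures the database build time once at construction, so even after a fresh download every later check still sees the old timestamp and triggers yet another download.

```python
import time

MAX_AGE_DAYS = 35  # threshold mentioned in the thread

class BuggyGeoDbChecker:
    """Illustrates the bug: metadata is read once and cached forever."""
    def __init__(self, read_build_time):
        # Bug: build time captured at construction and never refreshed,
        # so a fresh download on disk is invisible to later checks.
        self._build_time = read_build_time()

    def needs_update(self) -> bool:
        return (time.time() - self._build_time) / 86400 > MAX_AGE_DAYS

class FixedGeoDbChecker:
    """Fix: re-read the metadata from the file on every check."""
    def __init__(self, read_build_time):
        self._read_build_time = read_build_time

    def needs_update(self) -> bool:
        return (time.time() - self._read_build_time()) / 86400 > MAX_AGE_DAYS
```

With the buggy variant, `needs_update()` keeps returning true even after the file on disk has been replaced, which matches the endless re-download loop reported in this issue.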
I have just released version 4.0.2 and 3.7.4, both including the fix for this bug.
Shlink version
3.7.3
PHP version
8.2
How do you serve Shlink
Docker image
Database engine
MariaDB
Database version
10.3.23
Current behavior
Hi, I started a container on January the 6th (root-less, on 3.7.3 from the c70cf1b37087581cfcb7963d74d6c13fbee8555a7b10aa4af0493e70ade41202 docker image) and it did the job well until the MaxMind monthly renewal on February the 9th.
Here are the logs from January the 6th until the successful download of the initial GeoIP database,
and then the next relevant logs are:
We were at over 2,000 downloads a day of GeoLite2-City before receiving a warning from MaxMind, as early as 5:00 in the morning:
From the https://www.maxmind.com/ download history, for the very first occurrence:
NB: As of 22 February, 16:54 Paris time, I will not restart the container until 23:00 Paris time in case you need more logs and data. To avoid the bug locking our MaxMind account out for February 23rd, I will stop it and spin up a new container before midnight.
Expected behavior
The MaxMind DB downloads successfully and is not re-downloaded while it is up to date.
How to reproduce
Run the container with MaxMind set up for over 30 days.