Security-Chief-Odo closed this issue 12 months ago
Hmm, yeah, there's something up: there's a bug that's not setting the Redis TTL properly -.-
So the error I got 5 days ago ("Failed to get url https://beehaw.org/api/v3/site after 4 attempts: Request failed with status code 502") is still stopping the crawler from trying again.
I've manually expired and fixed the TTLs, so it'll be scanned shortly. I'll look into what's going on with these TTLs, since they should be getting set when the error is written to Redis.
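For context, here's a minimal sketch of why a missing TTL on an error record blocks re-crawling indefinitely. This is not the actual lemmy-explorer code: an in-memory map stands in for Redis, and the key names, helper functions, and TTL values are all hypothetical.

```javascript
// In-memory stand-in for Redis with SET + optional EX-style expiry.
// A simulated clock is used so expiry can be shown deterministically.
const store = new Map();
let now = 0; // simulated time in ms

// Like Redis SET key value [EX seconds]; no ttlSeconds means no expiry.
function set(key, value, ttlSeconds = null) {
  const expiresAt = ttlSeconds === null ? Infinity : now + ttlSeconds * 1000;
  store.set(key, { value, expiresAt });
}

// Like Redis GET: expired keys behave as if they were deleted.
function get(key) {
  const entry = store.get(key);
  if (!entry) return null;
  if (now >= entry.expiresAt) {
    store.delete(key); // lazy expiry, similar to Redis
    return null;
  }
  return entry.value;
}

// Buggy path: error record written with NO TTL, so it never ages out
// and the crawler skips the instance forever.
set("error:beehaw.org", "502 after 4 attempts");

// Fixed path: same record with a TTL (1 hour here, purely illustrative),
// so the instance becomes eligible for a re-scan once the error expires.
set("error:other.example", "502 after 4 attempts", 3600);

now += 8 * 3600 * 1000; // 8 hours later...

console.log(get("error:beehaw.org") !== null);    // still present: never retried
console.log(get("error:other.example") !== null); // expired: will be retried
```

The manual fix described above is the same idea applied by hand: expiring the stale error key makes the crawler treat the instance as retryable again.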
More info: error distribution (there should be no errors older than ~8 hours).
Update: patched the bug in the crawler (it will now add TTLs to error records properly for instance scans.. 🤦♂️) https://github.com/tgxn/lemmy-explorer/commit/6087a1131ffba36cb034152e2463338d2984cc62#diff-4b2bdb4ce96d3bb4498017a614674143497a34bfd46e426da8e348b4c699cdadR138
👍
Looking better; it'll get through that instance queue at some point, I'm sure :D
Btw sh.itjust.works also isn't showing up.
Thank you for identifying the reason and fixing it quickly!
@2014MU69

> Btw sh.itjust.works also isn't showing up.
Showing up OK for me. Might have been caching?
It's working now, but it wasn't working for the last few days.
There was another recent outage of one of the bigger instances, which now seems fixed.
Describe the bug
Beehaw.org is not listed on https://lemmyverse.net/

To Reproduce
1. Go to https://lemmyverse.net/
2. Search for 'beehaw.org'
3. Nothing is found

Expected behavior
1. Go to https://lemmyverse.net/
2. Search for 'beehaw.org'
3. Beehaw.org is shown
Additional context
Would like to determine why this is the case. I am a systems administrator at beehaw.org, and when I search our logs, I see successful HTTP 200s for the requested endpoints /api/v3/community/list (params), /.well-known/nodeinfo, /nodeinfo/2.0.json, and /api/v3/site for the user-agent lemmy-explorer-crawler/**.

If it has been successful, I'd like to understand why beehaw.org is not listed on lemmyverse.net as expected.