Closed by simonbaird 1 month ago
Right now every page seems to take 8 or 10 seconds to load, even if it's a page that should just be rendering some HTML.
Hello, I can confirm that for roughly the last 45 minutes, saving changes to a wiki on Tiddlyhost has been taking about 80 seconds. Good luck fixing it!
Not sure if it's impacting performance, but I'm seeing Tiddlyhost respond with a 406 Not Acceptable to a HEAD request with the Accept header set to `*/*;charset=UTF-8`.

Update: Filed #343 to deal with that.
Would you like me to open a new issue for the latency problem on TiddlyHost?
I think it's probably the same root cause, so this single issue should be enough.
I'm not certain, but I think the change in commit b8154dfdaa560bd8883e957e35612aad88314e8d introduced a performance regression. I've reverted it, and that does seem to have dropped the load average and the CPU usage shown in top for the Ruby process. At this stage I'm not sure how much impact that will have on the overall problem.
https://github.com/tmm1/rbtrace might be useful.
It's easy to observe with:

```shell
time curl -s https://tiddlyhost.com/ | tail -5
```

The real time should be under 0.2 seconds, and often it is, but at other times it spikes up to 9 or 10 seconds for unknown reasons.
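The same spot-check can be scripted to take repeated samples. Here's a minimal Ruby sketch of the idea; it uses a throwaway local stub server in place of tiddlyhost.com so it runs self-contained (swap in the real URL to measure the live site):

```ruby
require 'net/http'
require 'socket'

# Minimal local HTTP stub standing in for tiddlyhost.com,
# so this sketch runs without network access.
server = TCPServer.new('127.0.0.1', 0)
port = server.addr[1]
Thread.new do
  loop do
    client = server.accept
    client.readpartial(4096) # consume the request headers
    client.write "HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok"
    client.close
  end
end

uri = URI("http://127.0.0.1:#{port}/")

# Take a few samples of wall-clock response time, like `time curl` does.
samples = 5.times.map do
  t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  Net::HTTP.get_response(uri)
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0
end

puts format('max latency over %d requests: %.3fs', samples.length, samples.max)
server.close
```

Sampling the maximum (rather than the average) matters here, since the reported problem is intermittent spikes rather than uniformly slow responses.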
Hi, I’ve been noticing something odd lately, which is how I landed on this issue. Saving a wiki doesn't work; maybe it's related. If not, please split it into a separate issue.
@fu-sen Saving should be working now as far as I can tell. Can you give me more details about how it's not working for you?
I am trying to update https://balloonvendor.tiddlyhost.com/, but even when I save the wiki, it times out after a few minutes, and the Updated time does not change in the Your sites list. I think this is because I live in Japan, far from the server, and also because this wiki is relatively large. I had been using it without any problems until July.
My Feather Wiki is small, and saving it works.
This is good:
The drop in CPU at around 5:30pm is when the robots.txt was deployed, see 64bc75f41a48cb74849aac1c09abc407ad9a380e.
Not sure about the large spike at 10pm, but let's hope it's not a regular thing.
I'm guessing the wider bump before that is due to one or more crawlers that hadn't picked up the updated robots file yet.
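The deployed file isn't shown in this thread, but as an illustrative sketch (not the actual contents of commit 64bc75f), a robots.txt aimed at reducing crawler load typically combines a crawl delay with outright blocks for high-volume bots:

```
# Illustrative example only -- not the actual deployed file
User-agent: *
Crawl-delay: 10

# Block a known high-volume crawler entirely (hypothetical choice)
User-agent: GPTBot
Disallow: /
```

Note that well-behaved crawlers re-fetch robots.txt on their own schedule, which would explain the lag between deployment and the CPU drop.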
For what it's worth, on my end the latency issue seems to be solved 👌 Thanks for your work @simonbaird! Edit: it seems there is still some latency when saving, but loading is much faster than a few days ago 🤔
Let's close this. We can open additional issues for performance as needed. There are still some users experiencing saving problems, see #345, but the original slowness due to high CPU load caused by webcrawlers should be resolved.
CPU load average looking alright:
In recent weeks Tiddlyhost has been much slower than it ought to be. I don't know why, but I want to figure it out and fix it.