OrichalcumCosmonaut opened 3 years ago
deadline for slow transfers https://github.com/jesopo/bitbot/blob/4a6037c77405f3584efadc10ae75826b6b9ac422/src/utils/http.py#L240
max file size for large files that could oom the bot https://github.com/jesopo/bitbot/blob/4a6037c77405f3584efadc10ae75826b6b9ac422/src/utils/http.py#L224
don't know what problem you've managed to stumble on, but there's already code intended to handle what you've described. do you have a stacktrace?
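(For readers unfamiliar with the linked code: protections of that kind generally take the shape of a streamed download with a byte cap and a wall-clock deadline. The sketch below is illustrative only, with made-up constants, and is not bitbot's actual implementation; see the linked lines in src/utils/http.py for the real thing.)

```python
# Illustrative sketch only -- not bitbot's code. Stream the response,
# enforcing a byte cap and a wall-clock deadline while reading.
import time
import requests

MAX_BYTES = 100 * 1024 * 1024   # hypothetical cap, analogous to the 100MiB default
DEADLINE_SECONDS = 10           # hypothetical per-request deadline

def bounded_get(url: str) -> bytes:
    start = time.monotonic()
    chunks = []
    read = 0
    with requests.get(url, stream=True, timeout=5) as response:
        for chunk in response.iter_content(chunk_size=4096):
            read += len(chunk)
            if read > MAX_BYTES:
                raise ValueError("response exceeded maximum size")
            if time.monotonic() - start > DEADLINE_SECONDS:
                raise TimeoutError("response exceeded transfer deadline")
            chunks.append(chunk)
    return b"".join(chunks)
```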
https://github.com/jesopo/bitbot/blob/4a6037c77405f3584efadc10ae75826b6b9ac422/src/utils/http.py#L35
it seems that the default for that variable is 100MiB, which doesn’t help for a page smaller than that, especially since the title usually isn’t very far into the file.
i don’t have a stacktrace; it was tildebot that disconnected, so maybe ben has one, assuming it did crash?
I'm sure the machine it's running on can read and parse 100MiB of HTML unless it's somehow akin to a zipbomb. can you get the stacktrace from ben? we're going to be blind without it
I think that limit ought to be configurable. Some people's hardware can handle that, but others' obviously can't.
My guess is it spends forever in html5lib trying to parse the page. Pure Python parser = sadness.
I've hit it with much worse in testing. eager to see a stacktrace
if it is a timeout, especially on the .soup() call outside the deadline, I'd be inclined to do a much less thorough parse, even just a regex to grab the <title>. it'd work the majority of the time
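(A minimal sketch of that regex approach, assuming the page has already been fetched as bytes; the function name here is made up.)

```python
import re
from typing import Optional

# Deliberately crude: grabs the first <title>...</title> pair, case-insensitively,
# across newlines. Not a real HTML parse, but enough for most pages.
TITLE_RE = re.compile(rb"<title[^>]*>(.*?)</title>", re.IGNORECASE | re.DOTALL)

def quick_title(page: bytes) -> Optional[str]:
    match = TITLE_RE.search(page)
    if match is None:
        return None
    # decode permissively and collapse whitespace
    return " ".join(match.group(1).decode("utf-8", "replace").split())
```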
benchmarking lxml against html5lib puts the former far ahead of the latter, but I recall picking the latter for fault tolerance the former doesn't have
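(A rough way to reproduce that comparison; big.html is a stand-in for whatever large document you test with, and the absolute numbers will vary by machine.)

```python
import timeit

# Same document fed to both parsers, one pass each.
setup = "import html5lib, lxml.html; data = open('big.html', 'rb').read()"
print("html5lib:", timeit.timeit("html5lib.parse(data)", setup=setup, number=1))
print("lxml:    ", timeit.timeit("lxml.html.fromstring(data)", setup=setup, number=1))
```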
looks like my log level was too low; it just shut down
>>> import html5lib
>>> import timeit
>>> def parseit():
...     with open("big.html", "rb") as f:
...         return html5lib.parse(f)
...
>>> timeit.timeit(parseit)
This has been running for over 20 minutes...
😰
I don't think the correct solution is limiting file size; I imagine it's trivial to code-golf something html5lib finds hard to parse. I'd either deadline soupifying the results or switch to something closer to O(1). lxml is undoubtedly faster, but I can't remember what exact case caused me to switch away from it
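(One way to "deadline soupifying" is to run the parse in a worker thread and stop waiting after a few seconds. This is a sketch, not bitbot's .soup() helper; note the worker thread keeps running because Python can't kill it, so this only bounds how long the bot blocks.)

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout
from bs4 import BeautifulSoup

_executor = ThreadPoolExecutor(max_workers=1)

def soup_with_deadline(markup: bytes, seconds: float = 5.0):
    # Parse in a worker thread so the caller can give up waiting.
    future = _executor.submit(BeautifulSoup, markup, "html5lib")
    try:
        return future.result(timeout=seconds)
    except FutureTimeout:
        return None  # caller can fall back to e.g. the regex title grab
```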
Well, I wanted to post the final timing for html5lib for posterity, but it seems Python got OOM-killed while I wasn't looking 🙁
when given a URI like https://git.causal.agency/scooper/tree/sqlite3.c?h=vendor-sqlite3 (a 24MiB page), bitbot disconnects while getting its title, probably because it tries to parse the entire page just to find the title and ends up timing out.
this could probably be fixed by limiting the amount of the page to be parsed to 64KiB or thereabouts.
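(A sketch of that 64KiB idea: stream the response and stop reading after the first chunk's worth, then run the title search over just that prefix. The names and limits here are illustrative, not bitbot's.)

```python
import requests

TITLE_SEARCH_LIMIT = 64 * 1024  # 64KiB, per the suggestion above

def fetch_head(url: str) -> bytes:
    # Download only enough of the page to find the title in.
    buf = bytearray()
    with requests.get(url, stream=True, timeout=5) as response:
        for chunk in response.iter_content(chunk_size=4096):
            buf.extend(chunk)
            if len(buf) >= TITLE_SEARCH_LIMIT:
                break
    return bytes(buf[:TITLE_SEARCH_LIMIT])
```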