eyeonus / TradeDangerous-listener

An EDDN listener, designed to work in conjunction with the EDDBlink plugin for Trade Dangerous.
GNU Lesser General Public License v3.0

Server hiccup after exporting listings-live.csv #12

Closed Tromador closed 5 years ago

Tromador commented 6 years ago

See the example below. I suspect the sleep needs to be moved (or another one added somewhere) to stop the loop (or maybe a different loop) from going back into runaway mode when exporting.

It's not causing us to fall behind the queue like before, but it would be nice if we could clear up this last bit of server slowdown.


```
Market update for EURYBIA/DEMOLITION UNLIMITED finished in 0.236 seconds.
Market update for LEMBAVA/GOLDSTEIN PORT finished in 0.188 seconds.
Market update for LUSHERTHA/BUTCHER KEEP finished in 0.135 seconds.
Market update for DIAGUANDRI/RAY GATEWAY finished in 0.176 seconds.
Market update for LHS 2088/TANAKA TERMINAL finished in 0.27 seconds.
Market update for SOL/ABRAHAM LINCOLN finished in 0.089 seconds.
Listings exporter sending busy signal. 2018-07-23 17:13:41.674614
Message processor acknowledging busy signal.
Busy signal acknowledged, getting listings for export.
Exporting 'listings-live.csv'. (Got listings in 0:00:02.754535)
Busy signal off, message processor resuming.
Market update for ROBIGO/ROBIGO MINES finished in 3.684 seconds.
Market update for LINDOL/STEIN TERMINAL finished in 4.407 seconds.
Export completed in 0:00:13.612769
Market update for JUMUZGO/LINDSTRAND GATEWAY finished in 2.277 seconds.
Market update for SOL/ABRAHAM LINCOLN finished in 0.07 seconds.
Market update for LAKSAK/STJEPAN SELJAN HUB finished in 0.164 seconds.
Market update for ETA CASSIOPEIAE/J.F.KENNEDY finished in 0.151 seconds.
Market update for 36 DORADUS/GARDNER DOCK finished in 0.298 seconds.
Market update for HIP 44811/MULLANE TERMINAL finished in 0.174 seconds.
Market update for GLIESE 868/MACLEAN TERMINAL finished in 0.062 seconds.
```
eyeonus commented 6 years ago

If you're pointing at what I think you're pointing at, there's nothing to be done about it. The listings exporter takes very little time to grab all the listings (2.75 seconds in the snippet above), but then it has to process them all and write the file. Since it doesn't need the DB at that point, it releases the busy lock, but it's still doing work, which means it's eating CPU. In that snippet, Robigo Mines and Stein Terminal were being updated while the export was happening, and the export finished while Lindstrand Gateway was being updated, which is why those three took longer than average: CPU was being divided between exporting the listings and updating those stations.

Tromador commented 6 years ago

CPU wasn't divided. Each thread is running on a separate core of a largely unloaded server.

But sure - ok. I sat and watched what happened. It was slow whilst writing the file; as soon as the file was written, it ran a few updates mega fast (catching up) and then went back to normal.

Is there anything we can do to speed up the file write then? It's not the hardware, (fibre channel raid), I assure you.

eyeonus commented 6 years ago

Maybe?

Right now it writes each entry directly to the file in a for loop. We could instead write to a string, and then write the string to the file. That means just one very large write, rather than a bunch of single-line writes.

I don't know how much time that would save, but I do know it'd be a potentially very large increase in memory usage, since that string would be stored in memory and would be just as large as the listings file.
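The two strategies under discussion can be sketched like this (the row data and function names are hypothetical; this is not the listener's actual export code):

```python
# Hypothetical listing rows: (id, station_id, commodity_id, price, ...)
listings = [
    (1, 128049152, 5, 310, 305, 1532300000),
    (2, 128049152, 6, 520, 514, 1532300000),
]

def export_per_line(path, rows):
    """One small write() call per listing: many I/O calls."""
    with open(path, "w", newline="") as f:
        for row in rows:
            f.write(",".join(str(col) for col in row) + "\n")

def export_one_write(path, rows):
    """Build the whole file contents in memory, then write once.
    The string is held in RAM and is as large as the file itself."""
    blob = "".join(",".join(str(col) for col in row) + "\n" for row in rows)
    with open(path, "w", newline="") as f:
        f.write(blob)
```

Both produce identical output; the trade-off is purely I/O call count versus peak memory.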

eyeonus commented 6 years ago

I've updated the debug branch to do this, feel free to test it.

Tromador commented 6 years ago

As I write, listings-live.csv is 15 MB. Let's say in my wildest nightmares it gets to 50 MB; even then RAM is never going to be an issue (unless something badly breaks).

It is, however, 342,000 (and change) lines long, so that's 342,000 (and change) I/O operations.

On balance, I would guess that doing it in RAM would be quicker - certainly if you are willing, I'd like to try it.

EDIT: I'll test the debug then :)

eyeonus commented 6 years ago

The change is on the debug branch. I've not tested it yet, the test is still doing a plugin import.

Tromador commented 6 years ago

Astonishingly - it's slower to do it all in memory.

When it does write the file, it's instantaneous, but putting it together in memory is slower. I'm genuinely surprised.
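One plausible explanation, assuming the in-memory version accumulated the string piece by piece: repeated string concatenation in Python can degrade toward quadratic time, whereas per-line file writes are already buffered by Python and the OS, so the original loop was never really paying 342,000 raw I/O operations. A sketch of three ways to build the same blob (all names hypothetical):

```python
import io

rows = ["row-%d" % i for i in range(10_000)]

def build_concat(rows):
    """Accumulate with +=. Each step may copy the whole string so far,
    which can make this quadratic in total output size."""
    s = ""
    for r in rows:
        s += r + "\n"
    return s

def build_join(rows):
    """Single join: linear in total output size."""
    return "".join(r + "\n" for r in rows)

def build_stringio(rows):
    """StringIO appends to an internal buffer: also linear."""
    buf = io.StringIO()
    for r in rows:
        buf.write(r + "\n")
    return buf.getvalue()
```

All three produce identical output; only the allocation pattern differs.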

Tromador commented 6 years ago

Well - never mind then. If I should come across some super efficient way of doing it which we hadn't considered, I'll let you know. Thanks for looking at it.

eyeonus commented 6 years ago

No worries.

Tromador commented 6 years ago

Bernd notes in https://github.com/eyeonus/EDDBlink-listener/issues/7:

"If you're running in WAL journal mode there is no need to stop the updater while exporting. WAL allows multiple readers and one writer. Only the EDDB update can't run at the same time."

So I'm reopening this. I am running the server in WAL. Once I'm happy with the database tunings I'm running on the server, we can push them to the main TD and potentially remove the exporter's busy signal.
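Bernd's point can be demonstrated with Python's built-in sqlite3 module (throwaway file and table names, not the listener's actual schema): in WAL mode a reader sees the last committed snapshot and is not blocked by an in-flight write.

```python
import os
import sqlite3
import tempfile

# Hypothetical throwaway database, just to show the WAL behaviour.
db = os.path.join(tempfile.mkdtemp(), "wal-demo.db")

writer = sqlite3.connect(db)
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE listing (station_id INTEGER, price INTEGER)")
writer.execute("INSERT INTO listing VALUES (1, 100)")
writer.commit()

reader = sqlite3.connect(db)

# Start (but don't commit) a write; in WAL mode this does not block readers.
writer.execute("INSERT INTO listing VALUES (2, 200)")
rows_mid_write = reader.execute(
    "SELECT COUNT(*) FROM listing").fetchone()[0]  # sees only committed data

writer.commit()
rows_after_commit = reader.execute(
    "SELECT COUNT(*) FROM listing").fetchone()[0]  # now sees the new row
```

With rollback journaling the reader's query would instead have to wait for (or be blocked by) the writer, which is what the busy signal currently works around.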

eyeonus commented 6 years ago

Sounds good to me.