g-andrade / locus

MMDB reader for geolocation and ASN lookup of IP addresses
https://hexdocs.pm/locus/
MIT License

Corrupted DB download when there is packet loss on the network #11

leonardb opened 4 years ago

leonardb commented 4 years ago

Branch: 1.9.0-beta, Erlang/OTP: 20.3

When deploying, we've noticed a number of nodes crashing due to corrupted GeoLite2-City.mmdb.gz downloads, and it has been quite difficult to replicate.

After some trial and error playing with netem to introduce network issues, we seem to be able to replicate it consistently by introducing packet loss.

I'm not sure whether this is a bug in locus or in httpc's stream handling, as no errors are received in locus_http_download:handle_httpc_message/2.

It runs through the intended stream_start -> stream -> stream_end sequence with no errors, but the resulting data is corrupt.
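
For context, here is roughly what httpc's asynchronous streaming flow looks like (a minimal sketch, not locus's actual downloader; it assumes the inets application - and ssl, for https - has been started):

%% Issue an asynchronous request and have httpc stream the body
%% back to the calling process as messages.
download(Url) ->
    {ok, RequestId} =
        httpc:request(get, {Url, []}, [],
                      [{sync, false}, {stream, self}]),
    collect(RequestId, <<>>).

%% Accumulate body parts until stream_end arrives; this is the
%% stream_start -> stream -> stream_end sequence mentioned above.
collect(RequestId, Acc) ->
    receive
        {http, {RequestId, stream_start, _Headers}} ->
            collect(RequestId, Acc);
        {http, {RequestId, stream, BodyPart}} ->
            collect(RequestId, <<Acc/binary, BodyPart/binary>>);
        {http, {RequestId, stream_end, _Headers}} ->
            {ok, Acc};
        {http, {RequestId, {error, Reason}}} ->
            {error, Reason}
    after 30000 ->
        {error, timeout}
    end.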

To replicate consistently, I used a fairly high packet-loss setting:

sudo tc qdisc add dev eth0 root netem loss 25%

To disable it after testing, use:

sudo tc qdisc del dev eth0 root

diff and console output: https://gist.github.com/leonardb/4d2b1755d13af1e65830b61767d18c68

g-andrade commented 4 years ago

What a bizarre issue.

I used your commands for inducing packet loss but couldn't replicate it after dozens of attempts. Are you using OTP 20.3 from the website, or did you build it directly from git? I did the latter, and performed the experiment using OTP 20.3.8.24.

I'm going to introduce a simple constraint: checking whether the size of the response body matches the value of the content-length response header (if present). It won't solve the problem, but it might point us in the right direction.
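
For illustration, such a check could look roughly like this (a sketch, not the actual patch; it assumes httpc's convention of lowercase header names with string values):

%% Compare the accumulated body size against the content-length
%% response header, when one was sent.
check_body_size(Headers, Body) ->
    case lists:keyfind("content-length", 1, Headers) of
        false ->
            ok; % no content-length header; nothing to verify
        {"content-length", Declared} ->
            Actual = integer_to_list(byte_size(Body)),
            if
                Actual =:= Declared -> ok;
                true ->
                    {error, {body_size_mismatch,
                             #{actual_content_length => Actual,
                               declared_content_length => Declared}}}
            end
    end.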

Since you're downloading directly from MaxMind, it would also be fairly easy to automate the download of the checksum files, but I've avoided doing that due to a handful of bad edge cases; in any case, it wouldn't solve this particular problem either.
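
Purely for reference, such a verification could be sketched like this (hypothetical code, not part of locus; it assumes a sidecar file in the usual "<hex digest>  <filename>" format, e.g. GeoLite2-City.mmdb.gz.sha256):

%% Hypothetical: verify the downloaded body against the first token
%% of a checksum sidecar file.
verify_sha256(Body, ChecksumFileContents) ->
    [ExpectedHex | _] = string:lexemes(ChecksumFileContents, " \n"),
    Digest = crypto:hash(sha256, Body),
    ActualHex = lists:flatten(
                  [io_lib:format("~2.16.0b", [Byte]) || <<Byte>> <= Digest]),
    case string:lowercase(ExpectedHex) =:= ActualHex of
        true -> ok;
        false -> {error, checksum_mismatch}
    end.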

leonardb commented 4 years ago

It's a 20.3 patch version built with kerl. I'll add more info tomorrow when I get to the office. I'll also check with Deva on our other projects; I know they ran into the same issue, and I'm fairly sure they're running a different version. Using content length and checksums is probably a good idea, maybe allowing locus to gracefully handle intermittent failures.

Thanks for looking at this; as I said, I'll try to gather more data tomorrow.


g-andrade commented 4 years ago

I've pushed the content-length check as commit 75ec584edcb.

g-andrade commented 4 years ago

[...] maybe allowing locus to gracefully handle intermittent failures.

Well, it does handle intermittent failures: if you can afford to boot your system without geolocation being ready, I highly recommend it - download attempts will be retried every minute[1] unless you already had the database cached on the file system.

However, if geolocation is absolutely required, then maybe a different strategy can be employed[2]: repeatedly await the database loader in a loop until it succeeds, perhaps logging details of any errors.

[1]: This particular interval is customizable through the pre_readiness_update_period loader option, in milliseconds.
[2]: According to the stack trace of the crash, I believe this would be whatever code you've got at smlib_sup:init:22, in your application.
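
For example, assuming the loader accepts an options list as a third argument, the retry interval could be shortened like so:

%% Retry failed downloads every 15 seconds instead of the default
%% one minute (the options-list argument is an assumption here; see
%% the locus documentation for the exact start_loader signature).
ok = locus:start_loader(geoip, DatabaseURL,
                        [{pre_readiness_update_period, timer:seconds(15)}]).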

leonardb commented 4 years ago

I ended up doing as suggested and adding a busy-wait loop.

%% Busy-wait until the database is loaded, restarting the loader and
%% retrying whenever a load attempt fails or times out.
init_locus() ->
    ok = locus:start_loader(?GEODB_NAME, ?LOCUS_DB),
    case locus:wait_for_loader(?GEODB_NAME, timer:seconds(30)) of
        {ok, _DatabaseVersion} ->
            lager:info("Locus loaded database"),
            ok;
        Error ->
            %% Stop the loader before recursing, so that start_loader
            %% succeeds on the next attempt.
            locus:stop_loader(?GEODB_NAME),
            lager:error("Locus init error: ~p", [Error]),
            init_locus()
    end.

And with testing:

2020-01-10 19:34:03.050 UTC [error] <0.1500.0>@smlib_sup:init_locus:30 Locus init error: {error,{body_size_mismatch,#{actual_content_length => "28691056",declared_content_length => "28704939"}}}
2020-01-10 19:34:03.050 UTC [error] <0.1501.0> [locus] geoip database failed to load (remote): {body_size_mismatch,#{actual_content_length => "28691056",declared_content_length => "28704939"}}
2020-01-10 19:34:08.054 UTC [error] <0.1512.0> [locus] geoip database download failed to start: timeout
2020-01-10 19:34:08.054 UTC [error] <0.1500.0>@smlib_sup:init_locus:30 Locus init error: {error,{timeout,waiting_stream_start}}
2020-01-10 19:34:08.054 UTC [error] <0.1512.0> [locus] geoip database failed to load (remote): {timeout,waiting_stream_start}
2020-01-10 19:34:10.408 UTC [info] <0.1500.0>@smlib_sup:init_locus:26 Locus loaded database

I still find it quite odd that httpc returns a stream_end when it has clearly not received the full body.

g-andrade commented 4 years ago

locus 1.10.0, which was released earlier today, does some things differently and might be of use to you in working around the packet loss issue: