MetPX / sarracenia

https://MetPX.github.io/sarracenia
GNU General Public License v2.0

Incomplete XMLs being distributed since January 21, 2019 #131

Open mikegabriel opened 5 years ago

mikegabriel commented 5 years ago

Hello,

I am not sure if this has already been reported, but starting January 21, sr_subscribe has been downloading incomplete XML files for certain locations.

Specifically, the closing tag </siteData> is being truncated, with files ending mid-tag at </sit

I've updated to the latest version of sarracenia, but the problem persists.

For reference, see the latest file (as of this writing) for s0000822 (Moose Jaw, SK) and look at line 866.

Are there troubles on your end, or do you have suggestions?

Thanks!

matthewdarwin commented 5 years ago

I'm getting tons of these too. I reported this to ec.dps-client.ec@canada.ca yesterday.

petersilva commented 5 years ago

It's likely an internal problem.

petersilva commented 5 years ago

When you see them, do you check on the datamart whether they are OK there? I don't see any incomplete files on the server at the moment, but I don't know if that is because the incomplete data gets replaced afterward, or if there is a problem with the download.

mikegabriel commented 5 years ago

@petersilva In my manual testing, the datamart seems correct, but I will find out shortly. I am just deploying a patch that will fall back to the datamart on malformed XML.

matthewdarwin commented 5 years ago

They are incomplete on the datamart; the failure happens even when fetching with 'wget' or similar. I have received 11,114 incomplete XML documents since midnight EST today.

petersilva commented 5 years ago

OK, so we have to trace it back to the systems feeding the datamart.

petersilva commented 5 years ago

It is hard to tell, because these files are updated very frequently; it could be that the file was incomplete when the initial download was triggered, but got updated afterward.

matthewdarwin commented 5 years ago

This has been an ongoing problem for years, but it has been worse over the last 2 days...

select date(from_unixtime(timestamp)) as date, count(*) from download_log where status = 591 group by date having date > '2019-01-01';
+------------+----------+
| date       | count(*) |
+------------+----------+
| 2019-01-02 |      459 |
| 2019-01-03 |      926 |
| 2019-01-04 |      337 |
| 2019-01-05 |      446 |
| 2019-01-06 |      606 |
| 2019-01-07 |      384 |
| 2019-01-08 |      179 |
| 2019-01-09 |      107 |
| 2019-01-10 |      191 |
| 2019-01-11 |      221 |
| 2019-01-12 |       63 |
| 2019-01-13 |      261 |
| 2019-01-14 |      366 |
| 2019-01-15 |     4521 |
| 2019-01-16 |     2080 |
| 2019-01-17 |       84 |
| 2019-01-18 |       91 |
| 2019-01-19 |      434 |
| 2019-01-20 |     1733 |
| 2019-01-21 |     2890 |
| 2019-01-22 |    10364 |
| 2019-01-23 |    11114 |
+------------+----------+
22 rows in set (5.37 sec)

petersilva commented 5 years ago

The problem was supposed to be solved as of last fall. This is not good.

mikegabriel commented 5 years ago

I can echo that this has been ongoing as well; normally I'll just lose an hourly forecast for a location and catch up the next hour. Since the 21st, though, some places haven't had a successful push yet.

It seemed to me like this morning was a bit better than yesterday, but @matthewdarwin's logs indicate otherwise.

matthewdarwin commented 5 years ago

Got this back from DPS Client 10 minutes ago:

<<< It seems the situation is better from now, can you confirm ? Our analysts have done some cleanup and rebooted one of the servers >>>

matthewdarwin commented 5 years ago

Let's keep an eye on it...

select date(from_unixtime(timestamp)) as date, hour(from_unixtime(timestamp)) as hour, count(*) from download_log where status = 591 group by date, hour having date > '2019-01-22';
+------------+------+----------+
| date       | hour | count(*) |
+------------+------+----------+
| 2019-01-23 |    0 |     5973 |
| 2019-01-23 |    1 |       10 |
| 2019-01-23 |    2 |       26 |
| 2019-01-23 |    3 |       20 |
| 2019-01-23 |    4 |      461 |
| 2019-01-23 |    5 |      723 |
| 2019-01-23 |    6 |        9 |
| 2019-01-23 |    7 |        5 |
| 2019-01-23 |    8 |        1 |
| 2019-01-23 |    9 |      119 |
| 2019-01-23 |   10 |     1453 |
| 2019-01-23 |   11 |      542 |
| 2019-01-23 |   12 |     1333 |
| 2019-01-23 |   13 |      155 |
| 2019-01-23 |   14 |      279 |
| 2019-01-23 |   15 |        5 |
| 2019-01-23 |   16 |        8 |
+------------+------+----------+
17 rows in set (6.14 sec)

matthewdarwin commented 5 years ago

Definitely better...

+------------+------+----------+
| date       | hour | count(*) |
+------------+------+----------+
| 2019-01-23 |   16 |       36 |
| 2019-01-23 |   17 |       46 |
| 2019-01-23 |   18 |        2 |
| 2019-01-23 |   19 |        2 |
| 2019-01-23 |   23 |      352 |
| 2019-01-24 |    0 |       12 |
| 2019-01-24 |    1 |       12 |
| 2019-01-24 |    3 |        5 |
| 2019-01-24 |    4 |        1 |
| 2019-01-24 |    7 |        2 |
+------------+------+----------+

petersilva commented 5 years ago

Some systems were gradually deteriorating over months and were not properly monitored. We are adding monitoring to address that... but as far as I am concerned, with the configuration in place now, 0 should be attainable. So we still have work to do.

matthewdarwin commented 5 years ago

Great, thanks Peter. I look forward to the ongoing improvements.

If you want to see what it looks like from my side at any time, feel free to check https://www.weatherstats.ca/debug/ec_dd_network_error.html. This is my error log (minus the MD5 checksum errors, which happen quite a lot).

mikegabriel commented 5 years ago

Sounds good, looking forward to any improvements. Thanks Peter.

mikegabriel commented 5 years ago

Looks like it’s back to consistent failures today. Matthew’s logs are showing the same things I’m seeing. Started around 09:00 UTC.

matthewdarwin commented 5 years ago

Indeed the problem has returned.

+------------+------+----------+
| date       | hour | count(*) |
+------------+------+----------+
| 2019-01-26 |    0 |        2 |
| 2019-01-26 |    1 |        2 |
| 2019-01-26 |    4 |      138 |
| 2019-01-26 |    5 |        4 |
| 2019-01-26 |    7 |        1 |
| 2019-01-26 |    8 |        2 |
| 2019-01-26 |   10 |       39 |
| 2019-01-26 |   11 |     2101 |
| 2019-01-26 |   12 |        3 |
| 2019-01-26 |   15 |        3 |
| 2019-01-26 |   16 |      510 |
| 2019-01-26 |   17 |     2632 |
+------------+------+----------+

mikegabriel commented 5 years ago

Looks like the same issue though; files are just getting cut off mid-XML.

$ tail s0000863_e.xml 
 <almanac>
 <temperature class="extremeMax" period="2008-2018" unitType="metric" units="C" year="2011">11.4</temperature>
 <temperature class="extremeMin" period="2008-2018" unitType="metric" units="C" year="2009">-9.0</temperature>
 <temperature class="normalMax" unitType="metric" units="C"/>
 <temperature class="normalMin" unitType="metric" units="C"/>
 <temperature class="normalMean" unitType="metric" units="C"/>
 <precipitation class="extremeRainfall" period="-" unitType="metric" units="mm" year=""/>
 <precipitation class="extremeSnowfall" period="-" unitType="metric" units="cm" year=""/>
 <precipitation class="extremePrecipitation" period="2008-2017" unitType="metric" units="mm" year="2016">18.8</precipitation>
 <precipitation class="extremeSnowOnGround" period="2015-2018" unitType="metric" units="cm" year="2015">0.0</precipitation>
$

Missing </almanac></siteData>.
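
A cheap way to catch these before handing the file to a parser is to check that the document actually ends with the closing tag. Roughly (a sketch, not the exact code I run):

    # A complete citypage document ends with the closing </siteData> tag;
    # a file cut off mid-write will not, so this catches most truncations.
    def looks_complete(path):
        with open(path, 'rb') as f:
            data = f.read()
        # tolerate trailing whitespace/newlines after the final tag
        return data.rstrip().endswith(b'</siteData>')

    if not looks_complete('s0000863_e.xml'):
        print('truncated citypage file, will retry')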

mikegabriel commented 5 years ago

Looking a bit further, I am seeing that the onfly_checksum is missing again. The last time I saw this happen was in September; see https://github.com/MetPX/sarracenia/issues/98

2019-01-26 18:04:03,902 [WARNING] onfly_checksum b777b9f0a9fe21bd8868778210752fc9 differ from message 95e3d5aa8528f87450edaae5781a0394
2019-01-26 18:04:03,906 [INFO] appended to retry list file 20190126140604.741638 http://dd4.weather.gc.ca /citypage_weather/xml/NS/s0000462_e.xml
2019-01-26 18:04:03,906 [ERROR] sr_subscribe/run going badly, so sleeping for 0.01 Type: <class 'AttributeError'>, Value: 'sr_subscribe' object has no attribute 'onfly_checksum',  ...
2019-01-26 18:04:04,177 [ERROR] util/writelocal mismatched file length writing s0000763_e.xml.tmp. Message said to expect 32369 bytes.  Got 31601 bytes.
2019-01-26 18:04:04,239 [WARNING] onfly_checksum cc970be5f22ff347d5a4c27a39364f82 differ from message 71d66c4f8e94eb0f8d4a9f4c18964132
2019-01-26 18:04:04,239 [INFO] appended to retry list file 20190126140604.749477 http://dd4.weather.gc.ca /citypage_weather/xml/ON/s0000763_e.xml
2019-01-26 18:04:04,239 [ERROR] sr_subscribe/run going badly, so sleeping for 0.02 Type: <class 'AttributeError'>, Value: 'sr_subscribe' object has no attribute 'onfly_checksum',  ...
2019-01-26 18:04:04,466 [ERROR] util/writelocal mismatched file length writing s0000707_e.xml.tmp. Message said to expect 30860 bytes.  Got 30212 bytes.
2019-01-26 18:04:04,535 [WARNING] onfly_checksum bc72c94e6cb1d9129c5e55afdf156ebf differ from message f56830511030540259541fc09a16bade
2019-01-26 18:04:04,535 [INFO] appended to retry list file 20190126140604.751064 http://dd4.weather.gc.ca /citypage_weather/xml/ON/s0000707_e.xml
2019-01-26 18:04:04,535 [ERROR] sr_subscribe/run going badly, so sleeping for 0.04 Type: <class 'AttributeError'>, Value: 'sr_subscribe' object has no attribute 'onfly_checksum',  ...
2019-01-26 18:04:04,829 [ERROR] util/writelocal mismatched file length writing s0000641_e.xml.tmp. Message said to expect 31312 bytes.  Got 30697 bytes.
2019-01-26 18:04:04,879 [WARNING] onfly_checksum 8d042d31d37ec24ff8a65e033d9e2aa9 differ from message 6efdc3862634156882bf9f1db6e514ea
2019-01-26 18:04:04,880 [INFO] appended to retry list file 20190126140604.752588 http://dd4.weather.gc.ca /citypage_weather/xml/ON/s0000641_e.xml
2019-01-26 18:04:04,880 [ERROR] sr_subscribe/run going badly, so sleeping for 0.08 Type: <class 'AttributeError'>, Value: 'sr_subscribe' object has no attribute 'onfly_checksum',  ...
2019-01-26 18:04:05,398 [ERROR] util/writelocal mismatched file length writing s0000386_e.xml.tmp. Message said to expect 29784 bytes.  Got 29391 bytes.
2019-01-26 18:04:05,451 [WARNING] onfly_checksum efeddb952a38b697e8565b43793ee274 differ from message 20b5cb7d3a2e92c7f7e7fcc640dae753
2019-01-26 18:04:05,451 [INFO] appended to retry list file 20190126140614.781660 http://dd4.weather.gc.ca /citypage_weather/xml/NB/s0000386_e.xml
2019-01-26 18:04:05,452 [ERROR] sr_subscribe/run going badly, so sleeping for 0.16 Type: <class 'AttributeError'>, Value: 'sr_subscribe' object has no attribute 'onfly_checksum',  ...
2019-01-26 18:04:05,915 [ERROR] util/writelocal mismatched file length writing s0000597_e.xml.tmp. Message said to expect 30526 bytes.  Got 29655 bytes.
2019-01-26 18:04:05,966 [WARNING] onfly_checksum 97583b932f38b29d20fa536082da4c6e differ from message b07cafd91c06b2dd7809795d22c39c7d
2019-01-26 18:04:05,966 [INFO] appended to retry list file 20190126140614.786393 http://dd4.weather.gc.ca /citypage_weather/xml/ON/s0000597_e.xml
2019-01-26 18:04:05,967 [ERROR] sr_subscribe/run going badly, so sleeping for 0.32 Type: <class 'AttributeError'>, Value: 'sr_subscribe' object has no attribute 'onfly_checksum',  ...
2019-01-26 18:04:06,668 [ERROR] util/writelocal mismatched file length writing s0000253_e.xml.tmp. Message said to expect 29607 bytes.  Got 28959 bytes.
2019-01-26 18:04:06,722 [WARNING] onfly_checksum e0065595075008be7f0d7b96e37be4d6 differ from message 0ee7dafb86aa2ccf953b7b8971b83332
2019-01-26 18:04:06,723 [INFO] appended to retry list file 20190126140614.801288 http://dd4.weather.gc.ca /citypage_weather/xml/QC/s0000253_e.xml
2019-01-26 18:04:06,723 [ERROR] sr_subscribe/run going badly, so sleeping for 0.64 Type: <class 'AttributeError'>, Value: 'sr_subscribe' object has no attribute 'onfly_checksum',  ...
2019-01-26 18:04:07,757 [ERROR] util/writelocal mismatched file length writing s0000708_e.xml.tmp. Message said to expect 30860 bytes.  Got 30205 bytes.
2019-01-26 18:04:07,808 [WARNING] onfly_checksum 6b254190e6b8b47fe1ba67987753afa5 differ from message d525e3777bb814ff790588c27268ab0a
2019-01-26 18:04:07,808 [INFO] appended to retry list file 20190126140614.809813 http://dd4.weather.gc.ca /citypage_weather/xml/ON/s0000708_e.xml
2019-01-26 18:04:07,808 [ERROR] sr_subscribe/run going badly, so sleeping for 1.28 Type: <class 'AttributeError'>, Value: 'sr_subscribe' object has no attribute 'onfly_checksum',  ...
2019-01-26 18:04:09,688 [ERROR] util/writelocal mismatched file length writing s0000642_e.xml.tmp. Message said to expect 31308 bytes.  Got 30703 bytes.
2019-01-26 18:04:09,742 [WARNING] onfly_checksum f1bbc064f078f339b016e0a43be487a7 differ from message 477a50cf1999b0b301d829672553d1b6
2019-01-26 18:04:09,742 [INFO] appended to retry list file 20190126140614.822699 http://dd4.weather.gc.ca /citypage_weather/xml/ON/s0000642_e.xml
2019-01-26 18:04:09,742 [ERROR] sr_subscribe/run going badly, so sleeping for 2.56 Type: <class 'AttributeError'>, Value: 'sr_subscribe' object has no attribute 'onfly_checksum',  ...
2019-01-26 18:04:12,564 [ERROR] util/writelocal mismatched file length writing s0000610_e.xml.tmp. Message said to expect 33450 bytes.  Got 32574 bytes.
2019-01-26 18:04:12,618 [WARNING] onfly_checksum 37313418a5a97f5bdbbe750ab22773d8 differ from message dba1e02530862fa9206a6ab465436018
2019-01-26 18:04:12,618 [INFO] appended to retry list file 20190126140614.835802 http://dd4.weather.gc.ca /citypage_weather/xml/QC/s0000610_e.xml
2019-01-26 18:04:12,619 [ERROR] sr_subscribe/run going badly, so sleeping for 5.12 Type: <class 'AttributeError'>, Value: 'sr_subscribe' object has no attribute 'onfly_checksum',  ...
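
For context, the failing check amounts to comparing the bytes actually written against the size and MD5 sum advertised in the message, roughly like this (the function and field names here are illustrative, not sarracenia's actual internals):

    import hashlib

    # Verify a downloaded file against the length and MD5 sum the
    # announcement claimed; either mismatch means a bad or partial copy.
    def verify_download(path, expected_size, expected_md5):
        with open(path, 'rb') as f:
            data = f.read()
        if len(data) != expected_size:
            print('mismatched file length: expected %d bytes, got %d' % (expected_size, len(data)))
            return False
        if hashlib.md5(data).hexdigest() != expected_md5:
            print('checksum differs from message')
            return False
        return True
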
petersilva commented 5 years ago

There is no server load issue this time. The last time the guys worked on it, they put in measures that prevent overload. We just changed something (the inflight setting on the sarra delivering citypage files). It may help; see if it is any better now. We aren't likely to fix much on the weekend. On Monday, the analysts will take a look again.

matthewdarwin commented 5 years ago

No improvement... Since my last update, the numbers are:

| 2019-01-26 |   19 |        1 |
| 2019-01-26 |   22 |       27 |
| 2019-01-26 |   23 |     2024 |
| 2019-01-27 |    1 |        1 |
| 2019-01-27 |    4 |     1963 |
| 2019-01-27 |    5 |     1596 |
| 2019-01-27 |   10 |       64 |

petersilva commented 5 years ago

Plugin available to replicate the issue: https://github.com/MetPX/sarracenia/blob/master/sarra/plugins/file_citypage_check.py

We will have someone run something like this next week.
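
For anyone unfamiliar with the v2 plugin style, the shape of such an on_file check is roughly the following (a simplified sketch, not the linked plugin's actual contents; the parent.msg attribute names and the registration lines follow the v2 plugin conventions):

    class Citypage_Check(object):
        def __init__(self, parent):
            pass

        def on_file(self, parent):
            # reassemble the local path of the file that was just downloaded
            path = parent.msg.new_dir + '/' + parent.msg.new_file
            with open(path, 'rb') as f:
                data = f.read()
            if not data.rstrip().endswith(b'</siteData>'):
                parent.logger.error('incomplete citypage: %s' % path)
                return False  # False tells sarracenia to stop processing this file
            return True

    citypage_check = Citypage_Check(self)
    self.on_file = citypage_check.on_file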

matthewdarwin commented 5 years ago

I assume the files are being written in an atomic fashion (write to a temp file and move the temp file into the desired final location), so that we aren't getting files distributed while they are in the middle of being updated.
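
i.e. the usual pattern, something like:

    import os
    import tempfile

    # Write to a temp file in the same directory, then rename it over the
    # destination. rename/replace is atomic on POSIX filesystems, so readers
    # see either the old file or the new one, never a half-written one.
    def publish_atomically(data, dest):
        directory = os.path.dirname(dest) or '.'
        fd, tmp = tempfile.mkstemp(dir=directory, suffix='.tmp')
        try:
            with os.fdopen(fd, 'wb') as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())
            os.replace(tmp, dest)
        except Exception:
            os.unlink(tmp)
            raise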

petersilva commented 5 years ago

You assume a lot. It's a fundamental problem that the files are constantly being re-written by several processes that do not communicate with each other or with the transport system. We are stuck having to figure out when a gap has occurred, and copy off a new version.

petersilva commented 5 years ago

It's only city pages we have this problem with. For all other data types, your assumption is good, and we do the right thing.

mikegabriel commented 5 years ago

Plugin looks good; it should catch all of the errors that I have seen. I've since been able to patch my scripts to grab from the datamart (dd.weather.gc.ca specifically) on an XML failure, and so far I have had good success with that source today. Generally my server would fetch the remote file seconds after the sr_subscribe download and parsing failure, so dd seems to be getting accurate files pretty quickly.
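
The fallback itself is simple. Roughly (a sketch; the URL layout matches the paths visible in the logs above):

    import urllib.request
    import xml.etree.ElementTree as ET

    # Parse the locally delivered file; if it is malformed (e.g. truncated),
    # re-fetch the same path from the datamart and parse that instead.
    def load_citypage(local_path, remote_path):
        try:
            return ET.parse(local_path).getroot()
        except ET.ParseError:
            url = 'http://dd.weather.gc.ca' + remote_path
            with urllib.request.urlopen(url, timeout=30) as resp:
                return ET.fromstring(resp.read())

    root = load_citypage('s0000763_e.xml', '/citypage_weather/xml/ON/s0000763_e.xml')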

I appreciate you taking time this weekend to look into it.

matthewdarwin commented 5 years ago

Yuck. This explains the duplicate forecast periods I get multiple times every day. Since you are writing a filter, maybe you could filter those out too? e.g. (Monday -> Monday Night -> Monday Night -> Tuesday -> Tuesday Night).

This is my code to deal with that problem:

        # sometimes the forecast is messed up.  Check for duplicate section titles
        # if any found, throw away the entire forecast
        my %xperiods;
        foreach my $day (@{$$forecast{forecast}}) {
                my $period = $$day{period}{textForecastName};
                if ($xperiods{$period}) {
                        $exec->warning("duplicate forecast period=<$period>");
                        return;
                }
                $xperiods{$period} = 1;
        }

This problem was reported to DPS Client back in early 2017, and had been going on for a long time before that.

matthewdarwin commented 5 years ago

@mikegabriel I don't use the sr download feature at all. I have a custom plugin, so everything gets routed through my own downloader and I can retry until I get the files properly (this allows for network communication errors and whatnot). Also, there are times I don't even get an sr alert that there is a file, so I have to regularly do a directory listing as well, see what files are there, and grab them.
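
In outline, the approach is something like this (a sketch, not my production code):

    import re
    import time
    import urllib.request

    # Retry a download until the content verifies (checksum, completeness, ...).
    def fetch_with_retry(url, verify, attempts=5, delay=30):
        for _ in range(attempts):
            data = urllib.request.urlopen(url, timeout=30).read()
            if verify(data):
                return data
            time.sleep(delay)
        raise RuntimeError('giving up on ' + url)

    # Scrape an HTTP directory index to catch files that were never announced.
    def list_remote(dir_url):
        html = urllib.request.urlopen(dir_url, timeout=30).read().decode('utf-8', 'replace')
        return set(re.findall(r'href="([^"]+\.xml)"', html))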

matthewdarwin commented 5 years ago

Today's download results (up to 23:15 EST):

MariaDB [weather]> select date(from_unixtime(timestamp)) as date, hour(from_unixtime(timestamp)) as hour, count(*) from download_log where status = 591 group by date, hour having date > '2019-01-26';
+------------+------+----------+
| date       | hour | count(*) |
+------------+------+----------+
| 2019-01-27 |    1 |        1 |
| 2019-01-27 |    4 |     1963 |
| 2019-01-27 |    5 |     1596 |
| 2019-01-27 |   10 |      186 |
| 2019-01-27 |   11 |     5676 |
| 2019-01-27 |   12 |        1 |
| 2019-01-27 |   13 |       10 |
| 2019-01-27 |   15 |        1 |
| 2019-01-27 |   16 |      740 |
| 2019-01-27 |   17 |     4105 |
| 2019-01-27 |   19 |        9 |
| 2019-01-27 |   22 |       36 |
| 2019-01-27 |   23 |      457 |
+------------+------+----------+
13 rows in set (3.58 sec)

alexandreleroux commented 5 years ago

> It's only city pages we have this problem with. For all other data types, your assumption is good, and we do the right thing.

@petersilva , if this helps, we also have this issue with the marine_weather XMLs. Thanks!

petersilva commented 5 years ago

One of the analysts on my team just implemented (17h25 Eastern) the same check that we used to detect the problem, to prevent posting of incomplete files in the first place. From now on, there should be no incomplete city pages. ... marine... we didn't do that yet; if it works for city pages, we'll apply it there also. Also, I wasn't clear: marine pages are produced by the same type of process as citypages, so they would indeed be affected similarly.

matthewdarwin commented 5 years ago

So far so good... no errors seen since 16:45 EST

petersilva commented 5 years ago

By the way, we studied it today, and the weird thing is we are seeing files sit in this incomplete state for many minutes (6!), which is what caused the whole thing in the first place. We still need to talk to upstream about why the files are messed up at source for so long. This fix just stops the weirdness from propagating to clients.

matthewdarwin commented 5 years ago

I spoke too soon... got a few errors just now...

petersilva commented 5 years ago

Sigh... we were avoiding a full XML parser by just checking for the final tag. Perhaps these cases include the final tag but are corrupt in some other way? Or perhaps our analyst decided to back out the change for some reason. Will find out later today.
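
To illustrate the gap between the two checks: a file can end with the right tag and still be malformed in the middle, which only a real parse catches. A quick sketch:

    import xml.etree.ElementTree as ET

    def tag_check(data):
        # the cheap test: look only at the tail of the document
        return data.rstrip().endswith(b'</siteData>')

    def parse_check(data):
        # the expensive test: actually parse the document
        try:
            ET.fromstring(data)
            return True
        except ET.ParseError:
            return False

    # a document missing an interior closing tag:
    broken = b'<siteData><almanac><temperature>1.4</almanac></siteData>'
    print(tag_check(broken))    # True  -- the tail check is fooled
    print(parse_check(broken))  # False -- the mismatched tag is caught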

matthewdarwin commented 5 years ago

I can send you all the examples I have, if you would like. I can tell you that some of them have more than the last tag missing.

ls -l corrupt.* | wc -l
192

mikegabriel commented 5 years ago

I haven't analyzed a lot of the files, but I have seen missing tags as well; I suspect the file just gets cut short.

petersilva commented 5 years ago

Do any of them have the last tag? Trying to figure out if the last tag is a good enough test.

matthewdarwin commented 5 years ago

The last-tag test should be fine from what I have seen. There are files that end with:

</siteData

(missing closing greater-than)

As well as files that end earlier than that.

petersilva commented 5 years ago

We check for the whole thing (including the >).

petersilva commented 5 years ago

OK, we did another intervention @ 16h30 Eastern time, which checks again at a later stage. We traced one of the broken cases and confirmed that it would be caught by this later check. Should be good this time: 🤞

matthewdarwin commented 5 years ago

Much better, but still missing some cases...

+------------+------+----------+
| date       | hour | count(*) |
+------------+------+----------+
| 2019-01-29 |   15 |        5 |
| 2019-01-29 |   23 |       31 |
| 2019-01-30 |    0 |        1 |
| 2019-01-30 |    4 |       95 |
| 2019-01-30 |    5 |        8 |
| 2019-01-30 |    6 |        2 |
+------------+------+----------+

mikegabriel commented 5 years ago

Files may be coming through complete, but they are still failing their checksums; I am seeing lots missing the onfly_checksum used in the check_part.py plugin.

matthewdarwin commented 5 years ago

Checksums are yet another problem that has been ongoing for years...

select date(from_unixtime(timestamp)) as date, hour(from_unixtime(timestamp)) as hour, count(*) from download_log where status = 592 group by date, hour having date > '2019-01-29';
+------------+------+----------+
| date       | hour | count(*) |
+------------+------+----------+
| 2019-01-30 |    0 |       65 |
| 2019-01-30 |    1 |      925 |
| 2019-01-30 |    2 |       30 |
| 2019-01-30 |    3 |      420 |
| 2019-01-30 |    4 |     1060 |
| 2019-01-30 |    5 |     1845 |
| 2019-01-30 |    6 |      355 |
| 2019-01-30 |    7 |      755 |
| 2019-01-30 |    8 |      355 |
| 2019-01-30 |    9 |     1035 |
| 2019-01-30 |   10 |    10220 |
| 2019-01-30 |   11 |     2747 |
+------------+------+----------+
12 rows in set (9.64 sec)

petersilva commented 5 years ago

> Yuck. This explains the duplicate forecast periods I get multiple times every day. Since you are writing a filter, maybe you could filter those out too? e.g. (Monday -> Monday Night -> Monday Night -> Tuesday -> Tuesday Night).

@matthewdarwin you need to raise this with ECCC; we transport what we get (we can usually figure out when it is complete).

matthewdarwin commented 5 years ago

Thanks Peter. I raised this issue with ECCC back in March 2017.

petersilva commented 5 years ago

OK, more plumbing work, completed at 17h55 Eastern. We'll see tomorrow if it is better.

matthewdarwin commented 5 years ago

Better and better!

select date(from_unixtime(timestamp)) as date, hour(from_unixtime(timestamp)) as hour, count(*) from download_log where status = 591 group by date, hour having date > '2019-01-29';
+------------+------+----------+
| date       | hour | count(*) |
+------------+------+----------+
| 2019-01-30 |    0 |        1 |
| 2019-01-30 |    4 |       95 |
| 2019-01-30 |    5 |        8 |
| 2019-01-30 |    6 |        2 |
| 2019-01-30 |   10 |       65 |
| 2019-01-30 |   11 |       73 |
| 2019-01-30 |   12 |        2 |
| 2019-01-30 |   13 |        2 |
| 2019-01-30 |   14 |        1 |
| 2019-01-30 |   23 |       21 |
| 2019-01-31 |    0 |        1 |
| 2019-01-31 |    5 |       43 |
+------------+------+----------+
12 rows in set (2.73 sec)

matthewdarwin commented 5 years ago

+------------+------+----------+
| date       | hour | count(*) |
+------------+------+----------+
| 2019-01-31 |    0 |        1 |
| 2019-01-31 |    5 |       43 |
| 2019-01-31 |   10 |        1 |
| 2019-01-31 |   11 |       49 |
| 2019-01-31 |   12 |       26 |
| 2019-01-31 |   13 |        7 |
| 2019-01-31 |   14 |        4 |
| 2019-01-31 |   23 |       51 |
| 2019-02-01 |    0 |        3 |
| 2019-02-01 |    4 |       13 |
| 2019-02-01 |    5 |       49 |
+------------+------+----------+