The issue: we had large gaps in our data. After profiling and adding logging, I found that when a NaN was encountered, the whole data package was thrown away. Example:
Collecting clientA.policies.accountsReceivable.page.toJSON, clientA.policies.accountsReceivable.page.navigationStart, clientA.policies.accountsReceivable.page.unloadEventStart, clientA.policies.accountsReceivable.page.unloadEventEnd, clientA.policies.accountsReceivable.page.redirectStart, clientA.policies.accountsReceivable.page.redirectEnd, clientA.policies.accountsReceivable.page.fetchStart, clientA.policies.accountsReceivable.page.domainLookupStart, clientA.policies.accountsReceivable.page.domainLookupEnd, clientA.policies.accountsReceivable.page.connectStart, clientA.policies.accountsReceivable.page.connectEnd, clientA.policies.accountsReceivable.page.requestStart, clientA.policies.accountsReceivable.page.responseStart, clientA.policies.accountsReceivable.page.responseEnd, clientA.policies.accountsReceivable.page.domLoading, clientA.policies.accountsReceivable.page.domInteractive, clientA.policies.accountsReceivable.page.domContentLoadedEventStart, clientA.policies.accountsReceivable.requests.clientA.gateway.post, clientA.policies.accountsReceivable.requests.clientA.gateway.post.sending, clientA.policies.accountsReceivable.requests.clientA.gateway.post.headers, clientA.policies.accountsReceivable.requests.clientA.gateway.post.waiting, clientA.policies.accountsReceivable.requests.clientA.gateway.post.receiving, clientA.policies.accountsReceivable.requests.clientA.gateway.post.2xx, clientA.policies.accountsReceivable.requests.clientA.gateway.post.200 for undefined
Unparsable row: NaN|ms
Collecting clientB.policies.builder.requests.clientB.gateway.post, clientB.policies.builder.requests.clientB.gateway.post.sending, clientB.policies.builder.requests.clientB.gateway.post.headers, clientB.policies.builder.requests.clientB.gateway.post.waiting, clientB.policies.builder.requests.clientB.gateway.post.receiving, clientB.policies.builder.requests.clientB.gateway.post.2xx, clientB.policies.builder.requests.clientB.gateway.post.200 for undefined
Writing to opentsdb: clientB.policies.builder.requests.clientB.gateway.post:159.541
Writing to opentsdb: clientB.policies.builder.requests.clientB.gateway.post.sending:0.155
Writing to opentsdb: clientB.policies.builder.requests.clientB.gateway.post.headers:157.246
Writing to opentsdb: clientB.policies.builder.requests.clientB.gateway.post.waiting:0.176
Writing to opentsdb: clientB.policies.builder.requests.clientB.gateway.post.receiving:0.126
Writing to opentsdb: clientB.policies.builder.requests.clientB.gateway.post.2xx:1
Writing to opentsdb: clientB.policies.builder.requests.clientB.gateway.post.200:1
For clientA, nothing is sent to OpenTSDB: the NaN halts the parse/send loop, so all the metrics in that POST are thrown away. The only anomaly I could find is that clientA had a NaN, in this example on the first row, which appears to be generated client side by the toJSON metric.
This PR defaults a NaN to 0. I'm not sure this is the best workaround, but it can at least start a conversation about how to handle it.
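A minimal sketch of the approach, assuming a hypothetical `sanitizeValue` helper applied to each value before the row is formatted for OpenTSDB (the names here are illustrative, not the actual collector code):

```javascript
// Hypothetical helper: coerce NaN (and any other non-finite value)
// to 0 so one bad metric can't abort the whole batch.
function sanitizeValue(value) {
  const n = Number(value);
  return Number.isFinite(n) ? n : 0;
}

// Illustrative batch formatter: every metric in the POST still
// produces a writable "name:value" row, even if one came in as NaN.
function formatBatch(metrics) {
  return Object.entries(metrics).map(
    ([name, value]) => `${name}:${sanitizeValue(value)}`
  );
}
```

One thing worth discussing: a 0 written this way is indistinguishable from a genuinely measured zero-duration timing, so an alternative is to drop only the offending row and still send the rest of the batch.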