criteo / biggraphite

Simple Scalable Time Series Database
Apache License 2.0

Read BigGraphite metrics from CassandraDB #578

Closed zerosoul13 closed 3 years ago

zerosoul13 commented 3 years ago

Hello Criteo team,

I've been trying to read BigGraphite metric data from Cassandra and found that it wasn't as easy as I thought. The issue comes when trying to produce a dataset for a given metric: I'm having trouble producing the time_start_ms and offset values because I'm unsure how to calculate them.

I've been reading the driver code (https://github.com/criteo/biggraphite/blob/master/biggraphite/drivers/cassandra.py) all day long, but I still have big questions about how to produce the expected values, or whether the values I'm producing are correct.

The data I'm trying to collect has the following retention configuration: retentions = 300s:2y. How can one collect that for a given metric?

Thanks!

mycroft commented 3 years ago

I admit it can be a little tricky. There is already some doc about that, so I'll show you how I do it with an example.

Let's retrieve data for storage.prometheus.dc:graphite_global_relay_carbon_datapoints_received_total:rate_5m.cluster.global.dc.par.

First, I query metadata to get its UUID:

select id from metrics_metadata where name = 'storage.prometheus.dc:graphite_global_relay_carbon_datapoints_received_total:rate_5m.cluster.global.dc.par';

 id
--------------------------------------
 72dad991-8527-5b27-b72d-6d34b4eb52ec

As you might have understood already, datapoints are stored in "stages": each row starts at a defined time_start_ms, and each datapoint in that row can be retrieved using the time_start_ms & its offset (each offset representing one precision period - in your sample, one offset = 300s).

So, starting with the retention setting (300s:2y in your case), we need to compute, for a given timestamp, the start of its row (time_start_ms) and the offset of the timestamp within that row. With those 2 values, you'll be able to retrieve a single datapoint. The number of offsets in a row changes depending on the retention setting (check row_size_ms later).

We do not use 300s:2y here, but 60s:8d as stage 0 retention (it works the same for aggr-ed retention, tables are just differently named). I'll come to that later.

So, 60s:8d is 11520 × 60s (11520 comes from 8 × 86400 / 60), i.e. 11520 points with a 60s precision. The data table to be used will be datapoints_11520p_60s_0.
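The same naming arithmetic can be sketched in a couple of lines, including the 300s:2y case from the original question. Note that the stage0_table helper and the 365-day reading of "2y" are my assumptions here, not BigGraphite API; the actual stage parsing lives in the driver code.

```python
# Derive a stage-0 data table name from a retention's precision and duration,
# following the <points>p_<precision>s_0 naming scheme described above.
def stage0_table(precision_s, duration_s):
    points = duration_s // precision_s
    return "datapoints_%dp_%ds_0" % (points, precision_s)

# mycroft's example, 60s:8d -> 8 * 86400 / 60 = 11520 points
print(stage0_table(60, 8 * 86400))         # datapoints_11520p_60s_0

# the question's retention, 300s:2y -> 2 * 365 * 86400 / 300 = 210240 points
print(stage0_table(300, 2 * 365 * 86400))  # datapoints_210240p_300s_0
```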

We now need the time_start_ms & the offset for the current timestamp. To compute time_start_ms, we need to know time_offset_ms (the offset of the current timestamp within a row). Also, since a row doesn't have a constant offset count across retention settings, we also need to determine its size.

The computations can be simplified as this:

import time
from collections import namedtuple

# Minimal stand-in for the driver's stage object.
Stage = namedtuple("Stage", ["precision_ms"])

def _row_size_ms(stage):
    HOUR = 3600
    _MAX_PARTITION_SIZE = 25000
    _EXPECTED_POINTS_PER_READ = 2000
    _MIN_PARTITION_SIZE_MS = 6 * HOUR

    # In our example (precision_ms = 60*1000):
    # min(60*1000*25000, max(60*1000*2000, 6*3600)) = 120000000
    return min(
        stage.precision_ms * _MAX_PARTITION_SIZE,
        max(stage.precision_ms * _EXPECTED_POINTS_PER_READ, _MIN_PARTITION_SIZE_MS),
    )

stage = Stage(precision_ms=60 * 1000)
timestamp_ms = int(time.time() * 1000)
time_offset_ms = timestamp_ms % _row_size_ms(stage)  # row size is 120000000 here
time_start_ms = timestamp_ms - time_offset_ms

With this, we have the current timestamp offset in the range, so we also have the current time_start_ms.

The last missing value is the offset, which is time_offset_ms // precision_ms (integer division).

Ex:

For a timestamp of 1628234240 (timestamp_ms = 1628234240662), the row_size_ms is 120000000, time_offset_ms = 74240662, time_start_ms = 1628160000000 & offset = 74240662 // 60000 = 1237.
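To double-check, here is that worked example in a few lines (timestamp_ms chosen to match the time_offset_ms/offset values quoted above):

```python
# Constants for the 60s:8d stage computed by _row_size_ms above.
PRECISION_MS = 60 * 1000
ROW_SIZE_MS = 120_000_000

timestamp_ms = 1628234240662
time_offset_ms = timestamp_ms % ROW_SIZE_MS    # 74240662
time_start_ms = timestamp_ms - time_offset_ms  # 1628160000000
offset = time_offset_ms // PRECISION_MS        # 1237
```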

The Cassandra query will then be:

svc-biggraphite@cqlsh:biggraphite> select * from datapoints_11520p_60s_0 where metric = 72dad991-8527-5b27-b72d-6d34b4eb52ec and time_start_ms = 1628160000000 and offset = 1237;

 metric                               | time_start_ms | offset | value
--------------------------------------+---------------+--------+------------
 72dad991-8527-5b27-b72d-6d34b4eb52ec | 1628160000000 |   1237 | 1.1306e+06

About the _0 & _aggr tables: as documented in https://github.com/criteo/biggraphite/blob/master/CASSANDRA_DESIGN.md, if your retention policy has multiple stages, the other stages use aggregated data instead. A single record can have multiple values, so make sure to use the one backed by the most writes (the highest count).
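In other words, when several rows come back for the same point, keep the value with the largest count. A tiny sketch, assuming each aggregated row exposes a value and a count (as described in CASSANDRA_DESIGN.md; the exact column layout is the driver's, not mine):

```python
# Given rows for the same (metric, time_start_ms, offset) in an _aggr table,
# keep the value backed by the most writes.
def pick_best(rows):
    # rows: iterable of (value, count) pairs
    return max(rows, key=lambda r: r[1])[0]

print(pick_best([(1.1e6, 3), (1.13e6, 5)]))  # 1130000.0
```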

You'll find my crappy debug script here: https://gist.github.com/mycroft/1f72460edd1ec4ce299b3544080e28ff - Most important part is the "def point(self)" function.

Hope it helps!

iksaif commented 3 years ago

There is also a bgutil read command for that purpose, no?


mycroft commented 3 years ago

I really hope the question was about understanding how it works, and not only how to retrieve the data using the CLI tool. But indeed, I believe bgutil can do that as well.

Also, it usually takes me more time to find the correct arguments for bgutil, so I dive into the database directly.

zerosoul13 commented 3 years ago

Thank you, @mycroft. This is exactly what I needed. @iksaif, I'm aware of that option but, as @mycroft mentioned, sometimes it's hard to remember all the required parameters, so I find it easier to script my way in (personal challenge).

rodolfo-mora commented 2 years ago

Looks like the above can give us a specific datapoint for a specific date. What about pulling all datapoints for a metric that has multiple retention configurations attached to it?
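Building on mycroft's explanation above, one approach for a whole time range is to enumerate the (time_start_ms, offset) partitions the range spans and query each with a single `WHERE metric = ? AND time_start_ms = ? AND offset >= ? AND offset <= ?`. A sketch under those assumptions (constants are the 60s:8d stage from the example; for a multi-stage retention you would repeat this per stage against the matching _0 or _aggr table):

```python
# Yield (time_start_ms, first_offset, last_offset) triples covering the
# half-open range [from_ms, to_ms) for one stage.
PRECISION_MS = 60 * 1000
ROW_SIZE_MS = 120_000_000

def partitions(from_ms, to_ms):
    start = from_ms - from_ms % ROW_SIZE_MS  # first row boundary
    while start < to_ms:
        lo = max(from_ms, start)                  # range start within this row
        hi = min(to_ms, start + ROW_SIZE_MS)      # range end within this row
        yield (start, (lo - start) // PRECISION_MS, (hi - 1 - start) // PRECISION_MS)
        start += ROW_SIZE_MS
```

For example, one hour starting at the timestamp from mycroft's example stays inside a single row and yields (1628160000000, 1237, 1297); a range crossing a row boundary yields one triple per row.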