thanos-io / thanos

Highly available Prometheus setup with long term storage capabilities. A CNCF Incubating project.
https://thanos.io
Apache License 2.0
13.15k stars · 2.1k forks

compact/downsampling/rewrite: Read chunks directly from object storage. #3416

Open bwplotka opened 4 years ago

bwplotka commented 4 years ago

Instead of downloading all chunk bytes for the blocks we want to process, we could read and stream them through, using a constant amount of memory and disk. We can do that because all of these operations go through chunks sequentially: series are sorted, and chunks are laid out per series, ordered from oldest. See the Prometheus compaction tests for confirmation:

With this println:

func (cr mockChunkReader) Chunk(id uint64) (chunkenc.Chunk, error) {
    fmt.Printf("%p: Asked for %v\n", cr, id)
    chk, ok := cr[id]
    if ok {
        return chk, nil
    }

    return nil, errors.New("Chunk with ref not found")
}

We have:

=== RUN   TestCompaction_populateBlock/Populate_from_single_block._We_expect_the_same_samples_at_the_output.
0xc0002b0300: Asked for 0
0xc0002b0300: Asked for 1
=== RUN   TestCompaction_populateBlock/Populate_from_two_blocks.
0xc0002b0a20: Asked for 0
0xc0002b0d80: Asked for 0
0xc0002b0a20: Asked for 1
0xc0002b0a20: Asked for 2
0xc0002b0d80: Asked for 1
0xc0002b0a20: Asked for 3
=== RUN   TestCompaction_populateBlock/Populate_from_two_blocks;_chunks_with_negative_time.
0xc0002b1950: Asked for 0
0xc0002b1cb0: Asked for 0
0xc0002b1950: Asked for 1
0xc0002b1950: Asked for 2
0xc0002b1cb0: Asked for 1
0xc0002b1950: Asked for 3
=== RUN   TestCompaction_populateBlock/Populate_from_two_blocks_showing_that_order_is_maintained.
0xc0002d88a0: Asked for 0
0xc0002d8c00: Asked for 0
0xc0002d88a0: Asked for 1
0xc0002d88a0: Asked for 2
0xc0002d8c00: Asked for 1
0xc0002d88a0: Asked for 3
=== RUN   TestCompaction_populateBlock/Populate_from_two_blocks_showing_that_order_of_series_is_sorted.
0xc0002d9bc0: Asked for 1
0xc0002d9bc0: Asked for 0
0xc0002d97d0: Asked for 1
0xc0002d97d0: Asked for 0
0xc0002d9bc0: Asked for 2
0xc0002d97d0: Asked for 2
=== RUN   TestCompaction_populateBlock/Populate_from_two_blocks_1:1_duplicated_chunks;_with_negative_timestamps.
0xc000304b10: Asked for 0
0xc000304ea0: Asked for 0
0xc000304b10: Asked for 1
0xc000304b10: Asked for 2
0xc000304ea0: Asked for 1
0xc000304b10: Asked for 3
0xc000304ea0: Asked for 2
0xc000304b10: Asked for 4
=== RUN   TestCompaction_populateBlock/Populate_from_single_block_containing_chunk_outside_of_compact_meta_time_range.
0xc000305a40: Asked for 0
0xc000305a40: Asked for 1
=== RUN   TestCompaction_populateBlock/Populate_from_single_block_containing_extra_chunk
0xc00033a180: Asked for 0
=== RUN   TestCompaction_populateBlock/Populate_from_two_blocks_containing_duplicated_chunk.
0xc00033a7e0: Asked for 0
0xc00033aa80: Asked for 0
0xc00033a7e0: Asked for 1
=== RUN   TestCompaction_populateBlock/Populate_from_three_overlapping_blocks.
0xc00033b320: Asked for 0
0xc00033bad0: Asked for 0
0xc00033b710: Asked for 0
0xc00033b320: Asked for 1
0xc00033b710: Asked for 1
0xc00033bad0: Asked for 1
0xc00033b710: Asked for 2
0xc00033b320: Asked for 2
=== RUN   TestCompaction_populateBlock/Populate_from_three_partially_overlapping_blocks_with_few_full_chunks.
0xc000376e40: Asked for 0
0xc000376e40: Asked for 1
0xc000376e40: Asked for 2
0xc000376e40: Asked for 3
0xc000376e40: Asked for 4
0xc000376e40: Asked for 5
0xc000377890: Asked for 0
0xc000376e40: Asked for 6
0xc000377380: Asked for 0
0xc000376e40: Asked for 7
0xc000376e40: Asked for 8
0xc000376e40: Asked for 9
0xc000376e40: Asked for 10
0xc000377380: Asked for 1
0xc000376e40: Asked for 11
0xc000377380: Asked for 2
0xc000377380: Asked for 3
0xc000377380: Asked for 4
0xc000377890: Asked for 1
0xc000377380: Asked for 5
0xc000377890: Asked for 2
0xc000377890: Asked for 3
0xc000377890: Asked for 4
0xc000377380: Asked for 6
0xc000377380: Asked for 7
0xc000377380: Asked for 8
0xc000377380: Asked for 9
0xc000377380: Asked for 10
0xc000377380: Asked for 11
0xc000377890: Asked for 5
0xc000377890: Asked for 6
0xc000377890: Asked for 7
0xc000377890: Asked for 8
0xc000377890: Asked for 9
=== RUN   TestCompaction_populateBlock/Populate_from_three_partially_overlapping_blocks_with_chunks_that_are_expected_to_merge_into_single_big_chunks.
0xc00039d2f0: Asked for 0
0xc00039d8c0: Asked for 0
0xc00039d5f0: Asked for 0
0xc00039d2f0: Asked for 1
0xc00039d5f0: Asked for 1
0xc00039d8c0: Asked for 1
goku321 commented 4 years ago

Can I work on this?

stale[bot] commented 3 years ago

Hello 👋 Looks like there was no activity on this issue for the last two months. Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗 If there is no activity in the next two weeks, this issue will be closed (we can always reopen an issue if we need to!). Alternatively, use the remind command if you wish to be reminded at some point in the future.

kakkoyun commented 3 years ago

Still valid and help wanted.

Please go ahead, @goku321! We can assign this to you if you are still interested.

goku321 commented 3 years ago

Thanks @kakkoyun. Yes, I'm still interested. Currently I'm trying to finish up the exemplars API feature, and then I'll move on to this one. Please feel free to assign it to me.

dswarbrick commented 3 years ago

Has anyone considered using UploadPartCopy to do server-side concatenation of blocks during the compaction phase? This is exposed via the ComposeObject method in the minio SDK (https://pkg.go.dev/github.com/minio/minio-go/v7#Client.ComposeObject), which allows concatenating multiple objects server-side, including specifying start offsets and lengths.

From the docs:

Uploads a part by copying data from an existing object as data source. You specify the data source by adding the request header x-amz-copy-source in your request and a byte range by adding the request header x-amz-copy-source-range in your request.

yashrsharma44 commented 3 years ago

Hi @bwplotka, can you explain more about the implementation details for this? I couldn't find a similar implementation that solves this problem, so I'm curious about the intended approach.

goku321 commented 3 years ago

I've started looking into this. Thanks everyone for your patience.

stale[bot] commented 3 years ago

Closing for now as promised, let us know if you need this to be reopened! 🤗

goku321 commented 3 years ago

Not stale 🥱

oronsh commented 2 years ago

Hi, this sounds like a great idea! I can see how it keeps disk usage bounded, but is the same true for memory? I thought chunks were opened with mmap only when needed, and that the memory mapping is released once we're finished with them.