nikepan / clickhouse-bulk

Collects many small inserts to ClickHouse and sends them in big inserts
Apache License 2.0
474 stars 87 forks

Correct flush of data to clickhouse after sending "STOP signal" #1

Closed dink10 closed 6 years ago

dink10 commented 6 years ago

There are some problems with sending the "STOP signal":

  1. After sending the "STOP signal", the POST query does not work and insert data is lost. If I use the standard SendQuery instead of the Send method, everything is OK. If I put a Sleep into Send before flushing to the table, it is also OK.
  2. It is unclear whether the data was flushed to ClickHouse.

I propose the following solution.
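For illustration, here is a minimal sketch of the idea behind the fix: stop accepting new POST requests on SIGINT/SIGTERM and synchronously flush whatever is still buffered before the process exits. The Collector type and its FlushAll method are hypothetical stand-ins, not the project's actual API.

```go
package main

import (
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
)

// Collector is a hypothetical stand-in for the buffer that
// accumulates small inserts before they are sent to ClickHouse.
type Collector struct{ /* buffered rows per table */ }

// FlushAll would synchronously send everything still buffered
// to ClickHouse (stubbed here for the sketch).
func (c *Collector) FlushAll() error { return nil }

func main() {
	collector := &Collector{}
	srv := &http.Server{Addr: ":8124"}

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// Wait for the STOP signal, stop accepting new POSTs,
	// then flush buffered data before exiting.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGINT, syscall.SIGTERM)
	<-stop

	srv.Close() // no new inserts are accepted from this point on
	if err := collector.FlushAll(); err != nil {
		log.Printf("flush on shutdown failed: %v", err)
	}
}
```

The key point is that the flush runs to completion before main returns, which avoids the race where a Sleep happened to mask the problem.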

codecov-io commented 6 years ago

Codecov Report

Merging #1 into master will decrease coverage by 2.67%. The diff coverage is 16.66%.


@@            Coverage Diff             @@
##           master       #1      +/-   ##
==========================================
- Coverage   89.13%   86.46%   -2.68%     
==========================================
  Files           6        7       +1     
  Lines         267      266       -1     
==========================================
- Hits          238      230       -8     
- Misses         18       24       +6     
- Partials       11       12       +1
Impacted Files   Coverage Δ
sender.go        100% <ø>     (ø) :arrow_up:
backuper.go      0% <0%>      (ø)
clickhouse.go    85.24% <0%>  (-3.99%) :arrow_down:
collector.go     91.04% <25%> (-2.85%) :arrow_down:

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update 48e99d0...c22b80b.

nikepan commented 6 years ago

Thank you very much for the pull request! It's good, but it saves data sequentially. I made parallel saving work well, added graceful disconnect, and added a file dumper. Can you check this version? Does it work right in your case? Thanks!
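As a rough sketch of what "parallel saving" on shutdown could look like: flush each table's buffer in its own goroutine and wait for all of them with sync.WaitGroup. The Table type and its Flush method below are illustrative assumptions, not the project's actual API.

```go
package main

import (
	"log"
	"sync"
)

// Table is an illustrative per-table buffer; Flush would send
// its pending rows to ClickHouse (stubbed here).
type Table struct{ Name string }

func (t *Table) Flush() error { return nil }

// flushAllParallel flushes every table concurrently and returns
// only after all flushes have finished.
func flushAllParallel(tables []*Table) {
	var wg sync.WaitGroup
	for _, t := range tables {
		wg.Add(1)
		go func(t *Table) {
			defer wg.Done()
			if err := t.Flush(); err != nil {
				log.Printf("flush %s failed: %v", t.Name, err)
			}
		}(t)
	}
	wg.Wait()
}

func main() {
	tables := []*Table{{Name: "events"}, {Name: "metrics"}}
	flushAllParallel(tables)
}
```

Waiting on the WaitGroup before exiting gives the same guarantee as the sequential flush, just with the per-table sends running concurrently.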