mkabilov / pg2ch

Data streaming from PostgreSQL to ClickHouse via the logical replication mechanism
MIT License

Please support 'block_size' in clickhouse connection configuration #33

Closed — yjhatfdu closed this issue 4 years ago

yjhatfdu commented 5 years ago

Initial sync of a very large and wide (500-column) table fails with: `DB::Exception: Memory limit (for query) exceeded: would use 9.31 GiB (attempt to allocate chunk of 8388608 bytes)`. After investigating, the cause is the ClickHouse driver's default `block_size=1000000`. When the table is this wide, a block of 1 million rows exceeds the 10 GiB memory limit. Adding a setting for `block_size` would solve this.
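For illustration, here is a sketch of what the requested option could look like in a pg2ch-style YAML connection configuration. The `block_size` key (and the surrounding key names) are hypothetical proposed names for this feature request, not an existing pg2ch setting:

```yaml
# Hypothetical config sketch: cap the driver's block size so the initial
# sync of a wide (500-column) table stays under the query memory limit.
clickhouse:
  host: localhost
  port: 9000
  # Proposed setting (does not exist in pg2ch at the time of this issue):
  # smaller blocks mean fewer buffered rows, and therefore less memory,
  # per insert batch during the initial sync.
  block_size: 100000
```

Rough arithmetic behind the failure: 1,000,000 rows × 500 columns is 5×10⁸ values per block; at an average of ~20 bytes per value that is roughly the 9.31 GiB the exception reports, so shrinking the block by 10× would bring the peak well under the 10 GiB limit.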

mkabilov commented 5 years ago

You can try the just-released prestable version, which uses the HTTP protocol: https://github.com/mkabilov/pg2ch/releases/tag/v1.0.0

Deninc commented 4 years ago

@mkabilov could you please guide me on how to build the project from source?

Also what's your plan for production release?