ClickHouse / ClickHouse

ClickHouse® is a real-time analytics DBMS
https://clickhouse.com
Apache License 2.0

Server fails to start while loading meta-data #44232

Closed: acris5 closed this issue 1 year ago

acris5 commented 1 year ago

We have a production server with 2 TB of data. Someone ran a SELECT on a large table without limits in the query, the server restarted unexpectedly, and now it fails to start with the following error:

2022.12.14 18:31:37.045716 [ 649315 ] {} <Information> TablesLoader: 96.22641509433963%
2022.12.14 18:53:48.219719 [ 649318 ] {} <Debug> default.ptparams (f8d43f1e-bb3e-4c9a-84b1-6a2bbd9309a9): Loaded data parts (8762 items)
2022.12.14 18:53:48.353114 [ 649318 ] {} <Debug> default.ptparams (f8d43f1e-bb3e-4c9a-84b1-6a2bbd9309a9): Loading mutation: mutation_14279.txt entry, commands size: 1
2022.12.14 18:53:48.356729 [ 649318 ] {} <Debug> default.ptparams (f8d43f1e-bb3e-4c9a-84b1-6a2bbd9309a9): Loading mutation: mutation_26621.txt entry, commands size: 1
2022.12.14 18:53:48.359849 [ 649318 ] {} <Debug> default.ptparams (f8d43f1e-bb3e-4c9a-84b1-6a2bbd9309a9): Loading mutation: mutation_18924.txt entry, commands size: 1
2022.12.14 18:53:48.368503 [ 649318 ] {} <Information> TablesLoader: 97.16981132075472%
2022.12.14 18:54:38.050449 [ 649321 ] {} <Debug> default.breakerparams (38d451af-fb28-4bb9-b5e3-b4f83089e8c3): Loaded data parts (13124 items)
2022.12.14 18:54:38.085343 [ 649321 ] {} <Debug> default.breakerparams (38d451af-fb28-4bb9-b5e3-b4f83089e8c3): Loading mutation: mutation_18946.txt entry, commands size: 1
2022.12.14 18:54:38.115472 [ 649321 ] {} <Debug> default.breakerparams (38d451af-fb28-4bb9-b5e3-b4f83089e8c3): Loading mutation: mutation_40130.txt entry, commands size: 1
2022.12.14 18:54:38.122449 [ 649321 ] {} <Debug> default.breakerparams (38d451af-fb28-4bb9-b5e3-b4f83089e8c3): Loading mutation: mutation_40131.txt entry, commands size: 1
2022.12.14 18:54:38.128659 [ 649321 ] {} <Debug> default.breakerparams (38d451af-fb28-4bb9-b5e3-b4f83089e8c3): Loading mutation: mutation_37837.txt entry, commands size: 1
2022.12.14 18:54:38.137909 [ 649321 ] {} <Debug> default.breakerparams (38d451af-fb28-4bb9-b5e3-b4f83089e8c3): Loading mutation: mutation_39944.txt entry, commands size: 1
2022.12.14 19:06:00.276976 [ 649287 ] {} <Debug> default.dcparams (fb4bed6c-d866-4dcf-9d83-5538d3126457): Loaded data parts (26146 items)
2022.12.14 19:06:00.309800 [ 649287 ] {} <Debug> default.dcparams (fb4bed6c-d866-4dcf-9d83-5538d3126457): Loading mutation: mutation_91918.txt entry, commands size: 1
2022.12.14 19:06:00.313737 [ 649287 ] {} <Debug> default.dcparams (fb4bed6c-d866-4dcf-9d83-5538d3126457): Loading mutation: mutation_46088.txt entry, commands size: 1
2022.12.14 19:06:00.320122 [ 649287 ] {} <Debug> default.dcparams (fb4bed6c-d866-4dcf-9d83-5538d3126457): Loading mutation: mutation_91916.txt entry, commands size: 1
2022.12.14 19:15:16.337372 [ 649331 ] {} <Debug> default.ctparams (f62cc0f8-8217-4dda-b3a5-af7ea4a8ff2c): Loaded data parts (44743 items)
2022.12.14 19:15:16.388268 [ 649331 ] {} <Debug> default.ctparams (f62cc0f8-8217-4dda-b3a5-af7ea4a8ff2c): Loading mutation: mutation_129532.txt entry, commands size: 1
2022.12.14 19:15:16.395866 [ 649331 ] {} <Debug> default.ctparams (f62cc0f8-8217-4dda-b3a5-af7ea4a8ff2c): Loading mutation: mutation_130666.txt entry, commands size: 1
2022.12.14 19:15:16.404178 [ 649331 ] {} <Debug> default.ctparams (f62cc0f8-8217-4dda-b3a5-af7ea4a8ff2c): Loading mutation: mutation_130668.txt entry, commands size: 1
2022.12.14 19:15:16.406800 [ 649331 ] {} <Debug> default.ctparams (f62cc0f8-8217-4dda-b3a5-af7ea4a8ff2c): Loading mutation: mutation_130943.txt entry, commands size: 1
2022.12.14 19:15:16.408757 [ 649331 ] {} <Debug> default.ctparams (f62cc0f8-8217-4dda-b3a5-af7ea4a8ff2c): Loading mutation: mutation_130667.txt entry, commands size: 1
2022.12.14 19:15:17.877513 [ 649222 ] {} <Error> Application: Caught exception while loading metadata: Code: 27. DB::ParsingException: Cannot parse input: expected 'format version: 1\n' at end of stream.: Cannot attach table `default`.`breakerparams` from metadata file /var/lib/clickhouse/store/9d2/9d227804-3b3f-40e6-9d6e-82503af24eb2/breakerparams.sql from query ATTACH TABLE default.breakerparams UUID '38d451af-fb28-4bb9-b5e3-b4f83089e8c3' (`date` Date, `datetime` DateTime, `ts_nano` Int64, `app_id` String, `calculator_uuid` String, `calculator_name` String, `u1_breaker` Nullable(Float64), `d_mr` Nullable(Float64), `du_op2` Nullable(Float64), `i_breaker` Nullable(Float64), `qd_i_breaker` Int32, `qd_mr` Int32, `qd_p_breaker` Int32, `time_ram` Int64, `u_breaker` Nullable(Float64), `d_ew` Nullable(Float64), `data_time` Int64, `du_breaker` Nullable(Float64), `ew` Nullable(Float64), `phase` Int32, `qd_du_cl` Int32, `qd_u_op2` Int32, `qd_d_ew` Int32, `qd_t_drive` Int32, `t_drive` Nullable(Float64), `qd_du_op1` Int32, `du_op1` Nullable(Float64), `mr` Nullable(Float64), `qd_u1_breaker` Int32, `u_op1` Nullable(Float64), `u_op2` Nullable(Float64), `qd_f_avg` Int32, `asset_uuid` String, `du_cl` Nullable(Float64), `f_avg` Nullable(Float64), `i1_breaker` Nullable(Float64), `p_breaker` Nullable(Float64), `qd_breaker_on_off` Int32, `qd_ew` Int32, `qd_u_breaker` Int32, `qd_u_cl` Int32, `u_cl` Nullable(Float64), `du1_breaker` Nullable(Float64), `qd_d_mr` Int32, `qd_du1_breaker` Int32, `qd_du_op2` Int32, `qd_u_op1` Int32, `breaker_on_off` Nullable(Int32), `eq_uuid` String, `qd_du_breaker` Int32, `qd_i1_breaker` Int32, `qd_t_air` Int32, `t_air` Nullable(Float64)) ENGINE = MergeTree PARTITION BY (toYYYYMM(datetime), asset_uuid) ORDER BY (ts_nano, datetime) SETTINGS index_granularity = 8192, storage_policy = 'all_stores'. (CANNOT_PARSE_INPUT_ASSERTION_FAILED), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xba37dda in /usr/bin/clickhouse
1. DB::throwAtAssertionFailed(char const*, DB::ReadBuffer&) @ 0xbaadecd in /usr/bin/clickhouse
2. DB::MergeTreeMutationEntry::MergeTreeMutationEntry(std::__1::shared_ptr<DB::IDisk>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x1767c74b in /usr/bin/clickhouse
3. DB::StorageMergeTree::loadMutations() @ 0x17315806 in /usr/bin/clickhouse
4. DB::StorageMergeTree::StorageMergeTree(DB::StorageID const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::StorageInMemoryMetadata const&, bool, std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::MergeTreeData::MergingParams const&, std::__1::unique_ptr<DB::MergeTreeSettings, std::__1::default_delete<DB::MergeTreeSettings> >, bool) @ 0x173152a4 in /usr/bin/clickhouse
5. ? @ 0x177e1d56 in /usr/bin/clickhouse
6. DB::StorageFactory::get(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, std::__1::shared_ptr<DB::Context>, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, bool) const @ 0x1715d306 in /usr/bin/clickhouse
7. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool) @ 0x1617aa3e in /usr/bin/clickhouse
8. DB::DatabaseOrdinary::loadTableFromMetadata(std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::QualifiedTableName const&, std::__1::shared_ptr<DB::IAST> const&, bool) @ 0x16223c04 in /usr/bin/clickhouse
9. ? @ 0x1628e92b in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xbb06e46 in /usr/bin/clickhouse
11. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0xbb08a55 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xbb046e8 in /usr/bin/clickhouse
13. ? @ 0xbb07a7d in /usr/bin/clickhouse
14. ? @ 0x7f85b113a609 in ?
15. __clone @ 0x7f85b105f133 in ?
 (version 22.7.3.5 (official build))
2022.12.14 19:15:17.936386 [ 649222 ] {} <Information> Application: Shutting down storages.
2022.12.14 19:15:17.936430 [ 649222 ] {} <Information> Context: Shutdown disk click_data
2022.12.14 19:15:17.936441 [ 649222 ] {} <Information> Context: Shutdown disk click_data2
2022.12.14 19:15:17.936449 [ 649222 ] {} <Information> Context: Shutdown disk default
2022.12.14 19:15:18.513830 [ 649222 ] {} <Debug> Application: Shut down storages.
2022.12.14 19:15:18.711946 [ 649222 ] {} <Debug> Application: Destroyed global context.
2022.12.14 19:15:18.734359 [ 649222 ] {} <Error> Application: DB::ParsingException: Cannot parse input: expected 'format version: 1\n' at end of stream.: Cannot attach table `default`.`breakerparams` from metadata file /var/lib/clickhouse/store/9d2/9d227804-3b3f-40e6-9d6e-82503af24eb2/breakerparams.sql from query ATTACH TABLE default.breakerparams UUID '38d451af-fb28-4bb9-b5e3-b4f83089e8c3' (`date` Date, `datetime` DateTime, `ts_nano` Int64, `app_id` String, `calculator_uuid` String, `calculator_name` String, `u1_breaker` Nullable(Float64), `d_mr` Nullable(Float64), `du_op2` Nullable(Float64), `i_breaker` Nullable(Float64), `qd_i_breaker` Int32, `qd_mr` Int32, `qd_p_breaker` Int32, `time_ram` Int64, `u_breaker` Nullable(Float64), `d_ew` Nullable(Float64), `data_time` Int64, `du_breaker` Nullable(Float64), `ew` Nullable(Float64), `phase` Int32, `qd_du_cl` Int32, `qd_u_op2` Int32, `qd_d_ew` Int32, `qd_t_drive` Int32, `t_drive` Nullable(Float64), `qd_du_op1` Int32, `du_op1` Nullable(Float64), `mr` Nullable(Float64), `qd_u1_breaker` Int32, `u_op1` Nullable(Float64), `u_op2` Nullable(Float64), `qd_f_avg` Int32, `asset_uuid` String, `du_cl` Nullable(Float64), `f_avg` Nullable(Float64), `i1_breaker` Nullable(Float64), `p_breaker` Nullable(Float64), `qd_breaker_on_off` Int32, `qd_ew` Int32, `qd_u_breaker` Int32, `qd_u_cl` Int32, `u_cl` Nullable(Float64), `du1_breaker` Nullable(Float64), `qd_d_mr` Int32, `qd_du1_breaker` Int32, `qd_du_op2` Int32, `qd_u_op1` Int32, `breaker_on_off` Nullable(Int32), `eq_uuid` String, `qd_du_breaker` Int32, `qd_i1_breaker` Int32, `qd_t_air` Int32, `t_air` Nullable(Float64)) ENGINE = MergeTree PARTITION BY (toYYYYMM(datetime), asset_uuid) ORDER BY (ts_nano, datetime) SETTINGS index_granularity = 8192, storage_policy = 'all_stores'
2022.12.14 19:15:18.734618 [ 649222 ] {} <Information> Application: shutting down
2022.12.14 19:15:18.734626 [ 649222 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2022.12.14 19:15:18.735128 [ 649223 ] {} <Information> BaseDaemon: Stop SignalListener thread
2022.12.14 19:15:19.284057 [ 649212 ] {} <Information> Application: Child process exited normally with code 70.
2022.12.14 19:15:49.998095 [ 649561 ] {} <Information> Application: Will watch for the process with pid 649570
2022.12.14 19:15:49.998451 [ 649570 ] {} <Information> Application: Forked a child process to watch
2022.12.14 19:15:49.999110 [ 649570 ] {} <Information> SentryWriter: Sending crash reports is disabled
2022.12.14 19:15:51.318485 [ 649570 ] {} <Information> : Starting ClickHouse 22.7.3.5 with revision 54464, build id: A97EE8D81E41A58E, PID 649570
2022.12.14 19:15:51.318727 [ 649570 ] {} <Information> Application: starting up
2022.12.14 19:15:51.318750 [ 649570 ] {} <Information> Application: OS name: Linux, version: 5.4.0-125-generic, architecture: x86_64
2022.12.14 19:15:51.533788 [ 649570 ] {} <Information> Application: Integrity check of the executable successfully passed (checksum: 35C037103309C93E3EF6D59CA6A2CF05)
2022.12.14 19:15:51.565562 [ 649570 ] {} <Debug> Application: rlimit on number of file descriptors is 500000
2022.12.14 19:15:51.565595 [ 649570 ] {} <Debug> Application: rlimit on number of threads is 62312
2022.12.14 19:15:51.565610 [ 649570 ] {} <Debug> Application: Initializing DateLUT.
tavplubix commented 1 year ago

Looks like duplicate of https://github.com/ClickHouse/ClickHouse/issues/41409

acris5 commented 1 year ago

Is there any quick fix for this issue? Maybe manually dropping or creating some files? Or can only a full database restore from backup help?

tavplubix commented 1 year ago

Find any empty mutation_X.txt files in the /var/lib/clickhouse/store/38d/38d451af-fb28-4bb9-b5e3-b4f83089e8c3 directory and remove them.
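The files in question are the per-mutation metadata files that `StorageMergeTree::loadMutations()` parses at startup; a zero-byte one trips the `'format version: 1\n'` assertion in the stack trace above. The step above can be sketched with `find` (the demo directory and file contents below are illustrative; on a real server, point `STORE_DIR` at the table's store path from the error message and stop the server first):

```shell
# Illustrative demo: build a throwaway "store" directory with one valid
# mutation file and one zero-byte file such as an unclean shutdown leaves behind.
STORE_DIR=$(mktemp -d)
printf 'format version: 1\n' > "$STORE_DIR/mutation_18946.txt"  # parseable header
: > "$STORE_DIR/mutation_40130.txt"                             # empty: breaks startup

# List zero-byte mutation files; on a real server STORE_DIR would be e.g.
# /var/lib/clickhouse/store/38d/38d451af-fb28-4bb9-b5e3-b4f83089e8c3
find "$STORE_DIR" -maxdepth 1 -name 'mutation_*.txt' -size 0 -print

# After reviewing the list, delete the empty ones (with the server stopped):
find "$STORE_DIR" -maxdepth 1 -name 'mutation_*.txt' -size 0 -delete
```

Listing first with `-print` and deleting only after review avoids removing a file that merely looks wrong.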

acris5 commented 1 year ago

@tavplubix thanks, it worked. The DB finally started after I deleted 3 empty mutation files. The remaining question is why clickhouse-server restarted because of a large query; I thought there were query memory and time limits to protect against that?

2022.12.14 14:01:48.278001 [ 644537 ] {b4fd321a-be3a-4b91-b9e1-7e996fcd0667} <Error> executeQuery: Code: 24. DB::Exception: Cannot write to ostream at offset 739451980: While executing Native. (CANNOT_WRITE_TO_OSTREAM) (version 22.7.3.5 (official build)) (from 192.168.10.230:2727) (in query: select calculator_name, calculator_uuid, date, datetime, ts_nano, app_id, time_ram, data_time, eq_uuid, asset_uuid, phase, u_hv, qd_u_hv,u1_ccvt, qd_u1_ccvt, i1_ccvt, qd_i1_ccvt, fi_ui1_ccvt, qd_fi_ui1_ccvt, freq_cctv, qd_freq_ccvt,ir_ccvt, qd_ir_ccvt, ic_ccvt, qd_ic_ccvt, c_ccvt, qd_c_ccvt, tg_ccvt, qd_tg_ccvt,ccvt_on_off, qd_ccvt_on_off, time_on_off, t_air, qd_t_air, humidity_air, qd_humidity_air from ccvt_params_0 where ts_nano > 1667088000 order by ts_nano asc), Stack trace (when copying this message, always include the lines below):
2022.12.14 14:01:48.413756 [ 644537 ] {b4fd321a-be3a-4b91-b9e1-7e996fcd0667} <Error> DynamicQueryHandler: Code: 24. DB::Exception: Cannot write to ostream at offset 739451980: While executing Native. (CANNOT_WRITE_TO_OSTREAM), Stack trace (when copying this message, always include the lines below):
2022.12.14 14:01:48.509030 [ 644537 ] {b4fd321a-be3a-4b91-b9e1-7e996fcd0667} <Error> DynamicQueryHandler: Cannot send exception to client: Code: 24. DB::Exception: Cannot write to ostream at offset 738403716. (CANNOT_WRITE_TO_OSTREAM), Stack trace (when copying this message, always include the lines below):
2022.12.14 14:01:48.879227 [ 644537 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 32, I/O error: Broken pipe, Stack trace (when copying this message, always include the lines below):
2022.12.14 14:09:27.864531 [ 646470 ] {5d1b6de8-8a52-42e6-81f8-84fe256799a7} <Error> executeQuery: Code: 24. DB::Exception: Cannot write to ostream at offset 1152578610: While executing Native. (CANNOT_WRITE_TO_OSTREAM) (version 22.7.3.5 (official build)) (from 192.168.10.230:2924) (in query: select calculator_name, calculator_uuid, date, datetime, ts_nano, app_id, time_ram, data_time, eq_uuid, asset_uuid, phase, u_hv, qd_u_hv,u1_ccvt, qd_u1_ccvt, i1_ccvt, qd_i1_ccvt, fi_ui1_ccvt, qd_fi_ui1_ccvt, freq_cctv, qd_freq_ccvt,ir_ccvt, qd_ir_ccvt, ic_ccvt, qd_ic_ccvt, c_ccvt, qd_c_ccvt, tg_ccvt, qd_tg_ccvt,ccvt_on_off, qd_ccvt_on_off, time_on_off, t_air, qd_t_air, humidity_air, qd_humidity_air from ccvt_params_0 where ts_nano > 1667088000 order by ts_nano asc), Stack trace (when copying this message, always include the lines below):
2022.12.14 14:09:28.001010 [ 646470 ] {5d1b6de8-8a52-42e6-81f8-84fe256799a7} <Error> DynamicQueryHandler: Code: 24. DB::Exception: Cannot write to ostream at offset 1152578610: While executing Native. (CANNOT_WRITE_TO_OSTREAM), Stack trace (when copying this message, always include the lines below):
2022.12.14 14:09:28.001137 [ 646470 ] {5d1b6de8-8a52-42e6-81f8-84fe256799a7} <Error> DynamicQueryHandler: Cannot send exception to client: Code: 24. DB::Exception: Cannot write to ostream at offset 1151530348. (CANNOT_WRITE_TO_OSTREAM), Stack trace (when copying this message, always include the lines below):
2022.12.14 14:09:28.001655 [ 646470 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 32, I/O error: Broken pipe, Stack trace (when copying this message, always include the lines below):
2022.12.14 14:18:17.633916 [ 646470 ] {4afb7352-ae44-425c-a0b4-fbdcb60575a3} <Error> executeQuery: Code: 24. DB::Exception: Cannot write to ostream at offset 677870882: While executing Native. (CANNOT_WRITE_TO_OSTREAM) (version 22.7.3.5 (official build)) (from 192.168.10.230:3269) (in query: select calculator_name, calculator_uuid, date, datetime, ts_nano, app_id, time_ram, data_time, eq_uuid, asset_uuid, phase, u_hv, qd_u_hv,u1_ccvt, qd_u1_ccvt, i1_ccvt, qd_i1_ccvt, fi_ui1_ccvt, qd_fi_ui1_ccvt, freq_cctv, qd_freq_ccvt,ir_ccvt, qd_ir_ccvt, ic_ccvt, qd_ic_ccvt, c_ccvt, qd_c_ccvt, tg_ccvt, qd_tg_ccvt,ccvt_on_off, qd_ccvt_on_off, time_on_off, t_air, qd_t_air, humidity_air, qd_humidity_air from ccvt_params_0 where ts_nano > 1667088000 order by ts_nano asc), Stack trace (when copying this message, always include the lines below):
2022.12.14 14:18:17.726725 [ 646470 ] {4afb7352-ae44-425c-a0b4-fbdcb60575a3} <Error> DynamicQueryHandler: Code: 24. DB::Exception: Cannot write to ostream at offset 677870882: While executing Native. (CANNOT_WRITE_TO_OSTREAM), Stack trace (when copying this message, always include the lines below):
2022.12.14 14:18:17.726840 [ 646470 ] {4afb7352-ae44-425c-a0b4-fbdcb60575a3} <Error> DynamicQueryHandler: Cannot send exception to client: Code: 24. DB::Exception: Cannot write to ostream at offset 676822618. (CANNOT_WRITE_TO_OSTREAM), Stack trace (when copying this message, always include the lines below):
2022.12.14 14:18:17.742934 [ 646470 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 32, I/O error: Broken pipe, Stack trace (when copying this message, always include the lines below):
2022.12.14 14:20:16.168114 [ 646470 ] {6635423b-295e-4d6c-824c-135354ce3eb6} <Error> executeQuery: Code: 24. DB::Exception: Cannot write to ostream at offset 100760255: While executing Native. (CANNOT_WRITE_TO_OSTREAM) (version 22.7.3.5 (official build)) (from 192.168.10.230:3365) (in query: select calculator_name, calculator_uuid, date, datetime, ts_nano, app_id, time_ram, data_time, eq_uuid, asset_uuid, phase, u_hv, qd_u_hv,u1_ccvt, qd_u1_ccvt, i1_ccvt, qd_i1_ccvt, fi_ui1_ccvt, qd_fi_ui1_ccvt, freq_cctv, qd_freq_ccvt,ir_ccvt, qd_ir_ccvt, ic_ccvt, qd_ic_ccvt, c_ccvt, qd_c_ccvt, tg_ccvt, qd_tg_ccvt,ccvt_on_off, qd_ccvt_on_off, time_on_off, t_air, qd_t_air, humidity_air, qd_humidity_air from ccvt_params_0 where ts_nano > 1667088000 order by ts_nano asc), Stack trace (when copying this message, always include the lines below):
2022.12.14 14:20:16.259929 [ 646470 ] {6635423b-295e-4d6c-824c-135354ce3eb6} <Error> DynamicQueryHandler: Code: 24. DB::Exception: Cannot write to ostream at offset 100760255: While executing Native. (CANNOT_WRITE_TO_OSTREAM), Stack trace (when copying this message, always include the lines below):
2022.12.14 14:20:16.260056 [ 646470 ] {6635423b-295e-4d6c-824c-135354ce3eb6} <Error> DynamicQueryHandler: Cannot send exception to client: Code: 24. DB::Exception: Cannot write to ostream at offset 99711991. (CANNOT_WRITE_TO_OSTREAM), Stack trace (when copying this message, always include the lines below):
2022.12.14 14:20:16.260256 [ 646470 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 32, I/O error: Broken pipe, Stack trace (when copying this message, always include the lines below):
2022.12.14 14:20:58.228586 [ 646470 ] {2585c448-c8bc-4318-925c-68e5b662e1f8} <Error> executeQuery: Code: 24. DB::Exception: Cannot write to ostream at offset 44422037: While executing Native. (CANNOT_WRITE_TO_OSTREAM) (version 22.7.3.5 (official build)) (from 192.168.10.230:3394) (in query: select calculator_name, calculator_uuid, date, datetime, ts_nano, app_id, time_ram, data_time, eq_uuid, asset_uuid, phase, u_hv, qd_u_hv,u1_ccvt, qd_u1_ccvt, i1_ccvt, qd_i1_ccvt, fi_ui1_ccvt, qd_fi_ui1_ccvt, freq_cctv, qd_freq_ccvt,ir_ccvt, qd_ir_ccvt, ic_ccvt, qd_ic_ccvt, c_ccvt, qd_c_ccvt, tg_ccvt, qd_tg_ccvt,ccvt_on_off, qd_ccvt_on_off, time_on_off, t_air, qd_t_air, humidity_air, qd_humidity_air from ccvt_params_0 where ts_nano > 1667088000 order by ts_nano asc), Stack trace (when copying this message, always include the lines below):
2022.12.14 14:20:58.318107 [ 646470 ] {2585c448-c8bc-4318-925c-68e5b662e1f8} <Error> DynamicQueryHandler: Code: 24. DB::Exception: Cannot write to ostream at offset 44422037: While executing Native. (CANNOT_WRITE_TO_OSTREAM), Stack trace (when copying this message, always include the lines below):
2022.12.14 14:20:58.318223 [ 646470 ] {2585c448-c8bc-4318-925c-68e5b662e1f8} <Error> DynamicQueryHandler: Cannot send exception to client: Code: 24. DB::Exception: Cannot write to ostream at offset 43373771. (CANNOT_WRITE_TO_OSTREAM), Stack trace (when copying this message, always include the lines below):
2022.12.14 14:20:58.318434 [ 646470 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 32, I/O error: Broken pipe, Stack trace (when copying this message, always include the lines below):
2022.12.14 14:29:23.196905 [ 646510 ] {} <Error> DynamicQueryHandler: Code: 373. DB::Exception: Session is locked by a concurrent client. (SESSION_IS_LOCKED), Stack trace (when copying this message, always include the lines below):
2022.12.14 14:30:05.955452 [ 646510 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 32, I/O error: Broken pipe, Stack trace (when copying this message, always include the lines below):
2022.12.14 14:31:29.095059 [ 646510 ] {2aa5e119-7031-467e-9f5a-64a3b8f189e0} <Error> DynamicQueryHandler: Cannot flush data to client: Code: 24. DB::Exception: Cannot write to ostream at offset 170. (CANNOT_WRITE_TO_OSTREAM), Stack trace (when copying this message, always include the lines below):
2022.12.14 14:32:05.472556 [ 646511 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 32, I/O error: Broken pipe, Stack trace (when copying this message, always include the lines below):
2022.12.14 14:32:35.144246 [ 646510 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 32, I/O error: Broken pipe, Stack trace (when copying this message, always include the lines below):
2022.12.14 14:33:01.385107 [ 646499 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 32, I/O error: Broken pipe, Stack trace (when copying this message, always include the lines below):
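On the limits question: ClickHouse does provide per-query protections such as `max_memory_usage` and `max_execution_time`, which abort an offending query rather than the server. They can be pinned in a user profile; a sketch only, with an illustrative file path and values that are not taken from this thread and must be tuned for the machine:

```xml
<!-- Illustrative: e.g. /etc/clickhouse-server/users.d/query_limits.xml -->
<clickhouse>
    <profiles>
        <default>
            <!-- abort any single query allocating more than ~10 GB -->
            <max_memory_usage>10000000000</max_memory_usage>
            <!-- abort queries running longer than 10 minutes -->
            <max_execution_time>600</max_execution_time>
        </default>
    </profiles>
</clickhouse>
```

Note that the CANNOT_WRITE_TO_OSTREAM / Broken pipe errors in the log above indicate clients disconnecting mid-result, not a query limit firing, so these settings alone may not explain the restart.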
alexey-milovidov commented 1 year ago

You can check the logs to see why the server was restarted. An empty file on the local filesystem usually indicates an unclean restart of the machine running clickhouse-server, meaning the filesystem changes were never committed to disk.