Any update? I am facing the same problem.
Are you inserting a single row per INSERT, or batching? You'll see a >10x faster ingest rate with some batching (hundreds to thousands of rows per batch). That said, 300 records/s is painfully slow; we'd typically still see 10K+ rows per second on a single node. What type of indexes did you specify? Anything more about your setup?
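(For illustration, a batched insert packs many rows into a single statement, amortizing the per-statement round trip and parse/plan overhead. The `conditions` table and its columns below are hypothetical, not taken from this issue.)

```sql
-- One statement, many rows: batch sizes in the hundreds to thousands
-- typically give the biggest ingest-rate improvement.
INSERT INTO conditions (time, device_id, temperature) VALUES
    ('2021-01-01 00:00:00+00', 1, 20.1),
    ('2021-01-01 00:00:10+00', 1, 20.3),
    ('2021-01-01 00:00:20+00', 2, 19.8);
```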
Update: I changed my code to use batched inserts, and the insert performance is 10x faster with 100 rows per INSERT and 50x faster with 1000 rows per INSERT.

But when I ran the same test against vanilla Postgres, the performance was the same. In other words, the improvement came from batching the inserts, and I cannot see the advantage of Timescale in this case.
@thongpv87 Vanilla Postgres' performance there will drop off as the table grows large; Timescale's won't. Additionally, there are a large number of query optimizations, analytical capabilities, automation, and time-series data management features that are unique to Timescale.

See some graphs about larger inserts here, while our docs and blog go into more detail about capabilities: https://blog.timescale.com/timescale-microsoft-azure-team-up-to-power-iot-and-time-series-workloads/
For best performance, please use COPY.
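(For reference, here is a minimal sketch of a COPY-based load; the table, columns, and file path are hypothetical. COPY streams rows in bulk and skips the per-statement parse/plan work, which is why it is usually the fastest ingest path in both Postgres and TimescaleDB.)

```sql
-- Server-side COPY: the file must be readable by the database server.
COPY conditions (time, device_id, temperature)
    FROM '/tmp/conditions.csv' WITH (FORMAT csv);

-- From psql, \copy streams a client-side file over the connection instead:
-- \copy conditions FROM 'conditions.csv' WITH (FORMAT csv)
```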
@erimatnor please could you elaborate on why and how to use `COPY` instead of `INSERT`?
I have this table:
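(The schema block from the original issue did not survive in this thread; the following is a representative sketch only, with a hypothetical table name and columns, showing the usual TimescaleDB setup of a plain table promoted to a hypertable.)

```sql
-- Hypothetical schema standing in for the lost snippet.
CREATE TABLE conditions (
    time        TIMESTAMPTZ      NOT NULL,
    device_id   INTEGER          NOT NULL,
    temperature DOUBLE PRECISION
);

-- Partition the table on the time column as a TimescaleDB hypertable;
-- by default, create_hypertable also creates an index on the time column.
SELECT create_hypertable('conditions', 'time');
```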
Data is inserted from a C++ client (using taopq, https://github.com/taocpp/taopq, as the Postgres library). There is no other index, trigger, etc.

I tried to insert 300,000 records parsed from a text file (the parsing itself is really fast, taking only 1-3 seconds), but the insert performance was only 300 records/s. I used the docker image `latest-pg11` to run the TimescaleDB server with the default config. What was wrong? This is the first time I have tried TimescaleDB.