timescale / timescaledb-toolkit

Extension for more hyperfunctions, fully compatible with TimescaleDB and PostgreSQL 📈
https://www.timescale.com

time_weight OrderError #794

Open Wintermute79 opened 8 months ago

Wintermute79 commented 8 months ago

Relevant system information:

Describe the bug

time_weight fails with an error in the console and in C# with Npgsql, but actually works fine in DBeaver 24.0.0.

To Reproduce

Query:

SELECT average(time_weight('LOCF', time, e5)) AS e5
FROM prices
WHERE time >= CURRENT_DATE - 1 AND time < CURRENT_DATE;

DDL:

CREATE TABLE IF NOT EXISTS prices
(
    id integer NOT NULL,
    "time" timestamp with time zone NOT NULL,
    e5 numeric(4,3)
);

-- hypertable
SELECT create_hypertable('prices', 'time');

-- default index created by timescaledb
CREATE INDEX prices_time_idx ON prices USING btree ("time" DESC);

-- additional index for faster queries by id and time
CREATE INDEX prices_id_time_idx ON prices USING btree (id ASC, "time" DESC);

Expected behavior

A floating-point number such as 1.836 (as returned in DBeaver).

Actual behavior

ERROR:  called `Result::unwrap()` on an `Err` value: OrderError
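
A minimal reproduction sketch, assuming the table and hypertable from the DDL above are already in place. The sample rows below are invented for illustration; per the follow-up comment, the error may only surface once the row count is large enough to trigger a parallel plan:

-- hypothetical sample data from yesterday (values are made up)
INSERT INTO prices (id, "time", e5) VALUES
    (1, CURRENT_DATE - INTERVAL '20 hours', 1.829),
    (1, CURRENT_DATE - INTERVAL '16 hours', 1.836),
    (1, CURRENT_DATE - INTERVAL '12 hours', 1.841);

-- the failing query
SELECT average(time_weight('LOCF', time, e5)) AS e5
FROM prices
WHERE time >= CURRENT_DATE - 1 AND time < CURRENT_DATE;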
wiboticalex commented 6 months ago

I'm running into what seems to be the same issue and get

ERROR:  called `Result::unwrap()` on an `Err` value: OrderError

as a response to my query.

OS: Debian 12.2
PostgreSQL: PostgreSQL 16.2 on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
TimescaleDB Toolkit version: 1.18.0 (also tried the latest development version 1.19.0-dev)
Installation method: compiled from source

Query:

SELECT average(time_weight('Linear', "time", "voltage"))
FROM chargelogdata
WHERE uuid = 'a8fa74bb-d000-4751-a26e-ec81b90664cf'::uuid

Interestingly, making a simple change by adding

GROUP BY time_bucket('9999 years'::interval, time)

to the end of the query allows it to run and give the correct answer.
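
For clarity, combining the query and the GROUP BY above, the full modified query would be:

SELECT average(time_weight('Linear', "time", "voltage"))
FROM chargelogdata
WHERE uuid = 'a8fa74bb-d000-4751-a26e-ec81b90664cf'::uuid
GROUP BY time_bucket('9999 years'::interval, time);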

Also, I am able to run the query against shorter sequences of data without the additional GROUP BY. The failing query scans 201,939 rows.

Update: I ran EXPLAIN on the queries, and the common pattern for the ones that work is the absence of a "Gather" step before the final Aggregate. It turns out that I can make all queries work by executing

SET max_parallel_workers_per_gather = 0;

first. Setting this variable also changes the execution plan, dropping the "Gather" step.
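
If disabling parallelism session-wide is undesirable, one possible variant (standard PostgreSQL behavior, not specific to the toolkit) is to scope the setting to a single transaction with SET LOCAL, so it reverts automatically at COMMIT or ROLLBACK:

BEGIN;
-- applies only within this transaction
SET LOCAL max_parallel_workers_per_gather = 0;
SELECT average(time_weight('Linear', "time", "voltage"))
FROM chargelogdata
WHERE uuid = 'a8fa74bb-d000-4751-a26e-ec81b90664cf'::uuid;
COMMIT;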