Closed: cappadaan closed this issue 9 months ago.
For me, this query works fine:
CREATE TABLE test ( id bigint, d1 multi64, myfields text, otherfields text );
INSERT into test (id, myfields, otherfields) values (10, 'test', 'up'), (11, 'always_present productie medewerker', 'medewerker magazijn');
mysql> SELECT * FROM test WHERE MATCH('(@(myfields) always_present productie | "magazijn medewerker") | (@(otherfields) ^=productie magazijn medewerker$)');
+------+--------------------------------------+---------------------+------+
| id | myfields | otherfields | d1 |
+------+--------------------------------------+---------------------+------+
| 11 | always_present productie medewerker | medewerker magazijn | |
+------+--------------------------------------+---------------------+------+
1 row in set (0.00 sec)
It would be better to provide a complete example that reproduces the issue locally.
I will come back to this issue later. We downgraded to 6.0.4 because 6.2 was causing a very high CPU load. I cannot test this well enough for now.
You could try disabling pseudo_sharding in the config (searchd.pseudo_sharding = 0), or check the load* counters in the show status output, along with show sessions and show threads, to see which query causes the load, and whether the CPU load becomes higher or the CPU utilizes 100% of the cores.
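The diagnostic steps above can be sketched as plain SQL statements run through the MySQL client (a sketch, assuming a 6.2.x daemon; the load* counters may not exist in older versions):

```sql
-- Check the load* counters reported by the daemon
SHOW STATUS LIKE 'load%';

-- See which client sessions are connected and what each thread is running
SHOW SESSIONS;
SHOW THREADS;
```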
The new version simply uses a lot more CPU. I tried decreasing the number of threads: this kept the server CPU under 80%, but it was not enough; the search became super slow.
Never had any issues before with CPU.
@cappadaan You could try disabling pseudo_sharding
or using OPTION threads
to fine-tune its behavior. In version 6.2.0, the pseudo-sharding functionality underwent a few changes, including tighter integration with the secondary indexes functionality and smarter query optimization. The goal was to make the search faster by optimizing CPU utilization. Let me remind you that we adhere to the concept that the CPU shouldn't remain idle; so if Manticore can load it to 100% to expedite the search, it will do so. However, if you are certain that you're experiencing both a higher CPU load and slower search performance, we would appreciate it if you let us know how to reproduce the issue, so we can investigate further.
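For example, pseudo-sharding can be constrained on a per-query basis via OPTION threads (a sketch; the table and search terms are illustrative, taken from the earlier example in this thread):

```sql
-- Run this query on a single thread, which effectively disables
-- pseudo-sharding for this query only
SELECT * FROM test WHERE MATCH('productie medewerker') OPTION threads=1;
```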
Made another attempt at upgrading from 6.0.4 to 6.2.12.
Server 1:
Server 2:
Server 2 gives an SQL error on exactly the same query as server 1.
If I change the query to:
SELECT 1 FROM
i.e., without the ( and ), it works.
I am totally confused.
I'm confused too and can't reproduce it on a synthetic dataset. I'm afraid we can't reproduce and fix it without your data. Feel free to send them to our write-only S3 storage - https://manual.manticoresearch.com/Reporting_bugs#Uploading-your-data
I found the cause. Server 1 has no wordforms; server 2 does. The wordforms break the query, but only if the terms are surrounded by quotes and contain a space.
query = select 1 from index where match('"technisch medewerker"')
For this specific query I have 2 rows in the wordform:
medeweker > medewerker
Medewerkster > medewerker
Can you reproduce it now?
Unfortunately, I still can't reproduce it like this:
➜ ~ cat /tmp/wordforms
medeweker > medewerker
Medewerkster > medewerker
➜ ~ mysql -P9306 -h0 -v
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3990
Server version: 6.2.12 dc5144d35@230822 (columnar 2.2.4 5aec342@230822) (secondary 2.2.4 5aec342@230822) git branch manticore-6.2.12...origin/manticore-6.2.12
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Reading history-file /Users/sn/.mysql_history
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> drop table if exists index; create table index(always_present text) wordforms='/tmp/wordforms'; insert into index values(0, 'always_present Technisch Medewerker'); SELECT 1 FROM index WHERE MATCH('(@(always_present) always_present | "Technisch Medewerker")'); select 1 from index where match('"technisch medewerker"');
--------------
drop table if exists index
--------------
Query OK, 0 rows affected (0.01 sec)
--------------
create table index(always_present text) wordforms='/tmp/wordforms'
--------------
Query OK, 0 rows affected (0.00 sec)
--------------
insert into index values(0, 'always_present Technisch Medewerker')
--------------
Query OK, 1 row affected (0.00 sec)
--------------
SELECT 1 FROM index WHERE MATCH('(@(always_present) always_present | "Technisch Medewerker")')
--------------
+------+
| 1 |
+------+
| 1 |
+------+
1 row in set (0.00 sec)
--------------
select 1 from index where match('"technisch medewerker"')
--------------
+------+
| 1 |
+------+
| 1 |
+------+
1 row in set (0.00 sec)
Can you modify this example into a reproducible one?
Try this query
select 1 from index where match('("technisch medewerker")')
i.e., with the extra ( and ). This one fails.
It doesn't fail for me:
➜ ~ cat /tmp/wordforms
medeweker > medewerker
Medewerkster > medewerker
➜ ~ mysql -P9306 -h0 -v
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4183
Server version: 6.2.12 dc5144d35@230822 (columnar 2.2.4 5aec342@230822) (secondary 2.2.4 5aec342@230822) git branch manticore-6.2.12...origin/manticore-6.2.12
mysql> drop table if exists index; create table index(always_present text) wordforms='/tmp/wordforms'; insert into index values(0, 'always_present Technisch Medewerker'); select 1 from index where match('("technisch medewerker")');
--------------
drop table if exists index
--------------
Query OK, 0 rows affected (0.01 sec)
--------------
create table index(always_present text) wordforms='/tmp/wordforms'
--------------
Query OK, 0 rows affected (0.00 sec)
--------------
insert into index values(0, 'always_present Technisch Medewerker')
--------------
Query OK, 1 row affected (0.01 sec)
--------------
select 1 from index where match('("technisch medewerker")')
--------------
+------+
| 1 |
+------+
| 1 |
+------+
1 row in set (0.00 sec)
OK, it only occurs with my specific wordforms file, not only with the 2 rules mentioned earlier. I cannot share this wordforms file here, so I sent you an email about it.
Thanks @cappadaan .
➜ ~ cat /tmp/wf
a b c => d b c
mysql> drop table if exists index; create table index(f text) wordforms='/tmp/wf'; insert into index values(0, 'abc'); select 1 from index where match('("e a")');
--------------
drop table if exists index
--------------
Query OK, 0 rows affected (0.00 sec)
--------------
create table index(f text) wordforms='/tmp/wf'
--------------
Query OK, 0 rows affected (0.01 sec)
--------------
insert into index values(0, 'abc')
--------------
Query OK, 1 row affected (0.00 sec)
--------------
select 1 from index where match('("e a")')
--------------
ERROR 1064 (42000): index index: P08: syntax error, unexpected $end near ''
I will come back to this issue later. We downgraded to 6.0.4 because 6.2 was causing a very high CPU load. I cannot test this well enough for now.
I agree that version 6.2.0 causes high CPU load, nearly double. Downgrading to 6.0.4 halved the load.
Using threads=1 on my queries fixed the CPU problem for me.
I agree that version 6.2.0 causes high CPU load, nearly double. Downgrading to 6.0.4 halved the load.
@masarinetwork What about the response time and throughput? The assumption is that doubling the CPU load would lead to about 2x lower avg response time. Can you provide the latency and throughput stats? That would be very valuable for the team.
Thank You @sanikolaev and @cappadaan
Currently running 6.0.4 on 12 CPUs (6 cores, 12 with hyperthreading), 256 GB of RAM, and datacenter/enterprise SATA SSDs.
A distributed index with 2 local indexes; total data size 321 GB, 280 tables, 1-10 million documents per table.
Also running MySQL with 280 GB of data (low read volume) as the data source.
Running Nginx and PHP-FPM; 50-120 requests per second on average; Google Analytics page views about 100K-120K (excluding bots).
Using PDO with a socket connection to /var/run/manticore/manticore.sock.
Config:
listen = 0.0.0.0:9312
listen = 0.0.0.0:9306:mysql41
listen = /var/run/manticore/manticore.sock:mysql41
log = /var/log/manticore/searchd.log
query_log_format = sphinxql
query_log_min_msec = 1000
network_timeout = 10
qcache_max_bytes = 167772160
qcache_ttl_sec = 5m
client_timeout = 2m
sphinxql_timeout = 10m
max_threads_per_query = 12
pseudo_sharding = 1
threads = 12 # By default it's set to the number of CPU cores on the server.
pid_file = /var/run/manticore/searchd.pid
seamless_rotate = 1
preopen_tables = 1
unlink_old = 1
access_plain_attrs = mlock
access_blob_attrs = mmap_preread
binlog_path = /var/lib/manticore
mysql Ver 15.1 Distrib 10.6.15-MariaDB, for Linux (x86_64) using readline 5.1
Connection id: 7484
Current database: Manticore
Current user: Usual
SSL: Not in use
Current pager: stdout
Using outfile: ''
Using delimiter: ;
Server: MySQL
Server version: 6.0.4 1a3a4ea82@230314 git branch HEAD (no branch)
Protocol version: 10
Connection: localhost via TCP/IP
Server characterset:
MySQL [(none)]> show status;
+-----------------------+------------------------+
| Counter               | Value                  |
+-----------------------+------------------------+
| uptime                | 46501                  |
| connections           | 7495                   |
| maxed_out             | 0                      |
| version               | 6.0.4 1a3a4ea82@230314 |
| mysql_version         | 6.0.4 1a3a4ea82@230314 |
| command_search        | 8383802                |
| command_excerpt       | 0                      |
| command_update        | 0                      |
| command_keywords      | 0                      |
| command_persist       | 0                      |
| command_status        | 1                      |
| command_flushattrs    | 0                      |
| command_sphinxql      | 0                      |
| command_ping          | 0                      |
| command_delete        | 0                      |
| command_set           | 0                      |
| command_insert        | 0                      |
| command_replace       | 0                      |
| command_commit        | 0                      |
| command_suggest       | 0                      |
| command_json          | 0                      |
| command_callpq        | 0                      |
| command_cluster       | 0                      |
| command_getfield      | 0                      |
| agent_connect         | 0                      |
| agent_tfo             | 0                      |
| agent_retry           | 0                      |
| queries               | 8383798                |
| dist_queries          | 0                      |
| workers_total         | 12                     |
| workers_active        | 36                     |
| workers_clients       | 35                     |
| workers_clients_vip   | 0                      |
| work_queue_length     | 40                     |
| query_wall            | 135119.074             |
| query_cpu             | OFF                    |
| dist_wall             | 0.000                  |
| dist_local            | 0.000                  |
| dist_wait             | 0.000                  |
| query_reads           | OFF                    |
| query_readkb          | OFF                    |
| query_readtime        | OFF                    |
| avg_query_wall        | 0.016                  |
| avg_query_cpu         | OFF                    |
| avg_dist_wall         | 0.000                  |
| avg_dist_local        | 0.000                  |
| avg_dist_wait         | 0.000                  |
| avg_query_reads       | OFF                    |
| avg_query_readkb      | OFF                    |
| avg_query_readtime    | OFF                    |
| qcache_max_bytes      | 167772160              |
| qcache_thresh_msec    | 3000                   |
| qcache_ttl_sec        | 300                    |
| qcache_cached_queries | 20                     |
| qcache_used_bytes     | 6204930                |
| qcache_hits           | 344                    |
+-----------------------+------------------------+
Average load: 8-12; the uptime command output is shown below:
08:25:16 up 86 days, 17:50, 1 user, load average: 7.61, 8.06, 8.99
On 6.0.4, a single-keyword search takes 0.046 seconds, a two-keyword search 0.012 seconds.
When using Manticore 6.2, the average load becomes 14-18, and the query time rises to 1-4 seconds on the same data with the same keywords. And because the CPU is at more than 100%, everything becomes very slow.
Nginx and PHP-FPM also become slow and responses often time out; Uptime Robot's 24-hour monitoring drops to 80% uptime. After downgrading to 6.0.4 there is no problem at all.
and the query time is above 1-4 seconds
Do you mean you saw the value of avg_query_wall become 1-4 seconds?
What's interesting is that:
- you have pseudo_sharding disabled
- you are not using the columnar library, i.e. no secondary indexes

So neither of them can be the source of the perf. degradation.
Would it be possible to get your data and the searchd and query logs, so we can reproduce the issue on our side? We have a write-only S3 you can upload the data to https://manual.manticoresearch.com/Reporting_bugs#Uploading-your-data
It would also be great to remove query_log_min_msec = 1000
temporarily, so we can replay all the queries.
Do you mean you saw the value of avg_query_wall become 1-4 seconds?
Yes, when using 6.2.4. We saw it in the search results; we display the query time on the front page. This is real data from Manticore's SHOW META, which we display in the search results: on 6.0.4, a single-keyword search takes 0.046 seconds, a two-keyword search 0.012 seconds.
you have pseudo_sharding disabled.
No, the configuration is the same and was not changed.
you are not using the columnar library, i.e. no secondary indexes
We use a plain index. Yes, we don't use the secondary library, because Manticore is used for full-text search, and the documentation notes that secondary indexes are not effective for full-text queries.
Sorry, we cannot switch back to 6.2. We have tried upgrading to 6.2 twice and had no luck: high CPU load, 14-18 and sometimes reaching more than 25 (12 is 100% load).
Then we switched back to 6.0.4 both times with the same configuration, and it works fine: load 7-11 (100% is 12).
Additional info: this also happens on CentOS 9 Stream. When using 6.2 the average load is 4-5; after switching to 6.0.4 the average load is 1-2. No change in config; pseudo_sharding enabled.
We have seen the following in searchd.log when using 6.2:

[Wed Sep 6 08:00:06.075 2023] [19740] WARNING: send() failed: 32: Broken pipe, sock=5906
[Wed Sep 6 08:00:06.075 2023] [19740] WARNING: conn (local)(65315), sock=5906: send() failed: 32: Broken pipe
[Wed Sep 6 08:00:06.087 2023] [19745] WARNING: send() failed: 32: Broken pipe, sock=7563
[Wed Sep 6 08:00:06.087 2023] [19745] WARNING: conn (local)(65348), sock=7563: send() failed: 32: Broken pipe
[Wed Sep 6 08:00:06.089 2023] [19740] WARNING: send() failed: 32: Broken pipe, sock=1615
[Wed Sep 6 08:00:06.089 2023] [19740] WARNING: conn (local)(65210), sock=1615: send() failed: 32: Broken pipe
[Wed Sep 6 08:00:06.117 2023] [19743] WARNING: send() failed: 32: Broken pipe, sock=7438
[Wed Sep 6 08:00:06.117 2023] [19743] WARNING: conn (local)(65245), sock=7438: send() failed: 32: Broken pipe
[Wed Sep 6 08:00:06.121 2023] [19741] WARNING: send() failed: 32: Broken pipe, sock=6773
[Wed Sep 6 08:00:06.121 2023] [19741] WARNING: conn (local)(65267), sock=6773: send() failed: 32: Broken pipe
[Wed Sep 6 08:00:06.127 2023] [19734] WARNING: send() failed: 32: Broken pipe, sock=7406
[Wed Sep 6 08:00:06.127 2023] [19734] WARNING: conn (local)(65338), sock=7406: send() failed: 32: Broken pipe
[Wed Sep 6 08:00:06.127 2023] [19745] WARNING: send() failed: 32: Broken pipe, sock=7202
[Wed Sep 6 08:00:06.127 2023] [19745] WARNING: conn (local)(65209), sock=7202: send() failed: 32: Broken pipe
Using threads=1 on my queries fixed the CPU problem for me.
But it's not a good choice, because it will slow down query time. I chose to downgrade to 6.0.4 and use threads = 12 (my number of CPUs) and max_threads_per_query = 12 (my number of CPUs).
All of our site content and URLs rely on full-text search. Slowing down query response time is not an option.
As we know, Web Vitals considers a good connect time to be 200 ms and a good TTFB to be 800 ms.
You already use only 1 thread per query, because your pseudo_sharding=0.
You already use only 1 thread per query, because your pseudo_sharding=0.
No, my real config is pseudo_sharding=1
Sorry, I think I made a mistake when copying, because I deleted the comment after pseudo_sharding = 1 (a comment containing a Manticore URL was removed).
I have corrected my config above.
Try disabling pseudo_sharding; it worked for me. We process a lot of queries per second on a 32-CPU machine, and it ended up being faster.
Thank you, but that's a big number of CPUs 👍. How many queries per second, and how big is the dataset?
I only have 12 CPUs (hyperthreading), 256 GB of RAM, and 321 GB of data, with 50-150 full-text queries per second.
Are you now using Manticore 6.2 or 6.0?
Are you now using Manticore 6.2 or 6.0?
Rolling back to 6.0 makes sense. However, I'd like to emphasize that it would be helpful if we could reproduce and debug your issue. I'm uncertain if we can address it without your assistance, given that it doesn't appear to be a widespread problem following the upgrade to 6.2.0 or 6.2.12. We carefully tested 6.2.0 in a large production environment for weeks before its release, then we released 6.2.12 with a dozen fixes reported by the community, and since then we haven't received numerous complaints about performance degradation.
The ball is in your court, @masarinetwork and @cappadaan: if you assist us in reproducing this issue, we can address it in the upcoming release. Otherwise, the bug may remain somewhat known but too specific to fix.
About turning off pseudo_sharding - in version 6.2.x, it should turn off by itself when needed. If you have to turn it off manually, we'd also like to reproduce the case when it's required and fine-tune the behaviour.
Thank You @sanikolaev
I will switch back to 6.2 on CentOS 9 Stream and will file a bug report.
Because the same problem also occurs on CentOS 9 Stream, where on 6.2 the load becomes 200% of what it is on 6.0.4, with the same configuration and hardware.
That said, on CentOS 9 Stream there is no server overload, because the dataset is smaller and there are more CPUs (24).
I'm also getting these problems on my Ubuntu server.
Searching with quotes gives a syntax error when performing a search like MATCH('@(street)("west ave")'). Both words have wordforms for w, av, avenue, etc. Seems like you already have an example that breaks it though.
Multi thread searches are hanging when max_threads_per_query number is 2 or higher. During that time, those threads are at 100% and don't stop until searchd is stopped. Single thread or disabling Pseudo sharding works. It also only seems to hang when adding an additional where parameter. Even stranger, sometimes queries will go through with max_threads_per_query=2, but it never works with anything higher.
MATCH('@(street)(west ave)') and country='us'
6.2.12 multi  = hangs
6.2.12 single = 700 cpums
6.0.4 multi   = 500 cpums
found 7199

MATCH('@(street)(west ave)')
6.2.12 multi  = 500 cpums
6.2.12 single = 500 cpums
6.0.4 multi   = 500 cpums
found 1042736
Multi thread searches are hanging when max_threads_per_query number is 2 or higher. During that time, those threads are at 100% and don't stop until searchd is stopped.
@d3mon187 How do I reproduce this?
Multi thread searches are hanging when max_threads_per_query number is 2 or higher. During that time, those threads are at 100% and don't stop until searchd is stopped.
@d3mon187 How do I reproduce this?
I was seeing it when I did a search like "select * from index where MATCH('@(street)(west ave)') and country='us'".
Where street is a sql_field_string and country might have been a sql_attr_string. Index has wordforms, and index_exact_words/min_word_len/min_prefix_len/ngram_len all set to 1. Let me know if there are any other specific settings that might help. Right now my servers are all back on 6.0, but I can try and run more tests later this week once my test server is finished crunching if you can't reproduce.
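For context, a plain-index definition with the settings mentioned above might look like this (a sketch; the index name, source name, and paths are hypothetical, while the option names come from this thread):

```
index addresses
{
    type              = plain
    source            = src_addresses
    path              = /var/lib/manticore/addresses
    wordforms         = /etc/manticore/wordforms.txt
    index_exact_words = 1
    min_word_len      = 1
    min_prefix_len    = 1
    ngram_len         = 1
}
```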
Thanks!
I was seeing it when I did a search like "select * from index where MATCH('@(street)(west ave)') and country='us'".
Can be reproduced in Manticore 6.2.12 dc5144d35@230822 (columnar 2.2.4 5aec342@230822) (secondary 2.2.4 5aec342@230822)
like this:
root@perf3 ~ # mysql -P9306 -h0 < /tmp/dump.sql
root@perf3 ~ # mysql -P9306 -h0 -e "select * from test where MATCH('@street walnut blvd') and country='US';"
... hangs ...
Workaround - disable secondary indexes:
root@perf3 ~ # mysql -P9306 -h0 -e "set global secondary_indexes=0; select * from test where MATCH('@street walnut blvd') and country='US';"
+--------+------------------+---------+
| id | street | country |
+--------+------------------+---------+
| 871851 | 838 Walnut Blvd | US |
| 735050 | 6810 Walnut Blvd | US |
| 916478 | 6810 Walnut Blvd | US |
| 736892 | 838 Walnut Blvd | US |
| 976031 | 6810 Walnut Blvd | US |
| 978104 | 7504 Walnut Blvd | US |
| 775419 | 7504 Walnut Blvd | US |
| 761688 | 7504 Walnut Blvd | US |
| 761921 | 7504 Walnut Blvd | US |
| 937116 | 6810 Walnut Blvd | US |
| 849275 | 7504 Walnut Blvd | US |
| 980283 | 6810 Walnut Blvd | US |
| 996936 | 6810 Walnut Blvd | US |
| 754518 | 838 Walnut Blvd | US |
| 756364 | 838 Walnut Blvd | US |
| 834079 | 7504 Walnut Blvd | US |
| 758933 | 7504 Walnut Blvd | US |
| 987945 | 7504 Walnut Blvd | US |
| 836745 | 6810 Walnut Blvd | US |
| 988821 | 838 Walnut Blvd | US |
+--------+------------------+---------+
The dump is: dump.sql.tgz
How it was generated:
The bug is already fixed in the latest dev version Manticore 6.2.13 d016f3dc0@231025 dev (columnar 2.2.5 b8be4eb@230928) (secondary 2.2.5 b8be4eb@230928)
root@perf3 ~ # searchd -v
Manticore 6.2.13 d016f3dc0@231025 dev (columnar 2.2.5 b8be4eb@230928) (secondary 2.2.5 b8be4eb@230928)
...
root@perf3 ~ # mysql -P9306 -h0 < /tmp/dump.sql
root@perf3 ~ # mysql -P9306 -h0 -e "select * from test where MATCH('@street walnut blvd') and country='US';"
+--------+------------------+---------+
| id | street | country |
+--------+------------------+---------+
| 871851 | 838 Walnut Blvd | US |
| 735050 | 6810 Walnut Blvd | US |
| 916478 | 6810 Walnut Blvd | US |
| 736892 | 838 Walnut Blvd | US |
| 976031 | 6810 Walnut Blvd | US |
| 978104 | 7504 Walnut Blvd | US |
| 775419 | 7504 Walnut Blvd | US |
| 761688 | 7504 Walnut Blvd | US |
| 761921 | 7504 Walnut Blvd | US |
| 937116 | 6810 Walnut Blvd | US |
| 849275 | 7504 Walnut Blvd | US |
| 980283 | 6810 Walnut Blvd | US |
| 996936 | 6810 Walnut Blvd | US |
| 754518 | 838 Walnut Blvd | US |
| 756364 | 838 Walnut Blvd | US |
| 834079 | 7504 Walnut Blvd | US |
| 758933 | 7504 Walnut Blvd | US |
| 987945 | 7504 Walnut Blvd | US |
| 836745 | 6810 Walnut Blvd | US |
| 988821 | 838 Walnut Blvd | US |
+--------+------------------+---------+
root@perf3 ~ # php load_gh_1335.php 10000 30 10000000
finished inserting
712309 docs per sec
root@perf3 ~ # mysql -P9306 -h0 -e "select * from test where MATCH('@street walnut blvd') and country='US';"
+---------+------------------+---------+
| id | street | country |
+---------+------------------+---------+
| 1360475 | 7504 Walnut Blvd | US |
| 1366250 | 7504 Walnut Blvd | US |
| 1366577 | 6810 Walnut Blvd | US |
| 457504 | 6810 Walnut Blvd | US |
| 1368926 | 7504 Walnut Blvd | US |
| 1379943 | 6810 Walnut Blvd | US |
| 1270264 | 838 Walnut Blvd | US |
| 420685 | 6810 Walnut Blvd | US |
| 1272402 | 7504 Walnut Blvd | US |
| 1273640 | 6810 Walnut Blvd | US |
| 424763 | 6810 Walnut Blvd | US |
| 1281267 | 7504 Walnut Blvd | US |
| 432156 | 7504 Walnut Blvd | US |
| 441679 | 7504 Walnut Blvd | US |
| 668560 | 7504 Walnut Blvd | US |
| 633020 | 7504 Walnut Blvd | US |
| 633370 | 7504 Walnut Blvd | US |
| 26523 | 838 Walnut Blvd | US |
| 637788 | 7504 Walnut Blvd | US |
| 639612 | 838 Walnut Blvd | US |
+---------+------------------+---------+
Awesome @sanikolaev ! Glad you were able to reproduce. Looking forward to the next release!
I can confirm both the syntax error bug and the performance bug are fixed in 6.2.13.
The issue where a phrase in the query caused the daemon to reply with a parsing error, if the index has wordforms with multiple destination forms, has just been fixed in https://github.com/manticoresoftware/manticoresearch/commit/f8bec55bd6d91f5d8a0a730e5742982924378c51
You need to update the daemon from the dev repository after CI publishes the packages to get the fix.
Just upgraded to 6.2.0.
This query-part:
WHERE MATCH('(@(myfields) always_present productie | \"magazijn medewerker\") | (@(otherfields) ^=productie magazijn medewerker$)')
now gives
P08: syntax error, unexpected ')' near ')'
To me the query seems fine. I have run this query since the first Manticore version; it always worked.
Is this correct behavior? What has changed in 6.2.0? There is nothing about this syntax in the breaking changes of 6.2.0.
UPDATE
See MRE in https://github.com/manticoresoftware/manticoresearch/issues/1335#issuecomment-1698776596
The performance issues discussed below are off-topic.