Open somegooser opened 1 year ago
50k is not a large dataset; it easily indexes millions of rows. Can you tell us a bit more about the structure and how/where you index the data?
On Thu, Jun 29, 2023 at 10:41 AM somegooser wrote:
Hi,
I have performance issues when indexing large datasets with 50000 records. It takes 30+ minutes.
The indexed content is not even long. It is approximately 50 characters per row.
This also happens with other datasets with only 500 rows of LONG text.
Any information how to boost performance?
Thanks for the reply.
I am using a simple dataset with 50000+ company names. I am only using a custom tokenizer.
OK, can you show us the code for the tokenizer and the table structure?
Hi,
This is my tokenizer:
```php
<?php

namespace Search;

use TeamTNT\TNTSearch\Support\AbstractTokenizer;
use TeamTNT\TNTSearch\Support\TokenizerInterface;

class Tokenizer extends AbstractTokenizer implements TokenizerInterface
{
    static protected $pattern = '/[^\p{L}-\p{N}]+/u';

    public function tokenize($text, $stopwords = [])
    {
        if ($text === null) {
            return [];
        }

        $text  = mb_strtolower($text, 'UTF-8');
        $text  = str_replace(['-', '_', '~'], [' ', ' ', '-'], $text);
        $text  = strip_tags($text);
        $split = preg_split($this->getPattern(), $text, -1, PREG_SPLIT_NO_EMPTY);

        return array_diff($split, $stopwords);
    }
}
```
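For reference, the pipeline above can be traced on a sample name. This is a standalone sketch that inlines the same steps without the TNTSearch base class; the sample input is made up, and the literal hyphen is moved to the end of the character class so it cannot be read as a range separator:

```php
<?php
// Standalone trace of the tokenizer's steps (no TNTSearch dependency).
$text = 'Acme-Corp~2000 <b>Ltd.</b>';

$text = mb_strtolower($text, 'UTF-8');                        // "acme-corp~2000 <b>ltd.</b>"
$text = str_replace(['-', '_', '~'], [' ', ' ', '-'], $text); // "acme corp-2000 <b>ltd.</b>"
$text = strip_tags($text);                                    // "acme corp-2000 ltd."

// Split on any run of characters that is not a letter, digit, or hyphen.
$tokens = preg_split('/[^\p{L}\p{N}-]+/u', $text, -1, PREG_SPLIT_NO_EMPTY);

print_r($tokens); // acme, corp-2000, ltd
```

Note that `~` is first rewritten to `-` and the pattern then keeps hyphens inside tokens, so `corp-2000` survives as a single token.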
My query is super simple:

```php
$indexer->query('SELECT id, name FROM companies');
```
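For completeness, a typical way to wire this together, following the custom-tokenizer example from the TNTSearch README (a hedged sketch: the connection details, storage path, and index name here are illustrative):

```php
<?php

use TeamTNT\TNTSearch\TNTSearch;
use Search\Tokenizer;

$tnt = new TNTSearch;
$tnt->loadConfig([
    'driver'    => 'mysql',
    'host'      => 'localhost',
    'database'  => 'app',
    'username'  => 'user',
    'password'  => 'secret',
    'storage'   => __DIR__ . '/storage/',  // where the .index SQLite file is written
    'tokenizer' => Tokenizer::class,       // register the custom tokenizer
]);

$indexer = $tnt->createIndex('companies.index');
$indexer->query('SELECT id, name FROM companies');
$indexer->run();
```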
I'm using this package with a result set of 1.5 million records. Indexing from scratch takes ~5 minutes.
That's crazy...
There is something weird anyway.
Indexing 10,000 rows takes about 20 seconds on my server, but indexing 100,000 rows of similar data (so no difference in the form or length of the data) takes about 30 minutes. Even updating the index is very slow, not just a complete reindex.
Could it be something to do with the size of the index file?
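One way to narrow that down is to time the work in fixed-size batches and watch whether each batch takes longer than the previous one: if per-batch time grows as the index grows, the bottleneck is in writing to the index file, not in tokenization. A rough sketch, assuming a `$tnt` instance configured as above and using the `insert()` update API from the TNTSearch README (the batch size and connection details are illustrative):

```php
<?php
// Hypothetical timing harness: insert rows in 10k batches into an
// existing index and report seconds per batch. Growing per-batch
// times indicate indexing cost scales with current index size.
$tnt->selectIndex('companies.index');
$index = $tnt->getIndex();

$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
$stmt = $pdo->query('SELECT id, name FROM companies ORDER BY id');

$batch = 0;
$start = microtime(true);
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    $index->insert($row);
    if (++$batch % 10000 === 0) {
        printf("rows %d: %.1fs\n", $batch, microtime(true) - $start);
        $start = microtime(true);
    }
}
```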