janonym1 opened 2 years ago
Is the problem that the server takes a long time to find these results, or is it a problem of sending the results over the wire or displaying them on the client? We'd like to avoid settings that have to be set on clients to match the servers they're connecting to, if at all possible, since that doesn't really scale to an open ecosystem.
Isn't the debounceTime already set in the client?
The speed of the search seems to depend on how many usernames match a given letter and how many search requests I have already fired off (and also on how fast a user types). Sometimes it works fine, but sometimes the ma1sd search hangs a bit (I assume because the requests get rate-limited at my backends) and only returns the results much later. In that case, some of the search results still come back long after I made the queries. For example, it sometimes returns the search for "surname" correctly but then still looks up "sur" and "surn" some time after the final result has already arrived, and only if I'm lucky in that order.
I tried to tune my ma1sd config, but I couldn't find anything related to how the user search from Element gets triggered. Throwing out redundant searches and enabling hashmaps helped a bit performance-wise, but I still get a lot of requests that sometimes cause the problem.
So the search results sometimes never get shown correctly, since the client is still waiting for the server response, which can come late depending on the order of the searches and the number of matching users.
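To make the race clearer, here is a minimal sketch of the pattern I mean (my own illustration, not Element's actual code; `searchDirectory` and `renderResults` are hypothetical placeholders): each keystroke schedules a backend query after a short debounce, and nothing stops a slow response for an earlier prefix from arriving after the response for the full name unless stale responses are explicitly dropped.

```ts
// Sketch of a debounced per-keystroke directory search; searchDirectory()
// and renderResults() are hypothetical placeholders, not element-web APIs.
declare function searchDirectory(term: string): Promise<string[]>;
declare function renderResults(term: string, results: string[]): void;

const DEBOUNCE_MS = 150; // the hard-coded delay mentioned below

let debounceHandle: ReturnType<typeof setTimeout> | undefined;
let latestRequestId = 0;

function onSearchInput(term: string): void {
    if (debounceHandle !== undefined) clearTimeout(debounceHandle);
    debounceHandle = setTimeout(async () => {
        const requestId = ++latestRequestId;
        // One backend query per debounced keystroke.
        const results = await searchDirectory(term);
        // Without this guard, a slow response for "sur" can overwrite the
        // already-rendered results for "surname" (the reverse-order effect I see).
        if (requestId !== latestRequestId) return;
        renderResults(term, results);
    }, DEBOUNCE_MS);
}
```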
Your use case
What would you like to do?
It would be nice to have an easy setting or variable to either control the debounceTimer (default: 150ms) or to set a delayTimer after which the user search fires off in element-web.
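As a sketch of what such a knob could look like (the config key `user_directory_debounce_ms` is made up purely for illustration; it does not exist in element-web today):

```ts
// Hypothetical setting: "user_directory_debounce_ms" is an invented key,
// not an existing element-web config option.
interface SearchConfig {
    user_directory_debounce_ms?: number;
}

// Fall back to the current hard-coded 150ms when the key is absent.
function getDebounceMs(config: SearchConfig): number {
    return config.user_directory_debounce_ms ?? 150;
}

// Wrap the directory search so it only fires after the configured delay.
function makeDebouncedSearch(
    config: SearchConfig,
    search: (term: string) => void,
): (term: string) => void {
    const delay = getDebounceMs(config);
    let handle: ReturnType<typeof setTimeout> | undefined;
    return (term: string) => {
        if (handle !== undefined) clearTimeout(handle);
        handle = setTimeout(() => search(term), delay);
    };
}
```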
Why would you like to do it?
We have 50k users on our homeserver, so searching for names can return a lot of entries. When I start typing a name (e.g. "surname"), I see the search firing off in the backend (ma1sd) before I have even typed the first letter:
org.apache.mina.filter.codec.ProtocolEncoderException: org.apache.directory.api.ldap.codec.api.MessageEncoderException: ERR_04058 Cannot have a null initial, any and final substring
and then it searches again for every single character added:
Threepid: found 2654 match(es) for 's'
Threepid: found 1199 match(es) for 'su'
Threepid: found 395 match(es) for 'sur'
...
Threepid: found 5 match(es) for 'surname'
Since my backend is configured to also be searchable by the LDAP attributes display name, surname and givenname, and not just the mail/3PID, it does the same thing for
io.kamax.mxisd.backend.sql.generic.GenericSqlDirectoryProvider - Searching users by display name using 'su'
This becomes a problem when typing a more common name; we soon see messages like
Suppressed 44423 messages from matrix-ma1sd.service
and rate limiting starts to kick in, resulting in a chunky, slow or, at worst, non-working search. For example, we have many users whose names begin with "k", so typing "kath" triggers the same cascade of queries. By the time the user search finally gets to the relevant part (surname and givenname typed out far enough to clearly identify a few users), it is already rate-limited so hard that it either returns a result way too late or not at all. Sometimes it does return the results, but in a weird (even reverse) order, which I assume also happens because of some rate limiting somewhere.
I already tried optimizing the search on the ma1sd side by enabling hashmaps, which improves performance, but the search in Element still fires off for almost every single typed character. Typing a seven-character name like "surname" already produces roughly one directory query per prefix, for each searchable attribute, so even if just a percent of our 50k users start using the search, that adds up to a lot of unnecessary load on my server.
How would you like to achieve it?
Allow for an easy configuration of the debounceTime, which is currently set to 150ms. Or (even better) only fire off the search when the user presses Enter or clicks the search button.
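For the second option, the wiring could look roughly like this (a hypothetical sketch, not a patch against element-web; `searchDirectory` and `renderResults` are placeholders): the directory search fires only on Enter or a button click, never while the user is still typing.

```ts
// Hypothetical: trigger the directory search only on an explicit action.
// searchDirectory() and renderResults() are placeholders, not real APIs.
declare function searchDirectory(term: string): Promise<string[]>;
declare function renderResults(term: string, results: string[]): void;

function wireExplicitSearch(input: HTMLInputElement, button: HTMLButtonElement): void {
    const runSearch = async (): Promise<void> => {
        const term = input.value.trim();
        if (term.length === 0) return;
        renderResults(term, await searchDirectory(term));
    };

    // Search on Enter or on the search button, never on plain keystrokes.
    input.addEventListener("keydown", (ev) => {
        if (ev.key === "Enter") void runSearch();
    });
    button.addEventListener("click", () => void runSearch());
}
```

With something like this, typing "kath" would send exactly one request to ma1sd instead of one per character.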
Have you considered any alternatives?
I tried a lot of settings in my ma1sd backend to limit it as much as possible, but I cannot restrict the searchable attributes any further than email, surname and givenname. I also tried hashmaps for better performance, but that didn't help much.
Additional context
No response