nruhe opened 2 years ago
> Is it that much of a burden for clients to upload the filter? If we switch to using hashes, then clients would need to [compute the hashes themselves], which seems to me like more effort than just blindly uploading the filter.
Perhaps effort is the wrong way to look at it, and here's why.
Hashes provide significantly more utility than generated IDs. With hashing, clients that want to keep blindly uploading filters can do so without interruption; nothing changes for them. On the other hand, clients with a large number of pre-defined filters and bandwidth or network-availability constraints genuinely need a way to avoid uploading large JSON documents, and that requires hashes. They may also want to pre-compute hashes during the build process so the values can be extracted into a separate constants-definition file, which is helpful for automation and testing purposes.
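To make the build-time idea concrete, here's a minimal sketch (TypeScript/Node) of pre-computing filter hashes into a constants file. The canonicalization and hash choice (sorted-key JSON, SHA-256) and the example filters are assumptions of mine; the actual scheme would need to be pinned down by the spec.

```ts
// Build-time sketch: pre-compute filter hashes into a constants file.
// NOTE: canonical JSON + SHA-256 is an assumed scheme, not spec.
import { createHash } from "node:crypto";
import { writeFileSync } from "node:fs";

// Deterministic serialization: sort object keys so semantically identical
// filters always hash to the same value.
function canonicalJson(value: unknown): string {
  if (Array.isArray(value)) {
    return `[${value.map(canonicalJson).join(",")}]`;
  }
  if (value !== null && typeof value === "object") {
    const obj = value as Record<string, unknown>;
    const entries = Object.keys(obj)
      .sort()
      .map((k) => `${JSON.stringify(k)}:${canonicalJson(obj[k])}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value);
}

function filterHash(filter: unknown): string {
  return createHash("sha256").update(canonicalJson(filter)).digest("hex");
}

// Example pre-defined filters a client might ship with.
const FILTERS = {
  TIMELINE_LITE: { room: { timeline: { limit: 10 } } },
  NO_PRESENCE: { presence: { types: [] } },
};

// Emit a constants module so the IDs can be referenced statically in config
// and asserted against in tests, without uploading anything at runtime.
const constants = Object.entries(FILTERS)
  .map(([name, filter]) => `export const ${name}_FILTER_ID = "${filterHash(filter)}";`)
  .join("\n");
writeFileSync("filterIds.generated.ts", constants + "\n");
```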
Now, I do admit this conversation mostly applies to filters that configure only static values, like limits or event types. But looking to the near future, it's reasonable to expect that we'll eventually need a way to specify template parameters too. For example, I might want to use 90% of a filter's configuration but pass a handful of room IDs dynamically when I make the `/sync` request, e.g. set `not_rooms` to something like `['$not_rooms']` and query `/sync` with `?fp_not_rooms=1,2,3`.
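To illustrate what that might look like: both the `$not_rooms` placeholder and the `fp_not_rooms` query parameter below are purely hypothetical syntax from this comment, not anything in the spec.

```ts
// Hypothetical template filter: '$not_rooms' is a placeholder to be bound
// at request time, not real filter syntax.
const templateFilter = {
  room: {
    timeline: {
      limit: 20,
      not_rooms: ["$not_rooms"],
    },
  },
};

// The placeholder would then be filled per request via a (hypothetical)
// filter-parameter query argument, instead of uploading a new filter:
//
//   GET /_matrix/client/v3/sync?filter=<filter hash>&fp_not_rooms=!a:example.org,!b:example.org
```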
Suggestion

Filters created via `POST /_matrix/client/v3/user/{userId}/filter` are difficult to manage because the resulting ID is generated dynamically by the homeserver when the client uploads. This forces clients to use computed IDs obtained by uploading each filter's full JSON definition to the homeserver on startup. By switching to hashes, filter IDs could be computed independently and referenced statically in config files. Moreover, this lays the groundwork for a separate service that fresh clients can use to verify filters already exist before sending their full JSON definitions.
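As a rough sketch of the flow this would enable, the client could compute the ID locally, check for the filter with the existing "get filter" endpoint, and only upload the full JSON if it's missing. This assumes the proposed hash-as-ID behavior (SHA-256 over the filter JSON here), which is not how filter IDs work today.

```ts
// Sketch of a "verify before upload" flow under the proposed hash-as-ID scheme.
import { createHash } from "node:crypto";

function filterHash(filter: object): string {
  // A real implementation should hash a canonical serialization
  // (see the earlier sketch); plain JSON.stringify is used here for brevity.
  return createHash("sha256").update(JSON.stringify(filter)).digest("hex");
}

async function ensureFilter(
  homeserver: string,
  accessToken: string,
  userId: string,
  filter: object,
): Promise<string> {
  const filterId = filterHash(filter);
  const headers = { Authorization: `Bearer ${accessToken}` };
  const base = `${homeserver}/_matrix/client/v3/user/${encodeURIComponent(userId)}/filter`;

  // Cheap existence check: only send the full JSON document if the
  // homeserver doesn't already know this filter.
  const check = await fetch(`${base}/${filterId}`, { headers });
  if (check.ok) {
    return filterId;
  }

  const upload = await fetch(base, {
    method: "POST",
    headers: { ...headers, "Content-Type": "application/json" },
    body: JSON.stringify(filter),
  });
  if (!upload.ok) {
    throw new Error(`Filter upload failed: ${upload.status}`);
  }
  return filterId;
}
```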