ollyde opened this issue 2 years ago
@ollydixon I am working on a feature for this, but it's not done yet, and I'm not sure exactly when it will be.
What it will do (at least as it looks at the moment) is add a third entry point to the workflow that searches in an entirely different way from the current search feature. Right now there are the server-side search and the bookmark-adding feature; in the future there will also be a cached search feature. You will be able to simply use this search variant instead of the old one if you want to, or use both with different keywords (like r for search now) or shortcuts.
The difference between the server-side search and the cached search is that the server-side search will be a bit slower (as it always is now), but in return you get, for example, full-text search, more advanced logic for matching what you searched for, and newly added bookmarks included in the results right away.
The cached search will update its cache periodically in the background and then search only against that cache, so it will take a bit longer for new bookmarks to be included, and it will not have full-text search or the better search algorithm that is available on the server side. In return it will show results almost instantly, and it will also (possibly as an option) let Alfred have a say in the order of the results list, so that more commonly chosen items move further up. (That breaks things in the classic search variant, which is why it doesn't do that.)
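To make the idea concrete, here is a rough Go sketch of what such a cached search could look like. This is not the workflow's actual code; the file name, refresh interval, and types are all made up for illustration.

```go
// Sketch of the cached-search concept: bookmarks live in a local JSON cache
// that a periodic refresh updates, and queries only ever touch the cache,
// so results return instantly while brand-new bookmarks show up with a delay.
package cachedsearch

import (
	"encoding/json"
	"os"
	"strings"
	"time"
)

// Bookmark is a hypothetical, simplified bookmark record.
type Bookmark struct {
	Title string `json:"title"`
	URL   string `json:"url"`
}

type cacheFile struct {
	Updated   time.Time  `json:"updated"`
	Bookmarks []Bookmark `json:"bookmarks"`
}

const (
	cachePath = "bookmarks_cache.json" // hypothetical cache location
	maxAge    = 10 * time.Minute       // hypothetical refresh interval
)

// LoadBookmarks returns the cached bookmarks, refreshing the cache via fetch
// when it is older than maxAge. In a real workflow the refresh would run in
// the background so a search never waits on the network.
func LoadBookmarks(fetch func() ([]Bookmark, error)) []Bookmark {
	var c cacheFile
	if data, err := os.ReadFile(cachePath); err == nil {
		_ = json.Unmarshal(data, &c)
	}
	if time.Since(c.Updated) > maxAge {
		if fresh, err := fetch(); err == nil {
			c = cacheFile{Updated: time.Now(), Bookmarks: fresh}
			if data, err := json.Marshal(c); err == nil {
				_ = os.WriteFile(cachePath, data, 0o644)
			}
		}
	}
	return c.Bookmarks
}

// Search does a simple case-insensitive substring match against the cache.
// There is no full-text search or server-side ranking here, which is exactly
// the trade-off described above; Alfred's own result learning can then
// reorder the matches it is given.
func Search(bookmarks []Bookmark, query string) []Bookmark {
	query = strings.ToLower(query)
	var out []Bookmark
	for _, b := range bookmarks {
		if strings.Contains(strings.ToLower(b.Title), query) {
			out = append(out, b)
		}
	}
	return out
}
```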
Quite a bit of this is already done, and all the concepts are tested, but I'm not sure when this will be ready for release. Probably within a few months.
@westerlind awesome, thanks for the update; looking forward to testing it! Let me know when that is :-)
@westerlind I have a rough proof of concept illustrating the use of the httpcache library for Go. It works by leveraging the ETag header returned by Raindrop's API to enable reliable client-side caching.
It provides a speedup on my machine when the same query is issued again (though I haven't measured it precisely), but given the way things currently work in this codebase each query is a different request, so there may be limited value. One way of leveraging it would be to download everything locally through the cache and search that way, but that would probably require some testing of how long the cache lasts, since it would be counter-productive to redownload everything for every search 😅 That would also lose server-side querying and might impact functionality?
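Roughly, the usage looks something like the sketch below. The Raindrop endpoint, token handling, and cache directory here are placeholders rather than this workflow's actual code.

```go
// Minimal sketch of the httpcache idea: wrap the HTTP client in a caching
// transport so repeated, identical requests can be served from a local cache
// when Raindrop's ETag headers allow revalidation.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"

	"github.com/gregjones/httpcache"
	"github.com/gregjones/httpcache/diskcache"
)

func main() {
	// Cache responses on disk so they survive between Alfred invocations;
	// httpcache.NewMemoryCacheTransport() would only help within a single run.
	transport := httpcache.NewTransport(diskcache.New("/tmp/raindrop-httpcache"))
	client := transport.Client()

	// Hypothetical search request; adjust endpoint and auth to match the
	// workflow's real Raindrop API usage.
	query := url.Values{"search": {"golang"}}
	req, err := http.NewRequest("GET",
		"https://api.raindrop.io/rest/v1/raindrops/0?"+query.Encode(), nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+"<YOUR_TOKEN>")

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// httpcache sets this header when the response was served from the cache
	// (after revalidation via the ETag), which is where the speedup shows up.
	fmt.Println("from cache:", resp.Header.Get(httpcache.XFromCache))
	fmt.Println(len(body), "bytes")
}
```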
So to summarize: I'm not sure this is useful but wanted to share my findings just in case.
Hi @westerlind, thanks so much for this workflow! Was curious as to whether the cache was ever implemented 🙏
It's not, unfortunately. But I still intend to do it, I just haven't gotten to it yet.
OK, look forward to it 😄
Switched to Raycast now; way better than Alfred!
I was traveling the last few months and using this plugin. I found that it sometimes took 5-10 seconds to load bookmarks. Now I'm at home on fiber, and it still sometimes takes over 3 seconds while typing. This makes the Alfred plugin useless; please consider adding a cache here.