Rtizer-9 closed this issue 4 years ago.
The rules now show whether they are default or user-made in the preferences, since v4.1 (just released).
I now whitelist `/searchbyimage?image_url=` on google; these URLs work for me:

https://www.google.fr/searchbyimage?image_url=https%3A%2F%2Favatars0.githubusercontent.com%2Fu%2F35103368%3Fs%3D88%26v%3D4&encoded_image=&image_content=&filename=
https://tineye.com/search/8f451386e06256e364fc7d3f1a6e35efa457f17e?page=1

If you have a specific add-on in mind that we're breaking, please let me know which one so I can test it.
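For context, the reason these links need whitelist entries at all is that the target image URL is percent-encoded inside a query parameter (`image_url` here), which is exactly the pattern CleanLinks normally rewrites. A minimal sketch of that detection, under a simplified model (illustrative only, not the actual CleanLinks code):

```typescript
// Illustrative sketch only, not the actual CleanLinks implementation.
// A query parameter whose decoded value is itself a full URL is a
// candidate for "cleaning", i.e. redirecting straight to the target.
function extractEmbeddedUrl(link: string, param: string): string | null {
  const value = new URL(link).searchParams.get(param); // percent-decoded
  return value !== null && /^https?:\/\//.test(value) ? value : null;
}

const link =
  "https://www.google.fr/searchbyimage?image_url=" +
  "https%3A%2F%2Favatars0.githubusercontent.com%2Fu%2F35103368%3Fs%3D88%26v%3D4";
console.log(extractEmbeddedUrl(link, "image_url"));
// -> https://avatars0.githubusercontent.com/u/35103368?s=88&v=4
```

Whitelisting `image_url` on these pages tells the cleaner that this particular embedded URL is legitimate and should be left alone.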
The whitelisted parameters are `q` on all search pages including images (now whitelisted), and a specific `url` parameter for image search.

On searx URLs: if I understand well, these can be on any domain, is that right? Anyone can add their own instance, if I read their website correctly.
So that will have to be a rule with domain `*.*` and the path matched exactly. Right now, from what I can see, the rules are (see the sketch after this list):

- `*.*`, path `^/autocompleter$`, whitelist `q`
- `*.*`, path `^/image_proxy$`, whitelist `url`
- `proxy.*.*` (no subdomain matching), path `^/$`, whitelist `mortyurl`
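To spell out how I read those three rules (a sketch under assumed semantics, not the real rule engine): a rule is a domain pattern plus an exact path regex, and every rule that matches a URL exempts its whitelisted parameters from cleaning.

```typescript
// Sketch of the rule semantics described above; illustrative only.
interface Rule {
  domain: string;       // "*.*" matches any host; "proxy.*.*" needs a "proxy." prefix
  path: RegExp;         // matched against the URL path
  whitelist: string[];  // query parameters exempted from cleaning
}

const searxRules: Rule[] = [
  { domain: "*.*",       path: /^\/autocompleter$/, whitelist: ["q"] },
  { domain: "*.*",       path: /^\/image_proxy$/,   whitelist: ["url"] },
  { domain: "proxy.*.*", path: /^\/$/,              whitelist: ["mortyurl"] },
];

// Crude stand-in for the real domain matching.
function domainMatches(pattern: string, host: string): boolean {
  if (pattern === "*.*") return true;
  if (pattern === "proxy.*.*") return host.startsWith("proxy.");
  return pattern === host;
}

function whitelistedParams(link: string): string[] {
  const url = new URL(link);
  return searxRules
    .filter(r => domainMatches(r.domain, url.hostname) && r.path.test(url.pathname))
    .flatMap(r => r.whitelist);
}

console.log(whitelistedParams("https://searx.example.org/image_proxy?url=x"));
// -> ["url"]
```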
I must say there are a couple of proxies claiming to be anti-tracking that we break (another one would be duckduckgo). That's because, while we successfully shortcut the proxy to a valid image, the restrictive Content Security Policy on those pages blocks cross-domain requests, precisely to prevent leaking our activity.
I think those are reasonable use cases that require whitelisting. However, we must be wary of a couple of things in the future:
Actually, duckduckgo is even a little more perverse: it doesn't use restrictive policies. So technically, there, cleaning the links leaks our activity while the proxy is trying to hide it.
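To illustrate the CSP point with a hypothetical policy (not copied from any specific proxy, just the kind of header involved):

```typescript
// Hypothetical example of how a restrictive CSP breaks the proxy shortcut.
// Suppose the proxy page is served with a header like:
//   Content-Security-Policy: default-src 'self'; img-src 'self'
// and loads images through its own endpoint:
const proxied = "https://proxy.example.org/?mortyurl=https%3A%2F%2Fexample.com%2Fphoto.jpg";

// CleanLinks shortcuts this to the embedded target:
console.log(new URL(proxied).searchParams.get("mortyurl"));
// -> https://example.com/photo.jpg

// But img-src 'self' forbids that cross-origin image request, so the
// browser blocks the load and the page looks broken.
```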
I think the engine URL has lots of variations, and different add-ons and countries use different URLs, hence the breakage.
One such URL is https://www.google.com/imghp?hl=en
I'm using https://addons.mozilla.org/en-US/firefox/addon/selection-context-search/
https://www.tineye.com/search?url=https://avatars1.githubusercontent.com/u/35103368?s=120&v=4
which, as you can see, gets redirected.
[x] For searx, as I said earlier, I just wanted to bring it to your attention in case you would like to add rules for them by default, because they can easily be added then and there. Those autocompleter and image_proxy rules look great, so I'll just leave it to you. It's not an issue needing urgent attention.
[x] About the CSP and CleanLinks clashing, I think it is best left to a per-user, case-by-case basis, because everyone tends to have their own subjective opinion on privacy. Also, websites tend to change their policies over time (changing owners, doing partnerships, changing terms and conditions, etc.), so it should be left to the user to set rules as they wish.
One question: is it good practice to raise more issues completely unrelated to the current one in the SAME thread? I don't really know whether that is more convenient for the developer (you), or whether it gets irritating and you'd rather users create new threads for new issues.
PS: I'm having a bit of a hard time figuring out how to format content properly on GitHub, so please bear with me. Thanks.
Interesting, that tineye URL is not what happens when you paste a URL into the search box. I've now added it as well, and changed the `/searchbyimage` rule as you suggested.
For where and how to post, don't worry about it, no irritation here. Usual best practice is to keep a single topic per issue and open a new one for new topics. It doesn't really matter much though, as I get notifications for all comments on this repository, so wherever you post something I'll see it (and eventually get around to it).
The issue was actually that when you are on a page that requires you to log in to Google or GitHub, the login fails because of the CleanLinks redirection, and you end up in an infinite loop.
Example case: go to a website with a YouTube channel and click the subscribe button. You get redirected to a login page, but when you enter your credentials and try to log in, you get redirected again. The reason, I think, is that in these cases the URL contains the redirect URL of that particular channel, so CleanLinks intercepts it in between.
You can replicate the issue by going to any YouTube channel and clicking the subscribe button: because of the redirections you'll get a "something went wrong, try again" error.
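To make the loop concrete, here is a hypothetical reconstruction (the parameter names and IDs are made up; the real URLs differ):

```typescript
// Hypothetical reconstruction of the login loop; names and IDs are invented.
// 1. The subscribe button sends you to a login page that carries the
//    post-login destination as an embedded URL in a query parameter:
const login =
  "https://accounts.example.com/ServiceLogin?continue=" +
  encodeURIComponent("https://www.youtube.com/channel/UC_hypothetical?sub_confirmation=1");

// 2. CleanLinks spots the embedded URL and rewrites the link straight to it:
console.log(new URL(login).searchParams.get("continue"));
// -> https://www.youtube.com/channel/UC_hypothetical?sub_confirmation=1

// 3. You land back on the still-logged-out channel page, which sends you to
//    the login page again, hence the "something went wrong, try again" error.
```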
I also added some GitHub / YouTube accounts / Google accounts rules; I hope they cover all the cases. A new version is released now on AMO.
Everything is fixed now :+1:
A question:
If you have modified a default rule (NOT created a new one), like I did with that `*.google.*/searchbyimage` URL, does the add-on update overwrite the whole modification, or does the user modification take precedence? There is no clear way to see this for a default rule (you can for a user-added rule, thanks to your previous update). So what's a good way to handle such cases, if there is one, without taking before/after screenshots and comparing the rules?
A request:
Can you please make a separate preferences page for CleanLinks? The options/features are getting more and more complex, and thus hard to manage in the Firefox about:addons page.
Something like what https://addons.mozilla.org/en-US/firefox/addon/requestcontrol/ has implemented. That add-on is far less capable in comparison, but its preferences page looks extremely organized and easy to work with.
I see their add-on is much prettier indeed. Let’s track that in a separate issue, #102.
On the rules, the updater compares the new default rules with the previous default rules.
So if you modified a rule, it will not be removed when updating, since it no longer matches the old default rule. If a new default rule is introduced that matches your existing or modified rule, they are merged; and if they are exactly identical, that means (a) this rule won't be changed in your rules file, and (b) a rule that was previously modified is now shown as default again, because it matches the new defaults.
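In pseudo-code terms, the update step behaves roughly like this (a sketch of the behaviour described above, not the actual implementation):

```typescript
// Illustrative sketch of the updater behaviour; not the real code.
type RuleKey = string; // e.g. domain + path + whitelist, serialized

function updateRules(
  userRules: Map<RuleKey, { isDefault: boolean }>,
  oldDefaults: Set<RuleKey>,
  newDefaults: Set<RuleKey>,
): void {
  // Only rules still identical to an old default get removed.
  // A rule you modified no longer matches its old default key, so it stays.
  for (const key of oldDefaults) {
    if (!newDefaults.has(key) && userRules.get(key)?.isDefault) {
      userRules.delete(key);
    }
  }
  // New defaults are merged in. If a new default is identical to an existing
  // (possibly previously modified) rule, nothing changes in the rules file;
  // the rule is simply flagged as default again.
  for (const key of newDefaults) {
    const existing = userRules.get(key);
    if (existing) existing.isDefault = true;
    else userRules.set(key, { isDefault: true });
  }
}
```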
Please add default rules for:
Tineye, Bing, Yandex and similar reverse search engines.
Also update the Google URL: I think the URL (`searchbyimage`) has changed, and the add-ons providing reverse search from the context menu break because of this. (Please confirm first whether that is the case.)
Searx search engine instances (https://searx.space/)
When you search for something that requires fetching images from Wikipedia, searx uses an image proxy whose URL contains a `url` parameter, which CleanLinks intercepts.
I think this particular issue can be ignored, because searx has multiple instances, and the nice thing is that the exception for that image proxy can be added then and there, since it follows the same format every time, without any uid.
So I just thought of bringing it to your attention, in case you'd like to add some regex covering the most-used (if not all) searx instances; although I personally agree it can easily be ignored for the reason mentioned above.
I also request some changes in the UI and handling of rules, if possible, to make it easy for the user to differentiate between default rules and rules they added themselves. A simple indicator would suffice, for cases where a user wants to decide whether it's a default rule that was thoughtfully added after checking, or one they added themselves that now requires modification after an error.
As a long-time CleanLinks user, it becomes hard to remember which rule is a default and which was custom-added, especially considering the amount of variance in paths for a single domain like Google's.