scrapy / w3lib

Python library of web-related functions
BSD 3-Clause "New" or "Revised" License

canonicalize_url isn't handling some crucial cases #107

Open sibiryakov opened 6 years ago

sibiryakov commented 6 years ago

Useful links:

- https://developers.google.com/safe-browsing/v4/urls-hashing
- https://github.com/iipc/urlcanon/blob/master/python/urlcanon/canon.py#L530
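The Safe Browsing document linked above specifies steps such as stripping surrounding whitespace, lowercasing the host, removing leading/trailing dots and collapsing runs of dots in it, and resolving "." and ".." path segments. A stdlib-only sketch of a few of those steps (the function name is illustrative, not w3lib's API; ports, userinfo, and fragments are ignored for brevity):

```python
import re
from urllib.parse import urljoin, urlsplit, urlunsplit

def canonicalize(url):
    """Hypothetical sketch of a few Safe Browsing canonicalization steps.

    Strips surrounding whitespace, lowercases the host, removes
    leading/trailing dots and collapses runs of dots in it, and applies
    RFC 3986 dot-segment removal to the path.
    """
    parts = urlsplit(url.strip())
    # parts.hostname is already lowercased; fix up the dots ourselves.
    host = re.sub(r"\.+", ".", parts.hostname or "").strip(".")
    # urljoin against the host root performs dot-segment removal.
    return urljoin(f"{parts.scheme}://{host}/",
                   urlunsplit(("", "", parts.path or "/", parts.query, "")))
```

For instance, `canonicalize(" http://Example..COM./a/./b/../c?x=1 ")` yields `http://example.com/a/c?x=1`.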

kmike commented 6 years ago

Nice links, thanks!

> removal of userinfo

What does it mean? username/password?

> dots and slashes in path and hostname

Could you please give an example? What's wrong with e.g. dots in hostname?

> spaces succeeding and preceding the URL

Arguably this is an issue with link extraction, not with canonicalization: URLs shouldn't contain such whitespace. See also: https://github.com/scrapy/scrapy/issues/1614.

> common session id variables and their values

This would be a very good feature to have, but we can't just blindly strip some known session_id parameter names and values by default. See also: https://github.com/scrapy/scrapy/issues/1560.

> ip v6 canonicalization

A good call.
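For IPv6 literals, the usual canonical form is the compressed lowercase representation from RFC 5952, which the stdlib `ipaddress` module can produce. A sketch (the helper name is illustrative, not an existing w3lib function):

```python
import ipaddress
from urllib.parse import urlsplit, urlunsplit

def canonicalize_ipv6_host(url):
    """Hypothetical sketch: rewrite an IPv6 literal host into its
    compressed lowercase form (RFC 5952), keeping any port."""
    parts = urlsplit(url)
    try:
        host = ipaddress.IPv6Address(parts.hostname).compressed
    except (ipaddress.AddressValueError, ValueError):
        return url  # not an IPv6 literal; leave the URL untouched
    netloc = f"[{host}]" + (f":{parts.port}" if parts.port else "")
    return urlunsplit((parts.scheme, netloc, parts.path,
                       parts.query, parts.fragment))
```

For example, `http://[2001:0DB8:0:0:0:0:0:1]:8080/a` becomes `http://[2001:db8::1]:8080/a`.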

ghost commented 6 years ago

Regarding the session variables, it may also be worth considering Google Analytics URL parameters such as "utm_source"; these attributes are added by many websites and social media tools to all outbound URLs and could probably be safely stripped during canonicalisation.
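w3lib already ships `url_query_cleaner` for dropping named query parameters. A stdlib-only sketch of the same idea, for context (the `TRACKING_PARAMS` list and function name are illustrative; a real deployment would make the list configurable):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Assumed list for illustration only; not a complete or authoritative set.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content"}

def strip_tracking_params(url):
    """Hypothetical sketch: drop known analytics parameters while
    preserving the order of the remaining query arguments."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))
```

For example, `http://example.com/p?id=3&utm_source=tw` would be reduced to `http://example.com/p?id=3`.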

sibiryakov commented 6 years ago

Yes, userinfo is username and password.
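Removing userinfo amounts to dropping everything up to the "@" in the authority component. A minimal stdlib sketch (the helper name is illustrative, not w3lib's API):

```python
from urllib.parse import urlsplit, urlunsplit

def strip_userinfo(url):
    """Hypothetical sketch: drop any user:password@ prefix from the netloc."""
    parts = urlsplit(url)
    # rpartition returns the netloc unchanged when there is no "@".
    netloc = parts.netloc.rpartition("@")[2]
    return urlunsplit((parts.scheme, netloc, parts.path,
                       parts.query, parts.fragment))
```

For example, `http://user:pass@example.com/p?q=1` becomes `http://example.com/p?q=1`.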

> Could you please give an example? What's wrong with e.g. dots in hostname?

"google.com.", for example: the trailing dot makes the hostname fully qualified, but it refers to the same host as "google.com".

Obviously these are task-dependent issues, but there is currently no mechanism to enable such behaviour.
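The trailing-dot case is easy to reproduce: a plain string comparison of the parsed netlocs treats the two forms as different, so a canonicalizer has to strip the dot explicitly.

```python
from urllib.parse import urlsplit

# The trailing dot makes the name fully qualified in DNS, but the two
# URLs compare as different strings unless a canonicalizer strips it.
with_dot = urlsplit("http://google.com./index.html").netloc
without = urlsplit("http://google.com/index.html").netloc
print(with_dot == without)  # False
```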

kmike commented 6 years ago

> those attributes will be added by lots of websites and social media tools to all outbound URLs and could probably be safely stripped during canonicalisation.

I don't quite like doing this all by default without an option to turn it off. So the main question for now is how to make this behavior overridable, so that users can implement such rules themselves, without having to modify w3lib or scrapy. We can also provide something by default, but I think it should be a next step, and a separate task.
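One way to make the behavior overridable, sketched here as a design idea only (this is not an existing w3lib or Scrapy API), is to let users compose a canonicalizer from a pipeline of URL-to-URL functions:

```python
def apply_rules(url, rules):
    """Hypothetical design sketch: run the URL through a user-supplied
    pipeline of url -> url functions, so projects can opt in to
    aggressive rules (session ids, tracking parameters) without
    patching w3lib or Scrapy."""
    for rule in rules:
        url = rule(url)
    return url

def strip_whitespace(url):
    # Toy user-defined rule: remove surrounding whitespace.
    return url.strip()

def drop_fragment(url):
    # Toy user-defined rule: discard the #fragment part.
    return url.split("#", 1)[0]

print(apply_rules("  http://example.com/a#top  ",
                  [strip_whitespace, drop_fragment]))
# http://example.com/a
```

A default rule set could then be shipped as a next, separate step, while users stay free to replace or extend it.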

> obviously these are task-dependent issues, but there is no mechanism to enable such behaviour.

Yep, it is discussed in https://github.com/scrapy/scrapy/issues/1560. Changing canonicalize_url to perform these actions is not enough: there should be a mechanism in Scrapy to enable it, and it should also be customizable. This part of the ticket is a duplicate of https://github.com/scrapy/scrapy/issues/1560, so it is probably better to keep the discussion of this feature there. The problem is known, but no concrete proposal for fixing it has been made so far.

kmike commented 6 years ago

> yes, userinfo is username and password.

Is it a real issue in practice? I understand why it can help, but doing it by default could also break some use cases if https://github.com/scrapy/scrapy/pull/1466 gets merged.