julienCXX opened this issue 8 years ago
I tried the first 5 links. With the Firefox API, the title, URL and description are fetched correctly. However, with the bookmarklet (or the Shaarli-next Firefox addon), the description is not loaded.
Thanks for the dataset! It'll be easier to fix this for good.
So far, here's what I got:
1. A 30x HTTP redirection can contain a relative URL.
2. Some hosts require the `request_fulluri` attribute in the HTTP request, some others reject it.
3. The `<title>` HTML tag can contain an attribute (eg. `<title stuff="morestuff">`).
4. One host returns `HTTP 406 Not Acceptable`, which is not very kind, because my browser is too old (Firefox 23 UA).
5. `<meta charset=` can be followed by single quotes, while we only expect double quotes or nothing, which leads to an invalid charset.

Truncated/not the right one:
- http://expandedramblings.com : point 2.
- amazon: the title is more likely changed in JS after page loading.
- meetup: same as amazon.
- https://wiki.cyanogenmod.org/w/Install_CM_for_jfltexx : Retrieved title is not escaped! That's very bad.
- http://host1.no/ returns a 404 page (probably because I'm a bot), but with a 200 HTTP code. Don't use this host.
Encoding issues: the `mb_convert_encoding` parameters are messed up.
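For the `<title>` attribute and `<meta charset>` quoting issues listed above, a rough sketch of more tolerant parsing could look like this (hypothetical helper names, not Shaarli's actual code):

```php
<?php
// Sketch only: tolerant <title> and charset extraction (made-up helper names).
function extract_title($html)
{
    // Allow attributes inside the <title> tag, e.g. <title stuff="morestuff">.
    if (preg_match('/<title[^>]*>(.*?)<\/title>/is', $html, $m)) {
        return trim($m[1]);
    }
    return false;
}

function extract_charset($html)
{
    // Accept single quotes, double quotes or no quotes around the value,
    // e.g. <meta charset='utf-8'>, <meta charset="utf-8">, <meta charset=utf-8>.
    if (preg_match('/<meta[^>]*charset=["\']?([a-zA-Z0-9_.:+-]+)/i', $html, $m)) {
        return strtolower($m[1]);
    }
    return 'utf-8'; // assumption: default to UTF-8 when nothing is declared
}
```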
Not solved yet and/or probably won't fix:
```
[
    0 => 'HTTP/1.1 301 Moved Permanently',
    1 => 'HTTP/1.1 301 Moved Permanently',
    2 => 'HTTP/1.1 200 OK',
]
```
Probably due to going through reverse proxies, but that's more likely a bad configuration on their side.
@alexisju: it doesn't work the same way. With bookmarklets, the page is already opened, and Shaarli grabs the title with JS (or Firefox API). In Shaarli, it has to make an external request from your server to the targeted URL.
OK, the explanation seems legit, but does not convince me completely.
> amazon: the title is more likely changed in JS after page loading.
I do not agree with that. The correct title appears in a `title` tag, in the page’s source code. Have you looked at the URL encoding/escaping?
On another (desert) hand, this one does it.
> meetup: same as amazon.
On this one, the retrieved title has an extra space between the “-” and the preceding word. It looks like the carriage return is interpreted as a space character. Other title differences could be related to an adaptation to the browser’s accepted language settings.
> http://host1.no/ returns a 404 page (probably because I'm a bot), but with a 200 HTTP code. Don't use this host.
I did not get a 404 page. Strange. Yes, through my local instance.
> https://static.societegenerale.fr/ : access is forbidden.
Yes, I know. I expected to get “403 Forbidden” as page title.
> http://mro.name/foaf.rdf doesn't have any title.
This is not a valid HTML page, but my browser (Firefox) still displays a title (Marcus Rohrmoser). It looks like it was extracted from the `foaf:name` tag.
> @alexisju: it doesn't work the same way. With bookmarklets, the page is already opened, and Shaarli grabs the title with JS (or Firefox API). In Shaarli, it has to make an external request from your server to the targeted URL.
I almost only used the Shaarli Web interface for adding links.
> amazon: the title is more likely changed in JS after page loading.
I ran `wget` on that page and the `<title>` is correct (Amazon.fr : test), so it is not generated by JS.
> I expected to get “403 Forbidden”
The message seems to depend on the User Agent (`curl` returns `<title>Erreur / Error</title>`, Firefox `<title>403 Forbidden</title>`).
> This is not a valid HTML page, but my browser (Firefox) still displays a title (Marcus Rohrmoser). It looks like it was extracted from the foaf:name tag.
I think Shaarli can live without RDF parsing.
> amazon: the title is more likely changed in JS after page loading.
My bad, I jumped to that conclusion too fast. I found another issue, which will probably fix some other issues: the URL is escaped too early, so Shaarli is trying to reach a non-valid URL (`&` becomes `&amp;`).
> On this one, the retrieved title has an extra space between the “-” and the preceding word. It looks like the carriage return is interpreted as a space character. Other title differences could be related to an adaptation to the browser’s accepted language settings.
Yep, I thought you were talking about the translation. The replace function adds an additional space to replace new lines, which is unnecessary. Also, I guess adding the locale in the HTTP request wouldn't harm.
> Yes, I know. I expected to get “403 Forbidden” as page title.
No, Shaarli only downloads the page and retrieves the title if it gets a 200 OK HTTP code. I don't really see the point of setting “403 Forbidden” as a title.
> This is not a valid HTML page, but my browser (Firefox) still displays a title (Marcus Rohrmoser). It looks like it was extracted from the foaf:name tag.
Right, NoScript was blocking the rendering. I agree with @nodiscc though.
Anyway, there is a bunch of things to fix, but that will be a great improvement. Thanks!
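To illustrate the escaping-order problem mentioned above (a minimal sketch with a made-up URL, not Shaarli's actual code): the raw URL should be used for the outgoing request, and HTML-escaping should only happen when rendering.

```php
<?php
// Sketch: keep the raw URL for the HTTP request...
$url = 'http://example.com/?th=1&q=test';   // hypothetical URL containing '&'
$response = file_get_contents($url);        // the request uses the unescaped '&'

// ...and escape only when printing it back into an HTML page.
echo '<a href="' . htmlspecialchars($url, ENT_QUOTES) . '">link</a>';
```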
Actually I found something else for fr.atlassian.com: they use a `location` redirection directive instead of `Location` (notice the case). It doesn't cost much to support it.
EDIT: Another one: a bad certificate can be ignored.
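A sketch of what case-insensitive handling of the redirection header could look like (the header array format and function name are assumptions, not the actual Shaarli implementation):

```php
<?php
// Sketch: find a redirect target regardless of the header's case.
function find_redirect_target(array $headers)
{
    foreach ($headers as $header) {
        // Matches "Location: ..." as well as "location: ..." (case-insensitive).
        if (preg_match('/^location:\s*(.+)$/i', $header, $m)) {
            return trim($m[1]);
        }
    }
    return null;
}

// Example with a lowercase header, as sent by fr.atlassian.com:
var_dump(find_redirect_target([
    'HTTP/1.1 301 Moved Permanently',
    'location: https://fr.atlassian.com/',
]));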
It looks like your commit fixed most problems; I tested the problematic links with that version. However, there are still issues with some of them (excluding the forbidden and won’t-fix cases).

No title appears (maybe an anti-bot policy):

A title appears, but is not the expected one:
- one of them has two `title` HTML tags and my browser shows the other one (“3 Reasons Self-Hosted Video Is Better Than YouTube”).

host1.no has no 404 error anymore; title fetching works well there.
I found some new links causing other issues:
- I tried to add the `Accept-Language` header (it sends exactly the same thing as my browser does), but still no luck. I bet my 2 cents on this, which might be a browser feature: `<link rel="alternate" hreflang="de" href="/articles.php?lang=de">`
- http://news.zing.vn/: if you try to `wget` it, you'll get the same crap as Shaarli does: a 40kB file full of unknown characters. No idea here.
- `filter_var($url, FILTER_VALIDATE_URL)` does not accept unicode domains.

...So there are 2 fixable things here: unicode domains and supporting pre-HTML5 charset.
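For the unicode-domain part, one possible direction (a sketch only, assuming the php-intl extension for `idn_to_ascii()`; the function name below is made up) is to convert the host to its ASCII/punycode form before validation:

```php
<?php
// Sketch: convert an internationalized (unicode) domain to its ASCII form
// before FILTER_VALIDATE_URL, which only accepts ASCII hosts.
function validate_url_with_idn($url)
{
    $host = parse_url($url, PHP_URL_HOST);
    if (!empty($host) && function_exists('idn_to_ascii')) {
        $ascii = idn_to_ascii($host);  // an accented host becomes its "xn--" equivalent
        if ($ascii !== false) {
            $url = str_replace($host, $ascii, $url);
        }
    }
    return filter_var($url, FILTER_VALIDATE_URL) !== false;
}
```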
For the links that work for me, it might be because I have a better connection than yours. Although a proper server should have a better connection than mine.
> For the links that work for me, it might be because I have a better connection than yours. Although a proper server should have a better connection than mine.
They now work for me, after another try.
> http://news.zing.vn/: if you try to wget it, you'll get the same crap as Shaarli does: a 40kB file full of unknown characters. No idea here.
`wget` also gave me that. Using `file`, I discovered that the document is actually a gzip-compressed HTML page. After gunzipping it, the title appears clearly, surrounded by 2 carriage return characters (`0x0D` or `^M`).
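If the gzip-compressed body is indeed the culprit, cURL can decode it transparently; with the stream-based code something like `gzdecode()` would be needed. A sketch (the URL is the one from the report above):

```php
<?php
// Sketch: let cURL negotiate and decode compressed responses.
$ch = curl_init('http://news.zing.vn/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// An empty string means "accept every encoding cURL supports" (gzip, deflate, ...)
// and decode the body automatically.
curl_setopt($ch, CURLOPT_ENCODING, '');
$html = curl_exec($ch);
curl_close($ch);

// With file_get_contents(), the body may arrive gzip-compressed and would need:
// $html = gzdecode($raw_body);
```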
Oh right, good catch. Although:
> Talking about anti-bot policies on some websites, how do you explain that they detect Shaarli as a bot, given that it is using a desktop browser user agent? On the other hand, most links flagged as having an anti-bot policy worked for me with wget (without setting the user agent), from my computer.
To be honest, I've no idea. Playing with the headers didn't change anything. I'm not willing to spend more time on this. If anyone sees what Shaarli could do in a better way, feel free to reopen this issue.
Hello again, I have made discoveries about the links that are still broken.
First of all, I analysed (using Wireshark) the HTTP request emitted by Shaarli (on my local instance) when looking for the title, and compared it with `wget` on the same URL: http://antix.mepis.org/. wget worked whereas Shaarli did not (it got the 403 bad behaviour error).
The sent headers are different:
- wget's request used `HTTP/1.1`, but Shaarli's was `HTTP/1.0` (and still specifying the host, which is not part of the spec).
- wget sends `Accept: */*` whereas Shaarli does not have it.
- Shaarli sends `Connection: close` whereas wget does not have it.

After digging through the code, it appears that the title fetching procedure is called from index.php and resides in `get_http_response()`, in application/HttpUtils.php. The function that actually sends the said headers is `get_headers()`, with options for altering them (I assume you already know that). Comparing the options with the network analysis, it appears that the method and user-agent are respected, but the accept-language parameter is missing. I also tried to add the `Accept: */*` header, without success.
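For reference, those missing headers can in principle be forced through a stream context (a sketch of the idea; the header values are examples, not the options Shaarli actually uses):

```php
<?php
// Sketch: make file_get_contents()/get_headers() requests look closer to wget's.
$context = stream_context_create([
    'http' => [
        'method'           => 'GET',
        'protocol_version' => 1.1,   // instead of the default HTTP/1.0
        'user_agent'       => 'Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0',
        // Connection: close avoids hanging on keep-alive connections with PHP streams.
        'header'           => "Accept: */*\r\nAccept-Language: en-US,en;q=0.5\r\nConnection: close\r\n",
        'follow_location'  => true,
    ],
]);
$html = file_get_contents('http://antix.mepis.org/', false, $context);

// get_headers() also accepts a context parameter since PHP 7.1; before that,
// stream_context_set_default() can be used to achieve the same effect.
```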
In order to do more extensive testing, I extracted the code responsible for calling `get_http_response()` into a distinct file and added another method for connecting to websites, based on cURL and looking like this:
```php
$url = 'http://an-url-to-test.tld/';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 30);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0');
$r = curl_exec($ch);
return $r;
```
Running that on problematic URLs gave me those results:
There are several things to say:
- `HTTP/1.1` and `Accept: */*` could be set in `get_headers()`, as the sent headers might look more “authentic”.
- In at least one case, `get_headers()` missed the redirect.
- `CURLOPT_NOBODY` makes cURL use the HEAD method; setting it back to GET, with `curl_setopt($ch, CURLOPT_HTTPGET, true);`, fixes the issue.

As a conclusion, I would say that `get_headers()` itself may be the culprit, as its behaviour is too far from a web browser's to pass through some “anti-bot” filters. Thus I think it could be solved by changing the fetching function (to cURL, for instance). The only drawback that appears to me is that cURL may be disabled in the server's PHP configuration.
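Concretely, the earlier snippet would switch back to a GET roughly like this (a sketch; the redirect-following option is an assumption, not part of the original test):

```php
<?php
// Sketch: same setup as the snippet above, but forcing a GET instead of HEAD.
$ch = curl_init('http://an-url-to-test.tld/');
curl_setopt($ch, CURLOPT_HEADER, true);         // keep response headers in the output
curl_setopt($ch, CURLOPT_HTTPGET, true);        // instead of CURLOPT_NOBODY (which switches to HEAD)
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // assumption: also follow the redirects get_headers() missed
$response = curl_exec($ch);
curl_close($ch);
```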
Thanks for the thorough investigation. I'm against adding a php5-curl dependency to work around a few edge cases, but this could benefit from more investigation.
Note that `curl --head https://www.transip.eu/` returns `501 not implemented`, whereas curl without `--head` returns an HTTP 200 code. So this server is apparently actively blocking HEAD requests. I'm not sure we should worry about this, which can be considered an unusual (or bogus) server configuration. The best course of action would be contacting the server's operator and asking them for more info.
Thanks for the investigation. I've noticed in another project that cURL might be more reliable than the default `get_headers()`/`file_get_contents()` functions, even with a context.

While we shouldn't add a server requirement for that (`php-curl`), we could eventually add a cURL fetch as the default, with a `get_headers()` fallback function. It's what we've done with the `php-intl` functions.
Thanks @julienCXX for the arcane PHP investigation!
+1 for switching to a cURL client with a fallback `get_headers()`/`file_get_contents()` mode.
Thanks everyone for the feedback!
@nodiscc I contacted the server operator and was told that HEAD requests are blocked as a protection against crawlers.
Talking about webpage fetching methods, I also considered using `fsockopen()`, but I am not sure about shared hosting support, nor how to deal with HTTPS.
`fsockopen()` for remote access requires `allow_url_fopen` to be set to true in php.ini. I don't know if it's more or less common than cURL. Also, I didn't run any tests, but the function doesn't take any settings. `stream_socket_client()` allows setting up a context, but we might end up with the same issues we encountered with `file_get_contents()`.
@julienCXX Since you already worked on this, feel free to submit a PR with cURL fetching and the current code as fallback, if you want to.
I am thinking about it, but I have a few questions before starting to work on it:
1. In application/HttpUtils.php, should I add the fetch method selection inside `get_http_response()`, or should I create a distinct function (with the method selection in the calling code)?
2. There is a TODO near the `file_get_contents()` call. Should I generate an exception the same way, in anticipation of a code change?

I think you should replace `get_http_response()`'s content and move its current content into `get_http_response_fallback()` or something like that. Then you can call it conditionally using `function_exists()`.
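A minimal sketch of that structure (the names follow the suggestion above; the signature and the actual code in application/HttpUtils.php may differ):

```php
<?php
// Sketch of the suggested structure, not the real HttpUtils implementation.
function get_http_response($url, $timeout = 30)
{
    if (!function_exists('curl_init')) {
        // php-curl is not available: use the historical stream-based code.
        return get_http_response_fallback($url, $timeout);
    }

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_HEADER, true);          // prepend response headers to the output
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0');

    $response = curl_exec($ch);
    if ($response === false) {
        curl_close($ch);
        return array(false, false);
    }

    $headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
    curl_close($ch);

    // Same array($headers, $content) shape as discussed above.
    // Note: with redirects, the header part contains one block per hop.
    $headers = explode("\r\n", trim(substr($response, 0, $headerSize)));
    $content = substr($response, $headerSize);
    return array($headers, $content);
}

function get_http_response_fallback($url, $timeout)
{
    // The current get_headers() + file_get_contents() code would move here.
    return array(false, false);
}
```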
I'm not sure what this TODO is for specifically, but there are TODOs application-wide regarding errors, because we lack a proper error system. It'll be refactored eventually, don't worry about it.

Also, it's PHP. `file_get_contents()` and the vast majority of PHP functions won't raise exceptions, but will generate a warning or an error. Client code isn't wrapped in try/catch either, so just returning `array($headers, false);` should be fine IMHO.
There are interesting discussions on how to catch PHP warnings/errors/fatals with a custom handler:
I think the `TODO` annotation in HttpUtils dates back to https://github.com/shaarli/Shaarli/commit/451314eb48c7d922264adc6eada8a0273b12344c; I've added these here and there to keep in mind we need to switch to a proper error handling system (i.e. instead of calling `die()` or `exit()`) so we can log them in a more friendly way (and avoid having to dig into server logs).
It’s almost there. The latest broken links seem to work, but I discovered other bugs: when a link has HTTP redirects, the URL shown in the “Add link” page is the one prior to any redirect (except that “http://” is added at the beginning), instead of the effective one (after redirects).
That's the expected behaviour actually. We don't change user input (except for some parameters cleaning).
> That's the expected behaviour actually. We don't change user input (except for some parameters cleaning).
OK, that’s not up to me.
Testing again with the cURL-based method, the title in http://android.izzysoft.de/ now appears in English. Talking about the weird response in https://fr.atlassian.com/git/tutorials/comparing-workflows/forking-workflow, as shown by @ArthurHoaro, it also happened to me with cURL (and with any other link involving a redirect). This was because cURL returns the headers from all the redirects (not only the final page), resulting in a mix-up. I assume it also happened with the fallback method.
Title encoding of http://www.paruvendu.fr/bateau-nautisme/ is broken again.
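Regarding the redirect-headers mix-up described above, one way to deal with it is to keep only the header block of the final response (a sketch, assuming the raw output produced with CURLOPT_HEADER and CURLOPT_FOLLOWLOCATION; the function name is made up):

```php
<?php
// Sketch: cURL concatenates one header block per redirect hop; keep the last one.
function last_response_headers($rawHeaders)
{
    // Blocks are separated by a blank line (\r\n\r\n).
    $blocks = array_filter(explode("\r\n\r\n", trim($rawHeaders)));
    return explode("\r\n", end($blocks));
}

// Example: two hops, only the final 200 OK headers are returned.
$raw = "HTTP/1.1 301 Moved Permanently\r\nLocation: /en/\r\n\r\n"
     . "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n";
print_r(last_response_headers($raw));
```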
Looks like something `mb_convert_encoding` isn't able to deal with: the `à` is converted properly, while the `´` is converted to a square.
> title in http://android.izzysoft.de/ now appears in English
Neat. That will remain a mystery though.
> https://fr.atlassian.com/git/tutorials/comparing-workflows/forking-workflow
Works fine here with your PR and the fallback method.
> Neat. That will remain a mystery though.
This is not a mystery. It comes from the fact that cURL sends the Accept-Language header properly, whereas `file_get_contents()` does not.
> Works fine here with your PR and the fallback method.
Yes, I know. But I wanted to inform you that the issue was probably not related to a bad configuration on the webmaster’s side, as you stated in your first comment.
Hello, this is still not fixed in 2023! I got the same problem when trying to fetch https://tinast.fr. Page title fetching still does not work properly.
Any updates?
Regards
Hi,
> I got the same problem when trying to fetch https://tinast.fr/.
Works properly on the demo instance https://demo.shaarli.org/admin/shaare?post=https%3A%2F%2Ftinast.fr%2F, and on my own instance.
Make sure that:
- you are running the latest Shaarli release,
- the `php-curl` extension is installed, or else Shaarli will fall back to the old `file_get_contents()` method, which is not reliable (https://github.com/shaarli/Shaarli/pull/624),
- outgoing HTTP requests are not blocked by your hosting provider.

If this still doesn't work, check your webserver/PHP logs for any possible errors (and provide them in a new issue), and/or enable `dev.debug` in Shaarli's configuration file.
There are, to my knowledge, only a few remaining cases where page title fetching is still broken:
- The page's `<title>` is set via javascript. This will not be fixed, as it would require downloading the full page + external scripts and executing them (too complex/out of scope). I will update the Troubleshooting page to document this as a limitation of Shaarli. Using browser extensions or the Bookmarklet works around this limitation, as they use the page title provided by the web browser. -> https://github.com/shaarli/Shaarli/pull/2037

After rechecking all links in all comments of this issue, I was only able to reproduce the problem (no title retrieved) with these URLs on Shaarli v0.12.2. Each one may have a different cause, so we may have to open specific issues or update the documentation:
- With `curl`, this URL returns the Cloudflare anti-DDoS/anti-bot page “Attention Required! | Cloudflare”. More debugging needed, but if a website is behind a service that blocks bots/automated access from Shaarli, I don't think there is much we can/should do to bypass this restriction. This should be documented.
- `curl` returns a page stating “Enable JavaScript and cookies to continue”, so this is another case of https://github.com/shaarli/Shaarli/issues/989 (javascript required, wontfix).
- `curl --silent https://stackoverflow.com/jobs|grep title|head -n1` returns the proper `<title>`.

Before reporting other cases, make sure your Shaarli installation is compliant with the requirements above (latest Shaarli release, PHP curl extension installed and enabled, outgoing HTTP requests not blocked by your hosting provider, destination webserver reachable from the server hosting Shaarli).
Metadata retrieval does not work for YouTube videos, for example https://www.youtube.com/watch?v=e4TFD2PfVPw. The problem is that YouTube redirects (302) to https://www.youtube.com/supported_browsers?next_url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3De4TFD2PfVPw.
The reason for this redirection seems to be that Shaarli uses the user agent string from Firefox 45 which YouTube apparently deems to be too ancient. I can confirm that changing the version in the user agent string from 45.0 to something more recent like 121.0 (current release) or 115.0 (current extended support release) fixes the issue, at least for now. If you wish, I will submit a pull request to that effect.
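The change boils down to the user agent string sent with the outgoing request; for illustration only (where exactly this string lives in Shaarli's code may differ):

```php
<?php
// Illustration: a more recent Firefox ESR user agent string for outgoing requests.
$userAgent = 'Mozilla/5.0 (Windows NT 10.0; rv:109.0) Gecko/20100101 Firefox/115.0';

$ch = curl_init('https://www.youtube.com/watch?v=e4TFD2PfVPw');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_USERAGENT, $userAgent);  // instead of the Firefox 45 string
$html = curl_exec($ch);
curl_close($ch);
```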
> changing the version in the user agent string from 45.0 to something more recent like 121.0 (current release) or 115.0 (current extended support release) fixes the issue
It seems you are correct
```
$ curl --silent --user-agent 'Mozilla/5.0 (Windows NT 10.0; rv:109.0) Gecko/20100101 Firefox/115.0' https://www.youtube.com/watch?v=e4TFD2PfVPw | grep --only-matching '<title>.*</title>'
<title>Parcels - Live Vol. 1 (Complete Footage) - YouTube</title>

$ curl --silent --user-agent 'Mozilla/5.0 (Windows NT 10.0; rv:109.0) Gecko/20100101 Firefox/45.0' https://www.youtube.com/watch?v=e4TFD2PfVPw | grep --only-matching '<title>.*</title>'
# nothing
```
> If you wish, I will submit a pull request to that effect.
It would be nice, thank you.
YouTube used to be one of the few sites that used javascript to update the HTML `<title>` (it's even used as an example in https://shaarli.readthedocs.io/en/master/Troubleshooting.html#page-title-and-description-are-not-retrieved-automatically), but it apparently no longer does (see the other examples above).
Hi, I recently installed Shaarli (v0.6.5) on a private server (running PHP 7.0.5), in order to save a large number of links from opened browser tabs. As I imported links manually, I noticed that the links' titles did not always appear properly (nothing, only partial, or breaking the UI). That happened quite often (about 90 broken titles out of more than 520 links). As these issues seemed to have been fixed in #410 and #512, I tried to repeat the process with a local instance (same PHP version), using the most recent development version at that time (11609d9). The result was a bit better (no UI breakage), but about 50 links are still broken, in different ways. They are the following:
No title appears:
- http://www.leboncoin.fr/annonces/offres/ile_de_france/occasions/?th=1&q=test (404)
- https://www.amazon.fr/gp/cart/view.html/ref=nav_cart (binary output, not HTML)
- http://ast2015.acroe-ica.org/ (website down/timeout)
- https://static.societegenerale.fr/ (404)
- http://antix.mepis.org/ (NXDOMAIN)
- http://www.quickvz.com/ (NXDOMAIN)
- http://www.kris3d.com/ (website down/timeout)
- https://mro.name/foaf.rdf (404)

A title appears, but is truncated/not the right one:
A title appears, but non-ASCII characters are replaced with a question mark (encoding issue?):
Furthermore, title fetching fails on links to PDF files.