tommedema opened this issue 9 years ago
Could you post a permalink to the demo at http://markdown-it.github.io/linkify-it/? You can type all your examples there at once and see the results.
The problem is with the `（` used in your example. There is NO space before it. I'm not familiar with Asian punctuation. Are such `（` characters always word terminators? Is it possible to have them inside links, as on wiki pages?
I added another issue where a user mistakenly added http:// twice.
These are both (technically speaking) user behavior issues, but the first one especially is quite common.
Formally speaking, `（` is not a word terminator; however, because it takes up so much white space, many Chinese people don't put a space in front of it (it's an act of being lazy, but it's quite common).
Perhaps I should do some string manipulation before I pass the text to linkify-it?
The example with a double http:// is technically correct. That's like http://localhost - any local domain is allowed. IMHO such a typo does not need a fix.
The example with `（` is more serious. It uses Chinese Unicode ranges, which can be distinguished from English ranges. But I need recommendations about the grammar to be sure.
I'm sorry, I'm not a Chinese speaker. Perhaps someone else can help out here. There are also other Chinese punctuation characters, like the fullwidth comma `，`.
Let's leave this open until someone can formalize the info for the Asian language group (Japanese probably has the same issues). I'm ready to fix it as soon as possible, but I'd like to avoid kludges with incomplete workarounds.
More examples of how difficult these Chinese posts are to parse: link
What's your idea on this?
It seems you gave the wrong link; it shows the default text.
Sorry, I've fixed the link now.
Thanks for the examples. I posted a question at the CommonMark forum: http://talk.commonmark.org/t/linkifier-lets-discuss-and-test/1045/9?u=vitaly
As far as I can see in the last example, spaces are not used at all. It's possible to track locale changes (non-English -> English), but that's not safe.
That's correct, spaces are not used. The Chinese use different punctuation marks, or even omit them, because URLs are in Latin script and so the distinction is easily visible to the eye, but more difficult for machines.
What about links with Chinese characters? I can find a link's start somehow via `[any non-English char]http://`, but I can't use this rule to find the link's end.
PS. Anyway, it's worth starting to collect test samples for Chinese separately. Something like this
You're absolutely right. I think the first rule makes sense, but URLs can probably contain Chinese characters. Perhaps the first rule by itself would help, though, along with some kind of smart processing for delimiters that are unlikely to appear within a URL, e.g.:
str.replace(/[\uff08\uff09\uff0c\uff01\u3002\uff1f\u3010\u3011\uff3b\uff3d\u3001\u300a\u300b\u2605]/g, ' ') //( ) , ! 。 ? 【 】 [ ] 、 《 》 ★
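For illustration, here is that replacement as a small runnable pre-processing sketch. The delimiter set is taken from the snippet above; whether blanket-stripping these characters is safe for all inputs is exactly the open question in this thread:

```javascript
// Pre-processing sketch: replace common fullwidth CJK punctuation
// (characters unlikely to appear inside a URL) with ASCII spaces, so
// that a linkifier running afterwards treats them as word boundaries.
const CJK_DELIMITERS =
  /[\uff08\uff09\uff0c\uff01\u3002\uff1f\u3010\u3011\uff3b\uff3d\u3001\u300a\u300b\u2605]/g;
// （ ） ， ！ 。 ？ 【 】 ［ ］ 、 《 》 ★

function preprocess(text) {
  return text.replace(CJK_DELIMITERS, ' ');
}

// The fullwidth （ glued to the URL becomes a space, ending the link.
console.log(preprocess('http://t.cn/RZwjG7U（分享自'));
// → 'http://t.cn/RZwjG7U 分享自'
```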
I found that this URL doesn't match:
BTW, I found a website comparing different regexps for URL: https://mathiasbynens.be/demo/url-regex
By the way, Facebook does this properly, but I'm not sure if they use a proprietary solution.
I will try to resolve this problem.
@fengmk2 The implementation may not be easy, but as a first step it would be enough to have a collection of fixtures with good coverage of Chinese edge cases.
See https://github.com/markdown-it/linkify-it/tree/master/test/fixtures
Here is my example page linkifying incorrectly: http://www.dancedeets.com/events/896736620379772/2016-r16-taiwan-x-wbc
In this case it's a vertical bar, though to be precise it's the fullwidth `｜` and not the ASCII `|` (I hope GitHub shows the difference properly). The page author uses it as a delimiter, but linkify sees it as part of the domain name (presumably it treats it as just another word character).
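To make the distinction concrete, here is a tiny sketch; the fullwidth character U+FF5C is my reading of which bar the page used, inferred from context:

```javascript
// The two bars look nearly identical but are distinct code points, which
// is why a matcher that only treats the ASCII '|' as punctuation can
// miss the fullwidth form entirely.
const ascii = '|';      // U+007C VERTICAL LINE
const fullwidth = '｜'; // U+FF5C FULLWIDTH VERTICAL LINE

console.log(ascii.codePointAt(0).toString(16));     // '7c'
console.log(fullwidth.codePointAt(0).toString(16)); // 'ff5c'
```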
@mikelambert Nothing to fix. Fuzzy mode is not safe; it's the author's mistake to use linkify-it the wrong way.
I'm the website author using linkify-it to add links to raw text from a variety of sources which I didn't write myself (facebook events and other websites). And the individual source authors are not using linkify-it, and of course not writing their text with linkify-it in mind.
I recognize that fuzzy is not safe, and not perfect. However, it seems unfortunate that a vertical bar (what some authors are using as visual punctuation) is treated as part of a domain name. It seemed like the fuzziness could be smarter, even if it's still fuzzy.
So to clarify, is this a "it's not a bug, just user error" bug, or a "it's not a bug I care about fixing, but patches are welcome" bug?
@mikelambert I mean, the problem with `｜` in those examples is that the linkifier is applied after the layout is composed, instead of before. The linkifier's goal is to find links in natural text, not everywhere (that's impossible).
My personal opinion is that the linkifier is being used in a way it wasn't intended for, so this example is not a good enough reason for changes. Maybe I don't understand something, but that's my opinion for now. If I had to build such a site, I would parse the links first, then compose the header.
Also, I understand that people can have other opinions and may wish to quickly fix something via hacks. For that case, the linkifier allows overriding its regexps without having to fork the project.
What do you mean by "layout compose"? I assume you are referring to writing up the text and adding the `｜` characters to lay things out?
https://www.facebook.com/events/896736620379772/ is the source text I am working with. (Notice that Facebook gets the linkification correct.) I receive the raw text from the FB API, and then am trying to linkify it for use on my own website.
I understand that linkify-it might be the wrong tool (since this is not natural text), and that I am using it incorrectly (since it is applied after the author's layout composition). But unfortunately, I am not the author of the text, so I cannot linkify it before the `｜` characters are added.
Thanks for your time!
Then, if you have the source in a known format (`｜`-separated), I would split it first, then apply the linkifier, then join it back. Maybe with some additional conditions (the line should be short and contain Chinese characters). Or I would try to get HTML if possible (I'm not familiar with the FB API).
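The split-then-linkify-then-join suggestion can be sketched like this. `toyLinkify` is a deliberately simplistic stand-in for a real linkifier (it is not linkify-it's API); the point is only that splitting first keeps the delimiter out of the matched URL:

```javascript
// Split on the fullwidth bar the source uses as a layout delimiter,
// linkify each fragment separately, then join the fragments back.
function linkifySegmented(text, linkifyFragment, delimiter = '｜') {
  return text
    .split(delimiter)
    .map(linkifyFragment)
    .join(delimiter);
}

// Toy fragment linkifier for demonstration: wraps bare http(s) URLs.
const toyLinkify = (s) =>
  s.replace(/https?:\/\/\S+/g, (url) => `<a href="${url}">${url}</a>`);

console.log(linkifySegmented('活动｜http://example.com｜详情', toyLinkify));
// → '活动｜<a href="http://example.com">http://example.com</a>｜详情'
```

Note that without the split, the toy regex would swallow the `｜` (it is a non-space character), which is precisely the failure mode described above.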
The reason to add `｜` support would be if you could say "humans write this way" or "that's a de-facto standard", with a lot of examples.
In other words, if you have auto-generated text, consider parsing/detecting its structure before applying the linkifier. Or, if you know of some new, uncovered de-facto patterns of human writing, create a new issue with proof (live examples), and I'll try to fix it if possible.
Thank you, that's a creative solution to this problem. I'll go with that for now.
I'll create a separate issue with the few examples I have, and you can decide there whether it's a justified "de-facto pattern of human writing" or not. :)
@mikelambert I'm wondering what Facebook's linkifier does with the following link?
link: https://zh.wikipedia.org/wiki/(
you can see what this link take you to here: https://zh.wikipedia.org/wiki/(
Feel free to create a dummy facebook event (or even a FB post on your wall) and see what happens?
It seems like it fails to parse that link properly, instead linking to https://zh.wikipedia.org/wiki/ .
I am looking forward to the fix! It would be much nicer if, every time I paste links followed by Chinese characters into a GitHub README, they were displayed correctly.
Steps to reproduce:
【视频奇志大兵《发烧友》 在线观看 - 酷6视频】奇志大兵《发烧友》 在线观看,奇志 大兵 搞笑双簧 _ 发烧友 (追星族) http://t.cn/RZwjG7U(分享自 @酷6网)
linkify detects the link as:
http://t.cn/RZwjG7U(分享自
whereas the detected link should be:
http://t.cn/RZwjG7U
The reason is that `（` is not recognized as a separating delimiter, yet it is quite common in Chinese.
Out of 500 posts I gathered, about 20 to 30 had links like this, resulting in invalid links reported by linkify.
Note that I realize these users are technically posting invalid URLs, but 20-30 out of 500 is very common, so there should be a way to deal with this. Any suggestions?
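One possible workaround, sketched under my own assumptions (this is not linkify-it's behavior, and the URL pattern below is a rough illustration, not linkify-it's real one): a conservative pre-pass that keeps the fullwidth punctuation in the text but inserts an ASCII space between a URL-like run and a following fullwidth bracket, so the linkifier terminates the link there:

```javascript
// Insert a space between a URL-like run and a directly following
// fullwidth punctuation mark (（ ） 。 ，), so a subsequent linkify pass
// ends the link before the punctuation instead of absorbing it.
function separateCjkPunctuation(text) {
  return text.replace(
    /(https?:\/\/[^\s\uff08\uff09\u3002\uff0c]+)(?=[\uff08\uff09\u3002\uff0c])/g,
    '$1 '
  );
}

console.log(separateCjkPunctuation('http://t.cn/RZwjG7U（分享自 @酷6网）'));
// → 'http://t.cn/RZwjG7U （分享自 @酷6网）'
```

Unlike blanket replacement, this keeps the original punctuation visible in the rendered text, at the cost of an extra space.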