Closed: straight-shoota closed this issue 3 years ago
GitHub Flavored Markdown (GFM) has the spec now.
Yes, that's the CommonMark spec plus a few extensions, so GFM is essentially a superset of CommonMark. These extensions are: tables, task list items, strikethrough, extended autolinks, and a filter for disallowed raw HTML tags.
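For illustration, this is the kind of syntax the GFM extensions add on top of core CommonMark (sample only):

```markdown
A ~~strikethrough~~ example, and a table:

| header1 | header2 |
| ------- | ------- |
| 1.0     | 2.0     |

- [ ] a task list item
- [x] a completed one
```

In core CommonMark these constructs fall back to plain paragraphs, which is why a default implementation without extensions renders them literally.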
As asterite and ysbaddaden stated, it is probably best if the default implementation does not include extensions.
There is already a Crystal implementation for CommonMark at https://github.com/ujifgc/crmark, though it is a direct port of the JavaScript reference implementation, so it can certainly be optimized in many ways.
Instead of implementing a parser in Crystal we could also use the existing C implementation. It supports custom extensions as well, though I have no idea how to incorporate them.
It would be good to adopt the full CommonMark spec in the stdlib. But I think it would be better to have a Crystal-native implementation. That way we could toggle extensions like GFM, which many will probably want.
If the failing CommonMark specs can be grouped, and the design of the reference implementation's parser can be ported, then we might be in better shape to improve the markdown implementation as you suggest.
The markdown parser is mainly used to generate the docs from Crystal code. It is also used, to a lesser extent, in the playground. That is basically why the current implementation didn't need to support all markdown features.
What do you mean by grouping the failing specs? The spec file is essentially a literate description (HTML at spec.commonmark.org) with embedded examples from which the test cases are extracted. So the tests are part of the hierarchical document structure. The default test runner, for example, can be limited to specs from specific sections with the parameter --pattern.
I would prefer a Crystal implementation as well. It seems I was mistaken about the extensibility of cmark; there is, however, an open PR to include extensions. GitHub has forked cmark to implement their syntax additions. @ysbaddaden has already created a shard with bindings to cmark: ysbaddaden/crystal-cmark. So it would probably be very easy to replace the current implementation with cmark. But that would not give us extensibility; it would probably still be difficult even if cmark supported it.
Besides the already mentioned projects, there are also huacnlee/remarkdown, an extension of stdlib's Markdown with support for GFM, and icyleaf/markd, a WIP markdown parser in a very early stage.
/cc @icyleaf @huacnlee
I created markd just to validate how to write a CommonMark parser and how to better support extensions. My other project, wasp, is a static site generator; it requires a powerful markdown parser for its users. cmark is not the best solution for wasp, and the same goes for the Crystal community.
BTW, PR #4496 is another discussion.
I'm sure that GFM should be the default, because it will be the default everywhere.
What makes you sure of this? Even GitHub understands GFM as a set of extensions to standard CommonMark. A default implementation should use the lowest common denominator. It is a better approach to add certain extensions to the basic set when they are necessary than to remove them when they are not needed.
However, we should consider including GFM in the stdlib and making it easily accessible (maybe as Markdown::GFM).
Just because every modern language/framework documentation browser is based on GFM. But yes, GFM as an option in the stdlib is OK.
Don't you guys think that two markdown parsers are a bit of overkill? I personally would advocate that none should be embedded in Crystal at all. Unless I'm missing some details, I really see no need for markdown parsers in the stdlib when you could simply implement one as a shard and import it as needed.
@kazzkiq crystal docs depends on Markdown.
I've said it before and I'll say it again: if the only reason the markdown parser is in the stdlib is so that it can be used in crystal doc, then the solution is to develop a method to separate it into a shard and still have the compiler depend on it. This could very well include simply vendoring the source code into src/compiler/vendor and treating it as compiler source code.
@straight-shoota in https://github.com/crystal-lang/crystal/issues/4613#issuecomment-310908551 I meant that if the failing specs can be grouped by features, then this issue is more actionable / can be tackled based on that list of features. Maybe that list is just the TOC of http://spec.commonmark.org/0.27/ (now that I've opened the link).
That being said, PRs are welcome to move the crystal stdlib markdown module towards CommonMark.
If the module migrates to a separate shard, that is another story. Let's first have a native CommonMark implementation we are all happy about.
The current implementation probably needs some architectural changes to properly support all of CommonMark and to make it extensible. For example, it is recommended to use a two-stage parser for block nodes and inline nodes.
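The two-stage idea can be sketched roughly like this. This is a toy, hypothetical illustration, not the actual CommonMark algorithm: real block parsing is line-based and builds a document tree, and inline parsing handles far more than emphasis.

```crystal
# Toy two-stage parser sketch (hypothetical, heavily simplified).

# Stage 1: block parsing -- here naively split on blank lines.
# Real CommonMark block parsing consumes line by line and builds a tree.
def parse_blocks(source)
  source.split(/\n{2,}/).map { |chunk| chunk.strip }
end

# Stage 2: inline parsing -- here only `*emphasis*`, for illustration.
def parse_inlines(text)
  text.gsub(/\*([^*]+)\*/) { "<em>#{$1}</em>" }
end

# The renderer only sees fully parsed blocks with resolved inlines.
def to_html(source)
  parse_blocks(source).map { |b| "<p>#{parse_inlines(b)}</p>" }.join("\n")
end

puts to_html("hello *world*\n\nsecond paragraph")
# => <p>hello <em>world</em></p>
# => <p>second paragraph</p>
```

The point of the split is that block structure is fixed before any inline rules run, which is what makes extensions (new block types or new inline rules) pluggable at either stage independently.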
As a side note: documentation flags (#3519) could also be implemented as an extension to the default markdown class and could then be used by the docs generator.
markd is now 100% compliant with CommonMark and passes all specs.
Here is the result of parsing a sample markdown file on a MacBook Pro Retina 2015 (2.2 GHz):

```
Crystal Markdown   3.28k (305.29µs) (± 0.92%)        fastest
Markd             305.36 (  3.27ms) (± 5.52%) 10.73× slower
```
Parse cost breakdown:

```
preparing input: 1.218ms
block parsing:   1.685ms
inline parsing:  2.187ms
rendering:       1.472ms
```

Note: "preparing input" only processes the source as a String, not a File.
Wow, that's great! A performance loss is certainly to be expected, but I suppose there is room for optimization...? What "sample markdown" did you use for this benchmark?
@straight-shoota Updated the result; I found and used a complete CommonMark source as the benchmark sample. 🤣
Added ujifgc/crmark; it has the best CommonMark support for now.

```
crystal markdown        3.06k (327.25µs) (± 1.25%)        fastest
markd                  278.73 (  3.59ms) (± 1.12%) 10.96× slower
crmark in :commonmark  635.85 (  1.57ms) (± 1.43%)  4.81× slower
crmark in :markdownit  118.54 (  8.44ms) (± 4.52%) 25.78× slower
```
I can't install ysbaddaden/crystal-cmark, so it is missing from the benchmark:

```
$ shards
Updating https://github.com/icyleaf/markd.git
Updating https://github.com/ujifgc/crmark.git
Updating https://github.com/ysbaddaden/crystal-cmark.git
Using markd (f58ed78fd0cdcc6e9dd274ac9f8696bc778dea84)
Using crmark (457c602725834429cd40544fbcaa505034637c8b)
Installing common_mark (0.1.0)
Postinstall cd ext && make
Failed cd ext && make:
/bin/sh: line 1: cd: ext: No such file or directory
```
Would you care to compare the performance to crystal-cmark (libcmark bindings)? And maybe some implementations in other languages (cmark and commonmark.js are the reference implementations)?
@icyleaf use branch: master on common_mark; the latest 0.2.0 release isn't tagged and it's trying to use 0.1.0.
Simple benchmark, each run 10 times. source code

```
bm_crystal_builtin    average cost   8.6305ms, min   7.899ms, max   9.948ms
bm_crystal_crmark     average cost  18.2027ms, min  17.303ms, max  19.282ms
bm_crystal_markd      average cost  14.9459ms, min  14.08ms,  max  15.783ms
bm_node_commonmarkjs  average cost 114.2585ms, min 109.916ms, max 119.234ms
```
Updated and added common_mark (thanks @RX14).

```
crystal markdown        2.69k (372.18µs) (± 0.93%)        fastest
markd                  257.93 (  3.88ms) (± 1.18%) 10.42× slower
crmark in :commonmark  640.94 (  1.56ms) (± 1.53%)  4.19× slower
crmark in :markdownit  123.85 (  8.07ms) (± 3.67%) 21.69× slower
crystal-cmark           1.86k (536.57µs) (± 3.64%)  1.44× slower
```

Without the built-in markdown:

```
markd                  249.45 (  4.01ms) (± 3.86%)  6.99× slower
crmark in :commonmark  511.05 (  1.96ms) (± 2.72%)  3.41× slower
crmark in :markdownit  111.65 (  8.96ms) (± 2.43%) 15.61× slower
crystal-cmark           1.74k (573.86µs) (± 3.68%)        fastest
```
I'd say we move the Markdown class from the std into the compiler's source code and make it private to it. If you want to use markdown, use a shard. As @RX14 says, the only reason we have Markdown in the compiler's source code is because we use it for docs. If we later find a shard we like, the compiler can depend on it.
Thoughts?
The current markdown implementation is insufficient in many ways, even for the job of producing the docs. Therefore I'd like to see an improvement to the markdown used by the compiler, be it in the stdlib or from an external shard (I don't know about the practicalities of this).
I'd question whether it would be worth sticking with the current implementation if it gets hidden in the compiler's source. It's not good enough for this purpose, and it's unlikely to get improved if nobody uses it apart from the compiler.
@straight-shoota The current std is documented with it and I haven't found it lacking. We could perfectly well support a subset of it, like lists, links and code blocks.
Yes, for the current stdlib documentation it is sufficient. But I've run into problems several times documenting my own code, mostly caused by missing support for raw HTML. While this shouldn't be needed for most documentation purposes, there are occasions where e.g. a table would help a lot. This is a common feature used in many API documentations.
Plus, the markdown renderer is not only used for the inline API documentation but also to render README.md. For this purpose it feels like a necessity to have full support for CommonMark and not settle for anything short of that.
@straight-shoota since we all agree it's substandard, surely the best thing to do is to limit its usage to the docs tool. If someone implements all of CommonMark properly and completely, I'm sure we can vendor in that shard and use it in the compiler.
Go documentation is excellent and they don't have markdown. They do have pre blocks. With those and spaces you can write tables. Like this:

```
header1  header2  header3
1.0      2.0      3.0
```
I still think simply saying "The docs use a limited subset of markdown" is fine.
@RX14 What about icyleaf/markd? It seems to be quite complete. I haven't taken an in-depth look at the implementation details, but it would be worth taking it into consideration.
@asterite pre "tables" are very limited and don't work well with variable screen sizes. I wouldn't settle for that if it isn't hard to get "real" tables.
Though it's true that great documentation is about the content, not fancy features. Still, it doesn't hurt to have some basic tools to make it better.
Oooh... markd looks good! We could try using it in the compiler, especially because it's written in pure Crystal, so no more dependencies are needed for omnibus-crystal. The performance loss doesn't matter much, and it could always be improved later.
I am extremely impressed by markd (cc @icyleaf). It's well spec'd and seems to be well written. I'm certain we could vendor a release of it and use it for the compiler.
I'm impressed too. I benchmarked it and the slow part is the use of Regex, but I think that's unavoidable. Still, parsing Crystal's README.md 1000 times takes about 0.8 seconds. So generating docs for the whole Crystal std will maybe take 1 second more, I don't know, and that's totally acceptable. And for other smaller projects the difference will be smaller.
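For reference, such a measurement can be sketched with the stdlib Benchmark module. The render method below is a hypothetical stand-in; a real comparison would call Markd.to_html or Markdown.to_html on the README source:

```crystal
require "benchmark"

# Hypothetical stand-in document (repeated to get a measurable workload).
SOURCE = "# Title\n\nSome *emphasis* here.\n" * 50

# Toy stand-in for a markdown renderer -- replace with the parser under test.
def render(md)
  md.gsub(/\*([^*]+)\*/) { "<em>#{$1}</em>" }
end

# Measure 1000 renders of the same source, as in the comment above.
puts Benchmark.measure { 1000.times { render(SOURCE) } }
```

Benchmark.ips would give more stable throughput numbers than a single measure call, at the cost of a longer run.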
I think we can use shards in the compiler. Before compiling the compiler we would execute shards. If someone wants to create a PR for this, I'm all for it :-)
@icyleaf Great work!
It should be possible to improve performance by replacing regex with a dedicated parser. That's certainly on the table, but it will require some work. And it should be fine to go without it for now.
I'll work on this.
I don't think the compiler should depend on any crystal code outside crystal-lang organization.
If @icyleaf gives his consent, the shard could be moved / forked. Maybe the Markdown namespace can be used.
I am not sure about using shards in the compiler yet either. If we want to split into different repos we could use git submodules. That might imply not making it a shard per se, but it is a tool and won't affect the architecture that much, IMO.
Another idea would be to move the documentation generator to a completely separate project (and binary). I was wondering how Elixir did it for docs, because they also use markdown, and it turns out it's a separate project indeed: https://github.com/elixir-lang/ex_doc (last commit is from José Valim, so it's also a core project). The doc generator needs the compiler as a dependency, but that's fine. And it's a less critical tool than the compiler itself.
This also means the compiler doesn't need to know about docs, nor use ECR. We could probably keep extracting these tools to different binaries. The only downside is that every binary will be kind of big, because the compiler's code is big. Maybe 20MB each executable. Maybe not a big problem.
In fact it seems ex_doc can depend on different markdown implementations, which is also great because no more "But I want to use this markdown implementation because it has feature X".
The playground also depends on markdown. We would need to extract that as well. That makes 2 or 3 copies of the compiler then. I'm ok with that. The brew formula is 60MB, so we are speaking of 100MB probably.
The downside is that sometimes bugs/breakages are harder to detect across different repos. I.e.: would the language docs be generated with the stable or with the head version? Why would there be a release of the docs tool if it's not for a new release of the compiler? Eventually it will make sense for sure, but right now it will be 1:1 versions, I think.
If we extract that to different repos let's keep the git history 🙏 on the new repos.
:-1: for extracting the tools to different repos. The insides of the compiler are still very unstable and in-flux. Extracting to multiple repos is likely to just complicate refactoring.
I don't see the problem with depending on a shard, personally. We fix the git sha in the shard.lock, which means all changes to the code in the shard have to be explicitly okayed inside this repo. I can see the argument that it complicates the build process unnecessarily; the solution there is to use git subtree and vendor the repo in manually without shards. If someone wants to go through the shard code with a fine-tooth comb each time we vendor an update in, to see if there are any backdoors, that's fine by me. Other than that, I'm not quite sure what the actual technical issue is.
Here's another idea: we move the tools (I'm thinking about the doc generator and the playground) to other repos, but make these tools depend on the crystal binary, not the compiler's source, or at least just the syntax part of the compiler, which is very lightweight and changes less.
For the docs, this means the compiler provides the documentation in JSON format. Then any tool can process this JSON and generate HTML/PDF documentation. This is in fact tracked by #2772. The only thing is that the doc generator currently formats and highlights crystal code, so it must depend on the lexer and formatter (well, this is not a requirement, but it's nice). I made a program that depends on the compiler's syntax and formatter, and its size, when compiled with --release, is 1MB. So I think that's more than acceptable.
I checked the syntax highlighter; it hasn't changed since July 2016, which means the lexer hasn't changed since then, and I don't think it will change. And even if it does, the tool can be updated after a new release.
For the playground this means that you can feed the compiler some text, and it will return it instrumented. Right now this is what happens; for example, this code:

```crystal
a = 1
a
```
becomes this:

```crystal
a = _p.i(1) do
  1
end
_p.i(2) do
  a
end
```
Then the playground defines a source file like this:

```crystal
require "compiler/crystal/tools/playground/agent"

class Crystal::Playground::Agent
  @@instance = Crystal::Playground::Agent.new("ws://localhost:#{port}/agent/#{session_key}/#{tag}", #{tag})

  def self.instance
    @@instance
  end
end

def _p
  Crystal::Playground::Agent.instance
end
```
and combines both sources so that the playground agent sends results through a web socket.
We can either make the compiler provide just the first tool, the one that "instruments" the code. Or in fact we can leave that out too and require the tool to do it, because the tool can require just the syntax part of the compiler and do the transformation (right now it's a file of about ~200 lines).
This means the compiler won't have this logic. But it will also not have an embedded HTTP::Server in it. We get rid of that dependency, the compiler's size will shrink, and compile times will also decrease. And we can remove those ugly flag?(:without_openssl) and flag?(:without_zlib) checks from all over http's source code, thus simplifying the whole code base and the build process.
Maybe a new release will break these tools if they depend on the compiler's source. But that's just a matter of updating them and releasing a new version (just like every other tool or shard does), which is much less critical than releasing a new compiler version. And the best thing is that these tools can evolve independently of the compiler, and releases can happen more often, because they are less critical in nature.
Also, the compiler gets rid of all html, css, and js files, which probably don't belong inside a compiler.
How would this change the interface with the user? How would we package these external binary tools, and how would the user call them?
We don't. We don't include these tools in the compiler. If a user wants them, they can install them through the corresponding projects. This is how ex_doc for Elixir works; it doesn't come with the base installation of Elixir. And I think that's good. Sometimes you just want to write an app and don't need a doc generator, nor the playground.
I really like that the Crystal compiler already brings the most important tooling with it. All batteries included. That's really helpful, especially for beginners.
This can also be achieved if these tools' source code is separated from the compiler: the binaries could just be distributed with the crystal package. And they can be integrated into the compiler the same way crystal deps is just a wrapper call to shards install.
Yes. It's an independent problem. What I'm saying is that having these tools as separate projects will simplify things, not make them worse. The crystal formula from homebrew could probably depend on crystal-doc and crystal-playground if we wanted to, and for linux it could be a meta-package that contains the other tools.
@ysbaddaden already suggested this change some time ago and I thought it was worse, but now I'm convinced it's for the best.
Extracting features from the monolithic compiler doesn't have to change the user interface. We already embed Shards into the official packages and can call it through crystal deps. Extracted tools could still be compiled and distributed in the official packages, and called in the same vein as they are today.
The benefit is having a set of smaller projects that are easier to delve into, less scary than digging into src/compiler, faster to compile, each with their own set of issues and improvements, along with the ability to rely on external Shards (though this shouldn't be abused).
Yeah, this proposal looks good. crystal-doc and crystal-playground are not incredibly different to type from what we have currently. The only issue I have is one of discovery: will we put references to these external tools in crystal --help?
I understand that crystal-doc etc. wouldn't usually be used directly but through crystal doc instead. This way the user has a common interface for all the tools. And of course crystal --help should list all commands, whether they are directly included in the same binary or external.
I think how the tool is invoked is not important. In Elixir it's mix docs, in Rust it's rustdoc, in Ruby it's rdoc, etc.
In fact, I'd like to remove crystal deps as an alias of shards. It's very confusing. Sometimes shards would print its name (shards) when running crystal deps, so eventually you'd figure out what shards is about. And I think by now everyone (that uses Crystal) knows about it.
We shouldn't keep crystal docs as an alias for another tool.
For example:

```
$ crystal deps
Missing shard.yml. Please run 'shards init'
```
@asterite I was thinking the same thing recently but didn't want to bring it up in this thread. We should deprecate it for one release, then remove it. Or just print "use shards" but not invoke it, for one release.
The Markdown parser and renderer is still very basic and misses important features, for example raw HTML.
I suggest implementing the CommonMark specification, which is very reasonable and the de-facto standard reference for Markdown implementations. There is an entire test suite, jgm/CommonMark, to validate implementations. Currently Markdown.to_html passes only 137 of 621 examples (22%). It would also be nice if the Markdown parser/renderer could be easily extended to add support for more specialized features like Github Flavoured Markdown (e.g. #2217).
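The compliance counting works roughly like this. A toy sketch with hypothetical, hard-coded cases; the real suite extracts all ~621 example/output pairs from the spec document and runs them through the implementation under test:

```crystal
# Hypothetical spec cases: [markdown input, expected HTML] pairs.
CASES = [
  ["*hi*", "<p><em>hi</em></p>"],
  ["plain text", "<p>plain text</p>"],
  ["<div>raw html</div>", "<div>raw html</div>"],
  ["`code`", "<p><code>code</code></p>"],
]

# Toy stand-in for Markdown.to_html -- handles raw HTML passthrough,
# emphasis and paragraphs only, so it fails the code-span case.
def to_html(md)
  return md if md.start_with?("<")
  body = md.gsub(/\*([^*]+)\*/) { "<em>#{$1}</em>" }
  "<p>#{body}</p>"
end

passed = CASES.count { |c| to_html(c[0]) == c[1] }
puts "#{passed} of #{CASES.size} examples pass"
# prints "3 of 4 examples pass"
```

A "137 of 621" style score is just this count run over the full extracted example set.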