A multiplatform Markdown processor written in Kotlin.
intellij-markdown is an extensible Markdown processor written in Kotlin. It aims to suit a range of needs, from plain HTML generation to implementing custom Markdown flavours.

The processor is written in pure Kotlin (with a little flex), so it can be compiled not only for the JVM target but also for JS and Native. This allows the processor to be used everywhere.
## Using `intellij-markdown` as a dependency

The library is hosted in the Maven Central Repository, so to be able to use it, you need to configure the central repository:
```kotlin
repositories {
    mavenCentral()
}
```
If you have Gradle >= 5.4, you can just add the main artifact as a dependency:
```kotlin
dependencies {
    implementation("org.jetbrains:markdown:<version>")
}
```
Gradle should resolve your target platform and decide which artifact (JVM or JS) to download.
For multiplatform projects, you can add the single dependency to the `commonMain` source set:
```kotlin
commonMain {
    dependencies {
        implementation("org.jetbrains:markdown:<version>")
    }
}
```
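In a typical Kotlin Multiplatform build script, this `commonMain` block sits inside the `kotlin { sourceSets { ... } }` section. A minimal sketch of such a setup (the chosen targets are just an example, not something the library requires):

```kotlin
// build.gradle.kts: sketch of a typical multiplatform setup
kotlin {
    jvm()
    js { browser() }

    sourceSets {
        val commonMain by getting {
            dependencies {
                // Single multiplatform artifact; Gradle picks the platform-specific variant.
                implementation("org.jetbrains:markdown:<version>")
            }
        }
    }
}
```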
If you are using Maven or older Gradle, you need to specify the correct artifact for your platform, e.g.:
- `org.jetbrains:markdown-jvm:<version>` for the JVM version
- `org.jetbrains:markdown-js:<version>` for the JS version

## Using `intellij-markdown` for parsing and generating HTML

One of the goals of this project is to provide flexibility in terms of the tasks being solved. The Markdown plugin for JetBrains IDEs is an example of a usage where Markdown processing is done in several stages: parsing the block structure of the document, parsing the inline elements of particular blocks, and generating HTML for the preview. These tasks may be completed independently according to the current needs.
val src = "Some *Markdown*"
val flavour = CommonMarkFlavourDescriptor()
val parsedTree = MarkdownParser(flavour).buildMarkdownTreeFromString(src)
val html = HtmlGenerator(src, parsedTree, flavour).generateHtml()
The same in Java (here using the GFM flavour):

```java
final String src = "Some *Markdown*";
final MarkdownFlavourDescriptor flavour = new GFMFlavourDescriptor();
final ASTNode parsedTree = new MarkdownParser(flavour).buildMarkdownTreeFromString(src);
final String html = new HtmlGenerator(src, parsedTree, flavour, false).generateHtml();
```
The only non-Kotlin files are the `.flex` lexer definitions. They are used for generating lexers, which are the first stage of inline elements parsing.
Unfortunately, due to bugs, native `java -> kt` conversion crashes for these files. Because of that, conversion from `.flex` to the respective Kotlin files requires some manual steps:
1. Modify the `.flex` file.
2. Install the `jflexToKotlin` plugin (you will need to build it and then install it manually, via settings).
3. Run the `JFlex Generator` action while having the `.flex` file opened.
   - Point the generator to the `.skeleton` file if it asks for one.
   - A `_<SomeName>Lexer.java` will be generated somewhere. Move it near the existing `_<SomeName>Lexer.kt`.
4. Delete the existing `.kt` lexer.
5. Run the `Convert JFlex Lexer to Kotlin` action while having the new `.java` file opened.
6. Fix the remaining issues in the resulting `.kt` file. There should be no major issues. Please try to minimize the number of changes to the generated files. This is needed for keeping a clean Git history.

The parsing process is held in two logical parts:

1. splitting the document into blocks that form its logical structure (lists, blockquotes, paragraphs, etc.);
2. parsing the inline structure of the resulting blocks.

This is the same approach as the one proposed by the CommonMark spec.
Each (future) node (list, list item, blockquote, etc.) is associated with a so-called `MarkerBlock`. The rollback-free parsing algorithm processes every token in the file, one by one. Tokens are passed to the opened marker blocks, and each block decides how to react to the token: for example, it may stay opened, complete itself (producing a node), or drop itself.
The `MarkerProcessor` stores the blocks, executes the actions chosen by the blocks, and, possibly, adds some new ones.
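As an illustration of the idea only (these simplified stand-in types are not the library's actual `MarkerBlock`/`MarkerProcessor` API), the control flow looks roughly like this:

```kotlin
// Illustrative sketch only: simplified stand-ins, not intellij-markdown's real API.
enum class BlockAction { CONTINUE, DONE, DROP }

interface Block {
    // Each opened block inspects the current token and picks an action.
    fun react(token: String): BlockAction
}

fun processTokens(tokens: List<String>, openedBlocks: MutableList<Block>) {
    for (token in tokens) {                    // every token is visited exactly once,
        val iterator = openedBlocks.iterator() // so no backtracking is ever needed
        while (iterator.hasNext()) {
            when (iterator.next().react(token)) {
                BlockAction.CONTINUE -> Unit          // the block stays opened
                BlockAction.DONE, BlockAction.DROP -> // produced a node, or turned out invalid
                    iterator.remove()
            }
        }
        // A real processor would also try to open new blocks for the current position here.
    }
}
```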
For the sake of speed and parsing convenience, the text is first passed to the `MarkdownLexer`. The resulting set of tokens is then processed in a special way.
Some inline constructs in Markdown have priorities, i.e., if two different ones overlap, the parsing result depends on their types, not their positions. For example, `` *code, `not* emph` `` and `` `code, *not` emph* `` are both code spans plus literal asterisks.
This means that normal recursive parsing is inapplicable.
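This behaviour can be observed with the generator from the earlier example; a small sketch, assuming the library's usual package layout for the imports:

```kotlin
import org.intellij.markdown.flavours.commonmark.CommonMarkFlavourDescriptor
import org.intellij.markdown.html.HtmlGenerator
import org.intellij.markdown.parser.MarkdownParser

fun main() {
    val flavour = CommonMarkFlavourDescriptor()
    // In both inputs the code span wins over the emphasis, regardless of which marker opens first.
    for (src in listOf("*code, `not* emph`", "`code, *not` emph*")) {
        val tree = MarkdownParser(flavour).buildMarkdownTreeFromString(src)
        println(HtmlGenerator(src, tree, flavour).generateHtml())
    }
}
```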
Still, the parsing of inline elements is quite straightforward. For each inline construct, there is a particular `SequentialParser` which accepts some input text and returns the nodes for the structures found in this text, together with the remaining pieces of text to be handled by the next parsers.
After building the logical structure and parsing the inline elements, the result is a set of ranges corresponding to Markdown entities (i.e. future nodes). In order to work with the results effectively, this set is converted to an AST.
As a result, a root ASTNode corresponding to the parsed Markdown document is returned.
Each AST node has its own type, called `IElementType`, as in the IntelliJ Platform.
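As a rough sketch of inspecting the tree (the `ASTNode` members used here, namely `type`, `children` and the offsets, as well as the import paths, are assumptions about the library's API rather than something stated above):

```kotlin
import org.intellij.markdown.ast.ASTNode
import org.intellij.markdown.flavours.commonmark.CommonMarkFlavourDescriptor
import org.intellij.markdown.parser.MarkdownParser

// Recursively print each node's element type and the text range it covers.
fun dump(node: ASTNode, indent: String = "") {
    println("$indent${node.type} [${node.startOffset}, ${node.endOffset})")
    node.children.forEach { dump(it, "$indent  ") }
}

fun main() {
    val src = "Some *Markdown*"
    val tree = MarkdownParser(CommonMarkFlavourDescriptor()).buildMarkdownTreeFromString(src)
    dump(tree)
}
```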
For a given AST root, a special visitor is created to generate the resulting HTML. Using a given mapping from `IElementType` to `GeneratingProvider`, it processes the parsed tree in depth-first order, generating HTML pieces on each node visit.
Many routines in the above process can be extended or redefined by creating a different Markdown flavour. The minimal default flavour is CommonMark, which is implemented in this project.
GitHub Flavoured Markdown is an example of extending the CommonMark flavour implementation. It can be used as a reference for implementing your own Markdown features.
`MarkdownFlavourDescriptor` is a base class for extending the Markdown parser.
`markerProcessorFactory` is responsible for block structure customization.

- The `stateInfo` value makes it possible to keep state during the document parsing procedure.
  - `updateStateInfo(pos: LookaheadText.Position)` is called at the beginning of processing each position.
- `populateConstraintsTokens` is called to create nodes for block structure markers at the beginning of the lines (for example, the `>` characters constituting blockquotes).
- `getMarkerBlockProviders` is the place to (re)define the types of block structures.
`getParserSequence` defines the inline parsing procedure. The method must return a list of `SequentialParser`s in which earlier parsers have higher precedence. For example, to parse code spans and emphasis elements with the correct priority, the list should be `[CodeSpanParser, EmphParser]`, not the opposite.
`SequentialParser` has only one method, `parse(tokens: TokensCache, rangesToGlue: List<IntRange>): ParsingResult`, where:
- `tokens` is a special holder for the tokens returned by the lexer.
- `rangesToGlue` is a list of ranges in the document which are to be searched for the structures in question.
  Considering the input `` A * emph `code * span` b * c ``, for the emph parser the ranges [`A * emph`, `b * c`] mean that emphasis must be searched for in the input `A * emph | b * c`.
The method must essentially return the parsing result (nodes for the found structures) and the parts of the text to be given to the next parsers.
Considering the same input, for the code span parser the result would be `` `code * span` `` with the type "code span", and the delegated pieces would be [`A * emph`, `b * c`].
`createInlinesLexer` should return the lexer used to split the text into tokens before the inline parsing procedure runs.
`createHtmlGeneratingProviders(linkMap: LinkMap, baseURI: URI?)` is the place where the generated HTML is customized. This method should return a map which defines how to handle particular kinds of nodes in the resulting tree.
Here `linkMap` is precalculated information about the links defined in the document by means of link definitions, and `baseURI` is the URI to be considered the base path for resolving relative links. For example, given `baseURI='/user/repo-name/blob/master'`, the link `foo/bar.png` should be transformed into `/user/repo-name/blob/master/foo/bar.png`.
Each returned provider must implement `processNode(visitor: HtmlGenerator.HtmlGeneratingVisitor, text: String, node: ASTNode)`, where:

- `text` is the whole document being processed;
- `node` is the node being given to the provider;
- `visitor` is a special object responsible for the HTML generation.

See `GeneratingProviders.kt` for the samples.
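For illustration, a minimal sketch of a flavour that swaps in a custom provider could look like the following. The `consumeHtml` call on the visitor, the `LinkMap` import location and the other import paths are assumptions here; `GeneratingProviders.kt` contains the real building blocks.

```kotlin
import java.net.URI
import org.intellij.markdown.IElementType
import org.intellij.markdown.MarkdownElementTypes
import org.intellij.markdown.ast.ASTNode
import org.intellij.markdown.ast.getTextInNode
import org.intellij.markdown.flavours.commonmark.CommonMarkFlavourDescriptor
import org.intellij.markdown.html.GeneratingProvider
import org.intellij.markdown.html.HtmlGenerator
import org.intellij.markdown.parser.LinkMap

class MyFlavourDescriptor : CommonMarkFlavourDescriptor() {
    override fun createHtmlGeneratingProviders(
        linkMap: LinkMap,
        baseURI: URI?
    ): Map<IElementType, GeneratingProvider> {
        val providers = super.createHtmlGeneratingProviders(linkMap, baseURI).toMutableMap()
        // Render inline code spans with a custom CSS class instead of the default markup.
        providers[MarkdownElementTypes.CODE_SPAN] = object : GeneratingProvider {
            override fun processNode(visitor: HtmlGenerator.HtmlGeneratingVisitor, text: String, node: ASTNode) {
                val content = node.getTextInNode(text).trim('`')
                // consumeHtml is assumed to be available on the generating visitor.
                // Real code should also HTML-escape the content before emitting it.
                visitor.consumeHtml("<code class=\"my-code\">$content</code>")
            }
        }
        return providers
    }
}
```

Such a flavour would then be passed to `MarkdownParser` and `HtmlGenerator` in exactly the same way as the flavours in the earlier examples.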