NightRa closed this issue 9 years ago.
Well, the only renderer that will ship with the library is the one that converts it back to Markdown (the "identity renderer"). In this case, there's no need to evaluate anything. Actual evaluators will be supplied by markdown applications, which render to things like HTML, code API docs, etc.
IOW, it's the responsibility of the markdown application to decide if and how to evaluate a code block during the process of rendering the document in some form.
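A rough sketch of that split, with a hypothetical, cut-down AST rather than the library's actual types: the library parses and prints back, and anything beyond that is the application's interpretation of the AST.

```purescript
-- Hypothetical, cut-down AST; the real library's types are richer.
module IdentityRendererSketch where

import Prelude

import Data.Foldable (intercalate)

data Block
  = Paragraph String
  | CodeBlock { language :: String, code :: String }

-- The "identity renderer": print the document back as Markdown.
-- Code blocks are emitted verbatim; nothing is ever evaluated here.
-- (Tilde fences are used just to keep this example readable.)
toMarkdown :: Array Block -> String
toMarkdown = intercalate "\n\n" <<< map block
  where
  block (Paragraph s) = s
  block (CodeBlock { language, code }) =
    "~~~" <> language <> "\n" <> code <> "\n~~~"
```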
Oh, isn't generating HTML part of this library?
No, just the identity renderer. I guess we could add an HTML generator, although it wouldn't necessarily be portable to nodejs and would involve choosing and adding a dependency on a particular DOM lib.
It could just create a representation of the DOM, a la purescript-smolder, or just the HTML string.
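Roughly, the "representation of the DOM" option could look like this; `Markup` here is a self-contained stand-in, not smolder's actual type:

```purescript
-- Self-contained stand-in for a DOM-like markup representation
-- (purescript-smolder's real type is more capable than this).
module MarkupSketch where

import Prelude

import Data.Foldable (foldMap)

data Markup
  = Element String (Array Markup)  -- tag name and children
  | Text String

-- A renderer could target Markup instead of a String; a string backend,
-- a virtual-DOM backend, or a real-DOM backend then interprets it.
paragraph :: String -> Markup
paragraph s = Element "p" [ Text s ]

header :: Int -> String -> Markup
header level s = Element ("h" <> show level) [ Text s ]

-- One possible interpretation: flatten to an HTML string (escaping omitted).
toHtml :: Markup -> String
toHtml (Text s) = s
toHtml (Element name children) =
  "<" <> name <> ">" <> foldMap toHtml children <> "</" <> name <> ">"
```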
Is purescript-parser stable, and could/should it be used for this project?
The HTML string would be portable and fast (and could be done with an HtmlString newtype), at least when the target is in fact HTML. I also like the idea of a purescript-smolder renderer which could be portable to client or server.
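A minimal sketch of the HtmlString idea (hypothetical helper names, HTML escaping omitted):

```purescript
-- The HtmlString idea: a newtype so raw HTML output can't be confused with
-- ordinary Strings, plus Semigroup/Monoid so fragments concatenate.
module HtmlStringSketch where

import Prelude

newtype HtmlString = HtmlString String

runHtmlString :: HtmlString -> String
runHtmlString (HtmlString s) = s

instance semigroupHtmlString :: Semigroup HtmlString where
  append (HtmlString a) (HtmlString b) = HtmlString (a <> b)

instance monoidHtmlString :: Monoid HtmlString where
  mempty = HtmlString ""

-- Hypothetical helper a string backend might build on (escaping omitted).
tag :: String -> HtmlString -> HtmlString
tag name (HtmlString inner) =
  HtmlString ("<" <> name <> ">" <> inner <> "</" <> name <> ">")
```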
purescript-parser is pretty stable and could be used for this project. There's also purescript-string-parsers, although I don't know the status of that. It's probably a matter of taste / performance (I wonder which one ps-in-ps uses?).
The Haskell implementations of Markdown parsers use Parsec, which has a similar API to the Purescript libraries.
I'm sure it's possible to write a hand-rolled parser, too, though it may end up being more work or involve reimplementing a subset of the functionality of one of the parser libs. I do know parsing Markdown is a lot sloppier than parsing something with a real grammar (every string is valid Markdown!).
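For illustration, here's what the hand-rolled route looks like for a single construct, using only Data.String; a combinator library would express the same thing more declaratively (roughly: some hashes, optional spaces, rest of line).

```purescript
-- The hand-rolled route for one construct, using only Data.String.
module HandParserSketch where

import Prelude

import Data.Maybe (Maybe(..))
import Data.String (Pattern(..), stripPrefix, trim)

-- Recognise an ATX header line like "## Title". Real Markdown has far more
-- corner cases than this (trailing #s, leading spaces, setext headers, ...).
atxHeader :: String -> Maybe { level :: Int, text :: String }
atxHeader = go 0
  where
  go level line = case stripPrefix (Pattern "#") line of
    Just rest -> go (level + 1) rest
    Nothing
      | level >= 1 && level <= 6 -> Just { level, text: trim line }
      | otherwise -> Nothing
```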
Whose responsibility should it be to evaluate the code? Should it be the user of the library at (HTML) generation time? (Take some kind of Evaluator, partial over languages, at generation time, with some functions that just use the identity for the code.)
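A sketch of the kind of pluggable Evaluator being described here; the names are hypothetical, not anything the library defines:

```purescript
-- Hypothetical shape only; none of these names exist in the library.
module EvaluatorSketch where

import Prelude

import Data.Maybe (Maybe(..), fromMaybe)

-- The application supplies this. It sees the code block's language and
-- contents and may return replacement HTML; it can be "partial" simply by
-- returning Nothing for languages it doesn't handle.
type Evaluator = { language :: String, code :: String } -> Maybe String

-- The identity behaviour: never evaluate anything.
noEvaluation :: Evaluator
noEvaluation _ = Nothing

-- HTML generation is parameterised by the evaluator; when it declines,
-- fall back to rendering the code verbatim (escaping omitted).
renderCodeBlock :: Evaluator -> { language :: String, code :: String } -> String
renderCodeBlock evaluate block =
  fromMaybe ("<pre><code>" <> block.code <> "</code></pre>") (evaluate block)
```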