HaxeFoundation / haxe-evolution

Repository for maintaining proposals for changes to the Haxe programming language

Inline Markup Literals #26

Closed back2dos closed 6 years ago

back2dos commented 7 years ago

I actually wanted to brood about this a little while longer, but @mrcdk's initiative kinda forced my hand to present an alternative now.

Rendered version here.

nadako commented 7 years ago

I like this in general, but I'm not sure if it's possible to implement such parsing in a straightforward way, given Haxe's current parser. The inline markup block should ideally come as a single token from a lexer, but it has no idea about what parser expects and whether it's at the "start of the expression", so when it finds <tag, it just emits two tokens (< and tag identifier). Maybe the parser could toggle some tag-lexing mode, I don't know, have to look into it.
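To illustrate the point, here is a toy Python sketch (the token names and rules are mine, not the actual OCaml lexer's): a lexer with no parser context cannot tell an expression start from anything else, so it can only split <tag into two tokens.

```python
import re

# Hypothetical token rules: a context-free lexer has no notion of
# "expression start", so "<tag" is always split into Lt + Ident.
TOKEN_RE = re.compile(r"\s*(?:(?P<lt><)|(?P<ident>[A-Za-z_]\w*)|(?P<other>\S))")

def lex(src):
    tokens, pos = [], 0
    while pos < len(src):
        m = TOKEN_RE.match(src, pos)
        if m is None:
            break
        kind = m.lastgroup
        if kind == "lt":
            tokens.append(("Lt", "<"))
        elif kind == "ident":
            tokens.append(("Ident", m.group("ident")))
        else:
            tokens.append(("Other", m.group("other")))
        pos = m.end()
    return tokens

print(lex("<tag"))  # [('Lt', '<'), ('Ident', 'tag')]
```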

ibilon commented 7 years ago

An issue is that a tag can be valid haxe code.

https://try.haxe.org/#Ca85f — on line 6, <b> isn't a tag. It's a specifically constructed example, but it could happen in real code.

RealyUniqueName commented 7 years ago

Maybe some other chars could be used instead of < and >?

benmerckx commented 7 years ago

An issue is that a tag can be valid haxe code. \<b> isn't a tag

The proposal states:

If, wherever an expression is expected to begin, the character < is found followed directly (i.e. no whitespace in between) by a letter, it signifies the start of an inline markup expression.

Which means a<b>c; should parse as before because <b> is preceded by a. After a, which is an expression, the parser expects valid cases which may come after an expression. Such as a.b or a<b but not the start of a new expression.
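As a toy sketch of that rule (Python; the token set is made up, not Haxe's actual grammar): whether < starts markup depends entirely on whether the parser sits at a position where a new expression may begin.

```python
# Toy decision rule (my approximation of the proposal): "<" directly
# followed by a letter counts as markup only where an expression may begin.
EXPR_START_PRECEDERS = {None, "(", "=", ",", ";", "return"}

def classify(prev_token, ch, next_ch):
    at_expr_start = prev_token in EXPR_START_PRECEDERS
    if ch == "<" and next_ch.isalpha() and at_expr_start:
        return "markup-start"
    return "operator"

# In a<b>c the "<" follows the expression "a", so it stays a comparison:
print(classify("a", "<", "b"))  # operator
# In x = <b ...> the "<" sits where an expression begins:
print(classify("=", "<", "b"))  # markup-start
```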

ibilon commented 7 years ago

What about a<<b>c with << overloaded? Is it a < (<b>c) or (a << b) > c? IIRC, << is already annoying for the lexer (parser?)

But yeah, we could just use some other character; the easiest (I guess) would be to introduce a new one like # or `, so that it can't currently be valid haxe code.

benmerckx commented 7 years ago

The << token will be tokenized before <. The current rules wouldn't change. Which means that if you wanted to compare one of these markup literals, you'd have to write a < <b></b>, where a<<b></b> would, somewhat understandably, result in an error. While I can't be sure there's no edge case at all, I think this proposal is of little use if other characters are chosen.

nadako commented 7 years ago

@ibilon I think it's the >> which is annoying because of type params.

Anyway, one of the easy ways would be to require a special character to make lexer enter tag parsing mode, not unlike how it's done for strings. E.g. var tag = `<div></div> (we don't need a closing backtick here because we apply the tag handling rules described in the proposal).

RealyUniqueName commented 7 years ago

I'd prefer something like @<tag>...

back2dos commented 7 years ago

@ibilon to build on what the others said: yes, a<<b>>c could be interpreted as a << b >> c or a < <b> > c, but always has and always will parse as the former, because Haxe always chooses the longest valid token. Examples:
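The longest-token rule can be seen in a toy maximal-munch lexer (Python, my own sketch, not the Haxe implementation):

```python
# Maximal munch over a toy operator set: the longest matching operator
# always wins, so "<<" is taken before "<" is ever considered.
OPS = ["<<", ">>", "<", ">"]  # ordered longest first

def lex_ops(src):
    out, i = [], 0
    while i < len(src):
        if src[i].isalnum():
            j = i
            while j < len(src) and src[j].isalnum():
                j += 1
            out.append(src[i:j])
            i = j
            continue
        for op in OPS:
            if src.startswith(op, i):
                out.append(op)
                i += len(op)
                break
        else:
            i += 1  # skip whitespace / anything else
    return out

print(lex_ops("a<<b>>c"))  # ['a', '<<', 'b', '>>', 'c']
```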

Matters are relatively simple (conceptually - as for implementation, that seems to be a different matter): only when the parser begins parsing a new expression does it allow for a tag to occur and there is no room for ambiguity there, because < is syntactically not allowed in that place.

@nadako I've tried to get some understanding of how the parser works and found very little that would make me hopeful. The one solution I see is that the parser may be able to set some shared mutable state to tell the lexer to expect a markup literal, but peeking ahead would defeat this hack entirely: if (foo) bar; <div /> and we get "unexpected <".

So I see three options:

  1. move forward with https://github.com/HaxeFoundation/haxe/issues/6477 and rely on said hack, which is filthy as hell but would actually work (right?)
  2. lex < followed directly by a word character as an inline markup literal. It's not what I originally proposed. However, it is consistent with giving longer tokens preference, but it does break a specific class of unformatted code ... we already have a migration tool for that though ;)
  3. use the backtick.

I think the first approach is a disaster waiting to happen, but you tell me. The third solution is the most pragmatic and I could live with it. No point in obsessing over one character. I'm not really looking forward to having to explain for the tenth time why the backtick is there, but it's a lot better than nothing and we can still look into getting rid of it in the future.

That said, I'm actually intrigued by the second one (I have never seriously considered it until this point). There's the issue of having if (a <b) leading straight to the confusing "unterminated inline markup" but I think this could be prevented by allowing unterminated markup in the lexer (add a bool flag of course to discern them). When parsing, unterminated markup as the first token in expr would in fact raise "unterminated inline markup" while in expr_next both terminated and unterminated markup would result in "ambiguous < must be followed by whitespace" (feel free to suggest a more easily understood way of telling that). It's not perfect, but then again what is. I'll let you guys think about that ;)
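The two-error scheme could look roughly like this (a Python toy; the flag and the messages are back2dos's idea, the shape is mine):

```python
# The lexer would tag each markup token with a "terminated" flag; the
# parser then chooses the error message based on where the token occurs.
def check_markup(terminated, at_expr_start):
    if at_expr_start:
        if not terminated:
            raise SyntaxError("unterminated inline markup")
        return "markup expression"
    # in expr_next, any markup token after an expression is ambiguous
    raise SyntaxError("ambiguous <, must be followed by whitespace")

print(check_markup(True, True))  # markup expression
```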

back2dos commented 7 years ago

But yes, why not @. Don't see much of a difference there. Would leave backtick free for other things.

frabbit commented 7 years ago

Not directly related, but i actually have a (working) prototype which allows an alternative calling syntax for macros via metadata like @-myMacro expr or @-myMacro(config) expr. The - is required to distinguish regular metadata from macro calls. That idea also allows block syntax like in:

@-myMacro { doSomething(); }

which gets translated to

myMacro({ doSomething })

or

@-myMacro(a,b,c) { doSomething }

which gets translated to something like myMacro({ doSomething }, [a,b,c]);

ncannasse commented 7 years ago

@nadako's remarks are quite valid; you need to detail how this integrates with the existing Haxe lexer. Appending @ does not seem very elegant.

Actually we can detect an XML start in parse_expr by matching < (ident) (ident = const(string))* >, but then we need to switch the lexer into "raw mode" until we have closed all idents. That's again feasible, but wouldn't that be a problem for IDEs?

For the record I found both Scala and VB have XML literals.

Also, it might be misleading to allow what looks like "XML literals" without actually enforcing XML (or HTML) syntax. I can see how one would complain about the compiler not giving an error when writing the following:

var x = <xml> <oops/ </xml>

So maybe using xml tags with this is a bad idea after all.

back2dos commented 7 years ago

you need to detail how this integrate with existing haxe lexer.

I don't have the slightest idea how the lexer and parser work. I can whip up an implementation for hscript if that's of any help ^^

Appending @ does not seem very elegant.

Agreed.

Actually we can detect an XML start in parse_expr by matching < (ident) (ident = const(string))* > in parse_expr, but then we need to switch the lexing in "raw mode" until we have close all idents.

If there is any such thing as a "raw mode", then that's the simplest solution.

Still I have to stress that an opening tag is nothing but <(tagname). See the proposal for what constitutes a tag name, because svg:rect is valid in XHTML, custom-component is valid in HTML5, namespace.Component is valid in JSX. The parser should not even attempt to parse attributes, because again, <div contenteditable aria-labelledby="someLabel">Awesome content!!!</div> is perfectly valid HTML. The syntax proposed is chosen very carefully to cover all that ground.
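For illustration, a permissive tag-name pattern along those lines might look like this (Python regex; this is my approximation, not the proposal's exact grammar):

```python
import re

# A letter to start, then word characters, with ":", "-" or "." allowed
# as separators -- enough to accept svg:rect, custom-component, and
# namespace.Component alike. An approximation, not the proposal's grammar.
TAG_NAME = re.compile(r"^[A-Za-z]\w*(?:[:.\-][A-Za-z]\w*)*$")

for name in ["svg:rect", "custom-component", "namespace.Component", "div"]:
    print(name, bool(TAG_NAME.match(name)))  # all True
```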

That's again feasible, but would not that be a problem for IDEs ?

That depends on what "problem for IDEs" means. Getting completion to work is quite possible, as the little gif shows. Syntax highlighting is more of a challenge, as already mentioned at the end. It depends very strongly on how the IDE approaches the matter. I think an approach to highlighting that is as broad as this proposal may have a fair chance of providing a decent result. Then again I'm no expert on the matter. Perhaps @nadako can comment.

I can see how one would complain about the compiler not giving error when writing the following:

var x = <xml> <oops/ </xml>

And how would that ever happen? According to the proposal, the above will give a compiler error unless a macro processes it. If the macro expects XML syntax, it will produce an error (the most naive one will just produce the exception thrown by Xml.parse). But if it can make sense of it, then I don't see the problem. Alter it slightly and you may have a use case:

var x = <html><script> console.log(5 <oops/ 4 )</script></html>

There's a lot of super hairy stuff in actual HTML and XML. So if you then go from that to having half an XML or HTML parser (which one would you choose?), you're going to ingrain incompatibilities with the actual standard into the syntax.

The most pragmatic approach is thus to make it the user's problem. If they want all bells and whistles, they use something like dom4. Or whatever. And when parsing speed becomes a problem (and eval offsets that threshold quite a bit), my understanding is that they can write the parser in ocaml and just plug it in and call it from a macro (right?).

ncannasse commented 7 years ago

Actually I'm taking back my previous comment. We could ensure that when getting compiled (without going through macros), XML literals are checked against XML by the compiler and parsed in a similar way to Haxe smart strings. This way we get both something that doesn't require macros and can be used by macros in a smart manner.

I still have one particular issue: your proposal leaves attributes syntax undefined, but it means the following would also be invalid syntax:

var s = <foo>${if (x<foo) 0 else 1}</foo> because the <foo would be mistaken for an opening tag, leading to quite hard-to-debug error messages about the first <foo> not being closed.

Maybe we should enforce some XML attribute syntax in order to distinguish an actual node from something else.

kevinresol commented 7 years ago

I personally prefer not to include the xml parsing part into the compiler, mainly because of the looooooong release cycle.

I always find myself in an "I want to use haxe nightly but I can't use haxe nightly" dilemma.

For example, I want to use haxe nightly for these fixes:

But I can't, because https://github.com/HaxeFoundation/haxe/issues/6321 breaks all my code.

Sorry for being a bit off-track here, but my point is: please don't embed functionality into the compiler if there are other alternatives. I think the same reasoning drove the team to move SPOD out into a macro library.

ncannasse commented 7 years ago

@kevinresol I would agree in general not to put too many things in the compiler if it can be avoided (SPOD required some compiler-specific support a long time ago, before the macro era; it can be safely removed now)

However, I don't think that having a syntax that requires a macro and otherwise leads to a compiler error is a good idea, especially if there's some behavior that can be expected by the end user. It seems obvious to me that the following code should compile if it's considered valid syntax: var x = <foo>$str</foo>

piotrpawelczyk commented 7 years ago

@ncannasse that's one of my ideas (which I haven't shared with anybody yet): use interpolation syntax as a straightforward solution for embedded DSLs (yes, I did notice this proposal is about "markup literals" ;)). It would require generating an "interpolation AST", so to speak, that would be available during macro processing. If no macros transformed it, every target would simply generate runtime interpolation, exactly as it's done now. This way every "inline DSL" would just be a string by default. An added bonus would be an easy way to find out which actual Haxe variables are referenced inside the interpolation/DSL string. I'm omitting for now the question of finding out which macro to run on any given interpolation usage. Do you think it's a viable solution to this problem?

Simn commented 7 years ago

I still don't really know how people expect this to be actually implemented without implementing a full XML parser. Counting opening and closing XML tags is all nice and fun until you have <tag ... />. You can't parse that without understanding XML syntax, and we voted against integrating an XML parser before, so I don't really understand what we are discussing here.

I'm not saying that I'm strictly against something like this, but I'm still not sure if XML is the solution.

markknol commented 7 years ago

Something I think should be given some thought is how to deal with code comments. If the rule is <tag ANYTHING </tag>, then this can lead to unexpected results.

var x = <div>
     // </div>; // Am I a closing tag, content, or a comment?
        </div>;

Simn commented 7 years ago

As I understand it, the idea is to leave the Haxe domain upon <div>, so it should no longer recognize Haxe comments.

But yes, literals in general are also a good point: In order to parse this correctly, you'd still have to understand e.g. string literals in order to not close on <div x="</div>">...
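A toy scanner makes the problem concrete (Python, my own sketch; it assumes double-quoted attribute values only): a naive tag counter would match the </div> inside the attribute string, so the scanner has to skip quoted regions.

```python
# Find the index of the real closing tag, skipping double-quoted strings
# so that attribute values like x="</div>" don't terminate the scan early.
def find_close(src, tag):
    open_t, close_t = "<" + tag, "</" + tag + ">"
    depth, i = 0, 0
    while i < len(src):
        if src[i] == '"':
            i = src.index('"', i + 1) + 1  # skip the quoted region
            continue
        if src.startswith(close_t, i):
            depth -= 1
            if depth == 0:
                return i
            i += len(close_t)
        elif src.startswith(open_t, i):
            depth += 1
            i += len(open_t)
        else:
            i += 1
    return -1

print(find_close('<div x="</div>">hello</div>', "div"))  # 21, not 8
```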

boozook commented 7 years ago

IMHO, I don't like mixing a markup language into any language like Haxe. But I think it could be a cool feature if it were implemented in an isolated context, like a block plus a builtin macro. Look:

function foo()
{
    var myXml = @:markup(XML) {
        <node foo="bar"/>
    }

    var myYaml = @:markup(YAML) {
        foo:
            - "bar"
            - "far"
    }
}

This method provides modularity and extensibility.


I'll try to explain my strange idea :)

For this method we need one non-breaking compiler modification: a new kind of Expr, UnsafeBlockExpr.

What is UnsafeBlockExpr, and what is it for? The compiler should not attempt to check any "raving lunatic" expressions inside such a block; instead, a registered MarkupBuilder function must return a non-null Expr for it. Otherwise, or if no suitable MarkupBuilder function is registered, the compiler should generate an error along the lines of "no suitable markup builder macro function ...".

And I can say with confidence that any markup could then be implemented with macros only, without modifying the compiler.

An abstract example:

class Main
{
    static function main()
    {
        var myValue = 42;
        var myXml = @:markup(XML) {
            <?xml version="1.0" encoding="utf-8"?>
            <x>
                <ml value=$myValue/>
            </x>
        }
        $type(myXml); // Xml

        var myYaml = @:markup(YAML) {
            foo:
                - "bar"
                - ${myValue}
        }
    }
}

// very stupid simple example of custom builder
class CustomMarkupBuilder
{
    public static macro function init()
    {
        // similarly to Compiler.addGlobalMetadata and something like Context.onAfterTyping but "before"
        Compiler.setCustomMarkupGenerator(":markup", "XML", buildXml);
        Compiler.setCustomMarkupGenerator(":markup", "YAML", buildYaml);
    }

    public static macro function buildXml(srcExpr:UnsafeBlockExpr):Expr
    {
        var src = srcExpr.toString();
        var xml = Xml.parse(src);
        return macro Xml.parse(${srcExpr});
    }

    public static macro function buildYaml(srcExpr:UnsafeBlockExpr):Expr
        return macro null;
}

and call init as initial macro, frag of build.hxml:

--macro "CustomMarkupBuilder.init()"

This is only my hopes and dreams. But I'm sure it would be very cool!

fullofcaffeine commented 6 years ago

Any news here?

EricBishton commented 6 years ago

This type of markup from the Nemerle programming language (http://nemerle.org/About) looks very clean:

def title = "Programming language authors";
def authors = ["Anders Hejlsberg", "Simon Peyton-Jones"];

// 'xml' - macro from the Nemerle.Xml.Macro library which allows inlining XML literals into nemerle code
def html = xml <#
  <html>
    <head>
      <title>$title</title>
    </head>
    <body>
      <ul $when(authors.Any())>
        <li $foreach(author in authors)>$author</li>
      </ul>
    </body>
  </html>
#>
Trace.Assert(html.GetType().Equals(typeof(XElement)));
WriteLine(html.GetType());

I rather like a few aspects of this: First, the operators '<#' and '#>' as markup(?) delimiters are very readable; Second, formatting the xml as a "here" doc (a la bash) makes for very clean code; Third, the keyword (rather than a @macro(something) { stuff here }) is shorter, easier to type, and makes the context very clear.

So, more in a Haxe parlance, I would see it like this:

import Sys;

class Test {
  static function main() {
    var title = "Programming language authors";
    var authors:Array<String> = ["Nicolas", "Simon", "<it>et al</it>"];

    var xml:String = @lang(xml <#
        <html>
          <head>
            <title>$title</title>
          </head>
          <body>
            ${if (authors.size()) @lang(xml <#
              <ul>
                ${for (a in authors) @lang(xml <# <li>$a</li> #>) }
              </ul>
              #> }
          </body>
        </html>
      #>);

    trace(xml);
  }
}

It's not quite as clean because of the nice $when and $foreach that Nemerle has, but it's still fairly nice.

What @lang() does is: 1) Determine which language plugin is to be used, and if it's missing, stop the compilation (IIRC, already implemented in Haxe 4); 2) Concatenate all characters between the beginning and ending tokens (nesting is allowed!); 3) Do interpolation on the collected string (we can disallow that with a '<!#' begin token); 4) Pass the final string off to the language plugin for parsing/processing.
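Step 2 could be prototyped like this (a Python sketch of the delimiter handling only; the '<#'/'#>' operators are as described above, everything else is assumed):

```python
# Collect everything between "<#" and its matching "#>", honoring
# nesting, and return the raw body plus the index after the close token.
def collect(src, start):
    assert src.startswith("<#", start), "must sit on an opening token"
    depth, i = 1, start + 2
    while i + 1 < len(src):
        if src.startswith("<#", i):
            depth += 1
            i += 2
        elif src.startswith("#>", i):
            depth -= 1
            if depth == 0:
                return src[start + 2:i], i + 2
            i += 2
        else:
            i += 1
    raise SyntaxError("unterminated <# block")

body, end = collect("<# outer <# inner #> tail #>", 0)
print(repr(body))  # ' outer <# inner #> tail '
```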

The big benefit with this approach is that the Haxe compiler doesn't have to understand the embedded language.

Can this be done with a macro already? I don't know; it seems like it, except possibly for the token parsing. (It would be a pretty hefty macro to do parsing and validation!)

fullofcaffeine commented 6 years ago

What if we create an external tool to pre-process this? I have two questions/suggestions regarding that:

1) It could possibly be a frontend to the Haxe compiler. It would ignore everything else and pass it over verbatim, but would transform those templates into something Haxe could understand (something like what tink_hxx does nowadays). The only thing is that this binary would have to be called instead of the actual haxe compiler, which could be awkward to set up.

2) Could the Haxe ocaml plugin architecture be used for it? This way this would be implemented as an external project to the core language/compiler.

I agree with @kevinresol that it probably shouldn't be part of core, mainly because of the extreme long release cycles.

EricBishton commented 6 years ago

My thought was to use an ocaml compiler plugin. I'm not sure if it can currently be implemented separately without any new compiler support, but that's the goal. I would expect a number of extension projects for the various types of output. HF would probably like to manage popular technologies (such as XML, HTML, CSS).

If you want mixed code and markup, then that will be difficult to do with just a preprocessor. (However, if all you really want is "here docs," then that can be easily accomplished with a preprocessor, and probably is easily done with macro already).

nadako commented 6 years ago

All these @lang/@markup variants don't really look useful when it comes to reentrancy, tbh; a plain jsx('...') call looks better to me.

Maybe we should consider just supporting JSX on a language level similar to how Reason does and be done with it. Also JSX is really not XML, so the default behaviour should really be just desugaring into method calls.

Long release cycle is something we can fix independently, now when @Simn improved and documented the process :)

benmerckx commented 6 years ago

Maybe we should consider just supporting JSX on a language level similar to how Reason does and be done with it. Also JSX is really not XML, so the default behaviour should really be just desugaring into method calls.

The way Reason handles jsx is pretty clean and well-defined. Might be a good start and would still allow to be somewhat extendable through macros, as long as attribute values and children are valid Haxe expressions.

EricBishton commented 6 years ago

... a plain jsx('...') call looks better to me.

I agree, for the most part. (I like the '<#', '#>' operators -- can't say why.) I was just trying to figure out how to do it without adding a new language keyword for every type of markup that we should ignore (or forcing us to pick just one...). Also, we really don't want to have to add markup parsing and evaluation in the compiler.

EricBishton commented 6 years ago

All these ... don't really look useful when it comes to reentrancy tbh...

@nadako Are you thinking of compile-time or run-time reentrancy issues?

djaonourside commented 6 years ago

The UnsafeBlockExpr solution suggested by @fzzr- is a really needed thing. Haxe gurus, please take notice of it :)

kevinresol commented 6 years ago

@EricBishton Reentrancy is a syntax problem, so it is compile-time.

fullofcaffeine commented 6 years ago

Maybe we should consider just supporting JSX on a language level similar to how Reason does and be done with it. Also JSX is really not XML, so the default behaviour should really be just desugaring into method calls.

I like this idea, it's bold and focuses on keeping things straightforward and simple.

I suggest we consider this as the implementation approach for this proposal and let the core team vote on this, unless we want to wait a couple more months... anyone? :)

ncannasse commented 6 years ago

Maybe we should consider just supporting JSX on a language level similar to how Reason does and be done with it. Also JSX is really not XML, so the default behaviour should really be just desugaring into method calls.

I think that's very short sighted. While today JSX is the hot thing, it might be something else entirely in two years. Language design is about creating solid bridges for the future, instead of single-usage wooden ladders.

nadako commented 6 years ago

Yeah, I agree to that in general. It's just that I don't really see how we can implement arbitrary syntaxes without pluggable parser/lexer or ugly and useless (for jsx at least) heredoc syntax.

EricBishton commented 6 years ago

@nadako - I don't think you can. To get arbitrary syntaxes, you need to be able to arbitrarily extend the parser; something has to parse it. And there must be a way to delineate it, whether using a heredoc delimiter or a meta.

In truth, @fzzr- and I have shown very similar solutions (his UnsafeBlock example also uses string interpolation). From an IDE implementation point of view, having a dedicated operator and being able to treat the inserted language as a string is a simpler implementation. To treat it as an UnsafeBlock is harder, but really only because we can't detect it directly in the lexer (like we can a new operator). At that point, we either have to have meta support in the lexer (which is not good) or all supportable syntaxes have to be Haxe syntax compatible (which is untenable).

EricBishton commented 6 years ago

As another thought... In truth, we don't require a meta at all. We can create the new operator similar to a markdown tag:

import Sys;

class Test {
  static function main() {
    var title = "Programming language authors";
    var authors:Array<String> = ["Nicolas", "Simon", "<it>et al</it>"];

    var xml:String = <#xml
        <html>
          <head>
            <title>$title</title>
          </head>
          <body>
            ${if (authors.size()) <#xml
              <ul>
                ${for (a in authors) <#xml <li>$a</li> #>}
              </ul>
              #>}
          </body>
        </html>
      #>;

    trace(xml);
  }
}

Then, all we've really done is create a new string type that allows/requires an (ocaml) compiler extension to parse and/or verify it. If the plugin isn't available (or there is no name immediately following the opening operator), it's still just a string to Haxe. And, frankly, this is all people are really asking for: an easier way to embed their markup, which often requires its own string delimiters.

(I still like '<#' and '#>' better than triple-backticks (```), though.)

markknol commented 6 years ago

@EricBishton The Haxe parser has to parse <# and #>. If the content in between is unparsed, how can you nest other blocks in it? How does it tell the difference between <# as Haxe code and <# as arbitrary-syntax code?

fullofcaffeine commented 6 years ago

Yeah, I agree to that in general. It's just that I don't really see how we can implement arbitrary syntaxes without pluggable parser/lexer or ugly and useless (for jsx at least) heredoc syntax.

I was under the impression that the new plugin system allows this... no?

EricBishton commented 6 years ago

It is true, the Haxe lexer does have to be extended to use <# and #> as string delimiters. They are easy to add and understand because of the way the lexer works, whereas <div> is ambiguous because (as discussed above) < followed by letters is, by default, a lexer word separation.

Second, the content is not unparsed. It is a Haxe string, subject to normal string interpolation, and therefore must(?) nest. (And, if we want an option for a non-interpolated block, we can add !#, <!# or some other variation.) However, the string itself is unvalidated by the Haxe parser, and could be re-parsed and/or validated by a plugin.

EricBishton commented 6 years ago

BTW, we can come up with all sorts of delimiter combinations. Perhaps {' and '} for interpolated delimiters and {" and "} for non-interpolated. Those actually look like current syntax:

    var xml:String = {'xml
        <html>
          <head>
            <title>$title</title>
          </head>
          <body>
            ${if (authors.size()) {'xml
              <ul>
                ${for (a in authors) {'xml <li>$a</li>'} }
              </ul>
              '} }
          </body>
        </html>
      '};

I kind of get lost looking for the closing '} operator vs. the closing } operator for the interpolated code.

In the end, there are two goals: 1) Easily (e.g. without requiring escape sequences) allow other markup or other language code to be embedded within Haxe; 2) Allow (NOT require) the same to be parsed and/or validated during the Haxe compilation.

To do that, we need to: 1) Choose an unlikely delimiter that is unused in other expected embeddable languages. 2) Have a way to specify the validation.

markknol commented 6 years ago

the Haxe ocaml plugin architecture

I am against a plugin system where one has to use ocaml. The whole point of Haxe is to have one language that rules them all, so that would be a weird contradiction. Also, I wonder how many people are willing to learn ocaml to make such a plugin. I think if everything were structured and available in the well-known macro context, that would be more convenient, no?

EricBishton commented 6 years ago

I am against a plugin system where one has to use ocaml.

I agree wholeheartedly. Unfortunately, that's not what is currently implemented and available; we have the ocaml plugin architecture already in the compiler. I think (though I don't know) that it would be harder to implement a parser/lexer/validator in a Haxe macro than it is in Ocaml.

As I think @ousado said when the plugin architecture was implemented, "All we need now is an Ocaml back-end for Haxe." My corollary is, "All we need is a Haxe (or Neko, or HL) binding for Ocaml."

djaonourside commented 6 years ago

While today JSX is the hot thing, it might be something else entirely in two years.

@ncannasse IMHO you aren't right on this question. Nowadays, using JSX and react-like libraries is the fastest, most convenient, and most advanced way to develop web applications (and some types of mobile ones). I think it will continue to evolve and there's no reason it will be forgotten. So we need more advanced support for JSX and other alien syntaxes in Haxe (or a mechanism that can provide it) than just a macro function with a string param. This could help popularize Haxe among potential js-target users. It's a really big audience.

impaler commented 6 years ago

One cool use case that is not React, would be to see haxeui and other ui libs use it :)

While today JSX is the hot thing, it might be something else entirely in two years.

Apart from adoption and time, I don't know how we could have a good measure of whether something is just a hot thing or not. React has been around for about 5 years and its adoption is quite impressive. Writing ui markup in xml-like formats has obviously been around much longer.

As for jsx, I don't think there have been too many changes apart from the addition of the Fragment syntax sugar <></>. Also, interestingly, there is a project for this in php: https://github.com/hhvm/xhp-lib.

haxiomic commented 6 years ago

What about using a syntax similar to ES6 template literals

Template literals are ES6's answer to DSLs

macro function jsx(string, expressions, ...) {
    // parse jsx, return react object expressions
}

function render() {
    var message = 'hello world';
    return jsx`
        <div>$message</div>
    `;
}

function renderSomethingMoreComplex() {
    return jsx`
        <section>${ render() }</section>
        <ul>
            ${ [1,2,3].map(n -> jsx`<li>$n</li>`);  }
        </ul> 
    `;
}
back2dos commented 6 years ago

It doesn't lock us into a single DSL – implementing XML or JSX can be offloaded to the community rather than the compiler team

Which part about this proposal locks us into a single DSL?

Will be familiar to ES6 JavaScript developers and may be adopted by other languages

I'm inclined to doubt that ES6 developers who're looking for familiarity will go for Haxe when they can use FlowType or TypeScript, especially since both of them have JSX support.


I don't mind template literals. It's a neat feature, although I'm not sure of the improvement here, beyond having a third type of quote, which means you can use single and double quotes inside the string. You can already do:

function render() {
    var message = 'hello world';
    return @jsx'
        <div>$message</div>
    ';
}
kevinresol commented 6 years ago

@djaonourside see https://github.com/HaxeFoundation/haxe/issues/7035#issuecomment-419747476

djaonourside commented 6 years ago

@kevinresol Thanks. I've just noticed and removed the question)

kevinresol commented 6 years ago

I wonder if the following will parse or not according to the proposal:

Simn commented 6 years ago

https://github.com/HaxeFoundation/haxe/pull/7438