Closed kevinresol closed 6 years ago
Just wanted to check in. I personally don't use XML and think it's something from 2005 :) But I get that it's probably handy to have for web development, with all the React hype and stuff.
Parsing-wise I think you're right, we can switch into xml mode if expression starts with <
, but I'm not sure about actual data structures. For proper macro support, I think there should be a new structure that has position info and also holds Expr
for interpolated expressions. As for the non-macro interpolation, this requires some thought, because IIRC current string interpolation implementation is a bit hacky.
Also, IIRC, xml supports all kinds of features, like namespaces, PCDATA/CDATA and others - do we need to support those as well? Doesn't seem to be very useful for react-like component definition, but then again, if that's the only use for xml, maybe we shouldn't add it in the first place...
I would like to add that jsx doesn't convert xml to react data types, but it converts the inline string to react data types. I guess you're better off having a macro that expects a jsx string and converts that to the proper data types instead of having xml as an intermediate data structure. That could look like this: jsx("<myXml>{a}</myXml>")
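For reference, a minimal sketch of such a string-taking macro (the name `Jsx.jsx` and its behaviour are hypothetical; real libraries like haxe-react differ in detail, and the pattern assumes the Haxe 3-era single-argument `CString`):

```haxe
import haxe.macro.Context;
import haxe.macro.Expr;

class Jsx {
  // Hypothetical entry point: jsx("<myXml>{a}</myXml>")
  public static macro function jsx(source:ExprOf<String>):Expr {
    return switch source.expr {
      case EConst(CString(s)):
        // A real implementation would parse `s` here and emit
        // framework-specific construction calls (e.g. React.createElement).
        macro $v{s}; // placeholder: just re-emit the string
      default:
        Context.error("jsx() expects a string literal", source.pos);
    }
  }
}
```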
I personally don't use XML and think it's something from 2005
Yes it is not 2005 but hey, let's face it, we are still writing a lot of HTML. While XML may not be exactly the same thing as HTML, supporting inline XML will really help writing template engines IMO.
Just did a quick search and I already found a few libraries (or externs) implementing functions like div(attr, children)
, etc. And I see these as evidence that hierarchical data representation has its uses in the language.
https://github.com/benmerckx/ithril https://github.com/fponticelli/doom https://github.com/back2dos/js-virtual-dom/blob/master/src/vdom/VDom.hx#L34
i would like to add that jsx doesn't convert xml to react data types, but it converts the inline string to react data types.
First of all I have already mentioned jsx('<xml.../>')
as an alternative in my proposal and listed some drawbacks of such an approach. In real jsx you can write something like var element = (<div>my\ntext</div>)
and the string approach would require you to write jsx('<div>my\\ntext</div>')
(note the extra backslash). I think this is sufficient to show that jsx is actually inline xml (or whatever-ML you want to call it if it is not xml) and not a string.
p.s. About the extra backslash, we might be able to mitigate that in the current macro-based jsx parser. But the fact is, wrapping things in strings is really inconvenient and will eventually cause problems.
I don't want to underestimate the effort needed to implement inline xml in haxe, because I don't really have any knowledge about the compiler. But I do see that such an addition is going to make haxe a much richer language, especially when you have the ability to manipulate such xml-data in macros.
I don't think adding this is gonna hurt and the implementation is quite clear to me, but if we're gonna do this, we need a complete specification of what XML features we are going to support and why, and we also need to design a data structure for macros (e.g. a new EXml(x:XmlData) node, where XmlData is the XML AST containing positions and "interpolated" haxe.macro.Exprs).
I just want to mention that this feature can be a valuable addition for any xml-based framework (e.g. every UI framework or mxml implementation etc)
I don't think adding this is gonna hurt and the implementation is quite clear to me, but if we're gonna do this, we need a complete specification of what XML features we are going to support and why, and we also need to design a data structure for macros (e.g. a new EXml(x:XmlData) node, where XmlData is the XML AST containing positions and "interpolated" haxe.macro.Exprs).
I think a good starting point would be to reference the current Xml class in std lib.
I think a good starting point would be to reference the current Xml class in std lib.
The std Xml
is not position aware, so it's not a very good fit.
Particularly if parsing the aforementioned JSX syntax (which is quite far from being well formed XML) is a desired goal, using actual XML is not going to be of any help.
It would be more conducive to make the parser as liberal as possible (within reason) and just add a CXml(s:String)
to Constant
. Then just let everyone handle it the way they wish.
Some people will want interpolation, others not. I really don't think the parser should decide about that.
On the other hand any EConst(CXml(_))
that makes it to the typer could indeed be transformed into an expression that yields an XML at runtime, with interpolation and what not. The String->Expr
transformation for that could be exposed through the macro API, so anyone can call it as they see fit (be it on CXml payload or subsets of it or simply strings loaded from the file system).
I thought about that as well, but how do you let the compiler decide what is valid then? Anything goes as long as it starts with < and ends with >? Because people will choose different ways to interpolate values/arguments. Which might also upset IDEs, so we'll never have true support... In any case I'm a fan of exposing this structure to a macro call so it can form an expression, but outlining the syntax could help.
With anonymous and abstract types we currently have a lot of control over JSON, and I would like any amendment to XML to allow similar deconstruction of XML into user structures. https://haxe.org/manual/std-Json-parsing.html I am not exactly sure how this should or would work, but perhaps it's a consideration when adjusting the xml implementation.
I thought about that as well, but how do you let the compiler decide what is valid then? Anything goes as long as it starts with < and ends with >?
Well, I guess it could be a little more sophisticated. I don't think we need comments or CDATA, so in fact I believe it will do to accept anything from <identifier
until </identifier>
, while allowing for balanced <identifier
- </identifier>
pairs in between.
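A rough sketch of that liberal scan (a hypothetical helper, not compiler code; it deliberately ignores comments, CDATA and `<` inside attribute values, and does not distinguish `<div` from `<divx`):

```haxe
class XmlScan {
  // Given `src` where an element starts at `start` (src.charAt(start) == "<"),
  // return the index just past the matching closing tag, or -1 if unbalanced.
  public static function scanElement(src:String, start:Int):Int {
    var open = ~/^<([A-Za-z_][A-Za-z0-9_.:-]*)/;
    if (!open.match(src.substr(start))) return -1;
    var tag = open.matched(1);
    var depth = 0;
    var i = start;
    while (i < src.length) {
      if (src.substr(i, tag.length + 1) == "<" + tag) {
        depth++;
        var gt = src.indexOf(">", i);
        if (gt == -1) return -1;
        if (src.charAt(gt - 1) == "/") depth--; // self-closing tag
        i = gt + 1;
        if (depth == 0) return i;
      } else if (src.substr(i, tag.length + 3) == "</" + tag + ">") {
        depth--;
        i += tag.length + 3;
        if (depth == 0) return i;
      } else {
        i++;
      }
    }
    return -1;
  }
}
```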
Because people will choose different ways to interpolate values/arguments. Which might also upset IDEs so we'll never have true support...
Well, there are surely many opinions on this, but I think a good EDSL with lousy IDE support is far better than a lousy EDSL with good IDE support - because reading is more important than writing, after all. That said, returning an EDisplay from the macro that processes the CXml would give you completion anyway. The open question here is how to pass the information about completion being desired in a string literal to said macro. That does strike me as achievable though.
With anonymous and abstract types we currently have a lot of control over JSON, and I would like any amendment to XML to allow similar deconstruction of XML into user structures.
As a rule, that is impossible, because while JSON is a subset of JavaScript that almost seamlessly maps onto Haxe, with Xml such a thing is impossible in general.
Haxe could define a general mapping of xml to json by forcing a convention, perhaps allowing the '-' to be overridden with another character.
<person sex="female">
<firstname>Anna</firstname>
<lastname>Smith</lastname>
</person>
becomes
{
"person": {
"-sex": "female",
"firstname": "Anna",
"lastname": "Smith"
}
}
Or similar see: http://www.utilities-online.info/xmltojson/#.WCXpcNxWdR0
This handles only a very limited subset of Xml. You can't just automatically assume that every child node models a property.
Consider this case:
<playlist>
<song>file1.mp3</song>
</playlist>
vs.
<playlist>
<song>file1.mp3</song>
<song>file2.mp3</song>
</playlist>
becomes
{
"playlist": { "song": "file1.mp3" }
}
vs.
{
"playlist": {
"song": [
"file1.mp3",
"file1.mp3"
]
}
}
These two objects have different types, i.e. { playlist: { song: String }}
vs { playlist: { song: Array<String> }}
Note that in JSON you simply do not have this problem. You would encode the first case as { "playlist": [{ "song": "file1.mp3" }] }
and it would still simply be an array, even if it is just one element. You can even do { "playlist": [] }
for an empty playlist. Can't do that in XML. That is, unless you have a DTD or something.
This is not a trivial matter and lies out of the scope of this proposal. If you have something concrete in mind, you should start a separate proposal.
Well, there are surely many opinions for this, but I think a good EDSL with lousy IDE support is far better than a lousy EDSL with good IDE support - because reading is more important that writing, after all.
What I meant was, with a few basic rules (even as basic as what you proposed), I hope IDEs get at least the formatting and coloring right. I don't really care about autocompletion or other features here, but for example IntelliJ regularly screws up valid haxe code (marking syntax errors which aren't) which sometimes makes the whole document unreadable. So yes, reading should be priority. Also as an example, writing Ithril templates in FlashDevelop is still a pain because it does this automatic indenting which is never correct in that context, so it makes me have to correct every line I write afterwards. I guess I should really be filing bug reports :)
So Haxe is going to become another bloated programming language with proposals like these? Seriously, for things like this, just use macro... Or maybe you guys should also add on
, off
, yes
, and no
as aliases to true
and false
to Haxe, because why not? CoffeeScript has it. AS3 had inline XML. I see no difference. And for the templating you are mentioning, I think Haxe supports string interpolation, right? So why can't you just use strings for XML? From your example:
var i = someValue();
return <div>{i == 0 ? <span>nothing</span> : <span>We have {i} items</span>}</div>
can be rewritten using string interpolation to this:
var i = someValue();
return '<div>${i == 0 ? '<span>nothing</span>' : '<span>We have $i items</span>'}</div>'
and if you want to parse that Xml, then just use the Xml parser in Std.
@deathbeam JSX constructs (virtual) DOM trees, not strings, which is why string interpolation is not helpful. What's next? "Unbloat" Haxe by removing all expressions except string literals and let people define implementation through brainfuck code? How elegant! Also, you're comparing apple trees and oranges. Adding yes
and no
as aliases for true
and false
is trivial. Having a clear and concise notation for describing view hierarchies is not. Maybe this is a bad way to achieve it and maybe you even have a good argument why. Too bad you are withholding it from us :P
"Unbloat" Haxe by removing all expressions except string literals and let people define implementation through brainfuck code?
Nah, Haxe source file is a string itself already. =P
The std Xml is not position aware, so it's not a very good fit.
What I really meant was to have a look at the currently supported XML types, which tries to answer @nadako:
what XML features are we going to support
Here is my view:
- DocType and ProcessingInstruction: obviously not needed
- Document: just a special case of Element, not needed
- Comments: we can just use // or /* */ when xml becomes valid haxe code, not needed
- PCData: every xml is just PCData, I don't know when it is useful even in normal xml, not needed
- Element: obviously needed
- CData: I do think this might be useful, let's discuss

can be rewritten using string interpolation to this:
@deathbeam If you read carefully, the said solution is macro-based, which means it is quite limited in terms of string interpolation. Also the quotes ('
& "
) are going to mess up things really really quickly.
The whole point of this proposal is about compile-time parsing of XML-alike structures. So any runtime solution is pretty much unrelated.
@back2dos
I was interested in your point about difficulties with parsing XML into suitable structures, but I wonder if the limitation is more down to the lightweight XML OCaml library used rather than the full problem space?
"PXP is an XML parser for O'Caml. It represents the parsed document either as tree or as stream of events. In tree mode, it is possible to validate the XML document against a DTD.
The acronym PXP means Polymorphic XML Parser. This name reflects the ability to create XML trees with polymorphic type parameters. "
But not sure how something like this might look if used to generate Haxe structures of tree or callback.
@kevinresol I'm confused: do you want to use this for JSX or not? Because I wonder how <Component {...props} />
would fit into the semantics you propose?
I am not specifically after jsx actually. I am just inspired by it, and realized that Haxe lacks some native syntax to represent hierarchical structures.
In fact, <Component {...props} />
is something that I love and hate. Love is for that it is so simple to extend/passthrough props. Hate is that it is so hard to map to Haxe's type system.
If I am to answer the actual question, I think there are at least two problems involved here: one is xml interpolation, the other is the spread operator. Since the latter is not in the scope of this proposal, I will just skip it.
For interpolation, I think the attributes in an element could be parsed as Expr.
For example <img src="url"/>
would be parsed as:
EXml(XElement("img", [EBinop("=", "src", "url")], []))
(omitted those EConst's for simplicity)
where XElement
is defined as XElement(tag:Expr, attr:Array<Expr>, children:Array<Expr>)
and <Component {...props} />
is just XElement("Component", [EUnop("...", "props")], [])
That should answer your question. But what bothers me now is how we should type such thing...
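To make that concrete, the hypothetical AST additions sketched in this comment could be declared along these lines (names and shape follow the proposal; nothing here exists in haxe.macro today):

```haxe
import haxe.macro.Expr;

// Hypothetical XML AST for macros. Tags and attributes are Exprs so that
// interpolation (<img src={url}/>) and spreads (<Component {...props}/>)
// stay representable.
enum XmlData {
  XElement(tag:Expr, attr:Array<Expr>, children:Array<Expr>);
  XText(value:String);
}

// The matching (hypothetical) new ExprDef constructor would be:
// EXml(x:XmlData);
```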
Haxe lacks some native syntax to represent hierarchical structures.
Actually, anonymous structures are hierarchical
Actually, anonymous structures are hierarchical
Well, let's not play word games about that. Maybe my chosen word isn't precise enough, but the difference between json (anonymous structures) and xml has already been explained.
PXP seems to have an approach to parsing:
"Once the document is parsed, it can be accessed using a class interface. The interface allows arbitrary access including transformations. One of the features of the document representation is its polymorphic nature; it is simple to add custom methods to the document classes. Furthermore, the parser can be configured such that different XML elements are represented by objects created from different classes. This is a very powerful feature, because it simplifies the structure of programs processing XML documents."
So if the same types of structures can be replicated in Haxe then this would seem useful?
" Note that the class interface does not comply to the DOM standard. It was not a development goal to realize a standard API (industrial developers can this much better than I); however, the API is powerful enough to be considered as equivalent with DOM. More important, the interface is compatible with the XML information model required by many XML-related standards."
Reading on https://gitlab.camlcity.org/gerd/lib-pxp/blob/master/doc/README.xml it seems that PXP might help give Ocaml the extra xml stuff we might need? SAX etc... and it provides linear processing with file size so not too slow?
I am the first to admit that understanding the role of PXP is at the edge of my understanding and perhaps it's too much of a guess, but I'm surprised it was dismissed or ignored without any comment on why.
I am curious if it already has solutions that we are trying to reinvent and if it's a matter of thinking about how to connect it usefully to Haxe; looking inside to see how it encodes xml, it seems to have only a few class/enum types in its tree.
Certainly the ability to remap xml to custom structures sounds pretty neat.
Somewhere it talks of an example of custom mapping xml to TK, this is exactly the type of stuff we need for projects like haxeui.
@Justinfront Thanks for the search! It will take me some time to digest and after that I will try to add that into the proposal. Btw, here is a rendered version of the readme for easier reading: http://projects.camlcity.org/projects/dl/pxp-1.2.8/doc/README
Hey good discussion.
What I'm reading runs around "should we add inline XML support".
Problems: the generated output should be able to target different factory functions (React.createElement, h, ...) but also allow low-level optimisations (React accepts pre-computed JSON literals instead of the function call).

Honestly I think the compiler should be "dumb" and only count angle brackets and let a macro figure out the rest, eg. this would be valid:
var foo=21;
return <yadayada ${foo*2}>yolo</whatever>
The result should be passed as a string or some loose AST structure (just whatever is between each <>
as a string) to a macro which will take care of validating and transforming into Haxe AST.
Note: code completion support is critical for Haxe sub-expressions if we don't want to be laughed at by TypeScript users. Hence why I suggest sticking to String-interpolation-like syntax for those.
What if var tree = <tag>...</tag>;
get parsed as var tree = @:xml "<tag>...</tag>"
? So that macros can pick up the @:xml
meta and process the string accordingly. Then if the expression is untreated (i.e. the @:xml
meta remains), the typer could attempt to parse it into an xml object.
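A sketch of a macro consuming that desugaring (everything here is hypothetical; `buildFromXml` just delegates to the runtime parser as a placeholder, and the pattern assumes the Haxe 3-era single-argument `CString`):

```haxe
import haxe.macro.Context;
import haxe.macro.Expr;

class XmlMeta {
  // Walk an expression tree and replace every `@:xml "..."` string
  // with framework-specific construction code.
  public static function process(e:Expr):Expr {
    return switch e.expr {
      case EMeta({name: ":xml"}, {expr: EConst(CString(s))}):
        buildFromXml(s, e.pos);
      default:
        haxe.macro.ExprTools.map(e, process);
    }
  }

  static function buildFromXml(s:String, pos:Position):Expr {
    // Placeholder behaviour: hand the raw markup to the runtime parser.
    return macro @:pos(pos) Xml.parse($v{s});
  }
}
```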
I’d prefer Haxe itself produced Expr
objects describing the tree so that there is one official way to represent it. It could then enforce a consistent way of resolving sub-expressions and different macro authors won’t handle the same data differently. I think passing macros a string would make parsing complex Haxe subexpressions in attribute and content value expressions hard to get right and result in weird hacks like XML-escaping Haxe expressions or writing Haxe expressions prior to the XML and only referencing variable names inside of the XML.
If new AST expression types are created, this would give macro authors the cleanest API to work with.
I do like the idea of just converting it to an expression using the existing AST, though. Instead of creating new AST elements, a Xml.createElement()
call could be emitted. I think Xml.createElement()
could be easily modified to be like .NET's new XElement(): just add optional attributes
and children
parameters. For macro authors, it would be more complex to parse the AST, but you could completely write away the Xml
API by replacing it with your React.createElement()
calls or whatnot. As long as the structure of the created expression is well-specified and considered a frozen API, this should be sufficient for implementing macro transformations on top of it. I’ve tried to give an example of what I think the generated expression should look like in this gist.
Regarding the typing issue, to deal with passing non-string types to attributes or non-XML expressions as children, all attribute expressions should be unconditionally wrapped in Std.string()
(even if they’re strings, to make macro writing easier). Node content that is not the Xml
type should also be unconditionally wrapped in Std.string()
. That would allow you to insert both Xml
fragments as children and text through Haxe expressions.
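For illustration, under that scheme `<p class={cls}>Hello {name}</p>` might desugar roughly like this (assuming the hypothetical extended `Xml.createElement` with attribute/children parameters described above; the actual std `Xml.createElement` only takes a name):

```haxe
// Hypothetical desugaring; cls and name are locals in scope.
var node = Xml.createElement("p",
  { "class": Std.string(cls) },          // attributes always stringified
  [
    Xml.createPCData("Hello "),
    Xml.createPCData(Std.string(name))   // non-Xml children stringified too
  ]
);
```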
As this discussion continues, it doesn’t seem to be heading towards any consensus. We now have three proposed ways of doing inline XML:

- a dedicated AST node holding an XML tree (e.g. EXml(XmlData))
- an @:xml "..." metadata on a plain string literal
- a new ExprDef.EConst with Constant.CXml (patterned after the CRegexp precedent)

We have these issues:

- whether the compiler or a macro handles interpolation (e.g. whether <x y='${1*4}'/> and <x y="${1*4}"/> would yield equivalent documents)
- whether "" vs '' quoting is significant
- if the payload ends up as an '' string, dollar signs need to be escaped

How to get consensus? xD
How to get consensus? xD
Start with the minimum, which is what Kevin proposed above. There can be a Context.parseXml(s:String, pos:Position)
that exposes the transformation applied to untreated @:xml "someString"
. That way, macro authors can choose, rather than having some particular API forced on them.
I agree a Context
callback to transform the XML AST would be the most extensible approach.
Quite honestly though, I'd like to just basically embrace the JSX syntax as-is (yet to be processed by a custom Context
handler): JSX is much more than react and it makes fully sense in itself as a XMLish DSL for declaring any sort of objects.
Having the Haxe compiler "recognise" XMLish, including malformed garbage!, can actually prove to be very tricky.
Quite honestly though, I'd like to just basically embrace the JSX syntax as-is (yet to be processed by a custom Context handler): JSX is much more than react and it makes fully sense in itself as a XMLish DSL for declaring any sort of objects.
That is your particular assessment and while everyone is free to have their own, you can't just assume it's the right choice for everyone. It is not for at least two very good reasons:
1. Namespace tags are not supported; React JSX is not XML. Therefore it can't represent MXML or XAML or vue.js (which uses v-bind:property for bindings). But sure, let's just throw everything else away in favor of what a bunch of dudes from Facebook (a company that would rather spend time trying to "fix" PHP than teach its employees a decent language) came up with ...
2. <span>Hello<img src="world.png"></span> won't parse, which, regardless of how stupid that may be, is the correct way to express it in HTML5.

JSX is mediocre at best. I still think it would be good to have decent support for it in Haxe, for various reasons, the most important one being that it should be easy to migrate from JS to Haxe. It is however extremely shortsighted to outright constrain ourselves to one of the many things that happens to be a la mode in the hypefest that the JS community is.
Either way, you should all understand that as long as people contribute to this thread by insisting that their particular vision of handling these XML literals is the universally right one, we will not make any progress and we'll not have anything. That's self-centered at worst and quixotic at best. The way forward is to have a radically simple basis upon which we can explore various syntaxes and then reason about tangible results rather than abstract arguments or appeal to personal preference/popular opinion.
Regular XML would be usable if we can do custom AST transformations.
We can argue around the limitations of JSX (which we don't have to follow) but I don't think FB hating helps the discussion. JSX is core to many non-FB implementations and use cases.
My main point is about the fact that such XML support needs proper support for handling bad markup. And ideally I'd like the thing to be loose enough to support things like JSX but we can compromise and for instance not support recursive XML in expressions in XML.
Start with the minimum, which is what Kevin proposed above. There can be a
Context.parseXml(s:String, pos:Position)
that exposes the transformation applied to untreated@:xml "someString"
. That way, macro authors can choose, rather than having some particular API forced on them.
@back2dos Could you clarify what you mean and perhaps write a PR against @kevinresol’s repo with this described in spec-level detail? To verify, you’re suggesting:
1. The special characters involved are ", ', < and &.
2. A < with no lvalue attempts to parse it as XML until it finds the closing tag. It either emits @:xml "<x/>" or aborts with an XML parsing error. The emitted string must be the exact XML as found in the source file, not a serialization of what the parser parsed (to allow tricks like single-quote attributes as interpolated strings by macro authors).
3. Context.parseXml(s:String, p:Position) would implement some “standard”, as yet unspecified, way of doing interpolation on the XML. All it would use p for is to find the context from which to get local variables or otherwise resolve symbols used by inline expressions, e.g. module resolution. So you could call it with a macro-generated XML string if you want to. Though I actually have trouble seeing why this is necessary, especially if going for simplicity. Wouldn’t macro authors already have sufficient tools to do custom interpolation and XML parsing using Xml.parse() and MacroStringTools.formatString()? Could this part of the proposal be dropped or become its own proposal?

Good discussion. I also want to share a couple of thoughts. :)
Not sure that I want inline-XML in my code, but I definitely want more powerful string support. Haxe's Json-like structures have compile-time type checking; how about this aspect for inline-XML? How about also adding XMLSchema, XSLT and XPath support into the Haxe core? :)
This is already mentioned above, but I want to draw your attention to the fact that the problem can be solved by improving the current capabilities of Haxe: 1) support for "raw string" literals as in other languages (solving the escape-character problem), 2) (optional) adding a mechanism for user-defined literals (syntactic sugar for macros).
For example, there is many "Raw strings" implementations in other programming languages:
- C++ Raw string literal - http://en.cppreference.com/w/cpp/language/string_literal
- C# Verbatim string literal - https://msdn.microsoft.com/en-us/library/aa691090(v=vs.71).aspx
- Python raw strings - https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals
- PHP Nowdoc strings - http://php.net/manual/en/language.types.string.php#language.types.string.syntax.nowdoc
C++ User defined literal - http://en.cppreference.com/w/cpp/language/user_literal
PS. Also Bytes literal maybe useful? :)
XML from a functional perspective ... https://www.xml.com/pub/a/2001/02/14/functional.html
We have voted to reject this.
Adding an XML parser would be out of scope for Haxe, even if we don't consider the time and energy that would be required to support this. We acknowledge that there are some situations where inline XML "looks nice", but at the same time we believe that Haxe already provides enough features to handle this.
We acknowledge that there are some situations where inline XML "looks nice", but at the same time we believe that Haxe already provides enough features to handle this.
Could you please shine a light on what those features are?
We have voted to reject this.
Adding an XML parser would be out of scope for Haxe, even if we don't consider the time and energy that would be required to support this. We acknowledge that there are some situations where inline XML "looks nice", but at the same time we believe that Haxe already provides enough features to handle this.
Is this considered?
You could have made Haxe Great Again. Sad!
@kevinresol that wouldn't work either because it also means the compiler adds an XML parser to identify the beginning and end of somehow well formatted XML.
@Simn any chance to consider a (possibly even Ocaml based) plugin system to add DSLs?
@Simn any chance to consider a (possibly even Ocaml based) plugin system to add DSLs?
This is an interesting idea, we could maybe load custom lexers/parsers with Dynlink and use those instead of the default one. That would require some refactoring, but I think it's possible. Not sure if it's worth it just for that JSX stuff.
I'm a bit disappointed as well that this one is fully rejected without providing any real explanation.
@kevinresol that wouldn't work either because it also means the compiler adds an XML parser to identify the beginning and end of somehow well formatted XML.
I don't mind accepting "malformed" XMLish stuff. But there has to be a way to determine the "end" of the expression.
This is an interesting idea, we could maybe load custom lexers/parsers with Dynlink and use those instead of the default one. That would require some refactoring, but I think it's possible. Not sure if it's worth it just for that JSX stuff.
Seems very much worth it as the potential goes far beyond jsx stuff :)
just some food for thought, instead of xml you could use something like:
@div {
    className = "myClass";
    name = "whatever";
    @p "hello";
    @div _;
    @div {
        className = "mySubClass";
        foo = "bar";
        enabled;
        @p { "hello"; @bold "text2"; "text3"; };
        @headline "myHeadline";
        @copy "myCopyText";
    };
};
Just like everyone and apart from the obvious new possibilities this could lead to I think there is one other major argument that should be taken seriously (even if it may look of little importance or retrograde):
make Haxe more popular: make it approachable by every web developer who is only used to html and js.
Honestly, most developers don't care about abstracts, typedefs, macros... they are fantastic and fun to play with, but they aren't features that most devs want to have/use from the get-go. As an example, most devs don't even know how react works internally, how the binding works and how jsx gets transpiled to function calls. But it uses a syntax they are familiar with: js and html/xml. Even if the jsx thing is really far from being good and involves a lot of tricks and "impossible things to do" that nobody should have to deal with, especially not in 2017...
And given the exponential rise of js and html, Haxe does not truly compare anymore in terms of being "crossplatform". Many use html/js to develop mobile and "native" apps (phone gap, react native, electron). For most uses, html and js are good enough and we don't need that special "native performance" touch.
My point is: most devs don't necessarily need hardcore features like macros; they want something that looks like what they are used to but that is a bit more evolved/improved and functional. And haxe could provide them with exactly that. And after that, if they want more, they will have typedefs, abstracts, pattern matching, ... and macros.
This would be some kind of "opening feature" with which html/js devs will feel at ease, one that resembles what already exists but with more features, done simpler and properly.
Target html/js more seriously and we could have that popularity growth that I suppose we all want haxe to have in order to be the new serious "platform"/"toolkit" of the web.
As for the implementation details of such a feature, I would propose something simple:
<Point id="p01" x=10 y=12.5 />
would be transpiled into haxe code
var p01:Point = new Point(10, 12.5);
or to avoid constructor parameters
var p01:Point = new Point();
p01.x = 10;
p01.y = 12.5;
The parent>child structure could be managed in very simple ways:
<Curve id="curve01">
<Point id="p01" x=10 y=12.5 />
</Curve>
would give
var curve01:Curve = new Curve();
var p01:Point = new Point();
p01.x = 10;
p01.y = 12.5;
curve01.addChild(p01);
You could express GUI in a more familiar and efficient way, as well a scene graphs, simple data structures up to a full featured front and back end template system.
But the most important part is that you could add Haxe code straight into attributes or in between nodes as such:
function giveMeYValue(offset:Int):Float {
    return Math.random() * offset;
}
...
<Curve>
    for (i in 0...10) {
        <Point x={Math.random() * 50} y={this.giveMeYValue(i)} />
    }
</Curve>
giving the JSX approach a real boost or what it should have been.
I guess adding this to Haxe will be a lot of work. Maybe we could try to start with the minimal and have only the xml like syntax being a valid syntax (that will be discarded at compile time) and let macros rewrite/convert it into proper haxe expressions... ?
That way we can write different implementations for use in different contexts ?
Maybe we could start by writing a tool that runs before the haxe build process, parses every hx file and converts all the xml-like data into its haxe counterpart. That could be a solution to test the case, but we wouldn't have auto completion and good error reporting.
Anyways, I really think you should reconsider this feature request as it might be the next thing that will give Haxe a real "perceived" advantage compared to js/typescript.
I still have to use javascript in big projects because of clients not being comfortable with a technology that they either never heard of or are not comfortable with because it is by far less popular.
@NoRabbit What is the advantage of writing <Point id="p01" x=10 y=12.5 />
vs new Point("p01", 10, 12.5)
or just {id:"p01", x:10, y:12.5}
in normal Haxe code?
"My point is: most dev don't necessarily need hardcore features like macros"
"Maybe we could start by writing a tool that runs before the haxe build process and parses every hx files and convert all the xml like data into their haxe counterpart."
Sooo, you want to write a preprocessor to edit source files? But avoid macros? 😛
Personally, I don't see why you want to blend markup (Xml) inside normal code, and not just separate your data/markup into separate files or use a different data structure that suits the language.
E.g. I think it shouldn't be too hard to write a MyComponent.hx class that has a build macro that picks up its own MyComponent.xml file next to it.
Then the macro can translate/replace some expressions, apply string interpolation (with MacroStringTools.formatString) and do whatever you want with the result (put it in a field, generate a render function). I think the newline-escaping issues are solved that way too.
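That build-macro idea might look roughly like this; a hypothetical sketch under the assumptions above, not a tested implementation (the `Templated` name and the single `template` field are invented):

```haxe
import haxe.macro.Context;
import haxe.macro.Expr;
import haxe.macro.MacroStringTools;

class Templated {
  // Hypothetical build macro: loads MyComponent.xml from next to
  // MyComponent.hx and exposes its $-interpolated content as a
  // static `template` field.
  public static macro function build():Array<Field> {
    var fields = Context.getBuildFields();
    var cls = Context.getLocalClass().get();
    var hxFile = Context.getPosInfos(cls.pos).file;
    var xmlFile = haxe.io.Path.withExtension(hxFile, "xml");
    var raw = sys.io.File.getContent(xmlFile);
    // Treat the file content as a single-quoted string, so $foo and
    // ${expr} become real expressions at compile time.
    var value = MacroStringTools.formatString(raw, cls.pos);
    fields.push({
      name: "template",
      access: [AStatic, APublic],
      kind: FVar(macro :String, value),
      pos: cls.pos
    });
    return fields;
  }
}
```

It would be wired up as `@:build(Templated.build()) class MyComponent {}`; since the field is static here, only static or imported identifiers would resolve in the interpolations, of course.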
I think most of this can be part of a library, unless I'm missing something. Also, I really don't think new language features will make Haxe more popular (please prove me wrong when arrow functions are officially released 😄); greatly maintained libraries with good architecture, a big community and decent documentation will.
Off-topic:
I still have to use javascript in big projects because of clients not being comfortable with a technology that they either never heard of or are not comfortable with because it is by far less popular.
Isn't that how you present Haxe? I have used Haxe/JavaScript for several big clients. It is mostly just sold as HTML5/JavaScript, since that is what it is at the end of the day. I don't see why you make this distinction between writing JavaScript yourself and compiling to it; does that matter? We do mention that the codebase is built with Haxe, but most of the time that means about as much as mentioning that you use Photoshop for images; it doesn't matter, since it is just a tool. You shouldn't limit yourself when it comes to good tools.
And given the exponential rise of JS and HTML, Haxe does not truly stand out anymore in terms of being "cross-platform". Many use HTML/JS to develop mobile and "native" apps (PhoneGap, React Native, Electron). For most uses, HTML and JS are good enough and we don't need that special "native performance" touch.
You still can use Haxe/JavaScript for greater coding experience.
Personally, I don't see why you want to blend markup (Xml) inside normal code, and not just separate your data/markup into separate files or use a different data structure that suits the language.
No, this kind of statement is not useful, because one can substitute almost any phrase in the sentence and produce a new argument. E.g.:
Personally, I don't see why you want to use Haxe, and not just plain JS.
Personally, I don't see why you want to take a flight, and not just walk from China to Spain.
People are proposing what they think can improve the language here. Feel free to criticize with concrete points so that we can improve, but abstract statements like these only pull us back.
To actually answer that statement: separating the code and the XML (or whatever data structure) hurts readability, especially when the XML references variables from the Haxe side. Also, think about why arrow functions finally got approved even though they can be achieved with a macro.
In fact, many more supporting points have been mentioned a few times in this thread already. So let me just quickly recap some of them:
Also, one can definitely express such data with plain functions:
div({style: "display:none"}, [
span({}, ['Hello, World!']),
]);
Personally, I don't see why JSX was invented, when the data can be represented with plain functions.
Also, one can definitely express such data with metadata + macro:
@div {
className = "myClass";
name = "whatever";
@p "hello";
@div _;
@div {
className = "mySubClass";
foo = "bar";
enabled;
@p { "hello"; @bold "text2"; "text3"; };
@headline "myHeadline";
@copy "myCopyText";
};
};
Personally, I don't see why you want inline XML, when the data can be represented with metadata + macro.
( ... loop for any other alternatives ... )
Using a string literal + macro has its drawbacks (see the proposal content):
return jsx('<div>{i == 0 ? jsx('<span>nothing</span>') : jsx('<span>We have {i} items</span>')}</div>');
Why not do:
return jsx('<div>{i == 0 ? jsx(<span>nothing</span>) : jsx(<span>We have {i} items</span>)}</div>');
True, you'll have to parse it to find the sub-jsx and handle the conditional, but somewhere, someone will have to do that anyway.
If a macro parses it, at least you'll have it in the format you want/need.
If the compiler does it, it's a potentially huge addition to the parser, and you will have to post-process anyway.
Why not do:
return jsx('<div>{i == 0 ? jsx(<span>nothing</span>) : jsx(<span>We have {i} items</span>)}</div>');
In short, because writing "jsx" is clumsy.
In fact, this suggestion is technically the same as asking "Why not use plain functions?":
return div(i == 0 ? span('nothing') : span('We have $i items'))
If the compiler does it, it's a potentially huge addition to the parser
If the benefit is as huge, then it is worth doing.
and you will have to post-process anyway.
The way I see it, inline XML is kind of like the macro API: both provide a foundation for people to build useful things. Just like macro Expr, people have to process it anyway, but it has proven very useful.
Seems that no one is interested in discussing a pre-proposal issue, so I will just write one.
Rendered version here
Note:
The whole point of this proposal is compile-time parsing of XML-like structures, so any runtime solution is pretty much unrelated.
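As a point of reference for such a proposal, the compile-time data structure discussed earlier in the thread (XML nodes carrying position info and holding Expr for interpolated expressions) might look roughly like this; the type and constructor names are invented purely for illustration:

```haxe
import haxe.macro.Expr;

// Hypothetical AST for compile-time XML literals: every node carries
// position info, and attribute/child slots can hold interpolated
// Haxe expressions (haxe.macro.Expr).
enum XmlNodeKind {
  XElement(name:String, attrs:Array<XmlAttr>, children:Array<XmlNode>);
  XText(value:String);
  XExpr(e:Expr);          // {haxeExpression} between nodes
}

enum XmlAttrValue {
  ALiteral(s:String);     // x="10"
  AExpr(e:Expr);          // x={Math.random() * 50}
}

typedef XmlAttr = {
  name:String,
  value:XmlAttrValue,
  pos:Position
};

typedef XmlNode = {
  kind:XmlNodeKind,
  pos:Position
};
```

With positions on every node and attribute, macros consuming the structure could report errors and drive completion at the exact source location, which a plain runtime Xml value cannot do.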