facebook / react

The library for web and native user interfaces.
https://react.dev
MIT License

TypeScript support for JSX #759

Closed thorn0 closed 9 years ago

thorn0 commented 10 years ago

Would be great if JSX recognized TypeScript constructs.

syranide commented 10 years ago

JSXTransformer uses fb-esprima, a JSX-enhanced dialect of esprima, which is a complete JavaScript parser. So for JSX to support TypeScript, we would need to extend or replace esprima with a complete and compatible TypeScript parser. Unless such a thing already exists, doing this would most likely require significant initial and ongoing effort.

Unless you are willing to put in that effort, it's unlikely that this is going to happen any time soon or at all in the foreseeable future. After all, JSX is only sugar for very similar looking and functionally equivalent JS code.
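
For context, a minimal illustration of that sugar (using the React.DOM factories of this era; the exact transformer output may differ slightly):

// JSX source (requires the transform):
//   var el = <div className="row">Hello</div>;
// Functionally equivalent plain JS:
var el = React.DOM.div({className: "row"}, "Hello");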

thorn0 commented 10 years ago

Why does JSXTransformer need a JS parser at all? Why can't it just replace its XML-like constructs with JS code wherever it finds them (except in strings and comments)?

sophiebits commented 10 years ago

Contrived example:

<input value={this.props.text + "}"} />

The value shouldn't end at the first }; you need at least some JS parsing logic to understand that.
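
To make that concrete, here is a minimal sketch (not React's actual implementation) of the string-aware scanning that even a naive "replace the XML bits" approach would need in order to find the end of an expression container:

function findContainerEnd(source: string, openBraceIndex: number): number {
    var depth = 0;
    var i = openBraceIndex;
    while (i < source.length) {
        var ch = source[i];
        if (ch === '"' || ch === "'") {
            // Skip over string literals so a "}" inside a string doesn't count.
            var quote = ch;
            i++;
            while (i < source.length && source[i] !== quote) {
                if (source[i] === '\\') i++; // skip escaped characters
                i++;
            }
        } else if (ch === '{') {
            depth++;
        } else if (ch === '}') {
            depth--;
            if (depth === 0) return i;
        }
        i++;
    }
    return -1; // unbalanced
}

// findContainerEnd('{this.props.text + "}"}', 0) === 22: the final brace,
// not the "}" inside the string literal.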

syranide commented 10 years ago

@spicyj Or just something as simple as "<tag />" should not be parsed by JSX.

@thorn0 You are probably right in that it isn't strictly necessary, but it aids in generating valid code, since we now actually make sure that the JS parses before moving on.

sebmarkbage commented 10 years ago

Technically you don't need a parser but you need a tokenizer.

Sweet.js macros operate on the token stream before it hits the parser. JSX almost works with Sweet.js, except there are special rules for regular expressions. Closing tags don't work with the / token depending on context; the regexp resolution rules are hardcoded into the tokenizer. Sweet.js probably can't support our whitespace rules either.

However, if we treat JSX as strictly part of a custom tokenization stream, we can do the transformation purely on the token level, just like Sweet.js. That should leave it compatible with any other language extension that doesn't need special tokenization rules. The resulting code would even be compatible with Sweet.js macros.

What do you think @jeffmo ?

syranide commented 10 years ago

@sebmarkbage As long as we assume that all opening brackets must have a corresponding closing bracket, it should work technically. It will also have to assume that expressions, object notation, array notation, etc. are compatible, or the output will have to be configurable.

But we will lose the ability to detect and report syntax errors before we inject {expressions} into the output stream, so any errors in the output stream will likely be a lot less comprehensible as a result. I'm unsure how bad they can actually become in the worst case.

There's no denying that there are advantages to operating on the token stream like this; I just fear that it will come at the expense of making JSX less user friendly and thus less appealing. JSX is just sugar, so I would never sacrifice the convenience of really good error messages for the convenience of something that looks like HTML. Hopefully I'm wrong and it's not bad at all, but I think it's a very fine line between JSX being a very nice optional feature and a nice toy.

JSX is really compelling to me, primarily because it's cleaner than the JavaScript equivalent without any immediate drawback (--watch makes it painless enough), but it still lacks features that vanilla JavaScript offers, which means you have to revert to JavaScript from time to time. So I still stick to JavaScript, though I can see that I may switch to JSX in the future. But I never would if I ended up with the kind of crappy error messages that plague so many template frameworks. But that's my opinion.

sebmarkbage commented 10 years ago

Even if it were true that we can't get good enough error messages using a token transform alone, you could still have the parser provide context to the tokenizer.

If your parser can provide a compatible context, that's great: you get better error messages for free.

If it can't and you have to pipe the raw transformed strings, you still have the option to do that. If you want to use TypeScript or HipsterScript you can.

We can't add a convenient extension to the language and expect that nobody else will add other extensions. People will want them both. If you have to choose between TypeScript or JSX then JSX will certainly be seen as the toy.

Unless there is an ambiguity problem ofc. TypeScript may not be compatible for other reasons. That's why I like macros because they can be contextual extensions. An identifier can be treated differently depending on if it's a known React/JSX component or a Type/Class/Interface.

thorn0 commented 10 years ago

TypeScript is similar to Sweet.js macros in that it's a superset of JS, just another convenient extension. However, yes, there might be incompatibilities as the type cast operator looks like this in TypeScript:

this.span = <HTMLSpanElement>document.createElement('span');

Oops...

syranide commented 10 years ago

@sebmarkbage You're more experienced in this area, but it seems like you would always have to explicitly extend the parser of the target language or use the standalone token-transform version; the likelihood of their parsers and tokenizers being compatible with ours seems remote (unless esprima is the parser in the JavaScript world). Or am I missing something?

If that's the case (I haven't looked into the details), it seems like a simple fix to just implement an optional parseProgram/parseExpression in fb-esprima which has no knowledge of the language features at all, other than that brackets must be balanced for the sake of XJS expression containers. Token transform for free, really. The only issue is that string/comment syntax would have to be compatible, or the tokenizer would need to be configurable as well.

I could probably even put together a proof-of-concept if it's worth pursuing that path.

jordwalke commented 10 years ago

Apologies for just chiming in without reading the discussion. I just thought I'd let you know the result of a previous discussion I had with someone who started adding JSX support into TypeScript. We found it was difficult to distinguish type parameters Array<Things> from JSX. My suggestion was to require that all JSX be wrapped in parens like (<Typeahead />). This makes it easier to parse, and most of the time we end up wrapping our JSX in parens anyways to guard against Automatic Semicolon Insertion, so people are used to it.

syranide commented 10 years ago

@thorn0 @jordwalke It seems like Array<Things> shouldn't be an issue to solve if you're extending the actual parser (but perhaps you're not), as the parser should not be in a state where it would consider JSX to be valid. However, <A>1</A> seems intuitively harder to solve as we cannot know if <A> is JSX or a type-cast... until we've found the (possible) closing </A>, but that does not seem reasonable.

It seems to me that your suggestion (<A>) has the same issue unless you explicitly disallow type-casts as the first instruction inside parens, which isn't perfect, but should very rarely be an issue. So it's a surprisingly neat distinction/solution to a major ambiguity.

sebmarkbage commented 10 years ago

I was worried about maintaining two versions. In theory you could make a tokenizer that can be a drop in replacement in an esprima based typescript parser. But that's certainly more difficult.

This could be a nice quick fix. You can always add special look ahead/behind rules. It doesn't have to be pure.

syranide commented 10 years ago

@sebmarkbage I'm playing with a proof-of-concept and it was as simple as I had hoped. The major issue that I hadn't foreseen is that it's basically not possible to unambiguously detect the initial tag without the parser providing context or without some additional starting token/constraint. ...<abc: is that a comparison, perhaps?

If a language supports a feature like <abc> then we're basically out of luck unless we require an initial parenthesis as @jordwalke suggested. It could break certain edge-cases in existing scripts though (x(<string>y())), and there would be no obvious no-op way for the user to escape it, so it's not 100% safe with just parens. The benefit would be that it wouldn't really look out of place in any language, as you can't really get away from the mathematical parenthesis in any language.

So I'm unsure how flexible/verbose we should make it, the major issue being that if we require an additional token for the starting tag, it has to be repeated for every branch of conditional expressions with tags.

sebmarkbage commented 10 years ago

This is already a problem in JavaScript with regular expressions. You can still implement JS highlighting without a proper parser because you can use look behind to disambiguate. It's not easy to enumerate all the potential cases you'll have to look for though. Hopefully they're bounded.

Sweet.js has a pretty good write up on this for regular expressions. Since those are allowed in similar places as JSX I figure the solution would be similar. https://github.com/mozilla/sweet.js/wiki/design

sebmarkbage commented 10 years ago

Of course the extended language could add features that you can't account for. So it may not always work. That's why it would be great to have at least a little bit of feedback from the parser. E.g. if a regular expression is allowed, then we can also assume that < is the start of a tag.

syranide commented 10 years ago

Yeah, so I took a step back and analyzed the problem properly. I see three ways of doing this:

  1. Look-ahead for <aaa>, <aaa /> and <aaa bbb=.
  2. Look-behind for any operator-ish character (:=,[({+-*/, etc) and look-ahead for <a.
  3. Both.

  1. The straightforward approach: we assume only that none of those exact occurrences are allowed by the target language. <aaa> conflicts directly with TypeScript, and I see no way out of it without modifying our syntax; <aaa bbb= conflicts with a hypothetical language that allows a non-comma-separated object notation with = for assignment instead of the default :.
  2. We assume that the target language is based on operators, so we can easily test that we are reasonably inside an expression, in a state where a value/identifier is expected (a rough sketch follows this list). This would explicitly rule out support for wordy languages that would allow let A be <tag> (if there even is a worthwhile one for JavaScript; I doubt it). TypeScript sadly conflicts here too, as <TypeCast>... is valid in the same contexts as <Tag> (even if we had their parser to provide context).
  3. Stricter, but TypeScript would still be an issue. So not sure if there's a point.
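
A rough sketch of option 2's look-behind test (the token names and operator set here are made up, not fb-esprima's; it's the same trick syntax highlighters use to decide whether / starts a regexp):

// Operators/punctuators after which a value is expected, so that a following
// "<identifier" plausibly starts a tag rather than a comparison.
var VALUE_EXPECTED_AFTER = {
    '=': true, ':': true, ',': true, '(': true, '[': true, '{': true,
    '+': true, '-': true, '*': true, '/': true, 'return': true,
    '&&': true, '||': true, '?': true
};

function canStartTag(previousToken: string): boolean {
    // No previous token means we're at the start of an expression/statement.
    if (!previousToken) return true;
    return VALUE_EXPECTED_AFTER.hasOwnProperty(previousToken);
}

// canStartTag('=')   -> true   // var x = <Tag />
// canStartTag('foo') -> false  // foo < bar is a comparison

As noted in point 2, this alone still can't separate a JSX tag from a TypeScript cast, since both appear in value position.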

Unless I'm missing something, TypeScript can only be solved (with or without parser backup) by either modifying the syntax for the empty root tag (like <!tag>, <tag empty> or whatever) or using something like what @jordwalke suggested. However, as I mentioned above, the parenthesis constraint doesn't actually solve the ambiguity. What you're really doing is disallowing type-casting as the first instruction inside of parentheses, and replacing it with JSX tags.

@sebmarkbage Your take? Do we need an alternate syntax for special-cases?

vjeux commented 10 years ago

Note: TypeScript also uses <> for generics:

var greeter = new Greeter<string>("Hello, world");

http://www.typescriptlang.org/Playground/#src=class%20Greeter%3CT%3E%20%7B%0D%0A%20%20%20%20greeting%3A%20T%3B%0D%0A%20%20%20%20constructor(message%3A%20T)%20%7B%0D%0A%20%20%20%20%20%20%20%20this.greeting%20%3D%20message%3B%0D%0A%20%20%20%20%7D%0D%0A%20%20%20%20greet()%20%7B%0D%0A%20%20%20%20%20%20%20%20return%20this.greeting%3B%0D%0A%20%20%20%20%7D%0D%0A%7D%0D%0A%0D%0Avar%20greeter%20%3D%20new%20Greeter%3Cstring%3E(%22Hello%2C%20world%22)%3B%0D%0A%0D%0Avar%20button%20%3D%20document.createElement('button')%3B%0D%0Abutton.textContent%20%3D%20%22Say%20Hello%22%3B%0D%0Abutton.onclick%20%3D%20function%20()%20%7B%0D%0A%20%20%20%20alert(greeter.greet())%3B%0D%0A%7D%0D%0A%0D%0Adocument.body.appendChild(button)%3B%0D%0A

sebmarkbage commented 10 years ago

@vjeux I can't think of any case where generics are ambiguous with JSX. Because they're always preceded by an identifier which is not valid JSX. They're ambiguous with JavaScript though!

Type casting is a big issue though. The parenthesis doesn't help.

Most cases are actually not ambiguous:

(<foo>x) // cast
(<foo />x) // error, x is invalid after a component
(<foo />) // jsx
(<foo attr="">x) // error, missing closing tag
(<foo attr="">x</foo>) // jsx can be early determined by the attributes

The parsing code and error messages become really weird when you have an opening tag without attributes. You have to optimistically parse ahead a long way to find the matching closing tag, which breaks the ambiguity.

(<foo>x + (y) + </foo>)

Even then it's ambiguous with regexps.

(<foo>x</foo>/+5)
// could mean:
(<foo>(x < new RegExp('foo>') + 5))
// or
(foo(null, x) / 5)

So, yea. The type cast syntax screws us up. I really want to make this work though.

sophiebits commented 10 years ago

Note that (<foo>x)</foo>) is valid JSX.

sebmarkbage commented 10 years ago

@spicyj It's not valid TypeScript though. Need to make it into a RegExp to make it ambiguous I think. Maybe you can think of another case?

sophiebits commented 10 years ago

Sorry, I thought you were implying that you could stop parsing at the close paren. You may be right that regexes are the only tricky part; perhaps we can solve that by requiring people to wrap regex literals in parens? I think JSLint might already warn about that. Still, it does sound like we may need arbitrary lookahead to disambiguate which sounds like a recipe for confusion.

(One other idea I mentioned to @syranide in IRC was requiring people to wrap each JSX expression in backticks, sort of like how we recommend JSX in CoffeeScript now. Obviously that's a bit of a pain but it easily removes any ambiguity.)

sebmarkbage commented 10 years ago

Ideally we would use

jsx`<foo>${x}</foo>`

To make it fully executable ES6. I think that JSX is always close to being more of a pain than it's worth. Compared to just invoking functions. Back ticks, as small as they seem, might be the final straw.
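
A hypothetical sketch of the mechanics that idea relies on: a tagged template is just a function call, so the construct stays executable ES6 even before any build step runs. (A real jsx tag would have to actually parse the markup chunks; this one only shows what the function receives.)

function jsx(strings, ...values) {
    // strings: the literal chunks, e.g. ["<foo>", "</foo>"]
    // values:  the interpolated expressions, e.g. [x]
    return strings.reduce(function (out, chunk, i) {
        return out + chunk + (i < values.length ? String(values[i]) : '');
    }, '');
}

var x = 'hello';
jsx`<foo>${x}</foo>`; // "<foo>hello</foo>"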

sebmarkbage commented 10 years ago

I guess back ticks would actually remove an ES6 feature unless we prefix with something. So it doesn't work without a prefix anyway.

syranide commented 10 years ago

@sebmarkbage We cannot reliably disambiguate (<foo>x)+(<foo></foo>) (in a real context). Sure with an infinite look-ahead we could possibly make it reliable enough to actually be useful, but any error messages would be complete dog poo as we would never be able to identify intent. Which in my eyes makes it useless.

I agree about backticks: while it would be a solution, it would remove an ES6 feature and be quite error-prone when dealing with nested conditionals. Although realistically one could probably just assume that any <> inside an expression is JSX and not a type-cast.

However, <aaa> is the only issue; the other two cases can be disambiguated quite easily. So reasonably, we only need to introduce an (optional) syntax for <aaa>, but it is not obvious what it would look like, and it possibly wouldn't be very neat either. The simplest being <aaa >. However, of all the different optional syntaxes I can come up with, none feels natural or not weird to me.

sebmarkbage commented 10 years ago

Yes. I agree that infinite lookahead is not an ok solution. It's more of a thought experiment to see where it leads us.

I think that you're right that an alternative syntax for cast is the right way to go here. Particularly since this syntax is not really intuitive or common for the current use in TypeScript anyway. It would seem that making that change is possible.

Some ideas without really thinking it through... The simplest most intuitive to me is that you type the expression:

var x = { } : foo;

Another alternative would be to add a contextual keyword somehow:

var x = cast<foo> { };
var x = cast { } as foo;
...
syranide commented 10 years ago

@sebmarkbage The only thing that worries me about effectively overriding TypeScript syntax is that this would be a solution only for TypeScript. It would effectively mean that either you end up with two different type-cast syntaxes depending on whether the current file is run through the parser or not, or you risk breaking existing code if all files are run through it.

Also, depending on the replacement syntax for type-casts, you potentially have to extend the actual TypeScript parser as opposed to just having a language-agnostic transform with a special case for TypeScript-style type-casts. What are the current thoughts on extending the language parsers vs a language-agnostic transform? It seems like having an agnostic transform would be preferable, and if people really are invested in TypeScript (or whatever) then nothing prevents them/us from implementing a "language native" solution down the road.

One potential idea, if my JSX namespaces PR is accepted and merged, would be to simply have the syntax be something like <.tag> or <:tag> and thus <.React.DOM.div> or <:React:DOM:div>, not the neatest syntax, but there's some logic to it at least. Perhaps an acceptable trade-off for languages where it conflicts?

<tag>
</tag>

vs

<.tag>
</tag>
sebmarkbage commented 10 years ago

@syranide There can always be conflicts and we have to solve them on a case by case basis. Namespaces could easily conflict with another language too.

You can also just use an external function if we kill the type assertion operator.

class Parent {}
class Child extends Parent {}

function cast<T>(obj : any) {
    return <T>obj;
}

var x : Parent = new Child();
var y = cast<Child>(x);
syranide commented 10 years ago

@sebmarkbage Ah, interesting solution.

fdecampredon commented 10 years ago

There are other problems with TypeScript + React than just the JSX syntax. TypeScript currently does not support mixins at all (and won't for at least some months, I would say). So it's basically very hard to make it understand what the result of React.createClass is (and I doubt it will ever be possible to make it understand React mixins). I've tried to work with React + TypeScript, and ended up wrapping a lot of React mechanisms to be able to make TypeScript fully understand the types of what I worked with. Perhaps the real solution here would be to create a JSX-to-TypeScript definition file compiler to integrate with TypeScript, and keep JSX for creating React components.

syranide commented 10 years ago

@fdecampredon It seems like React intends to move to ES6 classes soonish; I'm assuming that would improve the situation?

pspeter3 commented 10 years ago

@fdecampredon Thanks for the work that you did on that!

basarat commented 10 years ago

Suggestion for supporting this in TypeScript: a new file type .tsx, which is effectively just a simple .ts file in which we replace multiline strings using the same logic we have for .jsx -> .js files.

This will depend on TypeScript having support for multiline strings which is being tracked here : https://typescript.codeplex.com/workitem/19

jbrantly commented 10 years ago

I wanted to share how I've dealt with this problem while waiting for a more permanent solution. I took an idea from the edge.js project where foreign code is inserted as a multiline comment. So I can write my render method in TypeScript as such:

render() {
    return React.jsx(/* 
        <div className="row">
            {this.props.children}
        </div>
    */);
}

React.jsx() is just a declared function (using the bindings from @fdecampredon):

declare module React {
    function jsx(): ReactComponent<any, any>;
}

Then in my build process I use a simple transform. Using Gulp:

var gulp = require('gulp'),
    typescript = require('gulp-tsc'),
    replace = require('gulp-replace'),
    react = require('react-tools');

gulp.task('compile', function () {
    return gulp.src(['**/*.ts'])
        .pipe(replace(/React\.jsx\((\/\*((.|[\r\n])*?)\*\/)\)/gm, function (match, fullComment, commentContents) {
            return '('+ react.transform('/** @jsx React.DOM */' + commentContents).slice(21) + ')';
        }))
        .pipe(gulp.dest('dest/')) // typescript requires actual files on disk, not just in memory
        .pipe(typescript({ out: 'compiled.js' }))
        .pipe(gulp.dest('dest/'))
});

While you do not get Intellisense, etc, you do get full TypeScript checking of all the JSX code at build time since it's transformed prior to being passed to the typescript compiler. Also source code line numbers are preserved.

Hope this helps.

eteeselink commented 10 years ago

@jbrantly, really nice! a bit of a hack, of course, and it doesn't invalidate any of the more courageous attempts in this thread, but I think that I'm going to steal your approach. Only sad thing: you still can't do:

return React.jsx(/*
    <li>
        {this.props.items.map(item => (<ul>{item}</ul>))}
    </li>
*/);

:-)

STRML commented 10 years ago

@eteeselink You could get the same by using the ES6 arrow functions:

// ...
            return '('+ react.transform('/** @jsx React.DOM */' + commentContents, {harmony: true}).slice(21) + ')';
// ...
eteeselink commented 10 years ago

Wow, hidden feature! Thanks! :-)

fdecampredon commented 10 years ago

The best way to integrate JSX and TypeScript is to create a TypeScript compiler fork that natively supports JSX syntax. I've started to work on that in this repo: https://github.com/fdecampredon/jsx-typescript. I'm waiting for the TypeScript internals to stabilize before resuming my work. In fact, I think JSX is the easiest part. The TypeScript type system is very incompatible with React for multiple reasons: "private types" (see this discussion: https://typescript.codeplex.com/discussions/545965) and mixins make working with React and TypeScript a very painful experience. (And yes, the planned ES6-class support might help, but it won't resolve every problem.)

syranide commented 10 years ago

@fdecampredon Mixins are just a handy tool; they're not a fundamental part of React, and you're free to substitute anything you like (or nothing) if they aren't a good fit for TypeScript. Of course that could potentially conflict with some third-party helpers you choose to use, but I'm assuming those wouldn't work out of the box regardless.

fdecampredon commented 10 years ago

@syranide Yes, I agree; however, they are quite heavily used in every third-party library. Anyway, that's not the main problem, private types are: see the codeplex discussion that I pointed to.

CyrusNajmabadi commented 10 years ago

Hey Guys,

TS developer here (and guy who wrote the current 1.0 parser). I haven't read through the entire thread. However, from looking at a few examples of what you're asking for, this seems like it would be very easy to add in a fork of the typescript parser, if one were so willing.

The algorithm would be pretty darn simple. First, you'd prescan the file, looking for XML close tag candidates. i.e. you'd have a regexp that would match strings like </id> (allowing for spaces between things). With that, you'd have a table mapping potential end tags to locations in the file.

You'd then need to augment the scanner and parser. The parser would be updated such that if it hit a potential opentag, it would need to speculatively parse ahead to see if this was actually XML or just a cast. The end-tag table would be helpful here as it would allow you to even decide if it was worthwhile to do that speculative parse. i.e. if you saw <number> and had never seen </number> in your doc, then there would be no point speculatively parsing. On the other hand, if you had <Book> and did have a </Book> tag later on, then it would be worth going forward.

If you never find the end tag, then you rewind parsing, and go ahead parsing the entity as a type assertion expression.

This approach is very similar to the one we already need to take today to deal with the ambiguity between type assertion expressions and generic arrow function expressions. Speculative lookahead is necessary (and already done here), so it's no stretch to augment it to support XML. The only little trick here is the end-tag table, which ensures you don't speculatively parse every time when most of the time it's not going to be fruitful.
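
A minimal sketch of that prescan step (function and variable names here are made up, not the actual compiler's):

// Collect potential XML end tags like </Book>, so the parser can consult the
// table before committing to a speculative JSX parse of an ambiguous <Book>.
function collectEndTagCandidates(source: string): { [tagName: string]: number[] } {
    var candidates: { [tagName: string]: number[] } = {};
    var endTag = /<\s*\/\s*([A-Za-z_$][A-Za-z0-9_$]*)\s*>/g;
    var match = endTag.exec(source);
    while (match !== null) {
        var name = match[1];
        (candidates[name] || (candidates[name] = [])).push(match.index);
        match = endTag.exec(source);
    }
    return candidates;
}

// If the parser reaches "<number>" and there is no candidates["number"] entry,
// it can skip the speculative parse and treat it as a type assertion right away.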

abergs commented 10 years ago

Hey @CyrusNajmabadi, and thank you for your feedback! @fdecampredon has been working on a fork of the TS compiler, although I think he wanted to wait until TS development stabilized.

There are some more things in TS which are problematic when used together with React, for example: https://github.com/Microsoft/TypeScript/issues/229 and https://typescript.codeplex.com/discussions/545965 (if we deep dive on the codeplex issue, it actually seems to be resolved by https://github.com/Microsoft/TypeScript/issues/341).

syranide commented 9 years ago

@sebmarkbage This is a task for the community (and I know some are experimenting) and not the responsibility of React-JSX, or?

davidreher commented 9 years ago

Anything new on this issue? We're currently thinking about taking the step towards using TypeScript in addition to React to make our software more bullet-proof. It would be great to have the possibility of writing JSX in TS, since using plain JS for JSX is an option but not a very good one...

pspeter3 commented 9 years ago

FWIW, it's been fine just using the DOM factory functions in TypeScript

RReverser commented 9 years ago

@davidreher I believe it's superseded by Flow now, which supports TypeScript + JSX + ES6 syntax at once.

sebmarkbage commented 9 years ago

Unfortunately I think that we need to leave this to the TypeScript team to make the decision if they want to support JSX since they have other syntax that conflicts with JSX.

Using TypeScript without JSX will still be possible, but it's expected that it won't work quite as well as Flow (http://flowtype.org/), at least not as quickly, since Flow is being developed with the intent to support React concepts.

basarat commented 9 years ago

since they have other syntax that conflicts with JSX

Which ones specifically (I suppose it's just type casting)? Doesn't Flow intend to have syntax parity with TypeScript?

Note: TypeScript calls these type assertions because casting generally implies runtime safety.

brigand commented 9 years ago

@basarat No, Flow is intended to just work with JS + JSX + optional type annotations. The ideal case for Flow is where you don't need any type annotations and it's just plain JavaScript (possibly with JSX).

TypeScript adds a lot of stuff that isn't in JS; those things are specific to TypeScript.

sebmarkbage commented 9 years ago

@basarat Yea, just type assertions conflict. Flow doesn't support type assertions, since it attempts to be more sound than TypeScript.

@brigand That's not quite correct. Flow works well without type annotations, but it definitely intends to use type annotations too. For three reasons:

  • Documentation of intent and as a way to catch mistakes that would pass type inference but aren't intended to work.
  • As a way to intentionally limit the API surface area even if it would currently work with a wider set of inputs. I.e. defining a contract.
  • As a way to speed up type analysis of large code bases by enabling parallelized analysis.

Flow aims to unify with TypeScript where it makes sense but it doesn't attempt to support all the syntax/concepts and does diverge in some areas.

brigand commented 9 years ago

Thanks for clarifying.
