harrysolovay opened this issue 3 years ago
I'm just a random developer who enjoys typescript, cubing, and 3d printing.
`gqx` is a (very unpolished) library I've been working on that aims to provide a type-safe, implementation-agnostic way to express, communicate, and instantiate GraphQL fragments. For example:
const authorFrag = Author.$(
Author.id,
Author.name,
);
const bookFrag1 = Book.$(
Book.id,
Book.title,
Book.author.name,
);
const bookFrag2 = Book.$(
Book.id,
Book.author.$(authorFrag)
);
const bookFragCombined = Book.$(
bookFrag1,
bookFrag2,
);
gqx.query.getBook({ id: "abc" }, bookFragCombined).then(book => {
book; // Hover
/*
(parameter) book: {
__typename: "Book";
id: string;
title: string;
author: {
__typename: "Author";
id: string;
name: string;
};
}
*/
})
Playground Link (very unresponsive to edits; I recommend throwing it into a VS Code tab)
It also spews out lots of generic info about the GraphQL schema (both in types and runtime objects), which I've used in a couple of projects to type tangential aspects of the schema, like resolver types.
That looks beautiful! Have you ever experimented with TS-first approaches to schema definition as well?
Thanks! If you're referring to type-level schema parsing, I haven't much. Though I very much like the concept theoretically, since it seems like they aren't going to support it, I think it might be more trouble than it's worth, especially since I prefer to keep my schema in separate `.graphql` files in a folder; I just have a script set up to watch the folder and run `gqx` every time it changes.
`gqx` is actually the reason I moved to TypeScript from Flow; I had been trying to implement the frag -> type conversion for days in Flow without success (circular recursion had weird bugs), but I was able to open a TypeScript playground window and implement it in about an hour. I haven't looked back!
@harrysolovay I thought this might interest you: https://github.com/dotansimha/graphql-typed-ast
I guess this is to be our unofficial chat :)
Apologies for the delay. Lots & lots of work.
That repo absolutely did interest me! It was in fact a catalyst for my current project. I've been building a spec-compliant type-level GQL parser, and those recursion limits are brutal. I can barely parse a 30-LOC schema without hitting limits. Also, the lack of throw types (proposed here) makes parse errors a pain: one needs to create a spec-incompliant node kind to contain errors, recursively extract all error nodes from the final tree, and display them with shady tricks (`["error message 1", "error message 2"]` is not assignable to `"false"`).
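The error-node workaround can be sketched like so (a minimal illustration with my own hypothetical names, `ErrNode` and `CollectErrors`, not the actual parser's node kinds):

```typescript
// Hypothetical sketch of the error-node trick: parse errors are smuggled
// through the tree as a spec-incompliant node kind, then collected afterwards.
type ErrNode<Msg extends string> = { kind: "ERROR"; message: Msg };

// Recursively pull every error message out of a parse tree.
type CollectErrors<T> = T extends ErrNode<infer M>
  ? M
  : T extends readonly (infer E)[]
    ? CollectErrors<E>
    : T extends object
      ? { [K in keyof T]: CollectErrors<T[K]> }[keyof T]
      : never;

// Example tree with one bad node buried inside it.
type Tree = {
  kind: "Document";
  children: [{ kind: "Field"; name: "id" }, ErrNode<"unexpected token">];
};
type Errors = CollectErrors<Tree>; // "unexpected token"

// The machinery is type-level, but we can still check it with a value:
const firstError: Errors = "unexpected token";
console.log(firstError);
```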
Those HKT hacks you've been sharing seem promising! I find it hard to imagine that Ryan's comments against type-level parsers will stick... it would be such a beautiful way to tie together languages in a single env. One way or the other, 4.1 will be of huge benefit to GraphQL-in-TS tooling. I can see you've been thinking about the same problem space. Let's post in this issue whenever we want feedback from one another.
What've you been hacking on? Any cool issues I should take a look at?
Also, any thoughts on this issue? I feel like we're still writing conditional types like cave people ;)
@harrysolovay Yeah, I saw it the other day and I agree that it would be nice. I started to write up a comment for it, but it must have ended up unsent among my tabs 😄
Oh, I didn't see your earlier comment.
Yeah, throw types are a must for complex recursive conditional types, though IMO they should be assignable like `never`, as that is what people use right now, and their being the "anti-any" messes up passing conditionals involving them to constrained type parameters.
The problem that I've been running into with the HKT hacks is that if you have an HKT that increases the depth by e.g. 10, you have a lot less depth to work with than with one that increases the depth by e.g. 1. I've been wondering whether there would be a way for it to detect that it hit the depth limit (sometimes, if the recursive type is wrapped, the error doesn't propagate and it just returns `any`), backtrack, and repeat, to get the maximum possible recursion depth, but I haven't tried implementing that yet.
I've recently been playing around on the Discord with a `ConvertibleTo` type (best explained by an example), and have been trying to get TypeScript to recognize this "axiom":
type Axiom<A> = Assert<ConvertibleTo<A>, ConvertibleTo<ConvertibleTo<A>>>
Eventually I got to this, but it would be nice to be able to do it in a more general way.
Wrt interesting issues, I personally think microsoft/TypeScript#41370 is a pretty big type hole, though I may be biased :)
Wow! That is absolutely a hole. Surprised I haven't encountered this limitation before.
I like that `ConvertibleTo` type! It could come in handy for building libs promoting trait-style patterns in TS: instead of a subtype check, you'd want to ensure convertibility as a "bound".
... which has me thinking... we need conditional assignability, badly.
Yeah, that's exactly what I'm using/developing it for :)
Yeah, we really do.
Also useful for the trait-style lib is what I've been using in my project to handle plugins:
// Main library
declare global {
namespace someLibrary {
interface Conversions {
aToB: Conversion<A["id"], B["id"]>
}
}
}
// In some plugin
declare global {
namespace someLibrary {
interface Conversions {
myPlugin: {
bToA: Conversion<B["id"], A["id"]>
misc: (
| Conversion<B["id"], C["id"]>
| Conversion<X["id"], Y["id"]>
)
}
}
}
}
This might take me a little while to grok. Is `myPlugin` the "trait"? What is the mental model?
That was an alternative for my:
type Conversions =
| Conversion<A["id"], B["id"]>
| Conversion<B["id"], A["id"]>
| Conversion<B["id"], C["id"]>
| Conversion<X["id"], Y["id"]>
AFAIK, unions cannot have multiple declarations / be extended.
Instead, I use an interface, and then to form the union of all conversions, I deeply get the values of the interface:
type ConversionsUnion<C = someLibrary.Conversions> = {
[K in keyof C]: C[K] extends Conversion<any, any> ? C[K] : ConversionsUnion<C[K]>
}[keyof C]
With this, plugins can add conversions by extending the global interface `someLibrary.Conversions`.
The keys are irrelevant, and just have to be unique so they don't conflict between plugins; I allow it to be deep just for easier namespacing.
`A`, `B`, `C`, `X`, and `Y` are the different types, and the `Conversions` interface just represents the relationships between them.
I see! How do you, at the type level, safeguard against plugins providing conflicting conversions for the same from and to types? (I'm assuming there's a runtime component which injects properties into the module being augmented?)
Right now, I don't have any type-level safeguards against conflicting conversions; all I care about in the type system right now is whether or not some path exists from `A` to `B`, to determine if something of type `A` can be passed to something expecting type `ConvertibleTo<B>`.
Runtime-wise, I have a registry for the conversions, and then it essentially path-finds from `A` to `B`; I haven't yet figured out how I'm handling two different conversions with the same from and to types.
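That runtime path-finding can be sketched as follows (my own illustration with assumed names; escad's actual registry surely differs):

```typescript
// Hypothetical conversion registry with breadth-first path-finding
// (illustrative only; not escad's real API).
type Convert = (value: unknown) => unknown;

class ConversionRegistry {
  private edges = new Map<string, Map<string, Convert>>();

  register(from: string, to: string, fn: Convert): void {
    if (!this.edges.has(from)) this.edges.set(from, new Map());
    this.edges.get(from)!.set(to, fn);
  }

  // BFS from `from` to `to`; returns the chain of conversion functions, or null.
  findPath(from: string, to: string): Convert[] | null {
    const prev = new Map<string, [string, Convert]>();
    const queue = [from];
    const seen = new Set([from]);
    while (queue.length) {
      const cur = queue.shift()!;
      if (cur === to) {
        // Walk the predecessor chain backwards to recover the function list.
        const path: Convert[] = [];
        for (let n = to; n !== from; ) {
          const [p, fn] = prev.get(n)!;
          path.unshift(fn);
          n = p;
        }
        return path;
      }
      for (const [next, fn] of this.edges.get(cur) ?? []) {
        if (!seen.has(next)) {
          seen.add(next);
          prev.set(next, [cur, fn]);
          queue.push(next);
        }
      }
    }
    return null;
  }

  convert(from: string, to: string, value: unknown): unknown {
    const path = this.findPath(from, to);
    if (!path) throw new Error(`no conversion path from ${from} to ${to}`);
    return path.reduce((v, fn) => fn(v), value);
  }
}

// Usage: Cube -> Mesh -> BspNode, mirroring the conversions discussed here.
const reg = new ConversionRegistry();
reg.register("Cube", "Mesh", (c) => ({ kind: "Mesh", of: c }));
reg.register("Mesh", "BspNode", (m) => ({ kind: "BspNode", of: m }));
const bsp = reg.convert("Cube", "BspNode", { kind: "Cube" }) as { kind: string };
console.log(bsp.kind); // "BspNode"
```

BFS gives the shortest conversion chain, which sidesteps (but doesn't resolve) the question of two conversions sharing the same endpoints.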
> (I'm assuming there's a runtime component which injects properties into the module being augmented?)

What do you mean?
This approach of yours is just spectacular. Hopefully it'll enable, at the very least, type-level GQL document parsing to work for production apps (even if the schema types still need to be generated).
Is that why you chose to give the conversions keys? (so that users must explicitly select their desired conversion?)
Would you possibly be able to write out an end-to-end example from the user's perspective? It's still looking quite foreign to me.
Did this comment of yours mean that the HKT solution is not viable for circumventing recursion limiting in building string type parsers?
> Is that why you chose to give the conversions keys? (so that users must explicitly select their desired conversion?)

No; the keys are completely irrelevant. The end user doesn't really care about conversions (except for library authors); for the most part, it Just Works™
> Would you possibly be able to write out an end-to-end example from the user's perspective? It's still looking quite foreign to me.
Some context: I'm writing a programmatic 3D modeler (similar to OpenSCAD, but ~~less horrible~~ better).
That's somewhat difficult, as the end user doesn't ever really care about the details of conversions etc. unless they're writing a lower-level library, but here's an example of the logic.
// myModel.ts
import escad from "escad"
import { cube } from "@escad/cube"
export default () => {
return escad
.cube({ sideLength: 1, center: true })
.translateZ(1)
.sub(cube({ sideLength: 2, center: true }))
}
Ignoring chaining implementation details:
interface Cube extends Product { /* ... */ }
interface TransformationMatrix extends Product { /* ... */ }
interface Mesh extends Product { /* ... */ }
interface BspNode extends Product { /* ... */ }
interface CsgOperationMarker extends Product { /* ... */ }
type Transformation<P extends Product> = CompoundProduct<[TransformationMatrix, P]>
type CsgOperation<P extends Product, Q extends Product> = CompoundProduct<[CsgOperationMarker, P, Q]>
cube // (args: CubeArgs) => Cube
translateZ // (z: number) => <P extends Product>(p: P) => Transformation<P>
sub // <P extends Product>(p: P) => <Q extends Product>(q: Q) => CsgOperation<Q, P>
import myModel from "./myModel"
const output = myModel() // CsgOperation<Transformation<Cube>, Cube>
type Assert<T, U extends T> = U;
type _0 = Assert<ConversionsUnion, (
| Conversion<Cube, Mesh>
| Conversion<Transformation<Mesh>, Mesh>
| Conversion<BspNode, Mesh>
| Conversion<Mesh, BspNode>
| Conversion<CsgOperation<BspNode, BspNode>, BspNode>
)>
// Thus,
type _1 = Assert<ConvertibleTo<Mesh>, typeof output>
// Because:
// CsgOperation<Transformation<Cube>, Cube>
// CsgOperation<Transformation<Mesh>, Cube>
// CsgOperation<Mesh, Cube>
// CsgOperation<BspNode, Cube>
// CsgOperation<BspNode, Mesh>
// CsgOperation<BspNode, BspNode>
// BspNode
// Mesh
This might seem overcomplicated, but it's useful for a number of reasons:

- One can define a generic `Shape`, and add `Conversion<Transformation<Shape>, Shape>`
- A plugin can add a `Beam` type, and then
  - `Conversion<Beam, Mesh>`
  - `Conversion<Transformation<Beam>, Beam>`
- Another plugin can add a `Volume` type and `Conversion<Mesh, Volume>`, then display the `Volume` type on the client: the client looks at the output `Mesh`, finds `Volume`, and sees that a statistic is registered for a volume. It tells the server to convert the `Mesh` to a `Volume`, receives the volume, and displays the statistic.

> Did this comment of yours mean that the HKT solution is not viable for circumventing recursion limiting in building string type parsers?
When I tried using it with the HKT recursion circumvention, it was 2589ing very quickly. I likely could have tweaked it to not 2589, but I wasn't really in the mood :)
Haha I know that feeling: it's gotta be a labor of love. The manic, unpredictable kind.
That's a very elegant experience you're working on! Once again, a lot to grok, but I feel the essence & I'd imagine users will appreciate the thoughtful typing experience. Extraordinary how much one can achieve with TypeScript's type system nowadays.
Out of all the coders I've met, the most sophisticated, when it comes to type systems, were once big into cubing. Funny coincidence, I suppose. Or, who knows, maybe some common experiences guided us to this niche.
If you do manage to circumvent 2589, please let me know. Although it looks like you have your hands full.
Also, feel free to add me as a reviewer if you ever want fresh eyes on your projects. Looking forward to seeing what escad becomes, even if it's purely research 😄
Thanks! Once it gets further along and I clean it up a little, I'd love to hear your thoughts on it!
Here's a "sneaky type" I made; when TypeScript is checking its constraint with the type parameter `$` unresolved, it thinks the constraint is `F`, but when it evaluates it with `$` resolved, the type is `V`.
I'm unsure if this has any uses, but it's an interesting edge case.
What do you think about a syntax like the below?
type [infer A, infer B] = [1, 2];
Where that would be equivalent to:
type A = [1, 2] extends [infer A, infer B] ? A : never;
type B = [1, 2] extends [infer A, infer B] ? B : never;
I think this would be especially useful in the context of microsoft/TypeScript#41470; the example I provided in there could be rewritten to:
type PromisifyAll<T> = {
  [K in keyof T & string as `${K}Async`]: {{
    type ((...args: infer OrigArgs) => void) = T[K];
    type [...infer Args, (error: any, result?: infer R) => void] = OrigArgs;
    type = (...args: Args) => Promise<R>;
  }}
}
Which IMO looks very clean.
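For reference, the same type is already expressible with today's (TS 4.1+) conditional types and key remapping, just less readably; the `Fs`/`readFile` shape below is my own example:

```typescript
// PromisifyAll written with today's syntax: key remapping plus
// variadic tuple inference to split off the trailing callback.
type PromisifyAll<T> = {
  [K in keyof T & string as `${K}Async`]: T[K] extends (
    ...args: [...infer Args, (error: any, result?: infer R) => void]
  ) => void
    ? (...args: Args) => Promise<R>
    : never;
};

// Example shape in the node-callback style ("Fs" is a made-up interface):
interface Fs {
  readFile(path: string, cb: (error: any, result?: string) => void): void;
}
type FsAsync = PromisifyAll<Fs>;
// FsAsync = { readFileAsync: (path: string) => Promise<string> }

// A value-level counterpart for a single function, so there's something to run:
function promisify<Args extends unknown[], R>(
  fn: (...args: [...Args, (error: any, result?: R) => void]) => void
): (...args: Args) => Promise<R | undefined> {
  return (...args) =>
    new Promise<R | undefined>((resolve, reject) =>
      fn(...args, (error: any, result?: R) => (error ? reject(error) : resolve(result)))
    );
}

const readFake = (path: string, cb: (error: any, result?: string) => void) =>
  cb(null, `contents of ${path}`);

promisify(readFake)("a.txt").then((s) => console.log(s)); // "contents of a.txt"
```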
If it could be generic, you could nicely rewrite some common type aliases:
type [infer Head, ...infer Tail]<T extends any[]> = T;
As opposed to:
type Head<T extends any[]> = T extends [infer X, ...infer _] ? X : never;
type Tail<T extends any[]> = T extends [infer _, ...infer X] ? X : never;
I love the look of it! It's parallel to destructuring. Reminds me a bit of this issue, although I don't think the proposal was as strong as yours would be.
It would be useful for type parameters as well.
(example from issue)
type A = {
B: {
C: [
{D: "E"}
]
}
}
type X<{B: {C: [{D: infer D}]}} extends A> = D;
How would one deal with conditional destructuring / potentially-missing fields? Would it make sense, upon a destructured element containing `never`, to opt for the next member of the `||` sequence? ... or is this a bit too much magic in your opinion? Plus, this relies on other new syntax:
type T = [1];
type U = [1, 2];
type [infer A, infer B] = T || U;
declare const a: A; // `1`
declare const b: B; // `2`
I don't quite understand how you would use this:
type [infer Head, ...infer Tail]<T extends any[]> = T;
Seems like you'd wanna do the following instead:
type HeadAndTail<T extends [infer Head, ...infer Tail]> = [Head, Tail];
type [infer Head, infer Tail] = HeadAndTail<["a", "b", "c"]>;
// or better yet, don't even use a utility
type [infer Head, ...infer Tail] = ["a", "b", "c"];
I like the anonymous type scoping proposal! As of today, I use namespaces to keep things tidy... anonymous scopes could be of great help with that. I'll comment in that thread soon.
Also apologies for not responding to your sneaky type! I hadn't yet had the time to understand it. Looks fascinating though.
> It would be useful for type parameters as well.
Hadn't thought of that, though I've always wanted some way to do that.
Though when I've thought about something like that, in my mind it has always been something like your `HeadAndTail`, rather than the `X<{...} extends A>` from the issue. IMO the `HeadAndTail` form reads a bit cleaner (and has direct parity with conditionals), and having it both ways would seem inconsistent to me. Thus, I would probably rewrite the one from the issue as:
type X<_ extends A & {B: {C: [infer D]}}> = D;
> How would one deal with conditional destructuring / potentially-missing fields? Would it make sense, upon a destructured element containing `never`, to opt for the next member of the `||` sequence? ... or is this a bit too much magic in your opinion? Plus, this relies on other new syntax:
I had actually been thinking about this; I was initially thinking about `:` for parity with conditional types, but I think `||` makes more sense. For an initial proposal, it might make sense to leave that out, just for a narrower scope, but it might also be better received if it didn't fall back to `never` implicitly.
I think it could be nice if TypeScript threw an error if it couldn't prove that the infer pattern would match; if you wanted it to fall back to `never`, you could use `||`:
type [infer A, infer B] = [1]; // Error
type [infer A, infer B] = [1] || never; // Ok, A & B are both never
> I don't quite understand how you would use this:
> type [infer Head, ...infer Tail]<T extends any[]> = T;
In my mind, that would be equivalent to:
type Head<T extends any[]> = T extends [infer Head, ...infer Tail] ? Head : never;
type Tail<T extends any[]> = T extends [infer Head, ...infer Tail] ? Tail : never;
While somewhat strange for that case, I think allowing generic support could be nice, to allow something like
type { x: infer X }<T> = Y<T>;
Instead of
type X<T> = {{ type { x: infer X } = Y<T>; type = X }};
or
type X<T> = Y<T> extends { x: infer X } ? X : never;
But perhaps it's not clear enough from the declaration that `X` is generic.
> Also apologies for not responding to your sneaky type! I hadn't yet had the time to understand it. Looks fascinating though.
No problem! I probably should have given some explanation of it. Essentially, I made `Wrap`/`Unwrap` such that `Unwrap<Wrap<A>>` resolves to `A`, but `Unwrap<Wrap<A> & Wrap<B>>` resolves to `B`. Then, when TypeScript is traversing `Sneaky` with `$` unresolved, it decides that the constraint of `($ extends infer _ ? Wrap<V> : never)` is `unknown` (idk why), and then `Unwrap<Wrap<F> & unknown>` resolves to `F`. However, when `$` is resolved, it becomes `Unwrap<Wrap<F> & Wrap<V>>`, which resolves to `V`.
type Sneaky<$, V=$, F=never> = Unwrap<(Wrap<F> & ($ extends infer _ ? Wrap<V> : never))>;
That's quite an edge case you've found! Very cool!
Unrelated:
Since 4.0, I've been wondering when stricter generator types will happen. Generator signatures should IMO contain the ordered sequence of yields, next args, & return. This would enable library developers to create type-safe generator-based experiences, wherein the user supplies a to-be-type-checked generator.
For instance, let's say I want to mimic inheritance: a generator would allow me to `yield` the base class and `super` props, and then, after a behind-the-scenes `super` call, pass a scope back to the generator (wrapping the generator as a constructor). For example:
import {toConstructor, useBase} from "generator-to-constructor";
// To inherit from:
interface CowProps {
spotted: boolean;
}
class Cow {
spotted;
constructor(props: CowProps) {
this.spotted = props.spotted;
}
moo() {
console.log("MOOO!");
}
}
// Synthesizing a constructor:
interface SmartCowProps extends CowProps {
name: string;
}
const SmartCow = toConstructor(function*(props: SmartCowProps) {
const scope = yield useBase(Cow, {spotted: props.spotted});
scope.moo();
return {
name: props.name,
}
});
You could even allow `useBase` to accept and, under the hood, wrap other generators.
import {toConstructor, useBase, Instance} from "generator-to-constructor";
interface CowProps {
spotted: boolean;
}
function* Cow(props: CowProps) {
yield; // doesn't inherit anything
return {
spotted: props.spotted,
moo() {
console.log("MOOO!");
},
}
}
interface SmartCowProps extends CowProps {
name: string;
}
function* SmartCow(props: SmartCowProps) {
const scope = yield useBase(Cow, {spotted: props.spotted}); // essentially a `super` call
scope.moo();
return {
name: props.name,
}
};
const smartCow = Instance(SmartCow, {
spotted: true,
name: "Rick Ross",
});
Type-checking this would be crucial, as we're expecting a single yield containing a constructor and that constructor's props, and then a return which is to become the instance members.
I'm curious to hear your thoughts on generator strictness, as well as the above.
Also, I feel like this is a topic you'd enjoy: any ideas about what a macro system for TS would look like? I was sifting through issues and thought of maybe submitting a proposal in this one.
Generator strictness has always bothered me, but unfortunately they won't really be able to do anything without official HKT support, which they seem to be against for some reason. I think they could implement HKTs fairly feasibly with a Flow-like `Call` intrinsic utility type; they already have to have some way to take a function type and its parameters and get the output type.
Relatedly, do you know what Ryan Cavanaugh was referring to in this comment?
> The specific case of resolving a call expression is tracked elsewhere
I have searched and searched, and have not been able to find such an issue. The closest thing I've found has been microsoft/TypeScript#37181.
Looking through these threads again just now, I came across microsoft/TypeScript#26043 and microsoft/TypeScript#40179; the former was closed as a dupe of #6606, but the latter... seems to maybe be the issue?
While macros sound nice, I think they might be anti typescript goals, since typescript tries not to add runtime syntax. I was also recently looking through the typescript discord Q&A, and came across this comment by Daniel Rosenwasser:
The problem with macros is that it would fundamentally change the way most build tools operate. Most modern JS build tools assume that they can analyze a single file at a time. That means you can run a tool like Babel on one file at a time without needing to look at any other file to know what the emit looks like, and that means you don't have to figure out how any given file affects any other.
To transform macros, you either need to have a full view of every file you're compiling (like a bundler/linker), or you have to have some sort of common format that works at runtime that every build tool agrees on.
The other problem with macros (apart from our current compiler architecture) is that they are very difficult to reason about when it comes to compiler errors. Which node do you choose to error on? What if the node doesn't actually exist in source?
So we're not very bullish on macros... :smile:
What I think might be more in scope is either a way for plugins to add `intrinsic` utility types, or a type-provider-like syntax, as in microsoft/TypeScript#3136. One point I had come across wrt the macro/type provider things is that it makes running `tsc` on unknown TS code just as dangerous as running the code. (Apparently that's the second bullet point in the issue you linked 😄)
That all being said, people have hacked together 50 different solutions in the thread you linked, and the issue is marked as open and needs proposal, so perhaps the ts team would be amenable to it if given a proposal.
I don't really like the triple-slash comments for macros; maybe they could be imported and exported like variables / types? To that end, it might be worthwhile if macros had a sigil as part of the name (like with the new decorator syntax), so that TypeScript could easily tell if a file had any macros and, if so, where they were located. IMO `#` would be the best choice for this (and that seems to be what was used in that issue), but it does conflict with ES private properties, which afaik didn't exist when that issue was created.
I think this would partially mitigate the issue described by Daniel Rosenwasser, as there would only be a few files that needed to be preprocessed, and they would be readily apparent by looking at the ast.
One concern I have with the issue is that it has the following code:
// validation.macros.ts
function interfaceToValidatorCreator(interface: ts.InterfaceDeclaration): ts.Statement {
// Would return something like "const [interface.name]Validator = new Validator({...});",
// using types from the interface
}
macro CreateValidator(interface: ts.InterfaceDeclaration) {
return [interface, interfaceToValidator(interface)];
}
IMO, macros need not be separate files, and could be incorporated into existing ts files like so:
macro CreateValidator(interface: ts.InterfaceDeclaration) {
  function interfaceToValidatorCreator(interface: ts.InterfaceDeclaration): ts.Statement {
    // Would return something like "const [interface.name]Validator = new Validator({...});",
    // using types from the interface
  }
  return [interface, interfaceToValidatorCreator(interface)];
}
@#CreateValidator
interface Person {
name: string;
age: number;
}
PersonValidator.validate(foo)
Also, how would go-to-definition work? Would it just highlight the macro node that created the definition?
I think it would definitely be worthwhile to post a proposal to that issue; I'd love to collaborate on such if you'd be interested. If so, I'd suggest opening another issue on this repo; gists afaict don't have a good way to collaborate, but as the owner of the repo I can edit comments, so that could be a good place to work on a proposal. If not, that's of course completely fine as well :)
I love the idea of enabling the addition of intrinsic utility types. And I agree with you: a `Call` utility would be quite nice. Too bad they've deemed it off-limits for the time being.
I agree with your suggestion that macros "should" be able to exist alongside their usage. Ideally one could even share code between macros and runtime.
There's one constraint that might be very useful for simplifying the work of the type-checker and for predictability: no updating a node such that it results in a new signature.
This would be very difficult to type-check... beyond the fact that the AST is not fully-narrowed, it would be extremely taxing to map the AST representation into the type system... Ryan would be cross with us ;) (or, hopefully proud, who knows).
How's this look to you?:
import ts from "typescript";
// used within both the runtime code and the macros
function prefix(s: string) {
return `pre-${s}`;
}
// solely used within the macro code
function createHelloStringLiteral(): ts.StringLiteral {
// ...
}
// ugly & mutative... but this is the gist:
macro myMacro<Node extends ts.VariableDeclaration>(node: NextStatement<Node>) {
  const init = node.declarations[0]?.init;
  if (init && ts.isTaggedTemplateExpression(init)) {
    init.quasi.expressions.push(prefix(createHelloStringLiteral()));
  }
  return node;
}
myMacro()
const hello = myTag`some${identifier}`;
const another = prefix(hello);
Which would expand into:
import ts from "typescript";
// used within both the runtime code and the macros
function prefix(s: string) {
return `pre-${s}`;
}
const hello = myTag`some${identifier}${"pre-Hello"}`;
const another = prefix(hello);
As for go-to-definition... I'm not sure why it would need to differ from the existing behavior. Where do you anticipate differences?
> Ideally one could even share code between macros and runtime.
IMO that would be an antipattern and add a bit of complexity; macros are in their own scope / runtime, and I think a proposal would only be accepted if there was a very clear line there.
Maybe in a future proposal a shared function or similar could explicitly be declared, that would only have access to other shared functions around it, but I think it's out of the scope of an initial proposal.
Another way I could see this being feasible is if macros had (readonly) access to their own AST, so that they could inject a function helper they defined, or similar.
> no updating a node such that it results in a new signature.
What do you mean by this? That the ASTs have to have the same structure?
> As for go-to-definition... I'm not sure why it would need to differ from the existing behavior. Where do you anticipate differences?
In the example in the issue, the macro made a new variable `PersonValidator`. What happens when you run go-to-definition on `PersonValidator`?
The author would never be able to hover over the `PersonValidator`, as it's not part of the source. This does bring up another point: how would the expanded code be represented in source maps?
EDIT: I see what you're saying. You're using the to-be-generated code... I think that'd be confusing.
I might actually prefer more reflection capabilities as opposed to a macros system. Especially relating to serializing and reifying closures. This is a cool repo. Seems like something you'd enjoy.
> The author would never be able to hover over the `PersonValidator`, as it's not part of the source.
@#CreateValidator
interface Person {
name: string;
age: number;
}
// Go-to-definition on below
PersonValidator.validate(foo)
> This does bring up another point: how would the expanded code be represented in source maps?
I think:
macro ast validatorAst(%interfaceName: ts.Identifier, %validatorName: ts.Identifier) {
const %validatorName = (arg: %interfaceName) => {
// Impl
}
}
macro CreateValidator(interface: ts.InterfaceDeclaration) {
function interfaceToValidatorCreator(interface: ts.InterfaceDeclaration): ts.Statement {
return validatorAst(/* ... */)
}
return [interface, interfaceToValidator(interface)];
}
> I might actually prefer more reflection capabilities as opposed to a macros system. Especially relating to serializing and reifying closures. This is a cool repo. Seems like something you'd enjoy.
That's really cool!
A quine I made in typescript types:
type Q<X extends string[]=['`','$',Q<['${X[0]}','${X[1]}','${X[2]}']>]>=`type Q<X extends string[]=['${X[0]}','${X[1]}',Q<['${X[1]}{X[0]}','${X[1]}{X[1]}','${X[1]}{X[2]}']>]>=${X[0]}${X[2]}${X[0]}`
Very, very beautiful sequence. What inspired this?
Side note: surprised there's no error regarding the circular default `X`.
Thanks! Nothing, really. Just whim. 😄
`X` doesn't have a circular default; the references to it are quoted with single quotes, not backticks.
Ahh, ya got me ;)
Gave ya a shoutout in this article that I just published.
That was a fun read! Sorry to ruin your theory, but I am not Elvis on a private island 😄. How did the `SafeTail` utility prevent 2589? Do you happen to have a repo/playground of the final code in your article that I could take a look at?
Also, I realized a way type-level parsing might be able to perf well (assuming it didn't 2589): build mode with project references will only build it when it changes. (Though, thinking about it now, it might not actually instantiate the type before it generates the d.ts.)
Reading through your article gave me an idea for possibly circumventing 2589 more legibly. I'll report back soon 😄.
(Edit: just sticking this here so I can find it)
Not sure what to say, so I'm just gonna link this and let it speak for itself. 😄
Not Elvis... dang ;)
The project references idea is interesting. Would every sub-parser be its own package? And how would this be more performant?
I'm not so sure I fully understand the link. Are you suggesting there's an approach that would make type-level GraphQL parsing feasible?
Also, I'm glad you enjoyed the read!
In my mind, the project references would be more performant because dependents only get built on change, but thinking about it now, the generated d.ts might leave the `ParsedAST` type as `Parse<Src>` and not actually instantiate the type.
I was trying to demonstrate a recursion-limit hack; given a tail-recursive type like the one below:
// type Final = "x".repeat(1e3)
type LotsOfX<T extends string = ""> = T extends Final ? T : LotsOfX<`x${T}`>;
You can transform it into:
type _LotsOfX<T extends string = ""> = T extends Final ? Result<T> : Defer<LotsOfX<`x${T}`>>;
type LotsOfX<T extends string = ""> = x1e9<_LotsOfX<T>>;
// "x".repeat(1e3)
type X = LotsOfX;
In that example, it went through 1000 iterations of `` T = `x${T}` ``, then 999,999,000 (1e9 - 1e3) no-op iterations, and then returned the final value of `T`.
In retrospect, this is probably a better example.
At that point, the challenge is to make the `Parse` type tail-recursive. I modified my generic `Parse` type from this comment, and it works (imvho) spectacularly:
namespace Patterns {
export type Leaf = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9";
export type Element = Leaf | Group;
export type Group = ["(", Element[], ")"]
}
// Parses without 2589
type X = Parse<Patterns.Element, "((1)((((0))))(1(0101(((1(((((1)))))()()()(0001234567890()0000)1)))01010((10)1)010)1))">
`Parse` essentially takes a nested union of all possible matches, and recursively narrows it down to match the `Src` string.
I'm going to be modifying it to support a mapping at the end so that it can produce a normal-looking ast.
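For intuition, here's the `Patterns.Element` grammar as a runtime recursive-descent recognizer (a value-level analogue I wrote; the type-level `Parse` works by union narrowing instead):

```typescript
// Runtime recognizer for Patterns.Element: a digit, or "(" Element* ")".
// Returns the index just past the matched element, or null on failure.
function parseElement(src: string, i: number): number | null {
  if (i < src.length && src[i] >= "0" && src[i] <= "9") return i + 1; // Leaf
  if (src[i] === "(") {
    let j = i + 1;
    while (true) {
      if (src[j] === ")") return j + 1; // Group closed
      const next = parseElement(src, j); // another nested Element
      if (next === null) return null; // neither ")" nor a valid Element
      j = next;
    }
  }
  return null;
}

// The whole string must be consumed by exactly one Element.
function matches(src: string): boolean {
  return parseElement(src, 0) === src.length;
}

console.log(matches("((1)(0)1)")); // true
console.log(matches("((1)"));      // false
```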
With a bit more nesting, we get the following.
Writing these type-level parsers is fun... but it's also such a limited process. That conditional assignability syntax would make a real difference here. Even if we had it, though, a plugin system for intrinsics might be preferable. While I love the problem of type-level parsing, it doesn't seem like we're quite there yet :/
On another note, I'm surprised no one else has chimed in on the documentation utility type discussion. Seems like it'd be an extraordinary addition to the language.
& how is the 3D modeling lib coming along?
> With a bit more nesting, we get the following.

😄
> Even if we had it though, a plugin system for intrinsics might be preferable.
Agreed.
> & how is the 3D modeling lib coming along?
It's going well. I'm currently procrastinating making the front-end, but the core model generation / conversion stuff is mostly done. I have yet to put it through a real stress test, however.
Update: the stress test did not go well. Will need further testing 😄
I think it might have been in an infinite loop, but maybe my algorithm requires 3.5+ million iterations 🤷
Haha sorry to hear. Hopefully you don't let it loose on an AWS lambda function with self-provisioning permission ;)
Looking forward to seeing / hearing about whatever it becomes
This inconsistency is annoying.
Very interesting! Likely the same issue where inference breaks upon unwrapping known types in conditionals. Probably worth an issue in TypeScript. Is there any reason that this behavior would be desirable?
`Call` utility type / HKTs :)

I think that's a separate issue; there it doesn't infer the constraint of the type (though if you ignore the error, it works). With the one I linked, it doesn't infer it properly in the end result either. I guess the inference of conditionals is separate from the inference of function arguments?
I've enjoyed our chats in TypeScript issues. I see you're also a "cuber" (cool simulator!). Gqx looks auspicious as well.
You don't list a name, Twitter handle, etc. anywhere on your GitHub profile or project manifests.
Perhaps this is an unusual request, but I'd love to know who is behind the screen & make sure to stay posted. If you'd prefer to stay anonymous, I completely understand & respect your wishes!
Kind regards