MichalLytek opened this issue 6 years ago
Something that may be worth considering during this refactor is multiple schemas in a single codebase. This is a requirement for my services: in our application there is a public GraphQL API, which clients (web, mobile) interact with, and an internal GraphQL API, which other microservices interact with. Placing these schemas in the same codebase allows easy code reuse.
If this is something you'd be open to, I'd be happy to work on it. I'm not familiar with this codebase, but there does not appear to be anything in the API that restricts it to only one schema.
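For illustration, a hedged sketch of the idea, assuming type-graphql's `buildSchema` and hypothetical resolver classes shared between both schemas:

```ts
import "reflect-metadata";
import { buildSchema } from "type-graphql";
// Hypothetical resolver classes living in the same codebase
import { PublicUserResolver } from "./public/user.resolver";
import { InternalUserResolver } from "./internal/user.resolver";

async function bootstrap() {
  // Public schema, consumed by web/mobile clients
  const publicSchema = await buildSchema({
    resolvers: [PublicUserResolver],
  });

  // Internal schema, consumed by other microservices; built from the
  // same codebase, so object types and services are reused
  const internalSchema = await buildSchema({
    resolvers: [PublicUserResolver, InternalUserResolver],
  });

  return { publicSchema, internalSchema };
}
```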
Of course. #110 is about separating schemas, with a temporary solution for your codebase problem.
One of the things I reacted to was that you define a typedef as a class in the current implementation. I know it's probably because interfaces get stripped by tsc at compile time, but perhaps it would be possible to have base interfaces like `Type`, `Enum`, `Scalar`, etc., and then create a TS transformer to convert interfaces that extend these base types into SDL format. That way, it should be possible to write your typedefs more naturally (consider this pseudocode):
```ts
import { ID } from 'type-graphql';

interface Avatar extends Type {
  <T>(size: T): T;
}

interface User extends Type {
  id: ID;
  username: string;
  quote?: string;
  avatar?: Avatar;
}
```
That way, classes can be dedicated to the implementation rather than the specification of the model. The typescript transformer can extract the interfaces and convert them to the actual SDL. And you might be able to drop the use of decorators, as they are still considered experimental.
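For illustration, the SDL such a transformer might emit for the `User` interface above is sketched here (nullability inferred from the optional `?` markers; this output format is an assumption, not an existing feature):

```ts
// Hypothetical transformer output for the User interface above
const userSdl = /* GraphQL */ `
  type User {
    id: ID!
    username: String!
    quote: String
    avatar: Avatar
  }
`;
```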
@Evanion
A custom TS transformer/plugin is of course planned, but in the distant future. They are much less supported than "experimental" decorators. With a TS transformer I was thinking about extending the reflection capabilities to get rid of all the `type => [Foo]` annotations. Decorators would then be very concise and clear, generics would probably be just as simple as with normal classes, etc.
TypeGraphQL uses classes to describe the types because that allows it, for example, to easily integrate with class-validator and TypeORM, which also returns instances of entity classes. It's much more pleasant to use `instanceof` checks than to recreate the object and provide `__typename` or some other solution for resolving interfaces and unions. Also, you can define getters and inline methods with classes but not with interfaces. If you like interfaces, just use pure abstract classes that are 1:1 with interfaces.
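As a minimal sketch of the `instanceof` approach described above (the `Photo`/`Article` classes are hypothetical), compare class-based type resolution with the `__typename` alternative:

```ts
class Photo {
  constructor(public url: string) {}
}
class Article {
  constructor(public title: string) {}
}

// With classes, a union/interface member is identified by an instance check
function resolveType(value: Photo | Article): string {
  if (value instanceof Photo) return "Photo";
  if (value instanceof Article) return "Article";
  throw new Error("Cannot resolve union member type");
}

// With plain objects/interfaces, the data itself would have to carry
// a discriminator instead, e.g. { __typename: "Photo", url: "..." }
```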
The interface syntax looks nice, but without decorators it would be hard to convey which method is a query and which is a mutation or subscription. Something like this:
```ts
interface UserResolver extends Query {
  users: User[];
}

interface UserResolver extends Mutation {
  addUser(user: UserInput): User | null;
}
```
doesn't look nice. Also, as you can see in the TypeORM example, it would be more complicated to exclude a property from being emitted in the schema than it is now by simply not placing a decorator.
So to sum up: TypeGraphQL is not a tool for creating SDL from TS interfaces (which could be done as a CLI working in the opposite direction to GraphQL Code Generator) but a whole framework for creating GraphQL APIs in Node.js. Its reflection and schema-generating capabilities are only the main part of it; it also focuses on interoperability with other tools, libraries and frameworks.
Would this also hook into the pipeline to allow dynamic creation of additional schema types? For instance, I'd like to generate inputs dynamically from the model attributes.
Not directly by this task, but yes: the new pipeline will be designed with a hook/plugin system in mind, with #134 also taken into account 😉
@19majkel94 Any plans for a future release?
@Fyzu Not enough support → not enough time → no ETA 😞 https://opencollective.com/typegraphql
That's a strange situation, though. There are people willing to give their time to help develop a new solution, but you don't want to accept it (https://github.com/19majkel94/type-graphql/pull/304#issuecomment-480621261).
So you're willing to accept donations to have the resources to move forward with this internal rewrite. Although fellow developers sometimes prefer to help by spending some time developing and testing.
I hope everything is going to be sorted out soon!
> but you don't want to accept it
No one will accept a PR with a complete rewrite done by an external contributor. How could I then develop and extend a core whose code I don't know or fully understand?
If you can't wait, you can continue your work and use your fork until the proper fix arrives.
I am preparing a complete rewrite, with monorepo support, with plugins in mind, with split packages and an architecture that can easily solve current issues/blockers and possible future issues. It's not a task that you can do in two weekends; the description here is just an idea, a draft that is probably outdated.
> Although fellow developers sometimes prefer to help by spending some time developing and testing.
I've accepted plenty of PRs for many features and bug fixes: https://github.com/19majkel94/type-graphql/pulls?q=is%3Apr+sort%3Aupdated-desc+is%3Aclosed+no%3Aassignee+review%3Aapproved So you can help with smaller things, rather than by rewriting the whole codebase.
So keep calm and stay tuned! :wink:
Guys! If someone wants to give me some hints on the monorepo setup, here is my vNext branch: https://github.com/19majkel94/type-graphql/tree/vNext Any tips are welcome :wink:
@19majkel94 : what is your issue with the monorepo?
I would also love to have continuous releases, so that every commit to the master/next branch results in an automatic pre-release (maybe with an `@next` or `@beta` scope).
And I also have to configure a full package build script, maybe with pika-pkg, to not only build the TS files but also transform package.json, copy the readme and license, etc.
Great idea!
To help, I'm already contributing monthly.
@MichalLytek I was going to try to help with adding support for nested inputs (I feel that inputs are almost pointless if they don't convert to what a user would expect them to be). I found out the hard way that validation doesn't run on nested inputs, which I think all users of this library would assume works as well. I was missing functional tests for that, since I only unit test to check whether a class has a validator function associated with it.
Anyway, while going through this and the decorators PR I submitted, one thing that I feel could benefit this library is converting metadata to classes and having "MetadataDefinitions" as interfaces for decorators. This is sort of what the Doctrine PHP folks do: in their ORM, a lot of the logic is given to the main metadata class, and the builder builds from definitions.
An example of what I wanted to add, applied to params:
```ts
interface CommonArgDefinition extends BasicParamDefinition {
  getType: TypeValueThunk;
  typeOptions: TypeOptions;
  validate: boolean | ValidatorOptions | undefined;
}

interface ArgParamDefinition extends CommonArgDefinition {
  kind: "arg";
  name: string;
  description: string | undefined;
}

abstract class AbstractParamMetadata<T1 extends CommonArgDefinition, T2 = any> {
  constructor(protected def: T1) {}
  abstract getParam(data: ResolverData<any>, opts?: T2): any;
}

class ArgParamMetadata extends AbstractParamMetadata<ArgParamDefinition, { globalValidate: boolean }> {
  getParam(data: ResolverData<any>, opts?: { globalValidate: boolean }) {
    // get param specific to arg
  }
}
```
I'm kind of spitballing here, but I feel like the code would be easier to navigate and understand.
I'm not sure how much progress you've made on the rewrite.
> convert metadata to classes
I am trying to use as much of an FP approach as possible, rather than rich, unmaintainable classes and OOP 😕
The solution for nested inputs is simple: have access to the metadata in resolver/middleware scope and then use the type metadata to recursively transform the nested objects into class instances. Then, depending on the plugin system (validation libraries other than class-validator), call the validate function recursively or on the root arg object.
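A minimal sketch of that recursive transformation, assuming a hypothetical `getNestedInputFields` metadata lookup; `validateOrReject` is a real class-validator export:

```ts
import { validateOrReject } from "class-validator";

// Hypothetical metadata lookup: [fieldName, nested input class] pairs
declare function getNestedInputFields(
  target: Function,
): Array<[string, new () => object]>;

async function toInputInstance<T extends object>(
  cls: new () => T,
  plain: Record<string, unknown>,
): Promise<T> {
  const instance = Object.assign(new cls(), plain);
  // Recursively replace nested plain objects with class instances
  for (const [field, nestedCls] of getNestedInputFields(cls)) {
    const value = (instance as any)[field];
    if (value != null) {
      (instance as any)[field] = await toInputInstance(nestedCls, value);
    }
  }
  // The validate function is called once per nesting level,
  // so nested inputs are validated too
  await validateOrReject(instance);
  return instance;
}
```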
Classes are unmaintainable when they start doing too much. FP is unmaintainable when you have a single function trying to do too much. But to each their own :P
I'll play with some ways of doing what you've said above in the current code base. The lack of nested input support is kind of blocking at the moment, which is super frustrating. And previous issues regarding nested inputs have "solutions" that are insanely ugly and definitely not maintainable.
You can try a fork that uses class-transformer to create instances and calls the `Type(() => Foo)` decorator inside `@Field`. I had to abandon class-transformer because of its problems with Promises: the example with TypeORM lazy loading failed dramatically.
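For reference, a minimal sketch of how the class-transformer approach handles nested inputs (`@Type` and `plainToClass` are real class-transformer APIs; the input classes here are made up):

```ts
import "reflect-metadata";
import { Type, plainToClass } from "class-transformer";

class AddressInput {
  street!: string;
}

class UserInput {
  name!: string;

  // Tells class-transformer which class to instantiate for the nested field
  @Type(() => AddressInput)
  address!: AddressInput;
}

const input = plainToClass(UserInput, {
  name: "Alice",
  address: { street: "Main St" },
});
console.log(input.address instanceof AddressInput); // true
```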
The one thing I'm trying to think about is that if the recursion and type checking are done in the resolver, you have to check whether a field is an input type on every resolve instead of coming up with a more performant solution.
@MichalLytek Yeah, I don't use class-transformer either. I do it my own way here: https://github.com/j/type-mongodb
I've had issues with class-transformer as well. Constructing nested input types shouldn't be hard. I just don't want to iterate over input types on every resolve to check whether they need to be constructed differently.
> don't want to iterate over input types on every resolve
Oh, that's what you mean... so how can we generate, at schema build time, a "transform" function that takes `args` as a parameter and returns the transformed and deserialized class instance?
All I can think of is to prefetch the type info and build something like a conversion tree that we can then traverse at runtime, mapping the args object data 😕
> All I can think of is to prefetch the type info and build something like a conversion tree that we can then traverse at runtime, mapping the args object data
Yup, exactly.
I was thinking `ArgParamMetadata` could either contain a tree or a transform function that is pre-generated when the schema is built.
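A hedged sketch of that pre-generation idea: walk the metadata once at schema build time and compile a closure per input type, so resolvers never re-inspect metadata (all names here are hypothetical):

```ts
type Transform = (plain: any) => any;

// Called once per input type while the schema is being built
function compileTransform(
  cls: new () => object,
  nestedFields: Array<[string, Transform]>,
): Transform {
  // All metadata walking happens here, at build time
  return (plain) => {
    const instance: any = Object.assign(new cls(), plain);
    for (const [field, transformField] of nestedFields) {
      if (instance[field] != null) {
        instance[field] = transformField(instance[field]);
      }
    }
    return instance;
  };
}

// At runtime, the resolver just calls the precompiled function:
//   const user = userInputTransform(args.user);
```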
In the library I posted above, I did some tests on using maps vs array iteration for finding metadata and found that maps were pretty darn fast. Is there a reason you're using arrays for metadata lookups as opposed to having everything in a map? (off topic) It'd make it so that the performance of just adding it to the resolver wouldn't be too bad. Pre-generated would be better, though.
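A small sketch of the lookup difference being discussed, keyed by the target class (the metadata shape is hypothetical):

```ts
class SomeInput {}

interface FieldMetadata {
  target: Function;
  name: string;
}

const fieldsArray: FieldMetadata[] = [];
const fieldsMap = new Map<Function, FieldMetadata[]>();

// Array lookup scans every entry on each access: O(n)
const viaArray = fieldsArray.filter((m) => m.target === SomeInput);

// Map lookup is a single hashed access: O(1)
const viaMap = fieldsMap.get(SomeInput) ?? [];
```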
> Is there a reason you're using arrays for metadata lookups as opposed to having everything in a map? (off topic)
You can find some comments in the code base about that thing 😉 I have plenty of legacy code that was born in the PoC phase and "works" to this day 😄
When I get to the transformation point, I think I will just make it work first and then refactor to a more performant solution after some profiling.
```ts
// TODO: refactor to map for quicker access
```
Haha, okay, cha-chingggg.
I wrote a simple test and got nested inputs to work with arrays & non-arrays, but then I made the inheritance tests fail. So I'll need to dig a little bit more. I'm definitely using the "slow" method.
@MichalLytek https://github.com/MichalLytek/type-graphql/pull/452 does the job. I can add more tests on caching, etc., too. I just got it to work without breaking anything.
@MichalLytek,
In addition to #134, would resolving this issue also resolve … ?
@glen-84 Yes, `typegraphql-next` is being written from scratch with support for that in mind, and the built-in decorators rely on it, so I'm actually eating my own dog food 😅
@MichalLytek Do you have a public roadmap or more information about this rewrite?
@EdouardBougon No ETA yet. I don't know if it will be open core, sponsorware, or under a corpo-restricted licence, as I suffer from a lack of time for development 😥
Right now, the schema generation pipeline works this way:

1. Metadata from decorators is collected in `MetadataStorage`.
2. The full metadata is "built" (inside `MetadataStorage`).
3. The schema is generated (with `graphql-js`) using the "built" metadata from `MetadataStorage`.

It wasn't designed for such complicated features as inheritance, etc., so it has started to become a bottleneck. So the schema generation pipeline has to be rewritten into a new design:

1. `MetadataStorage` stores all the small parts of metadata from decorators (like now).
2. `MetadataBuilder` takes `resolvers` and `types` to build the extended, full metadata from `MetadataStorage`, working like a "pure" function.
3. `SchemaGenerator` accepts the built metadata object and generates a schema corresponding to the metadata (without the need to check the metadata storage).

This would simplify the pipeline a lot, as well as allowing new features to be implemented and #133 and #110 to be fixed.
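A hedged sketch of that three-stage design; the class names come from the list above, but the signatures are assumptions:

```ts
import { GraphQLSchema } from "graphql";

interface RawMetadata {
  /* small pieces of metadata pushed by decorators */
}
interface BuiltMetadata {
  /* extended, fully resolved metadata */
}

// 1. Decorators only push small pieces of metadata into the storage
class MetadataStorage {
  readonly raw: RawMetadata[] = [];
  collect(piece: RawMetadata): void {
    this.raw.push(piece);
  }
}

// 2. A "pure" build step: the same inputs always produce the same output
class MetadataBuilder {
  build(
    storage: MetadataStorage,
    resolvers: Function[],
    types: Function[],
  ): BuiltMetadata {
    // merge inheritance chains, resolve type thunks, etc.
    return {} as BuiltMetadata;
  }
}

// 3. The generator sees only the built metadata, never the storage
class SchemaGenerator {
  generate(metadata: BuiltMetadata): GraphQLSchema {
    // map the built metadata onto graphql-js type objects
    return new GraphQLSchema({ query: null });
  }
}
```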