Hi,
you can do it like this:
```python
class JwtRefreshQuery(object):
    token = graphene.String(token=graphene.String(required=True))

    def resolve_token(self, info, token):
        return refresh_token(token)


class Query(JwtRefreshQuery, SomeOtherQuery, graphene.ObjectType):
    pass
```
I agree; intuitively I think it would make more sense if the API for mutations and queries were more similar. In any case, yes, the docs/examples are a bit lacking in showing how you can merge multiple query classes.
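To make the merging concrete, here is a minimal sketch of wiring such mixins into a schema. The PingQuery mixin and its field are invented for illustration, and refresh_token is whatever JWT helper you already use:

```python
import graphene


class PingQuery(object):
    # A second plain mixin, only to show how fields from several
    # classes end up merged onto the single root Query type.
    ping = graphene.String()

    def resolve_ping(self, info):
        return 'pong'


class Query(JwtRefreshQuery, PingQuery, graphene.ObjectType):
    pass


schema = graphene.Schema(query=Query)
# Both `token(token: "...")` and `ping` are now exposed on the root query type.
```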
So I've been thinking about this a lot, and I think that being able to define a class to resolve an ObjectType would be quite useful. I've got some working code based on the Mutation class, @un33k, if you want to include it in your application and see if it works:
```python
from collections import OrderedDict

from graphene.types.field import Field
from graphene.types.objecttype import ObjectType, ObjectTypeOptions
from graphene.types.utils import yank_fields_from_attrs
from graphene.utils.get_unbound_function import get_unbound_function
from graphene.utils.props import props


class FieldResolverOptions(ObjectTypeOptions):
    arguments = None  # type: Dict[str, Argument]
    output = None  # type: Type[ObjectType]
    resolver = None  # type: Callable


class FieldResolver(ObjectType):
    # Adapted from graphene's Mutation type: collects the Arguments class,
    # the output fields and the resolve method from the class body, and
    # exposes them as a regular Field via FieldResolver.Field().
    @classmethod
    def __init_subclass_with_meta__(cls, resolver=None, output=None, arguments=None,
                                    _meta=None, **options):
        if not _meta:
            _meta = FieldResolverOptions(cls)

        output = output or getattr(cls, 'Output', None)

        fields = {}
        if not output:
            # If output is defined, we don't need to get the fields
            fields = OrderedDict()
            for base in reversed(cls.__mro__):
                fields.update(
                    yank_fields_from_attrs(base.__dict__, _as=Field)
                )
            output = cls

        if not arguments:
            input_class = getattr(cls, 'Arguments', None)
            if input_class:
                arguments = props(input_class)
            else:
                arguments = {}

        if not resolver:
            _resolver = getattr(cls, 'resolve', None)
            assert _resolver, 'All field resolvers must define a resolve method'
            resolver = get_unbound_function(_resolver)

        if _meta.fields:
            _meta.fields.update(fields)
        else:
            _meta.fields = fields

        _meta.output = output
        _meta.resolver = resolver
        _meta.arguments = arguments

        super(FieldResolver, cls).__init_subclass_with_meta__(
            _meta=_meta, **options)

    @classmethod
    def Field(cls, name=None, description=None, deprecation_reason=None, required=False):
        return Field(
            cls._meta.output,
            args=cls._meta.arguments,
            resolver=cls._meta.resolver,
            name=name,
            description=description,
            deprecation_reason=deprecation_reason,
            required=required,
        )
```
You use it in pretty much the way you want to in your example:
```python
class JwtRefreshQuery(FieldResolver):
    class Arguments:
        token = graphene.String(required=True)

    token = graphene.String()

    def resolve(self, info, **args):
        token = refresh_token(args['token'])
        return JwtRefreshQuery(token=token)


class Query(graphene.ObjectType):
    jwt_refresh = JwtRefreshQuery.Field()
```
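A rough sketch of how the resulting field could be exercised, assuming graphene's default auto camel-casing of field names (the token value is just a placeholder):

```python
import graphene

schema = graphene.Schema(query=Query)

# jwt_refresh is exposed as jwtRefresh by graphene's auto camel-casing.
result = schema.execute('''
    {
        jwtRefresh(token: "<old token>") {
            token
        }
    }
''')
```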
I'm not sure what to call it at the moment or where it might fit in the project. It might even be that the ObjectType class should be extended to allow this kind of use. Any thoughts, @syrusakbary?
nice @jkimbo I like it!
@heyhugo I have a fractal-style schema with one module per entity and mutation / type / model / query / subscription files within each.
I also keep my resolvers separated between query/mutation resolvers and field resolvers, just to keep the logic semi-organized, and all my fields set resolver= in the types.
What's magic is that I auto-generate my schema with some old-fashioned Python, inheriting from common classes per root type. I forget whom I sourced this from, but it's worked really well for me for the past year or so.
It's a workaround, and I'd love to see the query / mutation homogeneity too :) https://gist.github.com/ProjectCheshire/277c5ff1468460a05c2a44556e270eff
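The gist itself isn't reproduced here, but a minimal sketch of the "inherit from common classes per root type" idea could look like the following. The QueryMixin/MutationMixin names and the __subclasses__() scan are assumptions for illustration, not the gist's actual code:

```python
import graphene


class QueryMixin:
    """Each entity module defines a query mixin inheriting from this."""


class MutationMixin:
    """Each entity module defines a mutation mixin inheriting from this."""


def build_schema():
    # Importing the entity packages first registers their mixins; the root
    # types are then assembled dynamically from whatever subclasses exist.
    query_bases = tuple(QueryMixin.__subclasses__()) + (graphene.ObjectType,)
    mutation_bases = tuple(MutationMixin.__subclasses__()) + (graphene.ObjectType,)

    Query = type('Query', query_bases, {})
    Mutation = type('Mutation', mutation_bases, {})
    return graphene.Schema(query=Query, mutation=Mutation)
```

The upside of this kind of assembly is that adding a new entity module doesn't require touching the root schema file at all.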
@ProjectCheshire How do you handle circular imports? I can use lambdas if I put everything in one file, but I'd like to use yours and ahopkins' approach, which otherwise works great.
@cmmartti similar to how graphene does it, with lazy imports: https://pypi.org/project/lazy-import/
There's probably a GitHub issue around here that mentions it specifically :)
For functions I often use a lazy callable if there is some shared logic I need* (such as inflating models via neomodel). Since I'm using a graph database, I get a lot of circular models.
*I'm not a guru, so if you delve deep and find a way to pass the type hints (e.g. your typical IDE sugar) along when calling a lazy callable, I'm all ears. (GeographyFields is just my mixin for WKT and GeoJSON fields and some common functions I use with the spatial plugins on Neo4j.)
```python
from graphene import String, Int, ObjectType, Float, Field, lazy_import

from api.models.global_resolvers import country_
from ..fields import GeographyFields


class CityType(ObjectType, GeographyFields):
    """A City represents authoritative data on a location as defined in Geonames.org"""
    name = String()
    country = Field(lazy_import('api.models.country.type.CountryType'), resolver=country_)
    timezone = String()
    latitude = Float()
    longitude = Float()
    population = Int()
    featureCode = String()
    featureClass = String()
    geonameid = Int()
```
@ProjectCheshire Oh wow, yet another thing that should be in the docs but isn't. That solves my problem perfectly :)
Found an issue that mentions it. Thought I'd been thorough with my searches, but apparently not.
I put together a small working example of this "fractal-style schema" approach here.
Is it possible to have a pattern for queries similar to mutations? This would avoid long classes with multiple resolvers.
It would be great if this were possible.