Closed KingDarBoja closed 2 years ago
This should happen together with the switch from GraphQL-Core version 2 to version 3.
@ekampf has already been working on that in the gql-next repository. That work should be merged back here and released as a new major version of gql. But as long as we support the GraphQL-Core legacy version 2, which explicitly supports Python 2, we should also support legacy Python 2.
My suggestion is to fast forward the version number of the current gql to version 2, and then release the version based on GraphQL-Core 3 as version 3. This versioning scheme has also been used with other GraphQL-Core based libraries such as Graphene.
Sounds good. The only issue (which I have no idea about) is the merge step: as far as I can see there is not much shared code between the two versions, so perhaps the upcoming gql version (v3) will be made from scratch, right?
And regarding the fast forward to v2 following the semantic versioning process, is there any scheduled date?
Regarding the merge step, give me some time to look into this and decide how to best merge the two since now gql has moved forward. With "from scratch" you still mean based on the code and API of v2, or do you want to create a completely new API?
Regarding the timing for v2, we should first triage the open issues and PRs and check if there are bugs that should be fixed or features that should be added to v2, or be postponed or rejected for good reasons. It would be great if you could help with that.
Regarding the "from scratch", I was actually asking about how it was done, since on a quick look it doesn't seem to have the same repository organization (files, directories, functions) between gql-next and gql.
I am open to keep contributing to the already existing open issues, some of which have been addressed in PRs, but I do feel those could be done in a better way.
Once again, not the most skilled on the GraphQL spec but willing to improve and keep contributing 👍
Just talked with @ekampf and had a look at his gql-next repository. Actually, he created a pretty different mechanism for loading GraphQL documents, that includes auto-generation of Python files, it's not just a simple port of gql to core-next as I thought earlier. We need to decide whether we should port some of that work back here or whether we should keep the mechanism as it is.
I also wonder whether it would make sense to support importing GraphQL documents directly from Python by hacking the import machinery.
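Hacking the import machinery, as suggested above, could look roughly like this. This is only a sketch, not gql's actual code, and all names here are hypothetical: a meta path finder that resolves `import hero_query` to a `hero_query.graphql` file found on `sys.path` and exposes its text on the module.

```python
import importlib.abc
import importlib.util
import os
import sys

# Sketch only: an import hook that lets `import some_query` pick up a
# `some_query.graphql` file from sys.path and expose its text as
# `some_query.document`. Hypothetical names, not gql's API.
class GraphQLLoader(importlib.abc.Loader):
    def __init__(self, filename):
        self.filename = filename

    def create_module(self, spec):
        return None  # use the default module creation

    def exec_module(self, module):
        with open(self.filename) as f:
            module.document = f.read()

class GraphQLFinder(importlib.abc.MetaPathFinder):
    def find_spec(self, fullname, path, target=None):
        name = fullname.rpartition(".")[2] + ".graphql"
        for entry in path if path is not None else sys.path:
            candidate = os.path.join(entry or ".", name)
            if os.path.isfile(candidate):
                return importlib.util.spec_from_loader(
                    fullname, GraphQLLoader(candidate)
                )
        return None  # let other finders handle regular modules

sys.meta_path.append(GraphQLFinder())
```

Since the finder is appended to `sys.meta_path`, regular Python modules are still found first; only names that resolve to a `.graphql` file hit the custom loader.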
I also think before releasing a new major/final version, the obscure dsl language should be documented. It is currently unclear to me where it is coming from, whether it mimics an existing thing or was an invention of Syrus. Does anybody know?
I also think before releasing a new major/final version, the obscure dsl language should be documented. It is currently unclear to me where it is coming from, whether it mimics an existing thing or was an invention of Syrus. Does anybody know?
Not an invention but more like an alternative to simple JSON queries. You can check these blog posts to see where it came from (of course, more can be found on Google):
In fact, it should be spelled like "SDL" instead of "DSL" 🤔
SDL has a well defined meaning as the "schema definition language" of GraphQL. But DSL means "domain specific language" and could be anything. I think it's a bit misleading, because "domain" is probably meant to be "GraphQL" in this case. But it usually means application domain, while GraphQL is general purpose and the "DSL" here is also intended to be general purpose. just an alternative way to express GraphQL documents in Python instead of JSON. Similar to the "SQL Expression language" in SQLAlchemy which allows writing SQL queries in Python.
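To make the SQLAlchemy analogy concrete, here is a tiny hypothetical illustration (this is not gql's actual dsl module) of expressing a GraphQL document through Python objects instead of a raw string:

```python
# Hypothetical illustration only, not gql's dsl API: composing a GraphQL
# query from Python objects, in the spirit of SQLAlchemy's expression language.
class Field:
    def __init__(self, name, *subfields):
        self.name = name
        self.subfields = list(subfields)

    def render(self, indent=0):
        pad = "  " * indent
        if not self.subfields:
            return pad + self.name
        inner = "\n".join(f.render(indent + 1) for f in self.subfields)
        return f"{pad}{self.name} {{\n{inner}\n{pad}}}"

def query(*fields):
    inner = "\n".join(f.render(1) for f in fields)
    return f"query {{\n{inner}\n}}"

print(query(Field("hero", Field("name"))))
# query {
#   hero {
#     name
#   }
# }
```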
Makes sense now, as the DSL search yields few results compared to SDL.
Regarding the merge with the gql-next repo, I saw the readme weeks ago and had the same feeling, but didn't expect it to be an entirely different thing. The best action could be moving (cherry-picking maybe?) the gql-next stuff into this repo, but first updating all of gql to fit Python 3.6+.
Hi, do you have any comments about the approach I used in pull request #70? I started to add async methods to the existing Client class, but it appeared that most of my code had no place there, so I put it all in an AsyncClient class (and same for AsyncTransport). In addition, fetch_schema_from_transport=True cannot work for an async transport because we cannot use await in __init__, so I added a fetch_schema method.
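The constraint described above (a synchronous `__init__` cannot `await` an async transport, so schema fetching must live in its own coroutine) can be sketched like this. All names here are hypothetical, simplified for illustration:

```python
import asyncio

# Sketch of the __init__ constraint: __init__ is synchronous, so any
# awaiting (like schema introspection) must happen in a coroutine method.
# Hypothetical names, not gql's actual classes.
class AsyncClient:
    def __init__(self, transport):
        self.transport = transport
        self.schema = None  # cannot be fetched here: __init__ cannot await

    async def fetch_schema(self):
        # Only a coroutine can await the async transport call.
        self.schema = await self.transport.introspect()

class FakeTransport:
    async def introspect(self):
        return {"queryType": "Query"}

async def main():
    client = AsyncClient(FakeTransport())
    await client.fetch_schema()
    return client.schema

print(asyncio.run(main()))  # {'queryType': 'Query'}
```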
@leszekhanusz sorry for the long delay responding to your great contribution. The problem is that I'm currently maintaining gql only as a deputy, but actually it's not really my priority and I don't have the time and energy to do this in an appropriate way. I fear this project will only move on if somebody else steps up and acts as a maintainer and protagonist for gql.
As far as I see, the most important is to address all existing PRs and issues, push out a legacy version v2 that supports Python 2 and GraphQL-core v2, call it a day, and then quickly move on to a modern version v3 that supports Python 3.6+ and GraphQL-core v3 only. Otherwise maintenance and adding async heavy features like #70 becomes a real burden, and we keep getting new PRs that target the legacy version. We should probably still get #70 into v2 first, but then we need a feature freeze for v2.
Also, regarding the merge with gql-next, I now think it is probably the best to postpone these ideas, since the two libraries are too different in their approaches, and this makes the maintenance task even more complicated.
Again, if anybody wants to help out managing the v2 and v3 release of gql, please come forward!
@cito
The library is now clearly in the middle of a difficult transition.
We have v2 with the sync client and only http transport.
In my opinion it would make more sense to put #70 only into a v3 version.
Then we need to decide for v3 if we want to keep a sync client (and/or sync transports) because it would be difficult to manage all in parallel.
What I would do:
What do you think about this plan? If it is OK for you, I am willing to help with this v3 version and keep it maintained afterwards.
Hi @leszekhanusz, thanks for helping out!
Yes, this transition is a bit difficult. We should triage the existing issues and PRs first, putting a label v2 or v3 on all of them (v2 means it will be included in v2 but also ported to v3). In case of doubt, we should label as v3 only, including #70, as you suggested, so we can move forward more quickly. Next we should release a v2.0 with all the features labelled as v2 and a v3.0 with the same features, ported to GraphQL-Core v3 and Python 3.6+. Then we can add #70 and the other features labelled as v3 and release that as the final 3.0.
Regarding the async transports, do you intend to support websockets only? I think that would be too much. In some cases, websockets might not work properly with firewalls, proxies or load balancers, and they are only really needed for subscriptions. So I think we should continue to support both. Requests makes the sync transport not too difficult to maintain.
Hi @Cito, happy to help!
Regarding the async transports, do you intend to support websockets only? I think that would be too much. In some cases, websockets might not work properly with firewalls, proxies or load balancers, and they are only really needed for subscriptions. So I think we should continue to support both. Requests makes the sync transport not too difficult to maintain.
Like I said in my previous message, I think we should support both as well. We need to support the http transport with async (not only websockets). But websockets can be enough for a lot of people. For any real production application, you would use websockets over SSL these days, which will go through most firewalls, proxies and load balancers.
Regarding the transition, why don't you create a v3 git branch right now (without any releases, of course)? This would allow you to already accept PRs for v3 into that branch.
I would like to work next on this async http transport. Should I add it to #70 ?
Actually I think we should use master for v3 already, and create a branch for v2 instead. Yes, you can simply add the http transport to #70, and we can then merge everything to master. I will have a bit more time towards the end of the week so that I can give feedback and we can move this forward.
I am happy with @leszekhanusz's proposal of becoming a maintainer of this library, as he seems to have a better understanding of advanced features like the websocket thing.
I can lend a hand if needed while taking care of graphql-server-core stuff 💯
Sorry, could not find time this week, but it's still on my to do list. As a remedy, I have published a quick 0.5.0 release.
@Cito no problem.
I have now implemented the http async transport with aiohttp in #70
Also, now with Python 3.6, `from gql import Client` will import the AsyncClient, which is fully compatible with the previous Client (except the retry, which has been removed now that it is in the requests transport).
Using this client it is now possible to execute queries synchronously:

```python
result = client.execute(query)
```

or asynchronously:

```python
async with Client(...) as session:
    await session.execute(query)
```
Now, because the Client is different depending on the Python version, the coverage got lower even though it is 99% in reality. I tried to combine the coverage of Python 2.7 and Python 3.8 with tox. It works on my machine but not with coveralls, so I need some help with that.
@leszekhanusz Do you want to get this into the v2 version as well? I.e. should I merge this first to master and then branch to v3, or the other way around? If the latter, then you don't need to care about these coverage issues since we will drop Py 2 support in v3 anyway.
You're right, it is probably not necessary to bother with it. I will then completely remove Python 2 support in #70 in a new commit in the following days.
To move this forward, I have now created a v2.x branch for the releases with legacy support (Python 2 and graphql-core v2). The master branch will now be modernized to use Python 3.6+ only and support graphql-core v3. I will work on this modernization today and bring the master branch in shape again.
As you may have also noticed, I have triaged the open issues and PRs, closed some that were too outdated and added labels.
I have now merged the two larger PRs #70 and #83 to the master branch, made everything work with graphql-core v3 instead of v2, removed the cruft left behind from legacy support, merged the two test directories back together, and cleaned up the code style a little bit (see the latest commit messages). Please pull these latest changes before you continue to work, since so much has changed.
Thanks again @leszekhanusz for your great contributions!
One thing I find a bit annoying is the spamming of the console with output messages from the server when running the tests (despite the `-s` option to pytest). We should add an option to run the tests more quietly, because it gets hard to see the real error messages between all the clutter.
I have now merged the two larger PRs #70 and #83 to the master branch, made everything work with graphql-core v3 instead of v2, removed the cruft left behind from legacy support, merged the two test directories back together, and cleaned up the code style a little bit (see the latest commit messages). Please pull these latest changes before you continue to work, since so much has changed.
That's great!
One thing I find a bit annoying is the spamming of the console with output messages from the server when running the tests (despite the `-s` option to pytest). We should add an option to run the tests more quietly, because it gets hard to see the real error messages between all the clutter.
Yes, sorry about that. I added the `-s` option when tox was OK on my machine but not on travis, and I wanted to have more logs to understand why. It was not working because the pypy3 version on travis was older.
Anyway, you can remove the `-s` option in tox.ini if you want.
Note: If you just want to run tox without all the clutter on your own computer, you can run:

```shell
tox -- tests
```
@Cito
Before the next version I think that the LocalSchemaTransport could be modified to be an AsyncTransport, so that we could run subscriptions (tests/starwars/test_subscription.py) the same way as the websockets transport.
Regarding the `-s` option: It was late and I thought `-s` would enable capturing and silence all output, and was wondering why we still see output. But the opposite is the case: `-s` disables capturing. So all is well. But we should add some docs regarding testing, and mention things like the `-s` and `--run-online` options.
Regarding the LocalSchemaTransport: If we make LocalSchemaTransport async, then we should probably add a LocalSchemaSyncTransport variant for schemas with only sync resolvers, similar to `graphql` and `graphql_sync` in GraphQL-core.
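The sync/async split mentioned above can be illustrated with a stdlib-only sketch (hypothetical names, mirroring the `graphql` vs `graphql_sync` distinction in GraphQL-core): an async execution path can run both kinds of resolvers, while a sync path must reject coroutine results.

```python
import asyncio
import inspect

# Sketch only: why a sync execution variant is needed for sync-only schemas.
# An async executor can await coroutine resolvers; a sync one cannot.
async def execute_async(resolver):
    result = resolver()
    if inspect.isawaitable(result):
        result = await result  # async resolvers are awaited here
    return result

def execute_sync(resolver):
    result = resolver()
    if inspect.isawaitable(result):
        raise RuntimeError("sync execution cannot await an async resolver")
    return result

def sync_hello():
    return "world"

async def async_hello():
    return "world"

print(execute_sync(sync_hello))                 # world
print(asyncio.run(execute_async(async_hello)))  # world
```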
Regarding the LocalSchemaTransport: If we make LocalSchemaTransport async, then we should probably add a LocalSchemaSyncTransport variant for schemas with only sync resolvers, similar to `graphql` and `graphql_sync` in GraphQL-core.
We could have only an async version (see PR #84). The only caveat is that we cannot use sync context managers (`with Client(schema=schema)`).
We can use:

```python
client = Client(schema=schema)
result = client.execute(...)
for result in client.subscribe(...):
    print(result)
```

or:

```python
async with Client(schema=schema) as session:
    result = session.execute(...)
    async for result in session.subscribe(...):
        print(result)
```
Ok, that might be a solution, since the transport is only used via the Client, which has a synchronous execute method with its own event loop. I'll merge #84 then.
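The pattern referred to here, a synchronous `execute` method that drives an async transport through its own event loop, can be sketched as follows (hypothetical names, simplified from what the actual client would do):

```python
import asyncio

# Sketch only: a synchronous execute() that runs an async transport to
# completion on a private event loop. Hypothetical names.
class Client:
    def __init__(self, transport):
        self.transport = transport

    def execute(self, query):
        # Create a private loop, run the coroutine, then clean up.
        loop = asyncio.new_event_loop()
        try:
            return loop.run_until_complete(self.transport.execute(query))
        finally:
            loop.close()

class EchoTransport:
    async def execute(self, query):
        return {"data": query}

print(Client(EchoTransport()).execute("{ hero { name } }"))
# {'data': '{ hero { name } }'}
```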
I just saw several emails regarding these PRs during the week but was too busy to look at them. I do feel @leszekhanusz should join the slack channel so we can discuss any suggestions or improvements regarding gql and the graphql-python ecosystem.
Overall, impressive work @leszekhanusz, and thanks to @Cito for all his hard work on reviews and cleanup.
@KingDarBoja and @leszekhanusz - do you think we're ready to release the v2.0 legacy version together with the first v3.0 alpha this weekend, so people can already try it out? For the final v3 release, I'd also like to include #33 and #49, and get some feedback from users. Anything else to add from your side? Then please add an issue/PR with the v3 milestone so we don't forget it.
I do feel like v2 is pretty much ready to be released; I will await the release on PyPI so the conda feedstock can be updated too. Same goes for the v3.0-alpha release, as most new stuff has already been covered by the docs.
@cito @KingDarBoja
Yes, the library is in a good state for a v3.0-alpha release. Great collaboration with you!
Some thoughts:
What could be added soon:
README.md
CONTRIBUTING.md
I was thinking also about adding documentation on how to use the backoff module to:
In the future:
documentation of variable_values and operation_name in README.md
Although there are docstrings for each class and method in the library, I think these options should be placed in some documentation like you proposed (sphinx), or maybe included as a section in the graphene docs as it's already configured.
In any case, this section should be split into its own markdown file in the meantime, as the readme is getting longer already, unless we index those sections at the start.
improvement of CONTRIBUTING.md
I believe we should include the command shortcuts provided in the Makefile, mention that users should run the formatting one before submitting a new PR, and also provide a make file for Windows users (like me).
Ok, I have published the 2.0.0 and 3.0.0a releases now, and added #89. The other ideas sound all good, maybe you should create them as separate issues.
also provide a Make file for Windows users
@KingDarBoja Better times are coming for Windows users with WSL 2, but yes, creating a make.bat from that would be easy.
Why do I always unpin this by mistake? Sorry for the noise...
We got a nice looking documentation hosted at ReadTheDocs: https://gql.readthedocs.io/en/latest/ 🍰
I guess this is related to the `pip install --pre` mention in the readme. I was looking for a package exactly like this one, but I'm kind of hesitant to use it because of this.
Can anyone shed light on why this is considered a pre-release version?
@uripre mostly because of a lack of time to finish a proper release.
On a hopeful note, I left my full-time job and will have a lot of time to spend on this project in October.
But you can already use it now, I can assure you it is quite solid already.
@leszekhanusz Any updates that we might be able to anticipate for releasing v3 as stable? What are the remaining tasks out of interest?
@divyanshmanocha I would like to make a Release Candidate before December, probably sooner.
Thanks @leszekhanusz 🙂
The release candidate is now available :rocket: Please try it and give your feedback. If no big issues are found, I'll make the stable version beginning of next year.
@leszekhanusz I have an issue when I use this library in a Django project with snapshottest with 3.0.0rc0, but with 2.0.0 it works fine:
```
  File "/opt/venv/lib/python3.7/site-packages/graphene/test/__init__.py", line 34, in execute
    executed = self.schema.execute(*args, **dict(self.execute_options, **kwargs))
  File "/opt/venv/lib/python3.7/site-packages/graphene/types/schema.py", line 482, in execute
    return graphql_sync(self.graphql_schema, *args, **kwargs)
  File "/opt/venv/lib/python3.7/site-packages/graphql/graphql.py", line 141, in graphql_sync
    is_awaitable,
  File "/opt/venv/lib/python3.7/site-packages/graphql/graphql.py", line 173, in graphql_impl
    document = parse(source)
  File "/opt/venv/lib/python3.7/site-packages/graphql/language/parser.py", line 103, in parse
    return parser.parse_document()
  File "/opt/venv/lib/python3.7/site-packages/graphql/language/parser.py", line 199, in parse_document
    definitions=self.many(TokenKind.SOF, self.parse_definition, TokenKind.EOF),
  File "/opt/venv/lib/python3.7/site-packages/graphql/language/parser.py", line 1083, in many
    self.expect_token(open_kind)
  File "/opt/venv/lib/python3.7/site-packages/graphql/language/parser.py", line 977, in expect_token
    self._lexer.advance()
  File "/opt/venv/lib/python3.7/site-packages/graphql/language/lexer.py", line 79, in advance
    token = self.token = self.lookahead()
  File "/opt/venv/lib/python3.7/site-packages/graphql/language/lexer.py", line 88, in lookahead
    token.next = self.read_token(token)
  File "/opt/venv/lib/python3.7/site-packages/graphql/language/lexer.py", line 103, in read_token
    body_length = len(body)
TypeError: object of type 'DocumentNode' has no len()
```
@wowkin2 please submit an issue with more info
The next release candidate v3.0.0rc1 is now available, using graphql-core version 3.2.
The stable version could be released next week if no big problems are found.
The stable 3.0.0 gql version is now available.
Thanks @cito, @KingDarBoja and all the many contributors for this release!
@wowkin2 @leszekhanusz - the error reported above can happen when you pass a value to gql that is already the result of a gql call. In PR #435 I have suggested a patch to get a better error message in that case.
Python 2 has reached EOL and no further development will be done, as stated on the official Python release schedule page for 2.7.
Also, Python 3.5 support will stop on 2020-09-13, as seen at status-of-python-branches, so it would be great to drop support for Python 3.5 in order to push forward and use 3.6+ features in the code (like f-strings).