Closed: tomchristie closed this 4 years ago
Aside: It's also worth noting that the community could still perfectly well maintain an HTTPX based sync client, either using the bridging techniques we've used, or using a code-gen style approach, but that wouldn't necessarily be an "encode" project.
Big 👍 from me. I agree 100% with embracing the async model and focusing on that. This was the reason I became interested in HTTPX in the first place.
I do think however that a lot of users have come to HTTPX looking for an alternative to requests given the recent news about its development. We'll need to consider carefully the migration path for them.
It's possible that the code-gen approach being raised in https://github.com/encode/httpx/issues/508 could make some of this moot, since it would largely mean that our codebase could become async-only, and we'd only create a sync version at package build time. (Which would resolve some of the issues I've mentioned here anyways.)
Tho we've not really considered whether the code-gen approach could present race conditions in a threaded context that are not present in our async case, because context switching can occur anywhere, not just at await points.
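A minimal sketch of what that build-time code-gen might look like, assuming a simple regex-based source rewrite in the spirit of the unasync tool (the rules here are illustrative only, not httpx's actual build step):

```python
import re

# Illustrative rewrite rules; a real tool (e.g. unasync) is more careful
# about token boundaries and handles many more cases.
RULES = [
    (r"\basync def ", "def "),
    (r"\bawait ", ""),
    (r"\basync with\b", "with"),
    (r"\basync for\b", "for"),
]

def unasync(source):
    """Generate sync source code from async source code."""
    for pattern, replacement in RULES:
        source = re.sub(pattern, replacement, source)
    return source

async_src = "async def get(url):\n    return await send(url)\n"
print(unasync(async_src))
```

The sync variant is produced mechanically from the async source, so only one codebase needs maintaining, which is exactly what makes the race-condition question above worth checking: thread switches in the generated sync code are not confined to former await points.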
I’d be perfectly fine personally with HTTPX only providing an async interface.
I’d imagine that the high-level interface would go async, right? What I was wondering is whether HTTPX would still address the use case of “let me fire up an interpreter and make HTTP requests in the shell”. Thankfully, IPython and the 3.8+ asyncio shell would allow us to use HTTPX smoothly. And then people can just asyncio.run(httpx.get(...)) if they’re in a regular shell.
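For instance, under an async-only model, a plain (non-async) script or shell session could still drive a one-off request via asyncio.run. A sketch, using a hypothetical stand-in coroutine in place of the real httpx.get:

```python
import asyncio

# Hypothetical stand-in for an async-only httpx.get coroutine.
async def get(url):
    # A real client would perform the HTTP exchange at this point.
    return f"<Response [200] {url}>"

# In a regular shell there is no running event loop, so asyncio.run() works:
response = asyncio.run(get("https://example.org"))
print(response)
```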
Overall, I think the benefits for both the development team and users (mainly thanks to better clarity at all levels) are very convincing. So, yup, +1 for me.
Btw, I think trying to keep a story around Requests and how easy it is to migrate would still be nice. Basically it would be “just import httpx instead, and put an await in front of all method calls.” Not sure if we should still commit to be Requests-compatible after that point, although the general approach seems to be “mostly compatible” already.
I think it's too soon (if ever?) to jettison non-async users. Much Python code lives in scripts, not daemons, and I haven't seen asyncio pop into too many of those just yet. In fact I think sigalarm-based concurrency is probably more prevalent than threading there.
My opinion is that HTTPX should support synchronous requests, and should implement synchronous requests without any async calls internally. Perhaps we can make synchronous requests be HTTP/1.1-only because a synchronous HTTP/2 request doesn't make much sense.
Sync code will always be used in Python for experimentation, tools, utilities, and one-time scripts, and shutting out that section of users isn't going to help adoption of HTTPX. I guarantee the first thing every user does with HTTPX is use the sync requests, not the async ones.
We want HTTPX to bring a lot more to the table than just async HTTP and give those benefits to as many users as we can.
I think @sethmlarson and @toppk have a point here w.r.t. user adoption. My previous comments suffered from quick reasoning, so I'll try to pause and ponder things longer here.
It seems to me there are more fundamental questions at play here, all related to the vision we want this project to pursue:
I put "?" as the goal here because I actually can't think of a convincing one. I could have put "provide an async-capable HTTP client", but there's already one: aiohttp
.
In fact, it seems the above is precisely the vision that aiohttp pursues. Not sure it succeeded there — partly because it tries to do too much, and people need to learn a different API from that of Requests — so maybe there's room for an alternative.
But by pursuing this vision, I suspect we'd end up in the same situation, and we won't truly shake up the status quo. 80-90% of Python users will continue using Requests because they don't need async, and thus don't want to (shouldn't have to?) deal with the associated cognitive overhead.
According to the Python Developers Survey 2018:
Vision 1 would only allow us to address 1) and 5), while vision 2 addresses all use cases, because most of them just need a quick (synchronous) way to make HTTP requests.
Also, among Python users:
So, here we go: either we address and solve problems for 84% of all Python 3 users, or we do so for a small fraction of them (a fraction that will be even smaller because of the friction induced by async).
My +1 from a previous comment was probably too centered on my position as a maintainer. There is indeed a lot of (nearly-duplicated) code that's solely targeted at supporting sync usage. But shouldn't the ultimate goal be to serve our users well, at the cost of the implementation being not as elegant as it could be, because of the hard dichotomy between the sync and async worlds?
I see the fact that people doing web development and data science (see #508) are starting to adopt HTTPX as a hint that the community is seeking the modern Requests successor we've been shaping up. It's probably most sensible to support this trend, and address a need that would otherwise stay unsatisfied until another project takes over, and goes through the same process we've been going through in the past few months.
I'd now say +1 to @sethmlarson's suggestions on ensuring that the sync client is as purely sync as possible (no async calls, HTTP/1.1-only). #525 is a good first step towards that.
I could have put "provide an async-capable HTTP client", but there's already one: aiohttp.
It seems to me that Python users looking for a synchronous requests are already well served by... requests, and they can continue to use it indefinitely.
I'm just popping up as a user who's looking for the best async HTTP client. API aside, aiohttp doesn't fit my needs, as I'm after something which works with trio. The competition here is asks rather than aiohttp, and the decision I have to make is which one is better for my use case and which will be around for the long term (both are very young projects).
With IPython's async repl and native support in Python 3.8 I don't think mucking around with an async api necessarily introduces that much friction these days.
Anyway, just popping up here to provide a user perspective FWIW. I got interested in httpx as an option when I saw tomchristie pop up on the trio gitter...
It seems to me that Python users looking for a synchronous requests are already well served by... requests and they can continue to use it indefinitely.
Exactly. Use the right tool, for the right job.
As much as the project would clearly be more popular if it covered both cases, that's really more of an "ohh... shiny new things" effect. There isn't actually any genuine pragmatic benefit in us re-implementing "requests".
The grunge of bridging across to supporting threaded concurrency makes the project significantly less comprehensible when digging into it, and I think it's holding us back from excelling as "the high performance HTTP client" option.
I'm way more invested in the potential of async Python in helping the language remain competitive against Node and Go, even if it is currently a niche ecosystem.
I'd personally rather see us completely nail HTTP for asyncio and trio, and only then figure out if and how we want to extend that to threaded concurrency.
- And no, neither of the existing candidates fits the bill. I don't think aiohttp or asks covers HTTP/2, and aiohttp happens to come coupled to a server framework.
Right, I see my analysis of other async HTTP clients was quite incomplete. There's indeed a lot more that HTTPX brings to the table relative to e.g. aiohttp (HTTP/2, not-just-asyncio, not-also-a-server-framework, familiar Requests-like API, etc).
To my mind, both options make sense, with different objectives and trade-offs, and I agree that pushing for excellence by keeping the scope down is attractive. It's also an approach that's consistent with the general push for focus and modularity across Encode projects.
I wrote previously that going async-only would not break status quo as much as trying to address both sync and async. Actually, it probably would: as you said there's currently no other option that compares with what we have, even today.
And it would make our lives as maintainers and contributors easier, because the scope would be narrowed-down, resulting in a multitude of issues today becoming non-issues tomorrow (e.g. everything that's related to using HTTPX in a sync context).
So, hmm, I think it's a vision I'd be willing to support, yup — as long as we make it as loud and clear as possible to our users. Example re-worded README headline:
**HTTPX** - High performance HTTP client for Python
> Note: HTTPX is meant to be used in programs that use `async`/`await`. If you don't use async, you should probably use [Requests](https://github.com/psf/requests/).
Features:
- Requests-compatible API.
- ...
From the outside, as someone who's been watching and has it on the list to pick up HTTPX the very next time I would have reached for requests, I want a sync wrapper, even if just to start with, so I don't have to spin up an event loop just for a get(...). As long as it's got that...
(Because the real reason is, "Yes, I probably should write this as async" — but I don't want to change library just to do that.)
Hope that's useful/makes sense.
(Because the real reason is, "Yes, I probably should write this as async" — but I don't want to change library just to do that.)
☝️ Couldn't have put it better myself.
Why not the best of both worlds? Async-only, but with sync versions without the threading nonsense. Expose sync functions which create a temporary event loop and run the request in it (don't bind it to the current thread). It works very well in my opinion. This way, the users calling the sync functions don't even notice they are calling coroutines.

So I strongly disagree with @sethmlarson, because I think this approach works really well. It has some overhead (starting a new event loop every time), but probably still better than starting threads! What's even better is that the users calling the "sync" versions will benefit from the async advantages, because they are running them almost the same way as if they were running an event loop themselves.

The only gotcha is that it has to be documented, because if users are running an event loop but calling the sync version, there might be problems. But I would just raise an Exception in that case; not a valid use case IMO.
My use case with this approach: I wrote a Slack bot in a fully async style, but I needed to call some methods from Django, sometimes even running event loops in different threads. It was very easy to understand and write. Here is a concrete example.
This is all the code I needed to "convert" the async methods to sync:
https://github.com/kissgyorgy/gerrit-slack-bot/blob/ba83dce88ad8a57818204ca566389fa8a36d9140/slack.py#L250-L272
class Api(AsyncApi):
    def __init__(self, token):
        try:
            self._loop = asyncio.get_event_loop()
        except RuntimeError:
            # when running in a thread, get_event_loop doesn't create another one
            self._loop = asyncio.new_event_loop()
            asyncio.set_event_loop(self._loop)
        self._session = aiohttp.ClientSession(loop=self._loop)
        super().__init__(token, self._session)

    @lru_cache(maxsize=None)
    def __getattribute__(self, name):
        attr = super().__getattribute__(name)
        if name.startswith("_") or not asyncio.iscoroutinefunction(attr):
            return attr

        def call_sync(*args, **kwargs):
            coro = attr(*args, **kwargs)
            return self._loop.run_until_complete(coro)

        return call_sync
Then I can simply call them from Django: https://github.com/kissgyorgy/gerrit-slack-bot/blob/ba83dce88ad8a57818204ca566389fa8a36d9140/web/slackbot/models.py#L67-L68
slack_api = slack.Api(config.BOT_ACCESS_TOKEN)
slack_api.delete_message(self.channel_id, self.ts)
Maybe there could be a global event loop, created when a sync version is first called, so the overhead of creating one is not that bad, because it only has to be done once.
@kissgyorgy Just a btw we shouldn't have to start threads for sync. We'd have to be thread-safe though so users can make multiple synchronous requests using threads if desired. (See urllib3/requests)
If we dig into this further, I guess that if this gets accepted then we'll end up with an async high-level API, right? Like…
$ python -m asyncio
>>> import httpx
>>> r = await httpx.get("https://example.org")
So, maybe as a middle-ground to not completely knock users off, we could imagine keeping a very limited sync high-level API variant. Users would be able to use it as…
$ python
>>> import httpx.sync as httpx
>>> r = httpx.get("https://example.org")
By its existence, it would help users get started, prototype, or perform one-off requests in a shell, but it should be just-cumbersome-enough to strongly discourage users from using it in a production setting. So, minimal autocomplete/editor support, disallowed to run in an async environment, only allowed to run in the main thread, no support for trio, etc. Basically:
# httpx/sync.py
import asyncio

import httpx


def get(url, **kwargs):
    loop = asyncio.get_event_loop()
    assert not loop.is_running()
    return loop.run_until_complete(httpx.get(url, **kwargs))
The whole module could even be "generated" like this:
import asyncio

import httpx


def syncify(entrypoint):
    def syncified(url, **kwargs):
        loop = asyncio.get_event_loop()
        assert not loop.is_running()
        return loop.run_until_complete(entrypoint(url, **kwargs))

    return syncified


get = syncify(httpx.get)
post = syncify(httpx.post)
...
Of course, exposing this functionality obviously means that users are going to, well, use it. And they'll probably at some point want to ask for more. Which means we'd need to make it extra clear that the sync API is very limited on purpose, i.e. just to make users' lives easier when all they want is to make a quick prototype without having to install Requests.
@florimondmanca This is what I had in mind. Keeping only the async use case as primary, but providing very simple sync possibilities for the simple/interactive shell cases with minimal code.
With one exception: I would not call asyncio.get_event_loop(), because that ties the created event loop to the currently running thread, which might cause problems when users mix and match async and sync styles.
@sethmlarson Just a btw we shouldn't have to start threads for sync.
Yes. What I meant is that users can even run multiple threads, and those threads will all have different event loops when calling the sync version. Maybe I would print a warning (which could be turned off) when running in a thread, to suggest that the async style might fit better, or they could use requests.
@kissgyorgy
I would not call asyncio.get_event_loop() because that ties the created event loop to the currently running thread, which might cause problems when users mix and match async and sync styles.
What alternative would you suggest? The goal there was to limit users to only be able to use the sync high-level API in the main thread outside of any async environment (so, zero multi-threading support, and zero sync-in-async support). Again, extremely limited by design. If we're fierce about it, we'd fail as soon and as loudly as we can. "Are you trying to use HTTPX as a replacement for Requests? :)", or in that vein.
What alternative would you suggest?
I would just create a new event loop and throw it away. Almost the same as yours, just don't mess with a possible loop in the current thread, which might even get created later, maybe even with a different EventLoopPolicy. Something like this:
import asyncio
import sys

if sys.version_info[:2] == (3, 6):
    # asyncio.get_running_loop() was only added in 3.7
    get_running_loop = asyncio._get_running_loop
else:
    get_running_loop = asyncio.get_running_loop


def syncify(entrypoint):
    def syncified(url, **kwargs):
        try:
            running = get_running_loop()
        except RuntimeError:
            # on 3.7+, get_running_loop() raises when no loop is running
            running = None
        if running is not None:
            raise Exception("There is already a running loop, please consider.... instead...")
        loop = asyncio.new_event_loop()
        try:
            return loop.run_until_complete(entrypoint(url, **kwargs))
        finally:
            loop.close()

    return syncified
Or maybe save the created event loop in a thread-local (maybe a custom EventLoopPolicy) for use with httpx, to avoid conflict with possible user-created event loops and avoid creating new event loops all the time (some kind of lightweight caching).
I would not limit this API to the main thread only, because I don't think it's very hard to support multiple threads in a sync program: when users call the sync function, they shouldn't have event loops, so there would be no conflict with other (possibly already running) event loops.
For me the answer is "whatever keeps the dev team coming back to work on this project". If that's keeping things simple and focused, then that would seem to be async-only. If it's having the largest install base possible, then it's sync-and-async. But I would prioritize what will make you want to keep coming back and contributing to this project above everything else: if people burn out, then it serves no one.
so I don't have to spin up an event loop just for a get(...). As long as it's got that...
As @florimondmanca demonstrated, with an async repl you don't have to spin up an event loop manually - calling async functions Just Works:
>>> import httpx
>>> r = await httpx.get("https://example.org")
It's got an extra await in there - is that a good reason to introduce a huge amount of added complexity and, importantly, maintenance burden?
If (potential) users don't want to write async code, maybe just let them keep using requests? I don't see that the httpx project would lose much by doing that, except a whole bunch of users who don't want to write async code and who'd have issues with the sync API you provide.
Of course, as Brett says, having the largest possible user base is also a valid goal and it's up to the actual maintainers where you want to position the project.
I’m not gonna tell you either way, however I feel the need to point out that one of your premises is flawed: requests does not serve us well.
Requests 2.x has been declared to be security-fix only and has moved into the GitHub PSF organization. Currently the project is in a weird space that I don’t even want to elaborate on. For all intents and purposes, it's unmaintained and its future is unclear. Requests 3 is supposed to be async-first too, but there's nothing resembling a timeline.
For those reasons I’m using urllib3 in all my projects, and while it’s great, a higher-level API would be nice (I kinda built my own :)).
All that to say: don't feel pressured into building and maintaining (!) something you don't want, but there is room and opportunity to become the number 1 Python HTTP package if you keep sync around.
Requests 2.x has been declared to be security-fix only and has moved into the GitHub PSF organization
As a heavy user of requests, that's both concerning & surprising - I don't want to be depending on a project that will fall into bitrot.
I saw that it had been moved to the PSF organisation but I thought that was an indication that the project was in a more healthy state, not less healthy - i.e. that it was an officially sanctioned/blessed 3rd party tool by the PSF.
AFAICS there's no mention in the README or the docs that this is the case and the pulse shows fairly decent activity for a mature project. If you've got a link to where this is discussed it would be appreciated as it will certainly feed into any consideration of changing frameworks.
I saw that it had been moved to the PSF organisation but I thought that was an indication that the project was in a more healthy state, not less healthy - i.e. that it was an officially sanctioned/blessed 3rd party tool by the PSF.
That is definitely not the case. Most of the maintenance work over the years has been done by maintainers that have resigned months and years ago (Cory Benfield & Ian Cordasco come to mind – c.f. https://vorpus.org/blog/why-im-not-collaborating-with-kenneth-reitz/ for some context). Then Kenneth decided he doesn't want to maintain it himself anymore either, which caused the move into the PSF repo.
Eventually Kenneth seems to have changed his mind, added sprinkles to all the logos but also caused other problems and he doesn't have PyPI upload rights anymore.
It seems that now, Nate Prewitt has picked up some of the slack. But it's still far from a healthy project in my book and I wouldn't rely on its long-term future.
I don't think it was @tomchristie's intention to create a "requests-alternative" but to create an "aiohttp-alternative". In its inception HTTPX was called requests-async and, somewhat organically, sync support was added. During that time the whole situation with requests happened and people rushed to look for an alternative, which caused a rise in HTTPX's popularity.
I know @hynek's point is to challenge the assumption that "the sync case is covered by requests", but I think the solution to that admittedly delicate problem is not to swap requests for HTTPX, but to revitalize requests's development. I find it unfair to put the pressure of replacing requests on HTTPX.
I'd be interested in knowing how many Requests users actually use anything more than the high-level API (.get(), etc.).
I've never used Requests' sessions or adapters myself, and I'm pretty sure a lot of casual users haven't either.
This data point would help determine whether only having a sync high-level API would allow addressing > 80% of use cases. For the other 20%, I'd be fine with "sorry, it's just async from here on, please migrate or use a workaround". Thoughts?
I know @hynek's point is to challenge the assumption that "the sync case is covered by requests", but I think the solution to that admittedly delicate problem is not to swap requests for HTTPX, but to revitalize requests's development. I find it unfair to put the pressure of replacing requests on HTTPX.
Yes, as I wrote, I'm absolutely not telling you what to do. I'm just pointing out the falsehood of the premise (and it seems that something that I thought is well-known is actually anything but). Do with that information whatever you want; I'm not prone to telling people to do more free labor. :)
I'd be interested in knowing how many Requests users actually use anything more than the high-level API (.get(), etc.). I've never used Requests' sessions or adapters myself, and I'm pretty sure a lot of casual users haven't either.
This data point would help determine whether only having a sync high-level API would allow addressing > 80% of use cases. For the other 20%, I'd be fine with "sorry, it's just async from here on, please migrate or use a workaround". Thoughts?
I use both. API wrapper and web scraping libraries all commonly use sessions. But I still use the high-level API at times for random things or manually testing a call without running a bunch of other code. I wonder how many people only use the high-level API that really should be using sessions.
I am with @hynek here. I don't think HTTPX should drop sync support. But if that's really what needs to happen to keep the maintainers sane and happy and enthused, that's what has to happen.
But the reasoning that "just use requests" is a good thing for users or the future of HTTPX is wrong; @hynek pointed out some very good reasons. And if requests does get its act together and becomes an async HTTP library that also handles sync, then what's the point of HTTPX? Sorry if that's a bit harsh, but ask yourselves why people would use HTTPX over requests if we had a Kenneth-less requests with sync and async support. @brettcannon knows from the discussion about one packaging tool to rule them all that people generally want one tool to reach for if possible. The only reason async HTTP client libraries exist in Python is that requests didn't support async. I know it's not about wanting a big user base. But if you're writing/maintaining a big, complex library, you do want it to be used, to justify the time/energy spent in doing so, right?
The big problem is the pain of using async in Python. Once you go async somewhere, you pretty much need to go async everywhere. This just isn't going to be feasible without re-implementing what HTTPX currently does to support sync requests. Plus, let's face it, the world still revolves around HTTP/1.1 and synchronous code for the foreseeable future.
Here are the benefits we'd gain from dropping sync support:
Instead of dropping it, I suggest wrapping the sync part of httpx over the battle-tested requests. This would probably preserve the benefits mentioned.
Thanks all, I'm going to close this off for now - I think there's plenty of useful feedback here.
Very much appreciate @hynek and @brettcannon's observations in particular.
I think it's most likely that we'll continue to provide both sync + async support. My personal use-case is atypical because I'm hugely motivated by leveling up the async Python ecosystem, and don't really have any need for a threaded concurrency client, myself.
The awkward aspect is simply the extra bridging and API surface area that async+sync support involves. We're doing perfectly okay there, but the package is less graceful as a result. Seems like we should probably just concentrate on handling that as nicely as we can.
Coming back to this again, already, because.
Having dived into httpx again today, as I'm integrating an ASGI service against it, I do think that it has moved too fast and is over-scoped right now. (Discovered that cookies are not being persisted on redirect responses, and got stuck into some bits I'd not been in for a while; found it quite hard to get back to grips with.)
My initial assessment when starting httpx was that a sensible thing to do would be to support sync+asyncio, and then adapt from there to support trio. (Also I was initially aiming at it meeting the requirements that "Requests III" had outlined for itself, potentially even with a view to it headlining as requests3.)
My assessment if I was starting from this point in time, would be that the sensible thing to do would be to initially focus exclusively on supporting asyncio+trio, making sure that we're staying in line with trio's structured concurrency constraints everywhere, and only look at adapting to sync support, once we've got the async case absolutely hammered down.
We've done some absolutely stellar work here, but there's still an awful lot to do in order to really get into the polished state we want this project to be in...
- pip install httpx and pip install httpx[standard] as options.
- urllib3 doesn't need to use a read-and-write-concurrently approach to handle early-returning servers that don't read the request body, but httpx and urllib3-fork do both need to. Is it just blindly buffering up the body in the threaded case? If we could do something similar, that'd be less complex than having to deal with a reader/writer pair, and presumably that's been working well enough for the existing urllib3 case. Or not? Is it the source of an outstanding unresolved bug there?

I'm not absolute on this, but I do think there'd be a good argument to be made that we should cut the scope right back for a bit, and target a 1.0 release solely as an async client, and later target a 2.0 release as a sync+async client.
@tomchristie In line with my previous comment, if the general idea is to scope down to async, but keep sync support on a longer-term roadmap (and make that clear to our users), as a maintainer I'm down for that.
Yeah, it wasn't my opening case, but I think based on the discussion that's probably where my leanings would be at the moment.
My very subjective opinion; if you guys want an async-only framework or feel like doing it, if that is what you enjoy maintaining and improving in the long-term, the "greater good" (in this case, replacing requests for example) shouldn't matter that much, you should go for it.
I think it is way better to have a super-cool, fast and modern async-only http framework than "serve the community" with both sync and async APIs which feels like a burden and is overly complex.
I support dropping sync support. Not as a permanent "mission statement", but at least for the foreseeable future.
I am looking at this from two angles - usage and effort. I'm primarily looking at this from the point of view of existing production codebases and not really start-a-new-project.
Async and sync usage don't generally intersect, so most people who are already using requests will most likely not move anytime soon, thereby reducing the usage of that part of httpx. People who will use httpx will only use it for async use cases. This directly impacts the effort to support the sync use case by the small team of developers (to whom we are grateful). There is just not enough ROI.
Another happy side-effect that I hope will come about is that the new async frameworks (Starlette, Fastapi, etc) will better support requests ... thereby improving the migration path from sync frameworks. So there's a larger tie in.
Done, and released as 0.8. 😬
I can see us coming back to covering sync at a later date, but I just don't dig our approach to it right now - it's making the API & implementation significantly poorer.
Haha. Was wondering why my calls to AsyncClient broke. Good change though.
Thank you @tomchristie
Right, the project is looking much better already, with our bridging approach to sync+async out of the way.
I think we've got a much better tack onto this on the horizon...
✨ "Supporting Sync. Done right." #572 ✨
The big, big, big difference with #572, is that it won't impact the quality of the codebase in the same way that our prior approach was doing. We'll also be using regular ol' function calls and sync socket operations in the sync case, rather than hiding async stuff underneath a sync facade.
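To illustrate the contrast, a purely sync transport boils down to blocking socket calls with no event loop anywhere. A rough sketch (not httpx's actual implementation; the helper names are hypothetical) of an HTTP/1.1 GET over a plain socket:

```python
import socket

def build_request(host, path="/"):
    # Minimal HTTP/1.1 request; a real client adds many more headers.
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode()

def http_get(host, port=80, path="/"):
    # Plain blocking socket operations: no event loop, no await anywhere.
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(build_request(host, path))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)
```

Every call here is an ordinary function call, which is what keeps the sync code path simple to read and debug, rather than a sync facade over hidden async machinery.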
I'd just like to make a point here that @sethmlarson assumed that a "synchronous HTTP/2 request doesn't make much sense".
My use case is that I have an endless HTTP/2 stream that I cannot use with await. I need to read it synchronously.
It's a stream of events that generates further synchronous http requests that need to be performed before the next event is read off the stream. Having a 100% synchronous interface with an async backend was perfect for my use case.
So basically the concept is having multiple streams open, but pausing them on the fly to switch to reading from a different stream.
Forcing everything to be purely async makes the design very hard to work with, as it now requires all the async-to-sync bridging to be rehashed, something that was previously handled by the library.
I don't understand exactly why this is a problem for you, because I can't see your code, but would asyncio.run(handle_stream_event()) not work for you? Basically, you would not run the asyncio loop all the time, just for the short period of reading one event at a time. It's actually pretty simple to control the event loop yourself. If you can show code, I'll take a look.
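For what it's worth, a sketch of that pattern: keep one event loop around and drive an async stream one event at a time from synchronous code (the stream here is a stand-in for the real HTTP/2 event source):

```python
import asyncio

# Stand-in async event stream; the real one would read from HTTP/2.
async def event_stream():
    for i in range(3):
        yield f"event-{i}"

stream = event_stream()
loop = asyncio.new_event_loop()
try:
    # Run the loop only long enough to pull one event, then hand
    # control back to synchronous code in between reads.
    first = loop.run_until_complete(stream.__anext__())
    # ... do synchronous work here ...
    second = loop.run_until_complete(stream.__anext__())
finally:
    loop.run_until_complete(stream.aclose())
    loop.close()
print(first, second)
```

The stream is effectively "paused" between run_until_complete calls, which is close to the pause-and-switch behaviour described above.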
@kissgyorgy https://github.com/phillmac/py-orbit-db-http-client-dev/blob/b785cc3289ece70d797f672742d7cfc0a2bc140e/tests/test.py#L159 https://github.com/phillmac/py-orbit-db-http-client-dev/blob/b785cc3289ece70d797f672742d7cfc0a2bc140e/tests/test.py#L170
The problem is I need to consume events until I get the one I want, do some stuff, consume some more events, then do some more stuff. I don't want to have to go fully async just to pause the stream
I'm opening this issue so that we can have a discussion about something a bit radical. 😇
Right now httpx supports standard threaded concurrency, plus asyncio and trio. I think that it may be in the project's best interests to drop threaded concurrency support completely, and focus exclusively on providing a kick-ass async HTTP client.
The big design goals of HTTPX have been to meet two features lacking in requests... Which is great, but here's the thing... the primary motivation for HTTP/2 over HTTP/1.1 is its ability to handle large numbers of concurrent requests. Which also really means that you should probably only care about HTTP/2 support if you're working in an async context.
For users working with standard threaded concurrency, HTTP/2 is a shiny new headline feature that plenty of folks will want to jump at, but that isn't actually providing them with a substantial benefit.
Given that requests already provides a battle-tested HTTP/1.1 client for the threaded concurrency masses, my inclination is that rather than trying to meet all possible use cases, we should focus on httpx being "the right tool for the right job" rather than "one size fits all".
Here are the benefits we'd gain from dropping sync support:
- We can drop the BaseRequest and BaseResponse classes that are cluttering up our API surface area.
- We can focus exclusively on asyncio and trio.

It's absolutely more of a niche (right now) than just aiming at being a requests alternative, but it's one that I'm personally far more invested in. It seems to me that we may as well embrace the split between the sync and async concurrency models, and build something that excels in one particular case, rather than trying to plaster over the differences.
So, tentatively (hopefully)... what do folks think?