Closed — sandys closed this issue 6 years ago
Hi @sandys,
We've not seen this before, but it does look like it could well be related to kennethreitz/requests#3458... Could we see the full error message and traceback, to see if we can make any sense of it? What does your stack look like, and do you have the same kind of reproduction steps as in that issue?
Finally, are you seeing the event successfully triggered even when you have this error?
hi @hot-leaf-juice ,
Thanks for replying. I have filed a separate Pusher issue with the detailed trace log (as we got it from Sentry). Our stack is Flask, Python 3.6.1, and gunicorn (tried with both gevent and sync workers), running on Google Cloud.
We are unable to reproduce it consistently; it happens intermittently. We have been using Pusher for a long time, but this is the first time I have tried it on Python 3.6.1. Our previous stack was Python 2.7.
So you did! Thanks for that, I've got the trace.
It does look like you may well be experiencing the same issue; it looks like the connection is probably being closed prematurely by some proxy between your servers and ours. If this is the case then it's unfortunately largely out of our hands.
If you could find out whether events are successfully triggered or not, that would be great. If they go through successfully 100% of the time then you can safely catch and ignore the error. If they fail 100% of the time then you can catch the error and retry. We could probably look into including some sensible default behaviour and wrapping this error up in a nicer way at the library level as well, but a good temporary solution is to catch it yourself and decide whether to retry or not, as in the sketch below.
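For example, here is a rough sketch of that catch-and-retry approach. It assumes the failure surfaces as `requests.exceptions.ConnectionError` (as the linked requests issue suggests); the credentials, retry count, and backoff are placeholders to adapt to your setup.

```python
import time

import pusher
import requests

# Placeholder credentials; substitute your own app's values.
pusher_client = pusher.Pusher(app_id="...", key="...", secret="...", cluster="...")


def trigger_with_retry(channel, event_name, data, retries=3, backoff=0.5):
    """Trigger a Pusher event, retrying if the HTTP connection is dropped."""
    for attempt in range(retries):
        try:
            return pusher_client.trigger(channel, event_name, data)
        except requests.exceptions.ConnectionError:
            # The socket was closed mid-request (e.g. a stale keep-alive
            # connection); back off briefly and try again.
            if attempt == retries - 1:
                raise
            time.sleep(backoff * (attempt + 1))


# usage:
# trigger_with_retry("private-oms", "client-message", event)
```

If it turns out the events do go through despite the error, the `except` branch can simply log and return instead of retrying.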
It's interesting that you didn't see this with python2. Did anything else about your architecture change when you made the switch?
Hi @hot-leaf-juice, I don't have any proxy between your servers and ours; these are outgoing requests. We do deploy on Docker (always have, even on Python 2.7).
Nothing else changed; these are new servers on Google Cloud (obviously), with the latest Docker release, latest Debian, etc.
But my honest opinion is that it is related to the requests issue. It would be nice if there were some clarification/fallback from the requests side that you could incorporate in your code.
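For reference, one general way to get that kind of fallback on the requests side is to mount an adapter with automatic retries on a `Session`. This is only a sketch of the technique, not something the pusher library exposes out of the box; the URL and payload below are placeholders.

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# A session that transparently retries requests which die on a dropped
# keep-alive connection, the failure mode described in requests#3458.
session = requests.Session()
retries = Retry(
    total=3,
    backoff_factor=0.5,
    # urllib3 >= 1.26 name; older releases call this method_whitelist.
    allowed_methods=frozenset(["GET", "POST"]),
)
session.mount("https://", HTTPAdapter(max_retries=retries))

# Calls made through this session are retried up to 3 times on
# connection-level errors before the exception reaches the caller.
response = session.post("https://example.com/endpoint", json={"hello": "world"})
```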
I was getting the same error running h2o-py locally, with Python 3.5 and the latest stable version of h2o.
I was using a for loop to do the following:
[in]
```python
import h2o

for i in range(1, 10):
    h2o.init()
    h2o.remove_all()
    # ... python code ...
    h2o.cluster().shutdown()
```
[out] **H2OConnectionError** [...]
So, I fixed it by changing the h2o.init() port every time:
[in]
```python
for i in range(1, 10):
    h2o.init(port=54320 + i)
    h2o.remove_all()
    # ... python code ...
    h2o.cluster().shutdown()
```
Hope I've helped. Pedro
This is a duplicate of https://github.com/pusher/pusher-http-python/issues/23; the error messages are just a bit different in different versions.
Hey guys, it still happens in pusher==2.1.4 (Python 3.7). @callum-oakley, do you have any idea? Or what is the root cause?
Hi guys, I'm seeing intermittent 'Connection aborted.' errors with Python 3.6.1. The line of code that causes this is
pusher.trigger('private-oms', 'client-message', event)
This doesn't happen all the time, but at high call rates it happens pretty frequently.
Not sure, but possibly related to https://github.com/kennethreitz/requests/issues/3458
Any idea what's happening?