Closed · harshitsinghai77 closed this issue 2 years ago
Looks like this is a duplicate of this issue, which already has a PR: #20
Does that occur in https://github.com/mymarilyn/clickhouse-driver? Most of this code is adapted from it.
@long2ice Unfortunately, I did not find anything similar among clickhouse-driver's issues, and it is unlikely that anyone holds connections open for that long without a pool in clickhouse-driver (it only supports pooling via third-party libraries), which is probably why there are no such reports there.
I need a little more time to see how the problem can be solved and whether the code from PR #21 will help, and then to bring it into our project. In any case, I will test the solution on my fork, and if everything goes well, I will open a PR to your repository so that we end up with a solid package :)
That's good
I also have this problem when using a connection pool, but I cannot reproduce it reliably.
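For context, the pool is used roughly like this (a simplified sketch based on the pool example in asynch's README; the connection parameters are illustrative):

```python
import asyncio

from asynch import create_pool  # pool helper shown in asynch's README


async def main():
    # Connection parameters are illustrative; the pool keeps connections
    # open and reuses them between queries.
    pool = await create_pool(host="127.0.0.1", port=9000, database="default")

    async with pool.acquire() as conn:
        async with conn.cursor() as cursor:
            await cursor.execute("SELECT 1")
            print(cursor.fetchone())

    pool.close()
    await pool.wait_closed()


asyncio.run(main())
```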
Yes, the problem is still there.
@caochao18 @shurshilov, if you are using the PyPI version, as far as I understand the fixes have not been published yet. @long2ice, am I right?
Try installing from this commit (b08a027) or from master and see if the issue persists. The problem was fixed in #33 (it helped us and some other people).
I understand that this option does not look great, but it helped us at the time: we needed a quick fix and could not wait for a release (we deployed right after fixing the bug).
Yes, I understand that the fix is on GitHub. But the production department installs modules independently of me, and I have no authority to tell them to install from GitHub, so a fix in the PyPI package is of course desirable. I cannot even handle this error with try/except, which is the option I would prefer for now: I catch the exception, close the pool and create a new one, but it does not work.
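Roughly what I am trying looks like this (a simplified sketch: the connection parameters and helper names are mine, and it assumes the create_pool API from asynch's README), but even after recreating the pool the next query can fail the same way:

```python
import asyncio

from asynch import create_pool  # assuming asynch's top-level create_pool, as in its README

_pool = None  # module-level pool, thrown away and rebuilt after a failure


async def _get_pool():
    global _pool
    if _pool is None:
        # Connection parameters are illustrative.
        _pool = await create_pool(host="127.0.0.1", port=9000, database="default")
    return _pool


async def execute(sql: str, retries: int = 1):
    """Run a query; on IndexError close the whole pool, recreate it, and retry."""
    global _pool
    for attempt in range(retries + 1):
        try:
            pool = await _get_pool()
            async with pool.acquire() as conn:
                async with conn.cursor() as cursor:
                    await cursor.execute(sql)
                    return cursor.fetchall()
        except IndexError:  # "bytearray index out of range" from a stale connection
            if _pool is not None:
                _pool.close()
                await _pool.wait_closed()
                _pool = None
            if attempt == retries:
                raise
```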
Of course, this should be published to PyPI, but unfortunately I cannot influence that; only the creator of the package can.
But if you urgently need to solve this problem, I can offer only three options:
In any case, until there is a release you will have to negotiate with the production department, and there has not been a release since June 2021.
I can publish a package on PyPI for the latest version of asynch under a different name if that eases your pain, but for now it would be a crutch in terms of supporting this package. I think I can do it this evening or over the weekend.
P.S.: I still don't understand why @long2ice doesn't publish updates to PyPI...
@long2ice, thank you so much for publishing the new version to PyPI!
But it looks like we have a small problem: when the dev branch was merged, a conflict occurred and my changes were overwritten by dev, so they are no longer in master. I will prepare another PR to fix this if you like.
Unfortunately, that means the release will have to be done again :(
UPDATE: PR #48 is ready to merge with the fix.
It would be cool to get an update out before Monday.
@shurshilov, it looks like the release has happened and this fix has been on PyPI since v0.2.1.
@long2ice, thanks!
Unfortunately, the problem still exists (asynch version 0.2.2):
File ~/miniconda/envs/py_3.10/lib/python3.10/site-packages/asynch/proto/streams/buffered.py:130, in BufferedReader.read_varint(self)
    128 packets = bytearray()
    129 while True:
--> 130     packet = self._read_one()
    131     packets.append(packet)
    132     if packet < 0x80:

File ~/miniconda/envs/py_3.10/lib/python3.10/site-packages/asynch/proto/streams/buffered.py:120, in BufferedReader._read_one(self)
    119 def _read_one(self):
--> 120     packet = self.buffer[self.position]
    121     self.position += 1
    122     return packet

IndexError: bytearray index out of range
This error is weird. Sometimes I get "IndexError: bytearray index out of range" and sometimes I don't.
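To make the traceback a bit more concrete, here is a self-contained sketch of the varint-reading logic from the two frames above (a simplified stand-in, not the actual asynch class): bytes are read one at a time, and if the internal buffer has no more bytes when one is expected, indexing it raises exactly this IndexError.

```python
# Simplified stand-in for asynch's BufferedReader, reduced to the two
# methods that appear in the traceback above (not the real implementation).
class BufferedReaderSketch:
    def __init__(self, buffer: bytearray):
        self.buffer = buffer
        self.position = 0

    def _read_one(self):
        packet = self.buffer[self.position]  # IndexError if the buffer is empty or exhausted
        self.position += 1
        return packet

    def read_varint(self):
        # Unsigned LEB128 as used by the ClickHouse native protocol:
        # 7 payload bits per byte, the high bit means "more bytes follow".
        result, shift = 0, 0
        while True:
            packet = self._read_one()
            result |= (packet & 0x7F) << shift
            if packet < 0x80:
                return result
            shift += 7


print(BufferedReaderSketch(bytearray([0x96, 0x01])).read_varint())  # 150

# An empty buffer (e.g. nothing was read from the socket) reproduces the error:
BufferedReaderSketch(bytearray()).read_varint()  # IndexError: bytearray index out of range
```

So the IndexError itself is just the symptom: the reader's buffer is empty at the moment a response byte is expected, which is consistent with the connection problems discussed earlier in this thread.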