This is expected; I haven't finished work on #3, so the transfer will stop after exactly 1 MB.
Ok, sure, I did not see that. Can you please let me know an approximate time frame for when you will be able to complete it? It is required for the project I am currently working on, and I would be really thankful if this could be implemented. Also, if this is not in your immediate pipeline, can you suggest a workaround I could use?
I tried testing it with a file of approximately 300 KB, but I am still getting the same issue.
Sorry, but I don't have a set schedule; this is really a "best effort" project. At this stage, I wouldn't depend on aioquic for any critical project unless you're willing to hack on it yourself.
MAX_DATA and MAX_STREAM_DATA now get raised as needed.
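(For context: QUIC flow control works by the receiver advertising a credit limit and granting more once enough of the current window has been consumed. Here is a rough sketch of that pattern; the class name, the 1 MB window, and the half-window threshold are illustrative assumptions, not aioquic's actual code.)

# Rough sketch of receiver-side flow control -- hypothetical names and
# thresholds, not aioquic's actual implementation.
WINDOW = 1048576  # assumed window size: 1 MB

class ReceiverFlowControl:
    def __init__(self) -> None:
        self.max_data = WINDOW  # credit currently advertised to the peer
        self.consumed = 0       # bytes the application has read so far

    def on_data_consumed(self, size: int):
        self.consumed += size
        # once less than half a window of credit remains, grant more
        if self.max_data - self.consumed < WINDOW // 2:
            self.max_data = self.consumed + WINDOW
            return ("MAX_DATA", self.max_data)  # frame to send to the peer
        return None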
Hi, I pulled your commit. I don't think the issue is with the max data or max stream data: the server is unable to serve any request larger than around 120 KB. I modified connection.py to check the packet number during decryption:
if not crypto.recv.is_valid():
    continue

try:
    plain_header, plain_payload, packet_number = crypto.decrypt_packet(
        data[start_off:end_off], encrypted_off, space.expected_packet_number
    )
    print(packet_number)  # log the decrypted packet number
What I see is that the transmission stops once the packet number reaches 96 or 97. I have tried it several different ways and observe the same thing every time.
Also, I am not using aioquic for any critical project, so I don't mind hacking on the code myself if you can point me to where the problem might be.
Thanks.
There are two issues here, both of which are mentioned in #3:
- On the receiver side there was a hard limit imposed by MAX_DATA and MAX_STREAM_DATA, which were never raised. This is now fixed.
- On the sender side, congestion control and loss recovery are not implemented. This means the sender immediately tries to send all the data it has, which is bound to fail for anything except a trivial amount of data. I also see a limit at around 120 kB.
Here is a super-crude patch which improves things, but it's definitely not a fix!
--- a/aioquic/connection.py
+++ b/aioquic/connection.py
@@ -1646,7 +1646,8 @@ class QuicConnection(asyncio.DatagramProtocol):
         is_one_rtt = self.__epoch == tls.Epoch.ONE_RTT
         buf = builder.buffer
 
-        while True:
+        packets = 0
+        while packets < 50:
             # write header
             builder.start_packet(packet_type, crypto)
 
@@ -1742,6 +1743,7 @@ class QuicConnection(asyncio.DatagramProtocol):
 
             if not builder.end_packet():
                 break
+            packets += 1
 
     def _write_handshake(self, builder: QuicPacketBuilder, epoch: tls.Epoch) -> None:
         crypto = self.cryptos[epoch]
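To be clear, this cap is not congestion control: it just limits each send burst to 50 packets, which gives ACKs and flow-control updates a chance to arrive before the sender floods the socket again. That is why it only improves the behaviour rather than fixing it.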
Hi, I understand now. I pulled the latest code and modified it according to this patch. It still stops at around 120 kB. I guess the limit will only improve once congestion control is in place. Thank you for your help.
Give it another try with the latest code. There is still no congestion control so there is huge packet loss, but it should no longer deadlock.
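For anyone wondering what proper congestion control involves, here is a heavily simplified NewReno-style sketch along the lines of the QUIC recovery draft. The class, the constants, and the 1280-byte packet size are illustrative assumptions, not aioquic's actual implementation:

# Simplified NewReno-style congestion window, per the QUIC recovery draft.
# All names and constants here are illustrative assumptions.
K_PACKET_SIZE = 1280
K_INITIAL_WINDOW = 10 * K_PACKET_SIZE
K_MINIMUM_WINDOW = 2 * K_PACKET_SIZE
K_LOSS_REDUCTION_FACTOR = 0.5

class CongestionControl:
    def __init__(self) -> None:
        self.congestion_window = K_INITIAL_WINDOW
        self.bytes_in_flight = 0
        self.ssthresh = None  # None means we are still in slow start

    def can_send(self, size: int) -> bool:
        # only send while the window has room for unacknowledged bytes
        return self.bytes_in_flight + size <= self.congestion_window

    def on_packet_sent(self, size: int) -> None:
        self.bytes_in_flight += size

    def on_packet_acked(self, size: int) -> None:
        self.bytes_in_flight -= size
        if self.ssthresh is None or self.congestion_window < self.ssthresh:
            # slow start: grow the window by the number of bytes acked
            self.congestion_window += size
        else:
            # congestion avoidance: grow by about one packet per window
            self.congestion_window += K_PACKET_SIZE * size // self.congestion_window

    def on_packet_lost(self, size: int) -> None:
        self.bytes_in_flight -= size
        # collapse the window on loss and leave slow start
        self.congestion_window = max(
            int(self.congestion_window * K_LOSS_REDUCTION_FACTOR),
            K_MINIMUM_WINDOW,
        )
        self.ssthresh = self.congestion_window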
@massvoice in my experience congestion control is working decently, I'm able to transfer a 50MB file over WiFi.
Hi, yes, I checked it too. It is working really well. Thank you for the implementation.
Hi,
I have modified server.py in the examples: instead of sending the template, I am reading a file and sending its contents to the client. It works perfectly fine for a small amount of text, but as soon as I increase the size to around 10 MB the transmission is still happening, yet I am unable to see anything on the client side.
Can you please assist?
Thanks.
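In case it helps, a common pattern when serving a large file is to write it in bounded chunks and let the transport apply backpressure, rather than writing everything in one call. A minimal sketch, assuming the writer follows asyncio's StreamWriter API as used in the examples; serve_file, CHUNK_SIZE, and the path argument are hypothetical:

# Hypothetical sketch of serving a large file in bounded chunks.
# The writer is assumed to be an asyncio.StreamWriter-style object.
CHUNK_SIZE = 64 * 1024  # assumption: 64 kB per write

async def serve_file(writer, path: str) -> None:
    with open(path, "rb") as f:  # blocking read is fine for a sketch
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            writer.write(chunk)
            await writer.drain()  # let the transport apply backpressure
    writer.write_eof()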