named-data / python-ndn

An NDN client library with AsyncIO support in Python 3
https://python-ndn.readthedocs.io/en/latest
Apache License 2.0

Producer exits when content size above 9000 bytes #22

Closed: sandlbn closed this issue 3 years ago

sandlbn commented 3 years ago

I tried to implement a producer that serves an image. However, when the image exceeds 9000 bytes, I get the error below.

  File "producer.py", line 71, in <module>
    app.run_forever()
  File "/home/iolie/.local/lib/python3.6/site-packages/ndn/app.py", line 293, in run_forever
    aio.get_event_loop().run_until_complete(task)
  File "/usr/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
    return future.result()
  File "/home/iolie/.local/lib/python3.6/site-packages/ndn/app.py", line 259, in main_loop
    await self.face.run()
  File "/home/iolie/.local/lib/python3.6/site-packages/ndn/transport/stream_socket.py", line 63, in run
    typ = await read_tl_num_from_stream(self.reader, bio)
  File "/home/iolie/.local/lib/python3.6/site-packages/ndn/encoding/tlv_var.py", line 112, in read_tl_num_from_stream
    buf = await reader.readexactly(1)
  File "/usr/lib/python3.6/asyncio/streams.py", line 674, in readexactly
    yield from self._wait_for_data('readexactly')
  File "/usr/lib/python3.6/asyncio/streams.py", line 464, in _wait_for_data
    yield from self._waiter
  File "/usr/lib/python3.6/asyncio/selector_events.py", line 714, in _read_ready
    data = self._sock.recv(self.max_size)
ConnectionResetError: [Errno 104] Connection reset by peer

Is there a limit on the content size? Should I serve the content in smaller chunks?

Pesa commented 3 years ago

The maximum NDN packet size is 8800 bytes, including all fields (name, signature, etc.)

sandlbn commented 3 years ago

Thanks a lot for the fast reply. Is there an automatic way in the library to fragment packets, or does this logic have to be implemented in the producer as well as in the consumer?

Also, it would be good to provide better error handling in this case, instead of a ConnectionResetError.

Pesa commented 3 years ago

Is there an automatic way in the library to fragment packets, or does this logic have to be implemented in the producer as well as in the consumer?

I'll leave this question to @zjkmxy

One note on terminology: we usually call segmentation the process of dividing a large piece of application content into smaller chunks (this happens at the application layer or in a library). The term fragmentation instead usually indicates the splitting of network-layer packets (e.g. NDN Data packets) into several link-layer packets (e.g. NDNLP fragments).
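
To put that distinction in code: segmentation is simply the producer (or a library acting on its behalf) slicing the application content into pieces before any Data packet is built. A minimal sketch, assuming an arbitrary 4400-byte chunk size chosen here purely for illustration:

```python
SEGMENT_SIZE = 4400  # assumed chunk size, comfortably under the 8800-byte packet limit

def segment(content: bytes, size: int = SEGMENT_SIZE) -> list:
    """Split application content into chunks; each chunk later becomes one Data packet."""
    return [content[i:i + size] for i in range(0, len(content), size)]

chunks = segment(open('image.jpg', 'rb').read())  # hypothetical file name
```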

Also, it would be good to provide better error handling in this case, instead of a ConnectionResetError.

That's possible (ndn-cxx does it). One drawback of doing so is that we'd need to hardcode a specific size limit in the library, as opposed to not caring about it in the library and letting the forwarder reject the packet if it is too large. Again, I'll leave this to @zjkmxy

zjkmxy commented 3 years ago

Thanks a lot for the fast reply. Is there an automatic way in the library to fragment packets, or does this logic have to be implemented in the producer as well as in the consumer?

No. python-ndn does not currently support fragmentation. For segmentation, you can refer to https://github.com/named-data/python-ndn/blob/9934c50671550188b04bb379825a5c4f2e4181c2/examples/putchunks.py#L40-L48
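
For readers landing here, below is a rough sketch of that segmentation approach, modeled loosely on the linked putchunks example. The prefix /example/image, the file name, the 4400-byte chunk size, and the freshness period are illustrative assumptions; check the NDNApp and Component calls against the current python-ndn documentation before relying on them.

```python
from ndn.app import NDNApp
from ndn.encoding import Name, Component

SEGMENT_SIZE = 4400                      # assumed chunk size, well under the 8800-byte limit
name = Name.from_str('/example/image')   # hypothetical prefix
app = NDNApp()

with open('image.jpg', 'rb') as f:       # hypothetical file
    data = f.read()
seg_cnt = (len(data) + SEGMENT_SIZE - 1) // SEGMENT_SIZE

@app.route(name)
def on_interest(int_name, _int_param, _app_param):
    # Answer the requested segment; treat an Interest without a segment component as segment 0.
    if Component.get_type(int_name[-1]) == Component.TYPE_SEGMENT:
        seg_no = Component.to_number(int_name[-1])
    else:
        seg_no = 0
    if seg_no < seg_cnt:
        app.put_data(name + [Component.from_segment(seg_no)],
                     content=data[seg_no * SEGMENT_SIZE:(seg_no + 1) * SEGMENT_SIZE],
                     freshness_period=10000,
                     final_block_id=Component.from_segment(seg_cnt - 1))

if __name__ == '__main__':
    app.run_forever()
```

The consumer then fetches /example/image/seg=0, seg=1, and so on until it reaches the segment named by the FinalBlockId.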

Also, it would be good to provide better error handling in this case, instead of a ConnectionResetError.

Agreed. I never tried to send large packets, so I didn't know there was such an error. I will wrap it up.
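
Until the library adds such a check, a producer can guard against oversized packets on its own side. A rough sketch, reusing the 8800-byte figure from above and assuming NDNApp.prepare_data and put_raw_packet behave as in the putchunks example (the helper name is hypothetical):

```python
from ndn.app import NDNApp
from ndn.encoding import Name

MAX_NDN_PACKET_SIZE = 8800   # limit mentioned above

app = NDNApp()

def put_data_checked(name_str: str, content: bytes, **kwargs):
    # Encode the Data packet first, then refuse to send it if it is too large,
    # raising a clear error instead of letting the forwarder reset the connection.
    wire = app.prepare_data(Name.from_str(name_str), content, **kwargs)
    if len(wire) > MAX_NDN_PACKET_SIZE:
        raise ValueError(f'Data packet is {len(wire)} bytes, above the '
                         f'{MAX_NDN_PACKET_SIZE}-byte limit; segment the content instead')
    app.put_raw_packet(wire)
```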