RossBencina / oscpack

Automatically exported from code.google.com/p/oscpack

increase maximum buffer size to UDP/IPv4 practical limit #25

Open kybr opened 4 years ago

kybr commented 4 years ago

The actual limit for the data length, which is imposed by the underlying IPv4 protocol, is 65,507 bytes (65,535 bytes − 8-byte UDP header − 20-byte IP header).

https://en.wikipedia.org/wiki/User_Datagram_Protocol
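
(For reference, the arithmetic as a small compile-time sketch; the constant names are illustrative, not identifiers from oscpack:)

    #include <cstddef>

    // Illustrative constants only, not oscpack identifiers.
    constexpr std::size_t kMaxIpv4Datagram = 65535; // 16-bit total-length field
    constexpr std::size_t kIpv4HeaderSize  = 20;    // minimal IPv4 header
    constexpr std::size_t kUdpHeaderSize   = 8;     // UDP header

    constexpr std::size_t kMaxUdpPayload =
        kMaxIpv4Datagram - kIpv4HeaderSize - kUdpHeaderSize;

    static_assert( kMaxUdpPayload == 65507, "practical UDP/IPv4 payload limit" );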

marcel303 commented 4 years ago

I would like to see a larger buffer size being supported as well. However, changing it to 64kb by default may be a bit much. The thread may not have sufficient stack space to keep it on the stack. Perhaps it should be made configurable. Or allocated on the heap.

kybr commented 4 years ago

The thread may not have sufficient stack space to keep it on the stack. ... Or allocated on the heap.

The buffer is allocated on the heap. Here's the place where the buffer is allocated:

    const int MAX_BUFFER_SIZE = 65507;
    data = new char[ MAX_BUFFER_SIZE ];

Unless the new operator is overloaded in some strange way, that's a heap allocation, right?

This allocation happens once, in the setup of a receiver.

I would like to see a larger buffer size being supported as well.

Why? I'm curious to hear about your particular situation.

I'm using IPv4 UDP/OSC to send large OSC Blobs, so the practical limit of 65507 bytes makes sense to me. The actual size of the packet may be smaller than that, but the size cannot (or should not) be bigger.
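
(For context, a minimal sketch of what that looks like with oscpack's OutboundPacketStream; the address, port, and blob size are made up:)

    #include <vector>
    #include "osc/OscOutboundPacketStream.h"
    #include "ip/UdpSocket.h"

    int main()
    {
        std::vector<char> buffer( 65507 );       // send buffer at the practical limit
        std::vector<char> blobData( 60000, 0 );  // example payload, just under the limit

        osc::OutboundPacketStream p( buffer.data(), buffer.size() );
        p << osc::BeginMessage( "/blob" )        // hypothetical address
          << osc::Blob( blobData.data(), (int)blobData.size() )
          << osc::EndMessage;

        UdpTransmitSocket socket( IpEndpointName( "127.0.0.1", 7000 ) );
        socket.Send( p.Data(), p.Size() );
        return 0;
    }

The receiving side needs a buffer at least as large as the packet, which is exactly what this issue proposes.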

However, changing it to 64kb by default may be a bit much.

Given that the allocation is done on the heap, how might 65507 break things? Why is it a bit much?

(The only case where I can imagine a problem is on embedded, very low-memory systems. Users of those systems tend to opt for other OSC libraries, I believe.)

Perhaps it should be made configurable.

Maybe, but that would be more work, and adding more code brings its own risks. This is a very, very simple change.
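
(For illustration, one hypothetical shape a configurable size could take; this is a sketch of the idea, not oscpack's actual API:)

    #include <cstddef>
    #include <vector>

    // Hypothetical sketch only; not oscpack's actual receive path.
    class Receiver
    {
    public:
        // Default to the UDP/IPv4 practical limit, but let callers override it.
        explicit Receiver( std::size_t bufferSize = 65507 )
            : buffer_( bufferSize ) {}

        char *Data() { return buffer_.data(); }
        std::size_t Size() const { return buffer_.size(); }

    private:
        std::vector<char> buffer_; // heap-allocated once, at construction
    };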

marcel303 commented 4 years ago

The thread may not have sufficient stack space to keep it on the stack. ... Or allocated on the heap.

The buffer is allocated on the heap. Here's the place where the buffer is allocated:

    const int MAX_BUFFER_SIZE = 65507;
    data = new char[ MAX_BUFFER_SIZE ];

Unless the new operator is overloaded in some strange way, that's a heap allocation, right?

Hi @kybr, yes it is. Sorry for the confusion. In my memory the buffer was allocated on the stack; I should have looked at the actual source code instead of relying on memory. I hit the issue over a year ago, recently revisited it, and thought I'd take a look at this repo to see whether it had been fixed, and to voice my support for this change.

Since it's just a regular heap allocation, I see nothing that would argue against increasing it to 64k.

This allocation happens once, in the setup of a receiver.

I would like to see a larger buffer size being supported as well.

Why? I'm curious to hear about your particular situation.

Sending a ton of OSC messages packed into message bundles (for efficient Ethernet utilization and reduced CPU overhead). When sending large bundles of data, the packet size adds up quickly. As for CPU usage, sending many small UDP packets means many OS-level system calls (which are quite expensive), plus the extra work the driver has to do. When addressing a dozen or so endpoints at an update rate above 100 Hz, this overhead becomes significant.
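
(A minimal sketch of that pattern with oscpack's OutboundPacketStream; the address, endpoint, and message count are illustrative:)

    #include <vector>
    #include "osc/OscOutboundPacketStream.h"
    #include "ip/UdpSocket.h"

    int main()
    {
        // One large buffer lets many messages travel as a single bundle,
        // i.e. one UDP packet and one send system call.
        std::vector<char> buffer( 65507 );
        UdpTransmitSocket socket( IpEndpointName( "127.0.0.1", 7000 ) ); // example endpoint

        osc::OutboundPacketStream p( buffer.data(), buffer.size() );
        p << osc::BeginBundleImmediate;
        for( int i = 0; i < 1000; ++i )                 // e.g. one message per parameter
            p << osc::BeginMessage( "/param/value" )    // hypothetical address
              << i << 0.5f
              << osc::EndMessage;
        p << osc::EndBundle;

        socket.Send( p.Data(), p.Size() );
        return 0;
    }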

I'm using IPv4 UDP/OSC to send large OSC Blobs, so the practical limit of 65507 bytes makes sense to me. The actual size of the packet may be smaller than that, but the size cannot (or should not) be bigger.

However, changing it to 64kb by default may be a bit much.

Given that the allocation is done on the heap, how might 65507 break things? Why is it a bit much?

(The only case where I can imagine a problem is on embedded, very low-memory systems. Users of those systems tend to opt for other OSC libraries, I believe.)

Correct. More compact OSC libraries exist for microcontrollers.

Perhaps it should be made configurable.

Maybe, but that would be more work, and adding more code brings its own risks. This is a very, very simple change.

kybr commented 4 years ago

Thank you for your reply @marcel303. After looking more closely at this repo, I think it is dead. Perhaps Ross no longer bothers with it?

Is there a way to discover the most popular fork on GitHub?

marcel303 commented 3 years ago

I don't know of a decent way to discover the most active fork. The network graph helps a bit: https://github.com/marcel303/oscpack/network. But a quick way to assess the state of the other forks would certainly be useful.

I also think it is dead. The author is still around (on Twitter), so I think he just has other priorities these days.

I've made a fork myself (https://github.com/marcel303/oscpack) and committed all of the fixes I had lying around. They're very similar to the PRs here, but also include some other things, like compile fixes for Android/iOS and a fix for a race condition that occurs when quickly creating/destroying endpoints. I intend to fix things as I come across issues, and to merge PRs with bug fixes.

I've looked at the other forks (there aren't many) to see what kind of fixes they made. Most are ARM64 fixes (some ifdefs). The only interesting fork is the one by @techtim (https://github.com/techtim/oscpack). He added optional socket reuse, and changed a volatile to an atomic. Both are changes I also made in my own fork (with socket reuse always enabled; perhaps I should make it optional too).
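
(For the volatile change, the general pattern looks roughly like this; the names are illustrative, not the actual oscpack code:)

    #include <atomic>

    // Illustrative pattern only, not the actual oscpack change.
    // A volatile bool gives no atomicity or ordering guarantees between threads;
    // std::atomic<bool> does.
    std::atomic<bool> breakRequested{ false };

    void AsynchronousBreak()    // called from another thread to stop the loop
    {
        breakRequested.store( true );
    }

    void Run()                  // receive loop
    {
        while( !breakRequested.load() )
        {
            // ... wait for and dispatch incoming packets ...
        }
    }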

LFSaw commented 3 years ago

Hey there,

I don't know if it is useful, but I just created a fork of this repo and updated its folder and CMake config structure to comply with modern CMake conventions. The HEAD of my fork can now be used as a submodule in other CMake projects without extra configuration. I thought I'd share it here. No other fixes were applied.