open-ephys / next-gen-system

Work-in-progress repository for the next-generation acquisition and closed-loop feedback system
https://open-ephys.atlassian.net/wiki/display/OEW/PCIe+acquisition+board

Facilities for zero-copy sharing of DMA region between multiple processes #5

Closed · shlevy closed this issue 6 years ago

shlevy commented 8 years ago

The API should expose a way for multiple processes to share the DMA region without any actual copying. Under the hood this could be implemented by passing around file descriptors (either through inheritance or Unix sockets) and having each process map the file descriptor into its own address space. Whether the API exposes the file-descriptor detail is optional, but IMO exposing it doesn't lose much and gives the user flexibility in exactly how the descriptor is shared between processes.
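The mechanism described above can be sketched with standard Linux primitives. This is a minimal illustration, not part of the project's API: `memfd_create` stands in for whatever file descriptor the driver would hand out, and fd inheritance across `fork` stands in for the sharing step (a Unix socket with `SCM_RIGHTS` would work between unrelated processes). Each process maps the same fd into its own address space, so no data is copied.

```c
#define _GNU_SOURCE
#include <assert.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch: share a memory region between two processes by passing a file
 * descriptor (here via fork inheritance) and mapping it in each process.
 * memfd_create() is a hypothetical stand-in for the real DMA-region fd. */
int demo_zero_copy_share(void)
{
    int fd = memfd_create("dma-region", 0);
    if (fd < 0 || ftruncate(fd, 4096) < 0)
        return -1;

    /* Producer mapping: write a marker into the shared pages. */
    char *parent_map = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
    if (parent_map == MAP_FAILED)
        return -1;
    strcpy(parent_map, "sample-block");

    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        /* Child: map the inherited fd into its own address space and
         * read the data; no copy of the region has taken place. */
        char *child_map = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
        _exit(child_map != MAP_FAILED &&
              strcmp(child_map, "sample-block") == 0 ? 0 : 1);
    }
    int status;
    waitpid(pid, &status, 0);
    return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
}
```

This is Linux-specific (hence the cross-platform concern raised below); Windows would use `CreateFileMapping`/`DuplicateHandle` for the same pattern.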

glopesdev commented 8 years ago

This is an interesting requirement, and could be nice to have, but we should be careful to avoid exposing OS-specific APIs to the top level (e.g. unix sockets), which would make cross-platform support harder.

However, another question is how this would differ from just sharing the pointer to the open device instance between threads. There's nothing stopping you from reading data from the same device from multiple threads, although we should think about the side effects of doing so.

shlevy commented 8 years ago

I'm specifically thinking of the multi-process case, where each thread of control has a separate memory space. Unless you mean "pointer" as something more generic than a memory pointer?

shlevy commented 8 years ago

As for cross-platform: it might be nice to eventually have a cross-platform API for sharing the mapping between processes, plus platform-specific ones for users who want to do more complicated things (e.g. `std::thread` in the C++ standard library has a `native_handle` function). But of course the platform-specific one can come later if it's needed.

aacuevas commented 8 years ago

We already discussed this when designing the current API. In `int oiOpenPort(oiContext c, int port, int flags=OI_DEFAULT_FLAG)`, one purpose of the flags argument is to define read/write permissions. While the specifics are not yet defined, our idea is that the driver will allow only one writer (a master thread with the ability to issue control commands) and as many reader clients as needed, which can read from the streams using the `int oiReadStream(oiContext c, int port, int stream, int nbytes, void *data)` function.
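The one-writer/many-readers rule could be mocked up as follows. Only the two signatures above come from the comment; `OI_FLAG_WRITE`, the `oiContextImpl` struct, and every implementation detail here are invented purely to illustrate the intended semantics.

```c
#include <assert.h>
#include <string.h>

#define OI_DEFAULT_FLAG 0
#define OI_FLAG_WRITE   1   /* hypothetical flag name, not from the real API */

struct oiContextImpl {
    int has_writer;          /* the driver allows a single writer per port */
    char stream[64];         /* stand-in for a device data stream */
};
typedef struct oiContextImpl *oiContext;

/* Mock: only one caller may open a port with write permission. */
int oiOpenPort(oiContext c, int port, int flags)
{
    (void)port;
    if (flags & OI_FLAG_WRITE) {
        if (c->has_writer)
            return -1;               /* a master is already attached */
        c->has_writer = 1;
    }
    return 0;                        /* read-only clients always succeed */
}

/* Mock: read()-style copy out of the shared stream. */
int oiReadStream(oiContext c, int port, int stream, int nbytes, void *data)
{
    (void)port; (void)stream;
    memcpy(data, c->stream, (size_t)nbytes);
    return nbytes;
}
```

Under this model, a second `oiOpenPort` call asking for write access would fail, while any number of readers can attach and call `oiReadStream` independently.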

We haven't yet discussed the details of the concrete implementation; right now we are defining how the API should behave. However we do it, any platform-specific element will be hidden under our API, so from the client's point of view the system will be completely cross-platform.

shlevy commented 8 years ago

Seems like this is already covered then, cool

aacuevas commented 8 years ago

It's mostly covered. `oiReadStream` behaves like the POSIX read function: it copies the data from the underlying stream into a local buffer. Calls from different threads access the same underlying buffer without additional memory operations, but each copies the data into its own memory space (just as if you used a POSIX read from different threads with a shared file descriptor; in fact, our API is heavily influenced by POSIX).

If what you mean here is a real memory map onto the low-level buffer, it could be done. But since the underlying buffer is a circular buffer that is continuously being filled with data by the DMA engine of the PCIe interface, it would be quite challenging for the client to maintain synchronization, since all the extra facilities the `oiReadStream` function provides (buffer tracking, overflow error notifications, etc.) would not be present.
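The synchronization problem described above can be made concrete with a toy ring buffer. This is not the project's implementation, just a sketch: a simulated DMA writer wraps around and silently overwrites old data, so a stable snapshot requires a copying read that can detect when the reader has been lapped. A raw memory map of the same buffer would give clients neither the copy nor the overflow report.

```c
#include <assert.h>
#include <string.h>

#define RING_SIZE 8   /* tiny on purpose, to force wrap-around */

typedef struct {
    char data[RING_SIZE];
    unsigned long wr;   /* total bytes written by the (simulated) DMA engine */
    unsigned long rd;   /* total bytes consumed by this reader */
} ring_t;

/* Simulated DMA write: wraps around and silently overwrites old data,
 * exactly what a client staring at a raw mapping would have to cope with. */
void ring_dma_write(ring_t *r, const char *src, int n)
{
    for (int i = 0; i < n; i++)
        r->data[(r->wr++) % RING_SIZE] = src[i];
}

/* Copying read, read()-style: returns -1 if the writer lapped the reader
 * (overflow), a condition a raw memory map could not report. */
int ring_read(ring_t *r, char *dst, int n)
{
    if (r->wr - r->rd > RING_SIZE)
        return -1;                       /* overflow: data was lost */
    int avail = (int)(r->wr - r->rd);
    if (n > avail)
        n = avail;
    for (int i = 0; i < n; i++)
        dst[i] = r->data[(r->rd++) % RING_SIZE];
    return n;
}
```

The overflow check is the "buffer tracking" facility in miniature: the copy is the price paid for handing the client data that cannot change underneath it.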

shlevy commented 6 years ago

Closing this to clear up my open issues list, please open a new one if interested in pursuing this.

jonnew commented 6 years ago

Reopening to keep this immediately visible.

jonnew commented 6 years ago

Closing because I did not fully understand @shlevy's motivation for closing. I will copy this somewhere I can refer to it.