eclipse-iceoryx / iceoryx

Eclipse iceoryx™ - true zero-copy inter-process-communication
https://iceoryx.io
Apache License 2.0

Extending iceoryx over PCIe shared memory #915

Open sjames opened 3 years ago

sjames commented 3 years ago

Shared memory over PCIe

We could have multiple SoCs connected via PCIe, with memory shared in the PCIe address space.

I'd love to hear thoughts on how iceoryx could be extended across multiple processors. Maybe by running a RouDi instance on each SoC that performs the coordination?

elfenpiff commented 3 years ago

@sjames It should be possible. Here we would have to implement a custom shared memory provider which handles the PCIe memory region.
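
As a very rough illustration (nothing iceoryx ships today), such a provider could obtain its raw memory by mapping a PCIe BAR that Linux exposes through sysfs. The device path and region size below are placeholders:

```cpp
// Minimal sketch (not iceoryx API): map a PCIe BAR exposed via Linux sysfs
// so that a custom memory provider could hand this region to iceoryx.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>

int main()
{
    // Hypothetical device: BAR0 of a PCIe endpoint that exposes shared RAM.
    const char* barPath = "/sys/bus/pci/devices/0000:01:00.0/resource0";
    const std::size_t regionSize = 1U << 20; // 1 MiB, example size only

    int fd = open(barPath, O_RDWR | O_SYNC);
    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    // Map the BAR into this process; a custom provider would hand this
    // pointer and size to the memory management layer instead of a
    // POSIX/SysV shared memory segment.
    void* base = mmap(nullptr, regionSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED)
    {
        perror("mmap");
        close(fd);
        return 1;
    }

    // ... use 'base' / 'regionSize' as the shared memory region ...

    munmap(base, regionSize);
    close(fd);
    return 0;
}
```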

But this would be far from sufficient, since we would have to orchestrate it in a way that one iceoryx SoC is not writing into memory that another SoC is currently processing. Additionally, when an iceoryx publisher wants to transmit data to a subscriber running on a different SoC, it has to write the chunk into the queue in a thread-safe way. At the moment this is done via lock-free queues, which is sufficient for inter-process communication on one host. In your architecture we would have to lock the other SoC out of this piece of memory and later signal it that something was received. Usually we use semaphores for that, but these won't work in such a setup either.
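
One conceivable substitute for the semaphore (again, nothing that exists in iceoryx today) would be a "doorbell" word in the shared region that the receiving side polls. Whether atomics on such a mapping are actually coherent across the interconnect is purely a hardware question. A minimal sketch:

```cpp
#include <atomic>
#include <cstdint>

// One 'doorbell' word living inside the shared PCIe region.
// Whether std::atomic operations are coherent across the interconnect
// depends entirely on the hardware and must be verified.
struct Doorbell
{
    std::atomic<uint32_t> sequence{0};
};

// Producer side: publish a chunk, then ring the doorbell.
void notify(Doorbell& bell)
{
    bell.sequence.fetch_add(1, std::memory_order_release);
}

// Consumer side: poll until the sequence number changes, then process.
uint32_t waitForNotification(Doorbell& bell, uint32_t lastSeen)
{
    uint32_t current = lastSeen;
    while ((current = bell.sequence.load(std::memory_order_acquire)) == lastSeen)
    {
        // busy-wait; a real implementation would back off or use an
        // MSI/doorbell interrupt provided by the PCIe device instead
    }
    return current;
}
```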

Could you maybe provide some insight into how one could lock a memory region or create an inter-SoC mutex? And how could I signal a process on a different SoC via PCIe that an event has happened, similar to a semaphore?

If you have some example code snippets, that would help me understand the procedure better.

sjames commented 3 years ago

@elfenpiff thank you for the reply.

I suppose locking the regions shared between multiple SoCs has to be implemented cooperatively, using messaging over PCIe.
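
For example, a two-participant lock such as Peterson's algorithm needs only ordinary loads and stores in the shared region. A rough sketch, assuming both SoCs actually observe those stores coherently and in order (which would have to be verified on the real hardware):

```cpp
#include <atomic>
#include <cstdint>

// Two-party lock (Peterson's algorithm) placed inside the shared PCIe region.
// It needs only plain loads and stores, but it still relies on both SoCs
// seeing each other's writes coherently and in program order - a property of
// the interconnect that has to be verified, not assumed.
struct PetersonLock
{
    std::atomic<bool> wantsToEnter[2] = {false, false};
    std::atomic<int32_t> turn{0};
};

// 'self' is 0 on one SoC and 1 on the other.
void lock(PetersonLock& l, int32_t self)
{
    const int32_t other = 1 - self;
    l.wantsToEnter[self].store(true, std::memory_order_seq_cst);
    l.turn.store(other, std::memory_order_seq_cst);
    // Spin while the other side also wants the lock and it is its turn.
    while (l.wantsToEnter[other].load(std::memory_order_seq_cst)
           && l.turn.load(std::memory_order_seq_cst) == other)
    {
        // busy-wait; real code would add back-off or a timeout
    }
}

void unlock(PetersonLock& l, int32_t self)
{
    l.wantsToEnter[self].store(false, std::memory_order_seq_cst);
}
```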

The idea is really premature and I wanted to put it out here to understand the potential challenges. I really have to think more.

Regards, Sojan