Closed jfriesse closed 7 years ago
At this point I'd note there was already a brainstorm proposal to introduce
a new, systematic way of encoding the name
argument to qb_ipc[cs]_create
using a custom URI scheme. One of the reasons, among others, was addressing #222 in
due course. It could also accommodate switching between
abstract and FS-backed Unix sockets.
That being said, we already have the qb
URI scheme provisionally assigned:
http://www.iana.org/assignments/uri-schemes/prov/qb
Perhaps it's time to refresh the idea and turn it into something material.
You can force libqb to use sockets on Linux by adding qb { ipc_type: socket }
to corosync.conf. Though as it still mmaps the files, I don't know if it will work in a container; I haven't tested it there.
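For reference, the option goes into a qb block at the top level of corosync.conf (see corosync.conf(5); the surrounding comments are mine, not from the thread):

```
# corosync.conf excerpt: ask libqb to use socket-based client IPC
# instead of the default shm-backed transport.
qb {
    ipc_type: socket
}
```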
I well remember the discussion about the libqb URI, and while I'm still against such an overengineered solution, I'm certainly not against a more flexible API.
@chrissie-c This is not what I was talking about. ipc_type: socket forces libqb to use socket IPC instead of shm IPC (or actually the native one). But it doesn't change how the service socket is created (qb_ipcs_us_publish, called by qb_ipcs_run), which on Linux (or strictly speaking, #if defined(QB_LINUX) || defined(QB_CYGWIN)) is an abstract socket.
The bug is about the possibility of creating an FS-based socket (instead of an abstract one) on Linux.
SHM-based client IPC should work just fine for containers, as long as /dev/shm is shared between containers and it's possible to get an FS-based service socket.
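As a concrete illustration of the sharing requirement (the image name and exact invocation are my assumptions, not from the thread), bind-mounting the host's /dev/shm into a container might look like:

```
# Hypothetical example: share the host's /dev/shm with the container
# so SHM-backed libqb IPC can cross the container boundary.
docker run -v /dev/shm:/dev/shm --name node1 my-corosync-image
```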
Basically, this can be closed as solved by #248.
Re URI schemes: such an idea of how to generalize over multiple domains in an extensible and annotatable way seems to be one of the influencing principles behind Redox OS.
As can be seen in the qb_ipcs_us_publish function, Linux and Cygwin use abstract sockets, while other OSes use regular named sockets stored in the SOCKETDIR directory. The request is about adding a flag to allow Linux/Cygwin to also store the socket in SOCKETDIR. The client part (qb_ipcc_stream_sock_connect) has to be changed as well.
The reason is described in http://lists.clusterlabs.org/pipermail/users/2017-February/005029.html.
Of course, for such an environment it's expected that /dev/shm is shared between VMs.
An even better solution would be to implement a TCP version of ipcs. Sadly, that would cost a huge amount of work, and it's also quite hard to make it nonblocking and atomic (i.e., either the whole message is sent, or a try-again error is returned).