TarsCloud / TarsCpp

C++ language framework rpc source code implementation
BSD 3-Clause "New" or "Revised" License

Question: is the "ReqInfoQueue" queue thread-safe? #223

Open triump2020 opened 2 years ago

triump2020 commented 2 years ago
In `ServantProxy::invoke`, the request message `msg` is pushed into `pReq` without taking a lock, which **on the surface** looks like it could create a data race with the client network thread.
However, the subsequent `msg->pObjectProxy->getCommunicatorEpoll()->notify(pSptd->_reqQNo);` provides release semantics, and the `epoll_wait` in the client network thread provides acquire semantics. Is my understanding correct?
ruanshudong commented 2 years ago

It is safe in the one-reader, one-writer case.

triump2020 commented 2 years ago

Even though the queue is single-producer, single-consumer, without a lock there could still be memory-ordering/visibility problems, couldn't there?

triump2020 commented 2 years ago

I think it is actually the subsequent `epoll_ctl` together with the `epoll_wait` in the network thread that make this work.

ruanshudong commented 2 years ago

Why would there be a problem?

triump2020 commented 2 years ago

`epoll_ctl` corresponds to release semantics and `epoll_wait` to acquire semantics, so once `epoll_wait` returns, all memory writes made before the `epoll_ctl` are guaranteed to be visible. That is why `ReqInfoQueue` can get away with no synchronization of its own; otherwise even the simplest single-producer, single-consumer queue would need explicit memory-ordering control to be data-race free.

FanFansfan commented 2 years ago

If the question is whether system calls act as memory barriers, they probably do. This article explains that when a thread is preempted, its store buffer is in fact drained: https://pvk.ca/Blog/2019/01/09/preemption-is-gc-for-memory-reordering/

> However, if we go back to the ring buffer example, there is often only one writer per ring. Enqueueing an item in a single-producer ring buffer incurs no atomic, only a release store: the write pointer increment only has to be visible after the data write, which is always the case under the TSO memory model (including x86). Replacing the write pointer in a single-producer ring buffer with an event count where each increment incurs an atomic operation is far from a no-brainer. Can we do better, when there is only one incrementer?
>
> On x86 (or any of the zero other architectures with non-atomic read-modify-write instructions and TSO), we can… but we must accept some weirdness.

But he also notes that in the SPSC case, under x86's memory model, no barrier is actually needed: "which is always the case under the TSO memory model (including x86)".

Beyond the memory-visibility discussion above, the thing that strikes me as stranger is this: you use a lock-free 1:1 inter-thread queue, yet every notification requires an `epoll_ctl` mod call that traps into the kernel. Isn't that somewhat contradictory?