Closed ikenchina closed 3 years ago
If EventLoop::queueInLoop() is called by another thread, this loop will always be woken up. Notice the !isInLoopThread() condition:
void EventLoop::queueInLoop(Functor cb)
{
  {
    MutexLockGuard lock(mutex_);
    pendingFunctors_.push_back(std::move(cb));
  }
  if (!isInLoopThread() || callingPendingFunctors_)
  {
    wakeup();
  }
}
callingPendingFunctors_ is only accessed in the loop thread, so there is no need to make it atomic.
But there is no need to check !isInLoopThread(). Would it be a better approach to replace it with checking callingPendingFunctors_?
I am sure there are alternative ways to implement the "need to wake up" logic. I believe the current implementation is correct and efficient, and also easy to understand. It has no data races, no missed wakeups, and no excess wakeups.
However, I don't think checking callingPendingFunctors_ alone is correct. You may need a different flag, such as needWakeup_, which must be properly synchronized if it's going to be checked outside the loop thread.
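Roughly, such a needWakeup_ flag might look like the following. This is a hypothetical, deliberately simplified sketch, not muduo code: the MiniLoop type, the beforePoll()/afterPoll() hooks, and the wakeupCount counter are inventions for illustration, and it glosses over the race between checking for pending work and blocking in poll(), which is exactly the kind of subtlety that makes this approach tricky.

```cpp
#include <atomic>
#include <cassert>
#include <functional>
#include <mutex>
#include <vector>

// Hypothetical sketch: an event loop that wakes up at most once
// per blocking period, using a needWakeup_ flag.
class MiniLoop {
 public:
  int wakeupCount = 0;  // stands in for writing to the wakeup fd

  // May be called from any thread.
  void queueInLoop(std::function<void()> cb) {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      pendingFunctors_.push_back(std::move(cb));
    }
    // exchange() ensures only the first caller per blocking
    // period pays for a wakeup.
    if (needWakeup_.exchange(false)) {
      wakeup();
    }
  }

  // Loop thread, just before blocking in poll().
  void beforePoll() { needWakeup_.store(true); }

  // Loop thread, once it is awake again.
  void afterPoll() { needWakeup_.store(false); }

  void doPendingFunctors() {
    std::vector<std::function<void()>> functors;
    {
      std::lock_guard<std::mutex> lock(mutex_);
      functors.swap(pendingFunctors_);
    }
    for (auto& f : functors) f();
  }

 private:
  void wakeup() { ++wakeupCount; }  // real code would write to an eventfd

  std::mutex mutex_;
  std::vector<std::function<void()>> pendingFunctors_;
  std::atomic<bool> needWakeup_{false};
};
```

With this sketch, two back-to-back queueInLoop() calls while the loop is "blocked" cause only a single wakeup, which is the saving being discussed.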
Thanks.
Yes, the current implementation is correct, but it calls wakeup() every time queueInLoop() is called from another thread.
I think it's enough to check callingPendingFunctors_ alone. Please see the snippet below.
Precondition: callingPendingFunctors_ is an atomic variable.
queueInLoop is called by another thread:

void EventLoop::queueInLoop(Functor cb)
{
  {
    MutexLockGuard lock(mutex_);
    pendingFunctors_.push_back(std::move(cb));
  }
  // if (!isInLoopThread() || callingPendingFunctors_)
  if (callingPendingFunctors_.load()) // atomic<bool> // 4
    wakeup();
}

void EventLoop::loop()
{
  while (!quit_)
  {
    pollReturnTime_ = poller_->poll(kPollTimeMs, &activeChannels_); // 3
    callingPendingFunctors_ = false; // 1
    for (Channel* channel : activeChannels_)
      channel->handleEvent(pollReturnTime_);
    eventHandling_ = false;
    doPendingFunctors();
  }
}

void EventLoop::doPendingFunctors()
{
  std::vector<Functor> functors;
  {
    MutexLockGuard lock(mutex_);
    functors.swap(pendingFunctors_);
    callingPendingFunctors_ = true; // 2
  }
  for (const Functor& functor : functors)
    functor();
}
@chenshuo ?
In general, I don't think synchronization using a shared atomic boolean flag is a good idea. It's subtle and hard to understand and maintain, and its benefit is negligible (if any).
Btw, in this case, it should be called needWakeup, not callingPendingFunctors.
@chenshuo
That is not synchronization. Only pendingFunctors_ should be synchronized.
I think there are two problems with that argument:
1. Is it really hard to understand and maintain? It's just a happens-before relationship; why do you say that?
2. I don't think the benefit is negligible. muduo is a high-performance framework, isn't it? If so, why not improve it?
It is synchronization per se, as a thread's behavior (calling wakeup() or not) depends on the value of a shared variable set by another thread.
I am not sure if it's correct. Even if it is now, the code becomes fragile to future changes, because its correctness is not easily verifiable by machine (sanitizers, unittests, etc.) or human. And I don't think it is worth the effort to save a wakeup() during the window between returning from poll() and before doPendingFunctors().
IMO, in user-space applications, atomic variables should only be used in the simplest cases, like statistics counters (whose values don't affect the behavior of the program, e.g. bytes sent/received), where the relaxed memory order is sufficient. If a stronger memory order is needed (e.g. acquire/release), it's often beyond the average programmer's ability to analyze its correctness. There might be one or two counterexamples in muduo's code, but let's not add more.
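For concreteness, the kind of statistics counter where a relaxed atomic is appropriate might look like this (a hypothetical sketch; the onSend and totalBytesSent names are invented for illustration):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <thread>
#include <vector>

// A statistics counter: its value is only ever read for reporting
// and never drives control flow, so relaxed ordering is sufficient.
std::atomic<uint64_t> bytesSent{0};

void onSend(std::size_t n) {
  bytesSent.fetch_add(n, std::memory_order_relaxed);
}

uint64_t totalBytesSent() {
  return bytesSent.load(std::memory_order_relaxed);
}
```

Relaxed atomics still make each increment indivisible, so concurrent writers never lose updates; what they give up is any ordering guarantee relative to other memory operations, which a pure counter doesn't need.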
Consider the corner case: the queued functors will starve if there is no poll event, because while the loop thread is blocked in poll() with callingPendingFunctors_ still false, queueInLoop() never calls wakeup().
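To illustrate the corner case, here is a minimal mock of the proposed check (the ProposedLoop type is hypothetical, and wakeupCount stands in for writing to the wakeup fd). A functor queued while the flag is still false, e.g. before doPendingFunctors() has ever run, triggers no wakeup and just sits in the queue:

```cpp
#include <atomic>
#include <cassert>
#include <functional>
#include <mutex>
#include <vector>

// Hypothetical mock of the proposed queueInLoop(); not muduo code.
struct ProposedLoop {
  int wakeupCount = 0;  // stands in for writing to the wakeup fd
  std::atomic<bool> callingPendingFunctors_{false};  // still false before the first swap
  std::mutex mutex_;
  std::vector<std::function<void()>> pendingFunctors_;

  void queueInLoop(std::function<void()> cb) {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      pendingFunctors_.push_back(std::move(cb));
    }
    // Proposed check: the flag alone, without !isInLoopThread().
    if (callingPendingFunctors_.load())
      ++wakeupCount;  // would be wakeup()
  }
};
```

With the flag false, queueInLoop() leaves wakeupCount at zero, so nothing interrupts poll(): the functor waits for an unrelated poll event or the poll timeout.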