ido / libvma-old

Automatically exported from code.google.com/p/libvma

VMA_RX_POLL=-1 VMA_SELECT_POLL=-1 with thread affinity #18

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. I have three TCP sessions, each with its own non-blocking receiving thread.
2. Each receiving thread has its affinity set to the same CPUs (i.e. all have 
cpumask = 3, for example).
3. VMA_RX_POLL=-1 VMA_SELECT_POLL=-1
4. The application runs under SCHED_FIFO.
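For reference, a setup like the one described above might be launched as follows. This is only an illustrative sketch: the application name `./app` is hypothetical, and the real-time priority value is an arbitrary example; the VMA variables, cpumask 3 (cores 0-1), and SCHED_FIFO come from the report.

```shell
# Hypothetical launch line matching the reported configuration:
# - VMA_RX_POLL=-1 / VMA_SELECT_POLL=-1: infinite busy-polling in VMA
# - taskset -c 0,1: restrict threads to cores 0 and 1 (cpumask = 3)
# - chrt -f 50: run under SCHED_FIFO (priority 50 chosen arbitrarily)
VMA_RX_POLL=-1 VMA_SELECT_POLL=-1 LD_PRELOAD=libvma.so \
    taskset -c 0,1 chrt -f 50 ./app
```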

What is the expected output? What do you see instead?

Everything seems to hang. If I leave only VMA_RX_POLL=-1, then it might still 
hang occasionally (very rarely).

Is it possible to selectively apply different VMA_RX/SELECT_POLL to UDP and TCP?

What version of the product are you using? On what operating system?

VMA 6.6.4 OFED 2.2 RHEL 6.5

Original issue reported on code.google.com by denis.iv...@gmail.com on 8 Jul 2014 at 1:46

GoogleCodeExporter commented 9 years ago
A value of -1 for VMA_RX_POLL and VMA_SELECT_POLL means infinite polling (until 
data arrives) with no going to sleep.
This means that each thread will use 100% CPU at all times.
If all of them are running on the same CPU core, you will experience "hang-like" 
behavior, since one thread keeps the others from running most of the time. Under 
SCHED_FIFO this is especially severe, because a same-priority thread that never 
blocks is never preempted by the scheduler.
The best solution is to separate the threads onto different CPU cores.
Another solution is to use small positive values for these parameters.

Currently, these parameters cannot be selectively applied to UDP/TCP.
But there is no real limitation here; the code can be changed relatively easily 
to have different parameters for UDP and TCP.

Original comment by orkmella...@gmail.com on 10 Jul 2014 at 10:18

GoogleCodeExporter commented 9 years ago
It would be great to apply different polling parameters per protocol, or even 
per FD.

I have too many TCP connections to assign a dedicated CPU core to each, but far 
fewer UDP channels.

Original comment by denis.iv...@gmail.com on 10 Jul 2014 at 10:25