pavel-odintsov opened this issue 9 years ago
So I looked at the strace output and found a huge number of nanosleep calls:
```
nanosleep({0, 100000}, NULL) = 0
nanosleep({0, 100000}, NULL) = 0
nanosleep({0, 100000}, NULL) = 0
nanosleep({0, 100000}, NULL) = 0
nanosleep({0, 100000}, NULL) = 0
nanosleep({0, 100000}, NULL) = 0
nanosleep({0, 100000}, NULL) = 0
nanosleep({0, 100000}, NULL) = 0
nanosleep({0, 100000}, NULL) = 0
nanosleep({0, 100000}, NULL) = 0
nanosleep({0, 100000}, NULL) = 0
nanosleep({0, 100000}, NULL) = 0
nanosleep({0, 100000}, NULL) = 0
nanosleep({0, 100000}, NULL) = 0
```
There does seem to be a long delay before these NICs reach the "link up" state. I don't know exactly why, but we made our selftest routine accept this lag.
Can you actually start passing traffic more quickly with the kernel driver? It could be that we can accelerate the init somehow.
Hello!
Thanks for the answer! I will add these cards to the blacklist! :)
Then I could test them with netmap and share the results.
Hello, folks!
I'm using the FireHose app for traffic processing.
And I have two NIC models:
And I'm using the following code for tests:
My current SnabbSwitch branch is "next". I have checked the "master" branch too.
And when I specified the X540-AT2 NIC in --input, NIC initialization took a really HUGE amount of time:
But when I switched to the 82599 NIC for --input, everything went really fast:
Do you have any ideas what's wrong with the X540-AT2?