Closed: dennisafa closed this pull request 5 years ago
In response to PR creation
Your results will arrive shortly
In response to PR creation
Error: Failed to fetch results from nimbnode30
@onvm try it now
@onvm can we pass this pktgen test?
Your results will arrive shortly
Yay to CI! I tested on CloudLab and got 13,422,053 pps for pktgen, and 4,573,120 rx/tx for two speed testers together. I'll test the specific message functionality later, but basic performance looks solid.
@onvm can we show dennis our new features?
Your results will arrive shortly
@dennisafa pktgen from CI will work if you merge develop into this branch
@onvm perf?
@onvm perf
Your results will arrive shortly
@onvm i believe in you
Your results will arrive shortly
@onvm how's it goin pktgen
Your results will arrive shortly
@onvm if you fail I'll catch you now!
Your results will arrive shortly
@dennisafa just add the https://github.com/sdnfv/openNetVM/pull/151/files changes into this pr
oops. got it.
Testing
Your results will arrive shortly
Testing
Error: Failed to parse Speed Tester stats
Testing
Your results will arrive shortly
Awesome, I'm excited CI has mTCP results now. There's no real performance benefit; it's just that the sem_wait call is logically equivalent, so having it in both dequeue_messages and dequeue_packets would be redundant.
@kevindweb approved this pull request.
Let's goooooooo! First successful mTCP CI run. I have also been testing on CloudLab: after merging develop, I got 14,746,136 pps from Pktgen with the flow table disabled and 12,895,796 pps with the FT macro enabled, which is normal. I created a 4-speed-tester chain with ~17,664,647 pps tx on all four, and got almost 52 million pps just running one speed_tester. Also, not a change request, just a question: was there a performance benefit to moving this code out of onvm_nflib_dequeue_packets, or did it just make more logical sense? I ask because there doesn't seem to be a whole lot changed except for the condition definition.
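For readers following this exchange, here is a rough, hypothetical sketch of the design point being discussed: instead of a sem_wait inside both dequeue_packets and dequeue_messages, a single shared blocking check sits in the NF main loop and sleeps only when both rings are empty. All names and structure below are illustrative only, not the actual onvm_nflib code.

```c
/* Illustrative only -- not the actual onvm_nflib internals. */
#include <semaphore.h>
#include <rte_ring.h>

struct nf_ctx {
        struct rte_ring *rx_ring;    /* packet ring for this NF */
        struct rte_ring *msg_ring;   /* message ring for this NF */
        sem_t *sleep_sem;            /* posted by the manager on enqueue */
        volatile int *sleeping;      /* shared flag: NF is blocked on the sem */
};

/* Stubs for illustration; a real NF drains the rings with rte_ring_dequeue(). */
static void dequeue_and_process_packets(struct nf_ctx *nf) { (void)nf; }
static void dequeue_and_process_messages(struct nf_ctx *nf) { (void)nf; }

static void
nf_main_loop(struct nf_ctx *nf)
{
        for (;;) {
                /* One shared blocking point instead of a sem_wait() inside
                 * both dequeue helpers: sleep only when BOTH rings are empty. */
                if (rte_ring_count(nf->rx_ring) == 0 &&
                    rte_ring_count(nf->msg_ring) == 0) {
                        *nf->sleeping = 1;
                        /* Re-check after publishing the flag to narrow the
                         * window for a lost wake-up from the manager. */
                        if (rte_ring_count(nf->rx_ring) == 0 &&
                            rte_ring_count(nf->msg_ring) == 0)
                                sem_wait(nf->sleep_sem);
                        *nf->sleeping = 0;
                }

                dequeue_and_process_packets(nf);
                dequeue_and_process_messages(nf);
        }
}
```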
Adds functionality to sleep when no messages are enqueued onto an NF's message ring.
Summary:
This is functionality I implemented as part of the larger mTCP project, in which an NF constantly receives a large volume of messages. It works exactly the same way as sleeping on an empty packet ring (see the sketch below).
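As a minimal sketch of what that means on the manager side (reusing the hypothetical nf_ctx from the sketch earlier in the thread; these names are not the actual onvm_mgr code), the manager posts the NF's semaphore after enqueuing a message if the NF has marked itself asleep, mirroring the packet-ring wake-up path.

```c
/* Hypothetical manager-side wake-up, mirroring the packet-ring path.
 * Reuses the illustrative struct nf_ctx from the sketch above; this is
 * not the actual onvm_mgr code. */
static int
mgr_send_msg(struct nf_ctx *nf, void *msg)
{
        if (rte_ring_enqueue(nf->msg_ring, msg) != 0)
                return -1;              /* message ring full */

        /* If the NF blocked because both rings were empty, wake it up. */
        if (*nf->sleeping)
                sem_post(nf->sleep_sem);
        return 0;
}
```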
Usage: Run the manager with shared CPU enabled.
Merging notes:
TODO before merging:
Test Plan:
Tested by sending a large influx of messages to an NF running in shared CPU mode, and also with multiple NFs on the same core. Checked CPU usage in htop to verify the NF slept properly when no messages were being sent.
Review:
Review checklist:
Sanity checks (/onvm and /examples directories), assigned to @koolzz @kevindweb
Code style, assigned to @koolzz @kevindweb
Code design, assigned to @koolzz @kevindweb
Performance, assigned to @koolzz @kevindweb
Documentation, assigned to @koolzz @kevindweb