Closed madhura-a closed 4 years ago
That seems odd. A couple questions:

- How exactly are you measuring latency?
- How are you generating traffic at the first NF?
Also, it may depend on how cores are allocated (maybe 1?) for case 1 when you have all the NFs running on one server. If you have the core bottlenecked, then it might be possible that it contributes more delay than even going over the network multiple times to get to the other server and back. Depends on how much work is done on the core...
On Sun, Apr 21, 2019 at 9:03 AM Tim Wood notifications@github.com wrote:
That seems odd. A couple questions:
- How exactly are you measuring latency?
- How are you generating traffic at the first NF?
-- K. K. Ramakrishnan
For measuring the latency, I have used the following code.
curtime = rte_get_tsc_cycles();
/* cycles -> ns; note that (curtime - oldtime) * 1000000000 can overflow
   a uint64_t once the measured interval exceeds a few seconds at GHz
   clock rates */
totalTime = (curtime - oldtime) * 1000000000ULL / rte_get_timer_hz();
After sending the first packet, I record oldtime using rte_get_tsc_cycles(); once all the packets have arrived, I execute the code above.
For generating the packets, the following piece of code is used.
for (i = 0; i < user_count; i++) {
        struct onvm_pkt_meta *pmeta;
        struct ether_hdr *ehdr;
        struct rte_mbuf *pkt = rte_pktmbuf_alloc(pktmbuf_pool);
        if (pkt == NULL) {
                printf("Failed to allocate packets\n");
                break;
        }

        /* Ethernet header */
        ehdr = (struct ether_hdr *) rte_pktmbuf_append(pkt, packet_size);
        rte_eth_macaddr_get(0, &ehdr->s_addr);
        for (int j = 0; j < ETHER_ADDR_LEN; ++j) {
                ehdr->d_addr.addr_bytes[j] = d_addr_bytes[j];
        }
        ehdr->ether_type = LOCAL_EXPERIMENTAL_ETHER;

        /* ONVM metadata: deliver to the NF with service ID 1 */
        pmeta = onvm_get_pkt_meta(pkt);
        pmeta->destination = 1;
        pmeta->action = ONVM_NF_ACTION_TONF;

        /* Payload: one-byte packet id and one-byte packet type */
        uint8_t *uid = (uint8_t *) rte_pktmbuf_append(pkt, sizeof(uint8_t));
        *uid = i;
        uint8_t *packet_type = (uint8_t *) rte_pktmbuf_append(pkt, sizeof(uint8_t));
        *packet_type = 1;

        onvm_nflib_return_pkt(nf_info, pkt);
}
The main function of network function 1 contains the above code; it sends the packets to itself, and the packet handler of network function 1 decides what to do with them.
Yes sir, I have assigned one dedicated core to each network function in both case 1 and case 2.
Closing this as no longer active. If you have further questions, let us know.
Respected Sir,
I need a clarification related to the performance of openNetVM. I have created six network functions and measured the latency involved in two ways:
1) All the network functions run on the same server.
2) Three servers are used: server 1 has one network function, server 2 has four network functions, and server 3 has one network function.
The packets are generated at one network function, traverse all the network functions (each of which processes the packets and performs some tasks as well), and at the end return to the originating network function. In case 2, they also traverse the NIC to reach the destination NFs.
What I have observed is that the latency in case 2 is slightly lower than the latency in case 1 (a difference of a few nanoseconds). Is this the expected behavior?