Closed Neetika02 closed 3 years ago
Attaching my c file here:
https://drive.google.com/file/d/1A0K6qVfXMGtF_x6232X6ihADUkoW3tlY/view?usp=drivesdk
When we run this code, it crashes with a core dump after some time (a few packets). It looks to me like ofp_packet_pool is getting exhausted, which causes the crash, but I am unable to figure out why the pool runs out, since I free the packet enqueued from pkt_io_recv in the gtp_pkt_recv function.
The log before the crash is as follows:

```
71003445 8:2516160784 ofp_pkt_processing.c:887] odp_packet_copy_to_mem failed
W 71003445 8:2516160784 ofp_udp_usrreq.c:1178] packet dropped, returning OFP_EIO
D 71003445 8:2516160784 ofp_pkt_processing.c:1203] Fragmentation required
D 71003445 8:2516160784 ofp_pkt_processing.c:1203] Fragmentation required
D 71003445 8:2516160784 ofp_pkt_processing.c:1203] Fragmentation required
D 71003445 8:2516160784 ofp_pkt_processing.c:1203] Fragmentation required
D 71003445 8:2516160784 ofp_pkt_processing.c:1203] Fragmentation required
D 71003445 8:2516160784 ofp_pkt_processing.c:1203] Fragmentation required
D 71003445 11:1811779856 ofp_pkt_processing.c:1203] Fragmentation required
D 71003445 14:1929089296 ofp_pkt_processing.c:1203] Fragmentation required
D 71003445 9:2507706640 ofp_pkt_processing.c:1203] Fragmentation required
```
The backtrace is:

```
(gdb) bt
    at ofp_pkt_processing.c:814
    nh_param=0x0, pkt=0x13833ab00) at ofp_pkt_processing.c:1211
    at ofp_pkt_processing.c:300
    m=0x13833ab00, inp=0x137237a48) at ofp_udp_usrreq.c:1177
    control=<optimized out>, td=0xfffe727a87d8) at ofp_udp_usrreq.c:1409
    dest_addr=0xfffe727a8858, addrlen=<optimized out>) at ofp_syscalls.c:482
```
First, I am sorry that I cannot spend more time on this. I am contributing to ofp in my free time... but I don't have much free time.
The local hook should look like this:

```c
static enum ofp_return_code fastpath_local_hook(odp_packet_t pkt, void *arg)
{
    /* ........ */
    if (odp_be_to_cpu_16(uh->uh_dport) == TEST_LPORT) {
        /* do processing, e.g. enqueue the packet */
        return OFP_PKT_PROCESSED; /* that means: I have ownership of pkt */
    }
    return OFP_PKT_CONTINUE; /* that means: the stack has ownership and can continue to process it, etc. */
}
```
You may use globals and odp_cpu_id() and teid to find the right queue.
pktout: you need to give a pktout to core 0 (a timeout may trigger sending a packet); ideally, give a pktout to each core even if unused (when pktout_param.op_mode is ODP_PKTIO_OP_MT_UNSAFE).
ofp_udp_pkt_sendto(): my understanding is that the packet's head has to be moved to point to the beginning of the UDP payload. Also, this function takes ownership of the packet, so you should not free it yourself (line 250).
Hello Bogdan, big thanks for helping me out! I had been trying to get this to run for the whole of last week. I did all the changes you suggested, and it seems to be working fine now. Once again, thanks!
I have another query, unrelated to this: any idea about the roadmap for fragmentation and reassembly support for IPv6 in OFP? Also, it seems from the code that VRFs are there for IPv6, but the documentation says they are not supported.
Great news!! On the roadmap topic, I'll let the Nokia and Enea engineers speak. Guys? The 'IPv6 fragmentation / reassembly' topic was considered some time ago (see https://github.com/OpenFastPath/ofp/issues/44), but IPv6 was not really requested.
Hello,
I have implemented a sample application in OFP, referring to the UDP Fwd Sock and Echo applications. My application does the following:

- Creates a control thread, a CLI thread, and a dispatcher on CPU 0, as is done in udp_fwd_sock.
- Creates worker threads on all other available cores, say 15. Of these, 3 are used as distributors and the remaining 12 as forwarders.
- The distributors receive packets from pktio interfaces in direct MT_UNSAFE mode.
- I have implemented a local UDP hook that checks whether the UDP dport matches my port; if so, it enqueues the packet to one of the processor thread queues. These queues are plain ODP queues.
- The processor thread receives the packet and finally forwards it using ofp_udp_sendto().
I am facing memory corruption issues while running this application. When sending the packet from a distributor to a processor thread, I take a new packet from ofp_packet_pool using odp_packet_copy (but the packet pointer of the new and old packet is the same when printed). Do I need to use a separately defined packet pool for this?
Also, can the UDP local hook create an issue if it is called from multiple distributor threads simultaneously?
Is there any other mechanism by which I can receive packets with the same dip and dport hashed onto multiple threads? I don't want them multicasted, so I can't receive on the same UDP port either.