iqiyi / dpvs

DPVS is a high performance Layer-4 load balancer based on DPDK.

stack smashing detected #902

Closed DeyunLuo closed 1 year ago

DeyunLuo commented 1 year ago

[OS version]: ubuntu 20.04, kernel 5.4.0-155-generic, dpdk-20.11.1
[NIC]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
[dpvs configuration]: testing fullnat mode

./dpip addr add 192.168.0.9/32 dev dpdk0
./dpip route add 192.168.0.0/24 dev dpdk0
./dpip route add default via 192.168.0.10  dev dpdk0

./dpip route add 10.20.202.0/24 dev dpdk1
./ipvsadm -A -t 192.168.0.9:80 -s rr
./ipvsadm -a -t 192.168.0.9:80 -r 10.20.202.11 -b
After this last command ran, a buffer overflow occurred and dpvs crashed:
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007f4fc7092859 in __GI_abort () at abort.c:79
#2  0x00007f4fc70fd26e in __libc_message (action=action@entry=do_abort, fmt=fmt@entry=0x7f4fc722708f "*** %s ***: terminated\n") at ../sysdeps/posix/libc_fatal.c:155
#3  0x00007f4fc719faba in __GI___fortify_fail (msg=msg@entry=0x7f4fc7227077 "stack smashing detected") at fortify_fail.c:26
#4  0x00007f4fc719fa86 in __stack_chk_fail () at stack_chk_fail.c:24
#5  0x000055557ff34a4b in dp_vs_service_set (opt=205, user=0x186700898, len=392) at /root/dpvs/dpvs/src/ipvs/ip_vs_service.c:946
#6  0x000055557fe6c239 in sockopt_ctl (arg=0x0) at /root/dpvs/dpvs/src/ctrl.c:1388
#7  0x000055557fe6c3b1 in sockopt_job_func (dummy=0x0) at /root/dpvs/dpvs/src/ctrl.c:1428
#8  0x000055557ffe19b4 in do_lcore_job (job=0x5555809a0200 <sockopt_job>) at /root/dpvs/dpvs/src/scheduler.c:165
#9  0x000055557ffe1b1b in dpvs_job_loop (arg=0x0) at /root/dpvs/dpvs/src/scheduler.c:213
#10 0x000055557ffe1c10 in dpvs_lcore_start (is_master=1) at /root/dpvs/dpvs/src/scheduler.c:245
#11 0x000055557ff71a0a in main (argc=1, argv=0x7fff08105738) at /root/dpvs/dpvs/src/main.c:376
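For reference, a rough sketch of how a backtrace like the one above can be captured again; the working directory and binary path are assumptions based on the paths in the frames, and inspecting a core dump with gdb afterwards works just as well.

ulimit -c unlimited                # allow core dumps in case dpvs aborts outside gdb
cd /root/dpvs/dpvs                 # assumed source tree, taken from the frame paths
gdb --args ./bin/dpvs              # assumed install location of the dpvs binary
(gdb) run
# reproduce from another shell: ./ipvsadm -a -t 192.168.0.9:80 -r 10.20.202.11 -b
(gdb) bt full                      # full backtrace with locals once the abort fires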
ywc689 commented 1 year ago

The stack frames for the dpvs crash are incomplete. Please enable the DEBUG build option (set CONFIG_DEBUG=y in config.mk), then recompile and run again to generate a complete crash backtrace.
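A minimal sketch of that rebuild, assuming the stock dpvs build layout (the exact CONFIG_DEBUG line in config.mk and the install path may differ):

cd /root/dpvs/dpvs        # assumed source tree, taken from the backtrace paths
vi config.mk              # set CONFIG_DEBUG=y
make clean
make
make install              # debug binaries are expected to land in ./bin
# then re-run dpvs and reproduce the crash to collect a complete backtrace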

ywc689 commented 1 year ago

It seems to be the same problem as issue #911. Please track it there. Closing this one.