storaged-project / udisks

The UDisks project provides a daemon, tools and libraries to access and manipulate disks, storage devices and technologies.
http://storaged.org/doc/udisks2-api/latest/

stack smashing detected #1198

Closed: xsmile closed this issue 11 months ago

xsmile commented 11 months ago

Hello,

After updating udisks from 2.9.4 to 2.10.1, udisksd fails to start most of the time and crashes with a stack smashing error.

The last working commit is ff5fddb16a97d8f1b8475a219566c2ee95f3c349, right before NVMe support was added in b7ccbf2fedb160c0cf6ecc0fc35b2d575c4dc6de, which suggests this is a libnvme issue.

One similarity to #1152 is the drive model, which @bulletmark also uses. However, the crash does not seem to be caused by the sanitize function.

Details

nvme id-ctrl

Click to expand ``` # nvme id-ctrl -H /dev/nvme0n1 NVME Identify Controller: vid : 0x1c5c ssvid : 0x1c5c sn : NS04N638710704H1Q mn : SKHynix_HFS001TD9TNI-L2B0B fr : 11720C10 rab : 4 ieee : ace42e cmic : 0 [3:3] : 0 ANA not supported [2:2] : 0 PCI [1:1] : 0 Single Controller [0:0] : 0 Single Port mdts : 6 cntlid : 0x1 ver : 0x10300 rtd3r : 0x7a120 rtd3e : 0x1e8480 oaes : 0x200 [31:31] : 0 Discovery Log Change Notice Not Supported [27:27] : 0 Zone Descriptor Changed Notices Not Supported [15:15] : 0 Normal NSS Shutdown Event Not Supported [14:14] : 0 Endurance Group Event Aggregate Log Page Change Notice Not Supported [13:13] : 0 LBA Status Information Notices Not Supported [12:12] : 0 Predictable Latency Event Aggregate Log Change Notices Not Supported [11:11] : 0 Asymmetric Namespace Access Change Notices Not Supported [9:9] : 0x1 Firmware Activation Notices Supported [8:8] : 0 Namespace Attribute Changed Event Not Supported ctratt : 0 [19:19] : 0 Flexible Data Placement Not Supported [15:15] : 0 Extended LBA Formats Not Supported [14:14] : 0 Delete NVM Set Not Supported [13:13] : 0 Delete Endurance Group Not Supported [12:12] : 0 Variable Capacity Management Not Supported [11:11] : 0 Fixed Capacity Management Not Supported [10:10] : 0 Multi Domain Subsystem Not Supported [9:9] : 0 UUID List Not Supported [8:8] : 0 SQ Associations Not Supported [7:7] : 0 Namespace Granularity Not Supported [6:6] : 0 Traffic Based Keep Alive Not Supported [5:5] : 0 Predictable Latency Mode Not Supported [4:4] : 0 Endurance Groups Not Supported [3:3] : 0 Read Recovery Levels Not Supported [2:2] : 0 NVM Sets Not Supported [1:1] : 0 Non-Operational Power State Permissive Not Supported [0:0] : 0 128-bit Host Identifier Not Supported rrls : 0 cntrltype : 0 [7:2] : 0 Reserved [1:0] : 0 Controller type not reported fguid : 00000000-0000-0000-0000-000000000000 crdt1 : 0 crdt2 : 0 crdt3 : 0 nvmsr : 0 [1:1] : 0 NVM subsystem Not part of an Enclosure [0:0] : 0 NVM subsystem Not part of a Storage Device vwci : 0 [7:7] : 0 VPD Write Cycles Remaining field is Not valid. 
[6:0] : 0 VPD Write Cycles Remaining mec : 0 [1:1] : 0 NVM subsystem Not contains a Management Endpoint on a PCIe port [0:0] : 0 NVM subsystem Not contains a Management Endpoint on an SMBus/I2C port oacs : 0x17 [10:10] : 0 Lockdown Command and Feature Not Supported [9:9] : 0 Get LBA Status Capability Not Supported [8:8] : 0 Doorbell Buffer Config Not Supported [7:7] : 0 Virtualization Management Not Supported [6:6] : 0 NVMe-MI Send and Receive Not Supported [5:5] : 0 Directives Not Supported [4:4] : 0x1 Device Self-test Supported [3:3] : 0 NS Management and Attachment Not Supported [2:2] : 0x1 FW Commit and Download Supported [1:1] : 0x1 Format NVM Supported [0:0] : 0x1 Security Send and Receive Supported acl : 3 aerl : 7 frmw : 0x16 [5:5] : 0 Multiple FW or Boot Update Detection Not Supported [4:4] : 0x1 Firmware Activate Without Reset Supported [3:1] : 0x3 Number of Firmware Slots [0:0] : 0 Firmware Slot 1 Read/Write lpa : 0xa [6:6] : 0 Telemetry Log Data Area 4 Not Supported [5:5] : 0 LID 0x0, Scope of each command in LID 0x5, 0x12, 0x13 Not Supported [4:4] : 0 Persistent Event log Not Supported [3:3] : 0x1 Telemetry host/controller initiated log page Supported [2:2] : 0 Extended data for Get Log Page Not Supported [1:1] : 0x1 Command Effects Log Page Supported [0:0] : 0 SMART/Health Log Page per NS Not Supported elpe : 255 [7:0] : 255 (0's based) Error Log Page Entries (ELPE) npss : 4 [7:0] : 4 (0's based) Number of Power States Support (NPSS) avscc : 0x1 [0:0] : 0x1 Admin Vendor Specific Commands uses NVMe Format apsta : 0x1 [0:0] : 0x1 Autonomous Power State Transitions Supported wctemp : 358 [15:0] : 85 °C (358 K) Warning Composite Temperature Threshold (WCTEMP) cctemp : 359 [15:0] : 86 °C (359 K) Critical Composite Temperature Threshold (CCTEMP) mtfa : 0 hmpre : 0 hmmin : 0 tnvmcap : 0 [127:0] : 0 Total NVM Capacity (TNVMCAP) unvmcap : 0 [127:0] : 0 Unallocated NVM Capacity (UNVMCAP) rpmbs : 0 [31:24]: 0 Access Size [23:16]: 0 Total Size [5:3] : 0 Authentication Method [2:0] : 0 Number of RPMB Units edstt : 25 dsto : 1 fwug : 0 kas : 0 hctma : 0x1 [0:0] : 0x1 Host Controlled Thermal Management Supported mntmt : 273 [15:0] : 0 °C (273 K) Minimum Thermal Management Temperature (MNTMT) mxtmt : 357 [15:0] : 84 °C (357 K) Maximum Thermal Management Temperature (MXTMT) sanicap : 0x3 [31:30] : 0 Additional media modification after sanitize operation completes successfully is not defined [29:29] : 0 No-Deallocate After Sanitize bit in Sanitize command Supported [2:2] : 0 Overwrite Sanitize Operation Not Supported [1:1] : 0x1 Block Erase Sanitize Operation Supported [0:0] : 0x1 Crypto Erase Sanitize Operation Supported hmminds : 0 hmmaxd : 0 nsetidmax : 0 endgidmax : 0 anatt : 0 anacap : 0 [7:7] : 0 Non-zero group ID Not Supported [6:6] : 0 Group ID does change [4:4] : 0 ANA Change state Not Supported [3:3] : 0 ANA Persistent Loss state Not Supported [2:2] : 0 ANA Inaccessible state Not Supported [1:1] : 0 ANA Non-optimized state Not Supported [0:0] : 0 ANA Optimized state Not Supported anagrpmax : 0 nanagrpid : 0 pels : 0 domainid : 0 megcap : 0 sqes : 0x66 [7:4] : 0x6 Max SQ Entry Size (64) [3:0] : 0x6 Min SQ Entry Size (64) cqes : 0x44 [7:4] : 0x4 Max CQ Entry Size (16) [3:0] : 0x4 Min CQ Entry Size (16) maxcmd : 0 nn : 1 oncs : 0x5e [8:8] : 0 Copy Not Supported [7:7] : 0 Verify Not Supported [6:6] : 0x1 Timestamp Supported [5:5] : 0 Reservations Not Supported [4:4] : 0x1 Save and Select Supported [3:3] : 0x1 Write Zeroes Supported [2:2] : 0x1 Data Set Management Supported [1:1] : 0x1 
Write Uncorrectable Supported [0:0] : 0 Compare Not Supported fuses : 0 [0:0] : 0 Fused Compare and Write Not Supported fna : 0x4 [3:3] : 0 Format NVM Broadcast NSID (FFFFFFFFh) Supported [2:2] : 0x1 Crypto Erase Supported as part of Secure Erase [1:1] : 0 Crypto Erase Applies to Single Namespace(s) [0:0] : 0 Format Applies to Single Namespace(s) vwc : 0x1 [2:1] : 0 Support for the NSID field set to FFFFFFFFh is not indicated [0:0] : 0x1 Volatile Write Cache Present awun : 0 awupf : 0 icsvscc : 1 [0:0] : 0x1 NVM Vendor Specific Commands uses NVMe Format nwpc : 0 [2:2] : 0 Permanent Write Protect Not Supported [1:1] : 0 Write Protect Until Power Supply Not Supported [0:0] : 0 No Write Protect and Write Protect Namespace Not Supported acwu : 0 ocfs : 0 [1:1] : 0 Controller Copy Format 1h Not Supported [0:0] : 0 Controller Copy Format 0h Not Supported sgls : 0 [15:8] : 0 SGL Descriptor Threshold [1:0] : 0 Scatter-Gather Lists Not Supported mnan : 0 maxdna : 0 maxcna : 0 oaqd : 0 subnqn : ioccsz : 0 iorcsz : 0 icdoff : 0 fcatt : 0 [0:0] : 0 Dynamic Controller Model msdbd : 0 ofcs : 0 [0:0] : 0 Disconnect command Not Supported ps 0 : mp:6.3000W operational enlat:5 exlat:5 rrt:0 rrl:0 rwt:0 rwl:0 idle_power:- active_power:- active_power_workload:- ps 1 : mp:2.4000W operational enlat:30 exlat:30 rrt:1 rrl:1 rwt:1 rwl:1 idle_power:- active_power:- active_power_workload:- ps 2 : mp:1.9000W operational enlat:100 exlat:100 rrt:2 rrl:2 rwt:2 rwl:2 idle_power:- active_power:- active_power_workload:- ps 3 : mp:0.0500W non-operational enlat:1000 exlat:1000 rrt:3 rrl:3 rwt:3 rwl:3 idle_power:- active_power:- active_power_workload:- ps 4 : mp:0.0040W non-operational enlat:1000 exlat:9000 rrt:3 rrl:3 rwt:3 rwl:3 idle_power:- active_power:- active_power_workload:- ```

Backtrace without breakpoints

Click to expand ``` (gdb) r Starting program: /usr/lib/udisks2/udisksd [Thread debugging using libthread_db enabled] Using host libthread_db library "/usr/lib/libthread_db.so.1". udisks-Message: 22:19:32.583: udisks daemon version 2.10.1 starting [New Thread 0x7ffff6dff6c0 (LWP 2360)] [New Thread 0x7ffff65fe6c0 (LWP 2361)] [New Thread 0x7ffff5dfd6c0 (LWP 2362)] [New Thread 0x7ffff55fc6c0 (LWP 2363)] [New Thread 0x7fffe7fff6c0 (LWP 2364)] *** stack smashing detected ***: terminated Thread 1 "udisksd" received signal SIGABRT, Aborted. __pthread_kill_implementation (threadid=, signo=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44 44 return INTERNAL_SYSCALL_ERROR_P (ret) ? INTERNAL_SYSCALL_ERRNO (ret) : 0; (gdb) thread apply all backtrace full Thread 6 (Thread 0x7fffe7fff6c0 (LWP 2364) "probing-thread"): #0 syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38 #1 0x00007ffff79da247 in g_cond_wait (cond=0x5555555dab38, mutex=0x5555555dab30) at ../glib/glib/gthread-posix.c:1552 saved_errno = 0 res = sampled = 0 #2 0x00007ffff794c1b4 in g_async_queue_pop_intern_unlocked (queue=0x5555555dab30, wait=1, end_time=-1) at ../glib/glib/gasyncqueue.c:425 retval = __func__ = "g_async_queue_pop_intern_unlocked" #3 0x00007ffff794c21c in g_async_queue_pop (queue=0x5555555dab30) at ../glib/glib/gasyncqueue.c:459 retval = __func__ = "g_async_queue_pop" #4 0x000055555557508b in probe_request_thread_func (user_data=0x555555632560) at /usr/src/debug/udisks2/udisks-2.10.1/src/udiskslinuxprovider.c:292 provider = 0x555555632560 request = dev_initialized = n_tries = #5 0x00007ffff79b29a5 in g_thread_proxy (data=0x5555556248e0) at ../glib/glib/gthread.c:831 thread = 0x5555556248e0 __func__ = "g_thread_proxy" #6 0x00007ffff77ac9eb in start_thread (arg=) at pthread_create.c:444 ret = pd = unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737345406736, 7665776663796987126, -440, 0, 140737488342224, 140737077309440, -7665829439760419594, -7665793182512368394}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}} not_first_call = #7 0x00007ffff78307cc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 Warning: the current language does not match this frame. 
Thread 5 (Thread 0x7ffff55fc6c0 (LWP 2363) "gdbus"): #0 0x00007ffff7822f6f in __GI___poll (fds=0x7fffe0000b90, nfds=2, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29 sc_ret = -516 sc_cancel_oldtype = 0 #1 0x00007ffff79df206 in g_main_context_poll_unlocked (priority=2147483647, n_fds=2, fds=0x7fffe0000b90, timeout=, context=0x7fffec005c30) at ../glib/glib/gmain.c:4653 ret = errsv = poll_func = 0x7ffff79879f0 max_priority = 2147483647 timeout = -1 some_ready = nfds = 2 allocated_nfds = 2 fds = 0x7fffe0000b90 begin_time_nsec = 45622582101 #2 g_main_context_iterate_unlocked.isra.0 (context=0x7fffec005c30, block=block@entry=1, dispatch=dispatch@entry=1, self=) at ../glib/glib/gmain.c:4344 max_priority = 2147483647 timeout = -1 some_ready = nfds = 2 allocated_nfds = 2 fds = 0x7fffe0000b90 begin_time_nsec = 45622582101 #3 0x00007ffff7981b47 in g_main_loop_run (loop=0x7fffec005d60) at ../glib/glib/gmain.c:4551 __func__ = "g_main_loop_run" #4 0x00007ffff7be70bc in gdbus_shared_thread_func (user_data=0x7fffec005c00) at ../glib/gio/gdbusprivate.c:284 data = 0x7fffec005c00 #5 0x00007ffff79b29a5 in g_thread_proxy (data=0x55555560f780) at ../glib/glib/gthread.c:831 thread = 0x55555560f780 __func__ = "g_thread_proxy" #6 0x00007ffff77ac9eb in start_thread (arg=) at pthread_create.c:444 ret = pd = unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737345406736, 7665776663796987126, -440, 11, 140737318472000, 140737301692416, -7665788481341672202, -7665793182512368394}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}} not_first_call = #7 0x00007ffff78307cc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 Thread 4 (Thread 0x7ffff5dfd6c0 (LWP 2362) "pool-udisksd"): #0 syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38 #1 0x00007ffff79dac23 in g_cond_wait_until (cond=, mutex=0x5555555e65a0, end_time=) at ../glib/glib/gthread-posix.c:1677 span_arg = {tv_sec = 0, tv_nsec = 499999550} --Type for more, q to quit, c to continue without paging-- now = {tv_sec = 45, tv_nsec = 499410450} span = {tv_sec = , tv_nsec = } sampled = 0 res = success = #2 0x00007ffff794c185 in g_async_queue_pop_intern_unlocked (queue=0x5555555e65a0, wait=1, end_time=45999410) at ../glib/glib/gasyncqueue.c:428 retval = __func__ = "g_async_queue_pop_intern_unlocked" #3 0x00007ffff79b54db in g_thread_pool_wait_for_new_task (pool=0x5555555e8220) at ../glib/glib/gthreadpool.c:274 task = 0x0 task = pool = 0x5555555e8220 #4 g_thread_pool_thread_proxy (data=) at ../glib/glib/gthreadpool.c:339 task = pool = 0x5555555e8220 #5 0x00007ffff79b29a5 in g_thread_proxy (data=0x7fffe8000b90) at ../glib/glib/gthread.c:831 thread = 0x7fffe8000b90 __func__ = "g_thread_proxy" #6 0x00007ffff77ac9eb in start_thread (arg=) at pthread_create.c:444 ret = pd = unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737345406736, 7665776663796987126, -440, 0, 140737326865040, 140737310085120, -7665789583537654538, -7665793182512368394}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}} not_first_call = #7 0x00007ffff78307cc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 Thread 3 (Thread 0x7ffff65fe6c0 (LWP 2361) "pool-spawner"): #0 syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38 #1 0x00007ffff79da247 in g_cond_wait (cond=0x5555555da688, mutex=0x5555555da680) at ../glib/glib/gthread-posix.c:1552 saved_errno = 0 res = sampled = 1 #2 0x00007ffff794c1b4 in g_async_queue_pop_intern_unlocked 
(queue=0x5555555da680, wait=1, end_time=-1) at ../glib/glib/gasyncqueue.c:425 retval = __func__ = "g_async_queue_pop_intern_unlocked" #3 0x00007ffff79b4a2e in g_thread_pool_spawn_thread (data=) at ../glib/glib/gthreadpool.c:311 spawn_thread_data = thread = 0x0 error = 0x0 prgname = name = "pool-udisksd\000\000\000" #4 0x00007ffff79b29a5 in g_thread_proxy (data=0x5555555e8280) at ../glib/glib/gthread.c:831 thread = 0x5555555e8280 __func__ = "g_thread_proxy" #5 0x00007ffff77ac9eb in start_thread (arg=) at pthread_create.c:444 ret = pd = unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737345406736, 7665776663796987126, -440, 2, 140737488345456, 140737318477824, -7665790683586153226, -7665793182512368394}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}} not_first_call = #6 0x00007ffff78307cc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 Thread 2 (Thread 0x7ffff6dff6c0 (LWP 2360) "gmain"): #0 0x00007ffff7822f6f in __GI___poll (fds=0x5555555e5e10, nfds=2, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29 sc_ret = -516 sc_cancel_oldtype = 0 #1 0x00007ffff79df206 in g_main_context_poll_unlocked (priority=2147483647, n_fds=2, fds=0x5555555e5e10, timeout=, context=0x5555555e5c00) at ../glib/glib/gmain.c:4653 ret = errsv = poll_func = 0x7ffff79879f0 max_priority = 2147483647 timeout = -1 some_ready = nfds = 2 allocated_nfds = 2 fds = 0x5555555e5e10 begin_time_nsec = 45634940393 #2 g_main_context_iterate_unlocked.isra.0 (context=context@entry=0x5555555e5c00, block=block@entry=1, dispatch=dispatch@entry=1, self=) at ../glib/glib/gmain.c:4344 max_priority = 2147483647 timeout = -1 some_ready = nfds = 2 --Type for more, q to quit, c to continue without paging-- allocated_nfds = 2 fds = 0x5555555e5e10 begin_time_nsec = 45634940393 #3 0x00007ffff797f112 in g_main_context_iteration (context=0x5555555e5c00, may_block=may_block@entry=1) at ../glib/glib/gmain.c:4414 retval = #4 0x00007ffff797f162 in glib_worker_main (data=) at ../glib/glib/gmain.c:6574 #5 0x00007ffff79b29a5 in g_thread_proxy (data=0x5555555ddef0) at ../glib/glib/gthread.c:831 thread = 0x5555555ddef0 __func__ = "g_thread_proxy" #6 0x00007ffff77ac9eb in start_thread (arg=) at pthread_create.c:444 ret = pd = unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737345406736, 7665776663796987126, -440, 2, 140737488345424, 140737326870528, -7665791781487168266, -7665793182512368394}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}} not_first_call = #7 0x00007ffff78307cc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 Thread 1 (Thread 0x7ffff7362880 (LWP 2357) "udisksd"): #0 __pthread_kill_implementation (threadid=, signo=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44 tid = ret = 0 pd = old_mask = {__val = {0}} ret = #1 0x00007ffff77ae8a3 in __pthread_kill_internal (signo=6, threadid=) at pthread_kill.c:78 #2 0x00007ffff775e668 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26 ret = #3 0x00007ffff77464b8 in __GI_abort () at abort.c:79 save_stage = 1 act = {__sigaction_handler = {sa_handler = 0x20, sa_sigaction = 0x20}, sa_mask = {__val = {0 }}, sa_flags = 0, sa_restorer = 0x0} #4 0x00007ffff7747390 in __libc_message (fmt=fmt@entry=0x7ffff78be2fc "*** %s ***: terminated\n") at ../sysdeps/posix/libc_fatal.c:150 ap = {{gp_offset = 16, fp_offset = 0, overflow_arg_area = 0x7fffffff98c0, reg_save_area = 0x7fffffff9850}} fd = 2 list = nlist = cp = #5 0x00007ffff783eb4b in 
__GI___fortify_fail (msg=msg@entry=0x7ffff78be314 "stack smashing detected") at fortify_fail.c:24 #6 0x00007ffff783fe56 in __stack_chk_fail () at stack_chk_fail.c:24 #7 0x00007ffff782c3e9 in __GI___ioctl (fd=, request=) at ../sysdeps/unix/sysv/linux/ioctl.c:43 args = {{gp_offset = 0, fp_offset = 0, overflow_arg_area = 0x0, reg_save_area = 0x0}} r = #8 0x0000000000000000 in () ```

EDIT:

The relevant function call is bd_nvme_find_ctrls_for_ns(). Disabling it greatly reduces the number of crashes; the remaining ones might point to another issue.

Looking at libblockdev, the crash takes place while calling nvme_scan(); within libnvme, the failing call is nvme_ns_identify_descs().
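
Since the backtraces below show bd_nvme_find_ctrls_for_ns() calling nvme_scan(config_file=0x0), the crash should be reachable through libnvme's public API alone, and a standalone reproducer independent of udisksd might look like the sketch below. This is only an illustration, not part of the original report: it assumes the libnvme development headers are installed, the file name and build command are made up, and it would likely need root privileges to open the NVMe devices, as udisksd does.

```c
/* repro.c -- minimal sketch exercising the same libnvme code path that
 * udisksd reaches via bd_nvme_find_ctrls_for_ns() -> nvme_scan(NULL).
 * Build (illustrative): cc repro.c -lnvme -o repro */
#include <stdio.h>
#include <libnvme.h>

int main(void)
{
    /* nvme_scan(NULL) walks the sysfs NVMe topology and, per the
     * backtraces below, ends up issuing Identify commands for every
     * namespace it finds. */
    nvme_root_t root = nvme_scan(NULL);
    if (!root) {
        fprintf(stderr, "nvme_scan failed\n");
        return 1;
    }
    /* Getting here without "*** stack smashing detected ***" means the
     * tree scan completed. */
    nvme_free_tree(root);
    printf("scan completed\n");
    return 0;
}
```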

Backtrace with breakpoint at nvme_ns_identify_descs()

Click to expand ``` Starting program: /usr/lib/udisks2/udisksd [Thread debugging using libthread_db enabled] Using host libthread_db library "/usr/lib/libthread_db.so.1". udisks-Message: 14:45:41.965: udisks daemon version 2.10.1 starting [New Thread 0x7ffff6dff6c0 (LWP 6211)] [New Thread 0x7ffff65fe6c0 (LWP 6212)] [New Thread 0x7ffff5dfd6c0 (LWP 6213)] [New Thread 0x7ffff55fc6c0 (LWP 6214)] [New Thread 0x7fffe7fff6c0 (LWP 6215)] Thread 1 "udisksd" hit Breakpoint 1, nvme_ns_identify_descs (n=n@entry=0x555555655420, descs=descs@entry=0x7fffffffaad0) at ../src/nvme/tree.c:2205 2205 { (gdb) n 2207 return nvme_identify_ns_descs(nvme_ns_get_fd(n), nvme_ns_get_nsid(n), descs); (gdb) n *** stack smashing detected ***: terminated Thread 1 "udisksd" received signal SIGABRT, Aborted. __pthread_kill_implementation (threadid=, signo=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44 44 return INTERNAL_SYSCALL_ERROR_P (ret) ? INTERNAL_SYSCALL_ERRNO (ret) : 0; (gdb) info stack #0 nvme_ns_identify_descs (n=n@entry=0x555555655450, descs=descs@entry=0x7fffffffaad0) at ../src/nvme/tree.c:2207 #1 0x00007ffff4769f7b in nvme_ns_init (n=n@entry=0x555555655450) at ../src/nvme/tree.c:2432 #2 0x00007ffff476a41d in nvme_ns_open (name=name@entry=0x55555565d460 "nvme0n1") at ../src/nvme/tree.c:2479 #3 0x00007ffff476a4c9 in __nvme_scan_namespace (sysfs_dir=0x5555556540d0 "/sys/class/nvme/nvme0", name=name@entry=0x555555655143 "nvme0n1") at ../src/nvme/tree.c:2539 #4 0x00007ffff476a597 in nvme_ctrl_scan_namespace (r=r@entry=0x5555556542b0, c=c@entry=0x555555655180, name=0x555555655143 "nvme0n1") at ../src/nvme/tree.c:2574 #5 0x00007ffff476a6e0 in nvme_ctrl_scan_namespaces (r=r@entry=0x5555556542b0, c=c@entry=0x555555655180) at ../src/nvme/tree.c:1693 #6 0x00007ffff476a899 in nvme_scan_ctrl (r=0x5555556542b0, name=0x555555654b03 "nvme0") at ../src/nvme/tree.c:2061 #7 0x00007ffff4767611 in nvme_scan_topology (r=r@entry=0x5555556542b0, f=f@entry=0x0, f_args=f_args@entry=0x0) at ../src/nvme/tree.c:151 #8 0x00007ffff4767795 in nvme_scan (config_file=config_file@entry=0x0) at ../src/nvme/tree.c:234 #9 0x00007ffff4783ebb in bd_nvme_find_ctrls_for_ns (ns_sysfs_path=0x55555564eb10 "/sys/devices/pci0000:00/0000:00:02.1/0000:01:00.0/nvme/nvme0/nvme0n1", subsysnqn=0x555555654e40 "nqn.2014.08.org.nvmexpress:1c5c1c5cNS04N638710704H1Q SKHynix_HFS001TD9TNI-L2B0B", host_nqn=0x0, host_id=0x0, error=) at nvme-fabrics.c:464 #10 0x00007ffff7f6055a in bd_nvme_find_ctrls_for_ns (ns_sysfs_path=ns_sysfs_path@entry=0x55555564eb10 "/sys/devices/pci0000:00/0000:00:02.1/0000:01:00.0/nvme/nvme0/nvme0n1", subsysnqn=subsysnqn@entry=0x555555654e40 "nqn.2014.08.org.nvmexpress:1c5c1c5cNS04N638710704H1Q SKHynix_HFS001TD9TNI-L2B0B", host_nqn=host_nqn@entry=0x0, host_id=host_id@entry=0x0, error=error@entry=0x0) at plugin_apis/nvme.c:1150 #11 0x000055555557a3a0 in find_drive (object_manager=object_manager@entry=0x5555556200a0 [GDBusObjectManagerServer], block_device=, out_drive=out_drive@entry=0x7fffffffcdb8) at udiskslinuxblock.c:263 #12 0x000055555557eab2 in udisks_linux_block_update (block=0x555555654550 [UDisksLinuxBlock], object=) at udiskslinuxblock.c:1138 #13 0x0000555555577c8b in block_device_update (object=, uevent_action=, _iface=) at udiskslinuxblockobject.c:499 #14 0x0000555555577d56 in update_iface (object=object@entry=0x555555650950, uevent_action=uevent_action@entry=0x5555555b885b "add", has_func=has_func@entry=0x55555557738d , connect_func=connect_func@entry=0x555555577393 , update_func=update_func@entry=0x555555577c7c , 
skeleton_type=0x555555650e20 [UDisksLinuxBlock/UDisksBlockSkeleton/GDBusInterfaceSkeleton], _interface_pointer=0x5555556509a0) at udiskslinuxblockobject.c:473 #15 0x00005555555783aa in udisks_linux_block_object_uevent (object=object@entry=0x555555650950 [UDisksLinuxBlockObject], action=action@entry=0x5555555b885b "add", device=device@entry=0x0) at udiskslinuxblockobject.c:910 #16 0x0000555555578791 in udisks_linux_block_object_constructed (_object=0x555555650950 [UDisksLinuxBlockObject]) at udiskslinuxblockobject.c:244 #17 0x00007ffff7aa1e63 in g_object_new_internal (class=0x555555650750, params=0x7fffffffd080, n_params=2) at ../glib/gobject/gobject.c:2296 #18 0x00007ffff7aa3f0b in g_object_new_internal (n_params=2, params=0x7fffffffd080, class=0x555555650750) at ../glib/gobject/gobject.c:2562 #19 g_object_new_valist (object_type=, first_property_name=first_property_name@entry=0x5555555b83c0 "daemon", var_args=var_args@entry=0x7fffffffd350) at ../glib/gobject/gobject.c:2584 #20 0x00007ffff7aa429e in g_object_new (object_type=, first_property_name=first_property_name@entry=0x5555555b83c0 "daemon") at ../glib/gobject/gobject.c:2057 #21 0x0000555555577fac in udisks_linux_block_object_new (daemon=daemon@entry=0x55555560cea0 [UDisksDaemon], device=device@entry=0x55555564b0e0 [UDisksLinuxDevice]) at udiskslinuxblockobject.c:327 #22 0x000055555557640d in handle_block_uevent_for_block (provider=provider@entry=0x55555562e840 [UDisksLinuxProvider], action=action@entry=0x5555555b885b "add", device=device@entry=0x55555564b0e0 [UDisksLinuxDevice]) at udiskslinuxprovider.c:1235 #23 0x00005555555764fb in handle_block_uevent (provider=provider@entry=0x55555562e840 [UDisksLinuxProvider], action=action@entry=0x5555555b885b "add", device=device@entry=0x55555564b0e0 [UDisksLinuxDevice]) at udiskslinuxprovider.c:1410 #24 0x0000555555576576 in udisks_linux_provider_handle_uevent (provider=provider@entry=0x55555562e840 [UDisksLinuxProvider], action=action@entry=0x5555555b885b "add", device=0x55555564b0e0 [UDisksLinuxDevice]) at udiskslinuxprovider.c:1439 #25 0x00005555555765ad in do_coldplug (provider=provider@entry=0x55555562e840 [UDisksLinuxProvider], udisks_devices=udisks_devices@entry=0x55555563a950 = {...}) at udiskslinuxprovider.c:549 #26 0x0000555555576765 in udisks_linux_provider_start (_provider=0x55555562e840 [UDisksLinuxProvider]) at udiskslinuxprovider.c:756 #27 0x0000555555574e1d in udisks_provider_start (provider=0x55555562e840 [UDisksLinuxProvider]) at udisksprovider.c:183 #28 0x0000555555572c7b in udisks_daemon_constructed (object=0x55555560cea0 [UDisksDaemon]) at udisksdaemon.c:414 #29 0x00007ffff7aa1e63 in g_object_new_internal (class=0x55555560c3e0, params=0x7fffffffd7e0, n_params=5) at ../glib/gobject/gobject.c:2296 #30 0x00007ffff7aa3f0b in g_object_new_internal (n_params=5, params=0x7fffffffd7e0, class=0x55555560c3e0) at ../glib/gobject/gobject.c:2562 #31 g_object_new_valist (object_type=, first_property_name=first_property_name@entry=0x5555555b894f "connection", var_args=var_args@entry=0x7fffffffdab0) at ../glib/gobject/gobject.c:2584 #32 0x00007ffff7aa429e in g_object_new (object_type=, first_property_name=first_property_name@entry=0x5555555b894f "connection") at ../glib/gobject/gobject.c:2057 #33 0x0000555555573648 in udisks_daemon_new (connection=0x55555560ad40 [GDBusConnection], disable_modules=0, force_load_modules=0, uninstalled=0, enable_tcrypt=0) at udisksdaemon.c:597 #34 0x000055555557225a in on_bus_acquired (connection=, name=, user_data=) at main.c:63 #35 0x00007ffff7bef7f8 in 
connection_get_cb (source_object=, res=0x55555560ac60, user_data=0x5555555e2040) at ../glib/gio/gdbusnameowning.c:506 #36 0x00007ffff7b88ce4 in g_task_return_now (task=0x55555560ac60 [GTask]) at ../glib/gio/gtask.c:1371 #37 0x00007ffff7b8cbfd in g_task_return (type=, task=0x55555560ac60 [GTask]) at ../glib/gio/gtask.c:1440 #38 g_task_return (task=0x55555560ac60 [GTask], type=) at ../glib/gio/gtask.c:1397 #39 0x00007ffff7beae63 in bus_get_async_initable_cb (source_object=0x55555560ad40 [GDBusConnection], res=, user_data=0x55555560ac60) at ../glib/gio/gdbusconnection.c:7516 #40 0x00007ffff7b88ce4 in g_task_return_now (task=0x55555560b580 [GTask]) at ../glib/gio/gtask.c:1371 #41 0x00007ffff7b88d1d in complete_in_idle_cb (task=0x55555560b580) at ../glib/gio/gtask.c:1385 #42 0x00007ffff798af19 in g_main_dispatch (context=0x5555555e9bd0) at ../glib/glib/gmain.c:3476 #43 0x00007ffff79e92b7 in g_main_context_dispatch_unlocked (context=0x5555555e9bd0) at ../glib/glib/gmain.c:4284 #44 g_main_context_iterate_unlocked.isra.0 (context=0x5555555e9bd0, block=block@entry=1, dispatch=dispatch@entry=1, self=) at ../glib/glib/gmain.c:4349 #45 0x00007ffff798bb47 in g_main_loop_run (loop=0x5555555e9d60) at ../glib/glib/gmain.c:4551 #46 0x00005555555723f3 in main (argc=, argv=) at main.c:184 (gdb) ptype n type = struct nvme_ns { struct list_node entry; struct list_head paths; struct nvme_subsystem *s; struct nvme_ctrl *c; int fd; __u32 nsid; char *name; char *generic_name; char *sysfs_dir; int lba_shift; int lba_size; int meta_size; uint64_t lba_count; uint64_t lba_util; uint8_t eui64[8]; uint8_t nguid[16]; unsigned char uuid[16]; enum nvme_csi csi; } * (gdb) print *n $4 = {entry = {next = 0x0, prev = 0x0}, paths = {n = {next = 0x0, prev = 0x0}}, s = 0x0, c = 0x0, fd = 14, nsid = 1, name = 0x555555655500 "nvme0n1", generic_name = 0x555555655520 "ng0n1", sysfs_dir = 0x0, lba_shift = 9, lba_size = 512, meta_size = 0, lba_count = 2000409264, lba_util = 2000409264, eui64 = "\000\000\000\000\000\000\000", nguid = '\000' , uuid = '\000' , csi = NVME_CSI_NVM} (gdb) ptype descs type = struct nvme_ns_id_desc { __u8 nidt; __u8 nidl; __le16 rsvd; __u8 nid[]; } * (gdb) print *descs $3 = {nidt = 0 '\000', nidl = 0 '\000', rsvd = 0, nid = 0x7fffffffaad4 ""} ```

Backtrace with breakpoint at nvme_submit_passthru()

Click to expand ``` Starting program: /usr/lib/udisks2/udisksd [Thread debugging using libthread_db enabled] Using host libthread_db library "/usr/lib/libthread_db.so.1". udisks-Message: 21:32:58.928: udisks daemon version 2.11.0 starting [New Thread 0x7ffff6dff6c0 (LWP 105927)] [New Thread 0x7ffff65fe6c0 (LWP 105928)] [New Thread 0x7ffff5dfd6c0 (LWP 105929)] [New Thread 0x7ffff55fc6c0 (LWP 105930)] [New Thread 0x7fffe7fff6c0 (LWP 105931)] ... (gdb) c Continuing. Thread 1 "udisksd" hit Breakpoint 1, nvme_submit_passthru (fd=14, ioctl_cmd=ioctl_cmd@entry=3225964097, cmd=cmd@entry=0x7fffffff9a00, result=0x0) at ../src/nvme/ioctl.c:139 139 err = ioctl(fd, ioctl_cmd, cmd); (gdb) info stack #0 nvme_submit_passthru (fd=14, ioctl_cmd=ioctl_cmd@entry=3225964097, cmd=cmd@entry=0x7fffffff9a00, result=0x0) at ../src/nvme/ioctl.c:139 #1 0x00007ffff475fcec in nvme_submit_admin_passthru (fd=, cmd=cmd@entry=0x7fffffff9a00, result=) at ../src/nvme/ioctl.c:233 #2 0x00007ffff475fdee in nvme_identify (args=args@entry=0x7fffffff9a60) at ../src/nvme/ioctl.c:446 #3 0x00007ffff47649c4 in nvme_identify_cns_nsid (fd=, cns=cns@entry=NVME_IDENTIFY_CNS_NS_DESC_LIST, nsid=nsid@entry=1, data=data@entry=0x7fffffffaad0) at ../src/nvme/ioctl.h:478 #4 0x00007ffff4767660 in nvme_identify_ns_descs (descs=0x7fffffffaad0, nsid=1, fd=) at ../src/nvme/ioctl.h:676 #5 nvme_ns_identify_descs (n=n@entry=0x5555556561d0, descs=descs@entry=0x7fffffffaad0) at ../src/nvme/tree.c:2205 #6 0x00007ffff476772e in nvme_ns_init (n=n@entry=0x5555556561d0) at ../src/nvme/tree.c:2424 #7 0x00007ffff47677b3 in nvme_ns_open (name=name@entry=0x55555565e1e0 "nvme0n1") at ../src/nvme/tree.c:2467 #8 0x00007ffff476785f in __nvme_scan_namespace (sysfs_dir=0x555555655bd0 "/sys/class/nvme/nvme0", name=name@entry=0x555555655ec3 "nvme0n1") at ../src/nvme/tree.c:2527 #9 0x00007ffff4767eb9 in nvme_ctrl_scan_namespace (r=r@entry=0x555555654ed0, c=c@entry=0x555555655f00, name=0x555555655ec3 "nvme0n1") at ../src/nvme/tree.c:2560 #10 0x00007ffff4767ff6 in nvme_ctrl_scan_namespaces (r=r@entry=0x555555654ed0, c=c@entry=0x555555655f00) at ../src/nvme/tree.c:1693 #11 0x00007ffff47681b2 in nvme_scan_ctrl (r=r@entry=0x555555654ed0, name=0x555555655893 "nvme0") at ../src/nvme/tree.c:2060 #12 0x00007ffff4768398 in nvme_scan_topology (r=r@entry=0x555555654ed0, f=f@entry=0x0, f_args=f_args@entry=0x0) at ../src/nvme/tree.c:151 #13 0x00007ffff476851c in nvme_scan (config_file=config_file@entry=0x0) at ../src/nvme/tree.c:234 #14 0x00007ffff4784173 in bd_nvme_find_ctrls_for_ns (ns_sysfs_path=0x55555564f7d0 "/sys/devices/pci0000:00/0000:00:02.1/0000:01:00.0/nvme/nvme0/nvme0n1", subsysnqn=0x555555654e60 "nqn.2014.08.org.nvmexpress:1c5c1c5cNS04N638710704H1Q SKHynix_HFS001TD9TNI-L2B0B", host_nqn=0x0, host_id=0x0, error=) at nvme-fabrics.c:462 #15 0x00007ffff7f6055a in bd_nvme_find_ctrls_for_ns (ns_sysfs_path=ns_sysfs_path@entry=0x55555564f7d0 "/sys/devices/pci0000:00/0000:00:02.1/0000:01:00.0/nvme/nvme0/nvme0n1", subsysnqn=subsysnqn@entry=0x555555654e60 "nqn.2014.08.org.nvmexpress:1c5c1c5cNS04N638710704H1Q SKHynix_HFS001TD9TNI-L2B0B", host_nqn=host_nqn@entry=0x0, host_id=host_id@entry=0x0, error=error@entry=0x0) at plugin_apis/nvme.c:1150 #16 0x000055555557a3f4 in find_drive (object_manager=object_manager@entry=0x555555620dd0 [GDBusObjectManagerServer], block_device=, out_drive=out_drive@entry=0x7fffffffcdb8) at udiskslinuxblock.c:263 #17 0x000055555557eb06 in udisks_linux_block_update (block=0x5555556552e0 [UDisksLinuxBlock], object=) at udiskslinuxblock.c:1138 #18 
0x0000555555577cdf in block_device_update (object=, uevent_action=, _iface=) at udiskslinuxblockobject.c:499 #19 0x0000555555577daa in update_iface (object=object@entry=0x5555556516e0, uevent_action=uevent_action@entry=0x5555555b88eb "add", has_func=has_func@entry=0x5555555773e1 , connect_func=connect_func@entry=0x5555555773e7 , update_func=update_func@entry=0x555555577cd0 , skeleton_type=0x555555651bb0 [UDisksLinuxBlock/UDisksBlockSkeleton/GDBusInterfaceSkeleton], _interface_pointer=0x555555651730) at udiskslinuxblockobject.c:473 #20 0x00005555555783fe in udisks_linux_block_object_uevent (object=object@entry=0x5555556516e0 [UDisksLinuxBlockObject], action=action@entry=0x5555555b88eb "add", device=device@entry=0x0) at udiskslinuxblockobject.c:910 #21 0x00005555555787e5 in udisks_linux_block_object_constructed (_object=0x5555556516e0 [UDisksLinuxBlockObject]) at udiskslinuxblockobject.c:244 #22 0x00007ffff7aa1e63 in g_object_new_internal (class=0x5555556514e0, params=0x7fffffffd080, n_params=2) at ../glib/gobject/gobject.c:2296 #23 0x00007ffff7aa3f0b in g_object_new_internal (n_params=2, params=0x7fffffffd080, class=0x5555556514e0) at ../glib/gobject/gobject.c:2562 #24 g_object_new_valist (object_type=, first_property_name=first_property_name@entry=0x5555555b8450 "daemon", var_args=var_args@entry=0x7fffffffd350) at ../glib/gobject/gobject.c:2584 #25 0x00007ffff7aa429e in g_object_new (object_type=, first_property_name=first_property_name@entry=0x5555555b8450 "daemon") at ../glib/gobject/gobject.c:2057 #26 0x0000555555578000 in udisks_linux_block_object_new (daemon=daemon@entry=0x55555560dea0 [UDisksDaemon], device=device@entry=0x55555564bda0 [UDisksLinuxDevice]) at udiskslinuxblockobject.c:327 #27 0x0000555555576461 in handle_block_uevent_for_block (provider=provider@entry=0x55555562f500 [UDisksLinuxProvider], action=action@entry=0x5555555b88eb "add", device=device@entry=0x55555564bda0 [UDisksLinuxDevice]) at udiskslinuxprovider.c:1235 #28 0x000055555557654f in handle_block_uevent (provider=provider@entry=0x55555562f500 [UDisksLinuxProvider], action=action@entry=0x5555555b88eb "add", device=device@entry=0x55555564bda0 [UDisksLinuxDevice]) at udiskslinuxprovider.c:1413 #29 0x00005555555765ca in udisks_linux_provider_handle_uevent (provider=provider@entry=0x55555562f500 [UDisksLinuxProvider], action=action@entry=0x5555555b88eb "add", device=0x55555564bda0 [UDisksLinuxDevice]) at udiskslinuxprovider.c:1442 #30 0x0000555555576601 in do_coldplug (provider=provider@entry=0x55555562f500 [UDisksLinuxProvider], udisks_devices=udisks_devices@entry=0x55555563b610 = {...}) at udiskslinuxprovider.c:549 #31 0x00005555555767b9 in udisks_linux_provider_start (_provider=0x55555562f500 [UDisksLinuxProvider]) at udiskslinuxprovider.c:756 #32 0x0000555555574e1d in udisks_provider_start (provider=0x55555562f500 [UDisksLinuxProvider]) at udisksprovider.c:183 #33 0x0000555555572c7b in udisks_daemon_constructed (object=0x55555560dea0 [UDisksDaemon]) at udisksdaemon.c:414 #34 0x00007ffff7aa1e63 in g_object_new_internal (class=0x55555560d3e0, params=0x7fffffffd7e0, n_params=5) at ../glib/gobject/gobject.c:2296 #35 0x00007ffff7aa3f0b in g_object_new_internal (n_params=5, params=0x7fffffffd7e0, class=0x55555560d3e0) at ../glib/gobject/gobject.c:2562 #36 g_object_new_valist (object_type=, first_property_name=first_property_name@entry=0x5555555b89df "connection", var_args=var_args@entry=0x7fffffffdab0) at ../glib/gobject/gobject.c:2584 #37 0x00007ffff7aa429e in g_object_new (object_type=, 
first_property_name=first_property_name@entry=0x5555555b89df "connection") at ../glib/gobject/gobject.c:2057 #38 0x0000555555573648 in udisks_daemon_new (connection=0x55555560bd40 [GDBusConnection], disable_modules=0, force_load_modules=0, uninstalled=0, enable_tcrypt=0) at udisksdaemon.c:597 #39 0x000055555557225a in on_bus_acquired (connection=, name=, user_data=) at main.c:63 #40 0x00007ffff7bef7f8 in connection_get_cb (source_object=, res=0x55555560bc60, user_data=0x5555555e3040) at ../glib/gio/gdbusnameowning.c:506 #41 0x00007ffff7b88ce4 in g_task_return_now (task=0x55555560bc60 [GTask]) at ../glib/gio/gtask.c:1371 #42 0x00007ffff7b8cbfd in g_task_return (type=, task=0x55555560bc60 [GTask]) at ../glib/gio/gtask.c:1440 #43 g_task_return (task=0x55555560bc60 [GTask], type=) at ../glib/gio/gtask.c:1397 #44 0x00007ffff7beae63 in bus_get_async_initable_cb (source_object=0x55555560bd40 [GDBusConnection], res=, user_data=0x55555560bc60) at ../glib/gio/gdbusconnection.c:7516 #45 0x00007ffff7b88ce4 in g_task_return_now (task=0x55555560c580 [GTask]) at ../glib/gio/gtask.c:1371 #46 0x00007ffff7b88d1d in complete_in_idle_cb (task=0x55555560c580) at ../glib/gio/gtask.c:1385 #47 0x00007ffff798af19 in g_main_dispatch (context=0x5555555eabd0) at ../glib/glib/gmain.c:3476 #48 0x00007ffff79e92b7 in g_main_context_dispatch_unlocked (context=0x5555555eabd0) at ../glib/glib/gmain.c:4284 #49 g_main_context_iterate_unlocked.isra.0 (context=0x5555555eabd0, block=block@entry=1, dispatch=dispatch@entry=1, self=) at ../glib/glib/gmain.c:4349 #50 0x00007ffff798bb47 in g_main_loop_run (loop=0x5555555ead60) at ../glib/glib/gmain.c:4551 #51 0x00005555555723f3 in main (argc=, argv=) at main.c:184 (gdb) print *cmd $9 = {opcode = 6 '\006', flags = 0 '\000', rsvd1 = 0, nsid = 1, cdw2 = 0, cdw3 = 0, metadata = 0, addr = 140737488333520, metadata_len = 0, data_len = 4096, cdw10 = 3, cdw11 = 0, cdw12 = 0, cdw13 = 0, cdw14 = 0, cdw15 = 0, timeout_ms = 0, result = 0} (gdb) n *** stack smashing detected ***: terminated Thread 1 "udisksd" received signal SIGABRT, Aborted. __pthread_kill_implementation (threadid=, signo=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44 44 return INTERNAL_SYSCALL_ERROR_P (ret) ? INTERNAL_SYSCALL_ERRNO (ret) : 0; (gdb) info stack #0 __pthread_kill_implementation (threadid=, signo=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44 #1 0x00007ffff77dd8a3 in __pthread_kill_internal (signo=6, threadid=) at pthread_kill.c:78 #2 0x00007ffff778d668 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26 #3 0x00007ffff77754b8 in __GI_abort () at abort.c:79 #4 0x00007ffff7776390 in __libc_message (fmt=fmt@entry=0x7ffff78ed2fc "*** %s ***: terminated\n") at ../sysdeps/posix/libc_fatal.c:150 #5 0x00007ffff786db4b in __GI___fortify_fail (msg=msg@entry=0x7ffff78ed314 "stack smashing detected") at fortify_fail.c:24 #6 0x00007ffff786ee56 in __stack_chk_fail () at stack_chk_fail.c:24 #7 0x00007ffff785b3e9 in __GI___ioctl (fd=, request=) at ../sysdeps/unix/sysv/linux/ioctl.c:43 #8 0x0000000000000000 in () ```
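
For reference, the print *cmd output at the end of the session above describes an Identify admin command (opcode 06h) with cdw10 = 3, i.e. CNS 03h (Namespace Identification Descriptor list), data_len = 4096, and addr = 140737488333520 (0x7fffffffaad0), which is the same on-stack descs pointer visible in frame #5; the canary trips as soon as the ioctl returns. That is consistent with a 4 KiB Identify data transfer landing in a stack buffer smaller than 4 KiB. The sketch below only illustrates that general failure pattern in user space; it is not libnvme's actual code, and fake_identify_ns_descs() is a made-up stand-in for the real ioctl path.

```c
/* Illustration only -- NOT libnvme's code.  An Identify command with
 * CNS 03h always transfers a full 4 KiB data buffer, so the destination
 * must be at least 4096 bytes.  If the destination is a smaller object on
 * the stack, the copy-out overruns the frame and the stack protector
 * aborts with "*** stack smashing detected ***".
 * Build (illustrative): cc -O0 -fstack-protector-strong smash.c -o smash */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NVME_IDENTIFY_DATA_SIZE 4096

struct nvme_ns_id_desc {            /* layout as shown by gdb's ptype in the earlier session */
    uint8_t  nidt;
    uint8_t  nidl;
    uint16_t rsvd;
    uint8_t  nid[];                 /* flexible array member: no storage */
};

/* Stand-in for the Identify ioctl path: like the real command, it always
 * fills a full 4 KiB at the destination address. */
static void fake_identify_ns_descs(void *data_4k)
{
    memset(data_4k, 0, NVME_IDENTIFY_DATA_SIZE);
}

static void correctly_sized(void)
{
    uint8_t buf[NVME_IDENTIFY_DATA_SIZE];   /* room for the whole transfer */
    fake_identify_ns_descs(buf);
}

static void too_small_on_stack(void)
{
    struct nvme_ns_id_desc descs;           /* only 4 bytes of storage */
    fake_identify_ns_descs(&descs);         /* 4 KiB write overruns the frame */
}

int main(void)
{
    correctly_sized();
    puts("correctly sized buffer: fine");
    too_small_on_stack();                   /* aborts with stack smashing */
    puts("never reached");
    return 0;
}
```
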
xsmile commented 11 months ago

Another issue with nvme effects-log that might be related:

Command output

Click to expand ``` # nvme effects-log /dev/nvme0 -v -b opcode : 06 flags : 00 rsvd1 : 0000 nsid : 00000001 cdw2 : 00000000 cdw3 : 00000000 data_len : 00001000 metadata_len : 00000000 addr : 7fff5e0f6f00 metadata : 0 cdw10 : 00000000 cdw11 : 00000000 cdw12 : 00000000 cdw13 : 00000000 cdw14 : 00000000 cdw15 : 00000000 timeout_ms : 00000000 result : 00000000 err : 0 latency : 4409 us opcode : 06 flags : 00 rsvd1 : 0000 nsid : 00000001 cdw2 : 00000000 cdw3 : 00000000 data_len : 00001000 metadata_len : 00000000 addr : 7fff5e0f7f00 metadata : 0 cdw10 : 00000003 cdw11 : 00000000 cdw12 : 00000000 cdw13 : 00000000 cdw14 : 00000000 cdw15 : 00000000 timeout_ms : 00000000 result : 00000000 err : 0 latency : 284 us malloc(): invalid size (unsorted) ```

Backtrace from another run

Click to expand ``` Reading symbols from /bin/nvme... (No debugging symbols found in /bin/nvme) (gdb) r effects-log /dev/nvme0 Starting program: /usr/bin/nvme effects-log /dev/nvme0 [Thread debugging using libthread_db enabled] Using host libthread_db library "/usr/lib/libthread_db.so.1". Program received signal SIGSEGV, Segmentation fault. 0x0000000000000000 in ?? () (gdb) thread apply all backtrace full Thread 1 (Thread 0x7ffff7d05800 (LWP 7515) "nvme"): #0 0x0000000000000000 in ?? () No symbol table info available. #1 0x00007ffff7f76aea in nvme_submit_passthru (fd=, ioctl_cmd=ioctl_cmd@entry=3225964097, cmd=cmd@entry=0x7fffffffb710, result=0x0) at ../src/nvme/ioctl.c:141 start = {tv_sec = 0, tv_usec = 0} end = {tv_sec = 0, tv_usec = 0} err = 0 #2 0x00007ffff7f76cec in nvme_submit_admin_passthru (fd=, cmd=cmd@entry=0x7fffffffb710, result=) at ../src/nvme/ioctl.c:233 No locals. #3 0x00007ffff7f76dee in nvme_identify (args=args@entry=0x7fffffffb770) at ../src/nvme/ioctl.c:446 cdw10 = cdw11 = cdw14 = cmd = {opcode = 6 '\006', flags = 0 '\000', rsvd1 = 0, nsid = 1, cdw2 = 0, cdw3 = 0, metadata = 0, addr = 140737488340960, metadata_len = 0, data_len = 4096, cdw10 = 3, cdw11 = 0, cdw12 = 0, cdw13 = 0, cdw14 = 0, cdw15 = 0, timeout_ms = 0, result = 0} #4 0x00007ffff7f7b9c4 in nvme_identify_cns_nsid (fd=, cns=cns@entry=NVME_IDENTIFY_CNS_NS_DESC_LIST, nsid=nsid@entry=1, data=data@entry=0x7fffffffc7e0) at ../src/nvme/ioctl.h:478 args = {result = 0x0, data = 0x7fffffffc7e0, args_size = 48, fd = 4, timeout = 0, cns = NVME_IDENTIFY_CNS_NS_DESC_LIST, csi = NVME_CSI_NVM, nsid = 1, cntid = 0, cns_specific_id = 0, uuidx = 0 '\000'} #5 0x00007ffff7f7e660 in nvme_identify_ns_descs (descs=0x7fffffffc7e0, nsid=1, fd=) at ../src/nvme/ioctl.h:676 No locals. #6 nvme_ns_identify_descs (n=n@entry=0x55555566df40, descs=descs@entry=0x7fffffffc7e0) at ../src/nvme/tree.c:2205 No locals. 
#7 0x00007ffff7f7e72e in nvme_ns_init (n=n@entry=0x55555566df40) at ../src/nvme/tree.c:2424 ns = {nsze = 2000409264, ncap = 2000409264, nuse = 2000409264, nsfeat = 0 '\000', nlbaf = 0 '\000', flbas = 0 '\000', mc = 0 '\000', dpc = 0 '\000', dps = 0 '\000', nmic = 0 '\000', rescap = 0 '\000', fpi = 0 '\000', dlfeat = 0 '\000', nawun = 0, nawupf = 0, nacwu = 0, nabsn = 0, nabo = 0, nabspf = 0, noiob = 0, nvmcap = '\000' , npwg = 0, npwa = 0, npdg = 0, npda = 0, nows = 0, mssrl = 0, mcl = 0, msrc = 0 '\000', rsvd81 = 0 '\000', nulbaf = 0 '\000', rsvd83 = "\000\000\000\000\000\000\000\000", anagrpid = 0, rsvd96 = "\000\000", nsattr = 0 '\000', nvmsetid = 0, endgid = 0, nguid = "\254\344.\000\005^j\366.\344\254\000\000\000\000\001", eui64 = "\254\344.\000\005^", , lbaf = {{ms = 0, ds = 9 '\t', rp = 0 '\000'}, {ms = 0, ds = 0 '\000', rp = 0 '\000'} }, lbstm = 0, vs = '\000' } buffer = "\001\b\000\000\254\344.\000\005^j\366\002\020\000\000\254\344.\000\005^j\366.\344\254\000\000\000\000\001", '\000' descs = 0x7fffffffc7e0 flbas = ret = 0 #8 0x00007ffff7f7e7b3 in nvme_ns_open (name=name@entry=0x55555566de90 "nvme0n1") at ../src/nvme/tree.c:2467 n = 0x55555566df40 fd = 4 #9 0x00007ffff7f7e85f in __nvme_scan_namespace (sysfs_dir=0x55555566d270 "/sys/class/nvme/nvme0", name=name@entry=0x55555566dc13 "nvme0n1") at ../src/nvme/tree.c:2527 n = path = 0x55555566df10 "/sys/class/nvme/nvme0/nvme0n1" ret = blkdev = 0x55555566de90 "nvme0n1" #10 0x00007ffff7f7eeb9 in nvme_ctrl_scan_namespace (r=r@entry=0x55555566d660, c=c@entry=0x55555566dc30, name=0x55555566dc13 "nvme0n1") at ../src/nvme/tree.c:2560 n = _n = __n = #11 0x00007ffff7f7eff6 in nvme_ctrl_scan_namespaces (r=r@entry=0x55555566d660, c=c@entry=0x55555566dc30) at ../src/nvme/tree.c:1693 namespaces = 0x55555566deb0 i = 0 ret = 1 #12 0x00007ffff7f7f1b2 in nvme_scan_ctrl (r=r@entry=0x55555566d660, name=0x555555675703 "nvme0") at ../src/nvme/tree.c:2060 h = s = 0x55555566db40 c = 0x55555566dc30 path = 0x55555566d270 "/sys/class/nvme/nvme0" hostnqn = hostid = subsysnqn = subsysname = 0x55555566d290 "nvme0" ret = #13 0x00007ffff7f7f398 in nvme_scan_topology (r=r@entry=0x55555566d660, f=f@entry=0x0, f_args=f_args@entry=0x0) at ../src/nvme/tree.c:151 c = subsys = 0x1219d6cf0d8ba600 ctrls = 0x55555566d1f0 i = 0 num_subsys = num_ctrls = 1 ret = #14 0x00007ffff7f7f51c in nvme_scan (config_file=0x0) at ../src/nvme/tree.c:234 r = 0x55555566d660 #15 0x000055555556e068 in ?? () No symbol table info available. #16 0x00005555555af8a5 in ?? () No symbol table info available. #17 0x000055555556713b in ?? () No symbol table info available. --Type for more, q to quit, c to continue without paging-- #18 0x00007ffff7d6acd0 in ?? () from /usr/lib/libc.so.6 No symbol table info available. #19 0x00007ffff7d6ad8a in __libc_start_main () from /usr/lib/libc.so.6 No symbol table info available. #20 0x00005555555672c5 in ?? () No symbol table info available. ```
tbzatek commented 11 months ago

We've covered most of the places in libblockdev with https://github.com/storaged-project/libblockdev/pull/969.

> Looking at libblockdev, the crash takes place while calling nvme_scan(); within libnvme, the failing call is nvme_ns_identify_descs().

Ha, that reminds me that libnvme is actually doing some calls when scanning the tree. Let me have a look at that...

tbzatek commented 11 months ago

Could you please test https://github.com/linux-nvme/libnvme/pull/727? Make sure you also have https://github.com/storaged-project/libblockdev/pull/969 applied.

tbzatek commented 11 months ago

> Another issue with nvme effects-log that might be related:

Right, it's the same issue: the tree scan within libnvme. The majority of other calls should have been fixed by https://github.com/linux-nvme/nvme-cli/pull/2051, released in nvme-cli 2.6.

xsmile commented 11 months ago

> Could you please test linux-nvme/libnvme#727? Make sure you also have storaged-project/libblockdev#969 applied.

Neither udisksd nor nvme effects-log produces the stack smashing error anymore. Thanks for the quick fix.

tbzatek commented 11 months ago

Nice, thanks for testing!

monperrus commented 2 months ago

Suffering from this bug on Ubuntu 23 Mantic. It's unclear how to work around it without upgrading to Ubuntu Noble.

Would anybody have a workaround? Thanks!

tbzatek commented 2 months ago

> Suffering from this bug on Ubuntu 23 Mantic. It's unclear how to work around it without upgrading to Ubuntu Noble.

What versions of libnvme, libblockdev and udisks are you running? I believe we've fixed all the cases. Of course you need to be running the fixed versions.

monperrus commented 2 months ago

On Ubuntu 23 Mantic:

libblockdev3    3.0.3-1
libnvme1        1.5-3
udisks2         2.10.1-1ubuntu1.1