MyHush / hush3

Hush: Speak And Transact Freely
https://myhush.org

Segmentation fault, core dumped #9

Closed. TheTrunk closed this issue 5 years ago.

TheTrunk commented 5 years ago

After sending with z_sendmany, and performing some CLI calls (not entirely sure which one is the cause) while z_sendmany is running or right after it has successfully finished, a segmentation fault occurs. This crashes the daemon and can potentially corrupt data and the wallet.

TheTrunk commented 5 years ago

The following sequence of calls results in the segfault:

z_sendmany
z_getoperationstatus
z_getoperationstatus
z_getoperationstatus
getconnectioncount
getblockcount
getblockhash
getblock
getinfo
z_listaddresses
listreceivedbyaddress -> segfault occurs

TheTrunk commented 5 years ago

I can confirm that the issue is coming from listreceivedbyaddress. Not running this command after sending did not result in a segfault. @leto

himu007 commented 5 years ago

A core dump happens when I send funds from a z address to a t address using the Swing wallet (HUSHmate). Here is the backtrace:

Thread 12 "zcash-httpworke" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffcee37700 (LWP 19828)]
0x0000555555bf0bc2 in CBlockIndex::GetHeight (this=0x0) at ./chain.h:364
364         return this->chainPower.nHeight;
(gdb) backtrace
#0  0x0000555555bf0bc2 in CBlockIndex::GetHeight (this=0x0) at ./chain.h:364
#1  ListReceived (params=..., fByAccounts=fByAccounts@entry=false) at wallet/rpcwallet.cpp:1627
#2  0x0000555555bf23cc in listreceivedbyaddress (params=..., fHelp=<optimised out>)
    at wallet/rpcwallet.cpp:1742
#3  0x000055555590a5a9 in CRPCTable::execute (this=<optimised out>, 
    strMethod="listreceivedbyaddress", params=...) at rpc/server.cpp:830
#4  0x00005555559bccb3 in HTTPReq_JSONRPC (req=0x7fffb40034b0) at httprpc.cpp:154
#5  0x000055555588e7e3 in boost::detail::function::void_function_invoker2<bool (*)(HTTPRequest*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), void, HTTPRequest*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>::invoke (
    function_ptr=..., a0=<optimised out>, a1=...)
    at /home/user/hush3/depends/x86_64-unknown-linux-gnu/share/../include/boost/function/function_template.hpp:118
#6  0x00005555559c31cf in boost::function2<void, HTTPRequest*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>::operator() (a1=..., a0=<optimised out>, 
    this=<optimised out>)
    at /home/user/hush3/depends/x86_64-unknown-linux-gnu/share/../include/boost/function/function_template.hpp:759
#7  HTTPWorkItem::operator() (this=<optimised out>) at httpserver.cpp:51
#8  0x00005555559c0da7 in WorkQueue<HTTPClosure>::Run (this=0x555556f07820) at httpserver.cpp:137
#9  HTTPWorkQueueRun (queue=0x555556f07820) at httpserver.cpp:359
#10 0x00005555559c2bea in boost::_bi::list1<boost::_bi::value<WorkQueue<HTTPClosure>*> >::operator()<void (*)(WorkQueue<HTTPClosure>*), boost::_bi::list0> (a=<synthetic pointer>..., f=<optimised out>, 
    this=<optimised out>)
    at /home/user/hush3/depends/x86_64-unknown-linux-gnu/share/../include/boost/bind/bind.hpp:259
#11 boost::_bi::bind_t<void, void (*)(WorkQueue<HTTPClosure>*), boost::_bi::list1<boost::_bi::value<WorkQueue<HTTPClosure>*> > >::operator() (this=<optimised out>)
    at /home/user/hush3/depends/x86_64-unknown-linux-gnu/share/../include/boost/bind/bind.hpp:1294
#12 ...list1<boost::_bi::value<WorkQueue<HTTPClosure>*> > > >::run (this=<optimised out>) at /home/user/hush3/depends/x86_64-unknown-linux-gnu/share/../include/boost/thread/detail/thread.hpp:116
#13 0x0000555555d069c2 in thread_proxy ()
#14 0x00007ffff75b76db in start_thread (arg=0x7fffcee37700) at pthread_create.c:463
#15 0x00007ffff64eb88f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

himu007 commented 5 years ago

In my HUSH3.conf file I have rpcworkqueue=256, if that helps.

leto commented 5 years ago

This has been fixed, thanks everybody for debugging.

The core issue was a bug in my recent "dpowminconfs" code, now fixed in both KMD and HUSH.