divinity76 / cacdrive

harddrive emulator using cloudatcost's "cloud storage" as a storage backend.
The Unlicense

Error compiling on Debian 9 #5

Open Ammar7347 opened 5 years ago

Ammar7347 commented 5 years ago

Hi,

I'm getting the following error while compiling:

/tmp/ccO0xafa.o: In function `ecurl_mime_init(void*)':
main.cpp:(.text+0xc1f): undefined reference to `curl_mime_init'
/tmp/ccO0xafa.o: In function `ecurl_mime_addpart(curl_mime_s*)':
main.cpp:(.text+0xd04): undefined reference to `curl_mime_addpart'
/tmp/ccO0xafa.o: In function `ecurl_mime_data(curl_mimepart_s*, char const*, unsigned long)':
main.cpp:(.text+0xeec): undefined reference to `curl_mime_data'
/tmp/ccO0xafa.o: In function `ecurl_mime_name(curl_mimepart_s*, char const*)':
main.cpp:(.text+0x100c): undefined reference to `curl_mime_name'
/tmp/ccO0xafa.o: In function `ecurl_mime_filename(curl_mimepart_s*, char const*)':
main.cpp:(.text+0x112c): undefined reference to `curl_mime_filename'
/tmp/ccO0xafa.o: In function `Downloadcacapi::upload_multi[abi:cxx11](std::vector<Downloadcacapi::Upload_multi_arg, std::allocator<Downloadcacapi::Upload_multi_arg> >)':
main.cpp:(.text+0x3601): undefined reference to `curl_mime_free'
/tmp/ccO0xafa.o: In function `Downloadcacapi::upload(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
main.cpp:(.text+0x3eee): undefined reference to `curl_mime_free'
collect2: error: ld returned 1 exit status

divinity76 commented 5 years ago

my best guess is that you did not add "-lcurl" as an argument to g++,

what is the full command line you executed when you got that error?

Ammar7347 commented 5 years ago

I used g++ src/main.cpp -std=c++17 -lcurl -lpthread -O2 and also tried with "c++16"

divinity76 commented 5 years ago

what do you get if you run

g++ -v src/main.cpp -std=c++17 -lcurl -lpthread -O2 2>&1 | pastebinit

?

Ammar7347 commented 5 years ago

http://paste.debian.net/1054673/

divinity76 commented 5 years ago

ok, i think i know the problem.

your libcurl headers (curl.h & co) are version >= 7.56.0 (the version that introduced the curl_mime_* API), but the actual library you link against (libcurl.so) is older than 7.56.0, so those symbols don't exist in it.

somehow you got 2 different versions of libcurl installed at the same time, and they are colliding.
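
if you want to double-check that, a tiny test program (just a sketch, not part of cacdrive) can print the version the headers claim next to the version the loaded library reports:

// curl-version-check.cpp - compare the curl.h header version with the runtime libcurl version
// build: g++ curl-version-check.cpp -lcurl -o curl-version-check
#include <cstdio>
#include <curl/curl.h>

int main()
{
    // LIBCURL_VERSION is baked in at compile time from the curl.h you include
    std::printf("headers (compile time): %s\n", LIBCURL_VERSION);
    // curl_version() is answered at run time by whichever libcurl.so actually gets loaded
    std::printf("library (run time):     %s\n", curl_version());
    return 0;
}

if the two lines disagree, you have the mismatch described above.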

what do you get if you run

find / -iname "*curl*.so" 2>/dev/null

? (ps, the command may take several minutes to complete)

Ammar7347 commented 5 years ago

/root/curl/curl-7.62.0/lib/.libs/libcurl.so
/usr/lib/x86_64-linux-gnu/libcurl.so
/usr/local/lib/libcurl.so

divinity76 commented 5 years ago

what do you get if you run

g++ -v src/main.cpp -std=c++17 -l/root/curl/curl-7.62.0/lib/.libs/libcurl.so -lpthread -O2 2>&1 | pastebinit

?

Ammar7347 commented 5 years ago

http://paste.debian.net/1054678/

divinity76 commented 5 years ago

how about

g++ -v src/main.cpp -std=c++17 -L /root/curl/curl-7.62.0/lib/.libs/ -lcurl -lpthread -O2 2>&1 | pastebinit

?

Ammar7347 commented 5 years ago

http://paste.debian.net/1054681/

divinity76 commented 5 years ago

ok, how about

g++ src/main.cpp -std=c++17 -L /root/curl/curl-7.62.0/lib/.libs/ -lcurl -lpthread -O2

?

Ammar7347 commented 5 years ago

That worked, thanks!

One last question. What count number do I need to use to get 987 GB of storage? Because the one I used just creates 6 GB for some reason.

Edit: Now I'm getting the following error:

root@vm2953:~/cacdrive# sudo ./a.out config.conf
./a.out(+0xea75)[0x560231ccca75]
./a.out(+0x4b7c)[0x560231cc2b7c]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7)[0x7f28a52f2b97]
./a.out(+0x4dba)[0x560231cc2dba]
./a.out:src/main.cpp:1511: unable to open nbd path: /dev/nbd1 : No such file or directory
shutting down, cleaning up.. thread doing the cleanup: 139812563625856
root@vm2953:~/cacdrive# sudo modprobe nbd
root@vm2953:~/cacdrive# sudo ./a.out config.conf
starting 3 worker thread(s)... done.
pausing mainthread..
nbdthread waiting for all workers (0/3) to be become ready. (this usually takes a long time - some problem @CAC login system)
worker #2 ready.
worker #1 ready.
upload queue emptied.
worker #3 ready.
all workers (3/3) ready, nbdthread starting NBD_DO_IT.
./a.out(+0xf074)[0x55b1369a1074]
./a.out(+0x12f8d)[0x55b1369a4f8d]
./a.out(+0x14612)[0x55b1369a6612]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76db)[0x7f601586d6db]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f6014ff588f]
./a.out:src/main.cpp:2509: failed to seek to -2121531792 in sector file sectorindex.sec: Unknown error -1
shutting down, cleaning up.. thread doing the cleanup: 140050523387648
request socket shutting down, worker exiting.
./a.out(+0x5505)[0x55b136997505]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x12890)[0x7f6015878890]
/lib/x86_64-linux-gnu/libpthread.so.0(pthread_rwlock_rdlock+0x12)[0x7f60158722c2]
/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1(CRYPTO_THREAD_read_lock+0x9)[0x7f601412f8a9]
/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1(+0x1387c0)[0x7f60140be7c0]
/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1(ERR_lib_error_string+0x4e)[0x7f60140bebae]
/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1(ERR_error_string_n+0x66)[0x7f60140bed96]
/usr/lib/x86_64-linux-gnu/libcurl.so.4(+0x5cf83)[0x7f6015ae1f83]
/usr/lib/x86_64-linux-gnu/libcurl.so.4(+0x602ef)[0x7f6015ae52ef]
/usr/lib/x86_64-linux-gnu/libcurl.so.4(+0x6113a)[0x7f6015ae613a]
/usr/lib/x86_64-linux-gnu/libcurl.so.4(+0x12f62)[0x7f6015a97f62]
/usr/lib/x86_64-linux-gnu/libcurl.so.4(+0x1f886)[0x7f6015aa4886]
/usr/lib/x86_64-linux-gnu/libcurl.so.4(+0x35456)[0x7f6015aba456]
/usr/lib/x86_64-linux-gnu/libcurl.so.4(curl_multi_perform+0xb4)[0x7f6015abb3f4]
/usr/lib/x86_64-linux-gnu/libcurl.so.4(curl_easy_perform+0xf4)[0x7f6015ab1274]
./a.out(+0xa28f)[0x55b13699c28f]
./a.out(+0xaf33)[0x55b13699cf33]
./a.out(+0xcd0d)[0x55b13699ed0d]
./a.out(+0xce1d)[0x55b13699ee1d]
./a.out(+0x147d9)[0x55b1369a67d9]
./a.out:src/main.cpp:1891: received shutdown signal 11 (Segmentation fault) from PID 24 / UID 0. shutting down.. : Operation now in progress

http://paste.ubuntu.com/p/Mr8Mz45d6x/

divinity76 commented 5 years ago

That worked, thanks!

nice. the issue was that you have multiple versions of libcurl installed, and g++ tried to link against /usr/lib/x86_64-linux-gnu/libcurl.so or /usr/local/lib/libcurl.so when it should have linked against your /root/curl/curl-7.62.0/lib/.libs/libcurl.so library (the installed copies are older, incompatible versions of libcurl). the -L flag just tells the linker to search that directory first.
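
also note that -L only matters at build time; at run time the dynamic loader can still pick up the old /usr/lib/x86_64-linux-gnu/libcurl.so.4 - the backtrace in your log above shows that one being used. two possible workarounds (just suggestions, adjust the paths to your setup): point LD_LIBRARY_PATH at the newer build when running,

LD_LIBRARY_PATH=/root/curl/curl-7.62.0/lib/.libs ./a.out config.conf

or bake the path in at link time with an rpath:

g++ src/main.cpp -std=c++17 -L /root/curl/curl-7.62.0/lib/.libs/ -Wl,-rpath,/root/curl/curl-7.62.0/lib/.libs/ -lcurl -lpthread -O2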

Edit: Now I'm getting the following error:

dang, that's an actual bug, can you create a new issue ( https://github.com/divinity76/cacdrive/issues/new ) with that log? (cacdrive somehow thought position "-2121531792" was a valid offset in the sector index file!)
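
for what it's worth, a seek target of "-2121531792" smells like 32-bit overflow: a byte offset past 2 GiB can't be represented in a 32-bit signed integer, so if the offset math ever passes through such a type on a large drive it wraps negative. the sketch below is purely an illustration of that effect, not a diagnosis of cacdrive's actual code - the sector number and the 25-byte entry size are just taken from the numbers in this thread:

// offset-wrap.cpp - shows a byte offset past 2 GiB wrapping negative when truncated to 32 bits
#include <cstdint>
#include <cstdio>

int main()
{
    const std::int64_t entry_size = 25;        // bytes per sectorindex entry (see the dd bs=25 below)
    const std::int64_t sector     = 90000000;  // a sector number well inside a ~987 GB drive

    const std::int64_t offset64 = sector * entry_size;                 // 2,250,000,000 - fine in 64 bits
    const std::int32_t offset32 = static_cast<std::int32_t>(offset64); // truncated: wraps negative

    std::printf("64-bit offset: %lld\n", static_cast<long long>(offset64));
    std::printf("32-bit offset: %d\n", offset32); // -2044967296 on typical two's-complement systems
    return 0;
}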

What count number do I need to use to get 987 GB of storage? Because the one I used just creates 6 GB for some reason.

if we say that 987 GB is 987,000,000,000 bytes, and each count represents one 4096-byte sector, then 987,000,000,000 / 4096 = 240,966,796.875,

maybe try

dd status=progress if=/dev/zero of=sectorindex.sec bs=25 count=240966796

and warning: the sectorindex file itself will consume ~6 GB (but the cacdrive will be 987 GB), and creating it this way will take a long time. maybe you could use a sparse file instead, which is created in a second or two, eg:

dd if=/dev/zero of=sectorindex.sec bs=1 count=0 seek=6024169900

(this method is dangerous, though, because the space is not actually allocated up front, so you can get into trouble if your main harddrive later runs out of space.)
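
to tie the numbers together (assuming, from the dd lines above, 25 bytes per sectorindex entry and 4096 bytes of emulated storage per entry):

987,000,000,000 bytes / 4096 bytes per sector = 240,966,796 sectors (rounded down)
240,966,796 sectors * 25 bytes per entry      = 6,024,169,900 bytes   -> the ~6 GB sectorindex.sec
240,966,796 sectors * 4096 bytes per sector   = 986,999,996,416 bytes -> the ~987 GB cacdrive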

Ammar7347 commented 5 years ago

Thanks for the detailed explanation of everything.

dang, that's an actual bug, can you create a new issue ( https://github.com/divinity76/cacdrive/issues/new ) with that log? (cacdrive somehow thought position "-2121531792" was a valid offset in the sector index file!)

Unfortunately, the hard drive of my dedicated server failed, and it was in a RAID 0 setup. I was running Proxmox on it, and miraculously I was able to recover most of the VMs since I had offsite backups. I'll try to replicate this issue on the same server, and if I come across the same error again, I'll do what you said and submit a new issue with the logs.

Edit: So, I was able to replicate that bug and I've created a new issue for that with the logs.