Hi,

Your project seems quite interesting. However, when I try to reproduce the results, I keep hitting segmentation-fault / double-free errors on the insert and mixed workloads for ALEX. I followed the build process exactly, using the build.sh script. Could you kindly check whether there is an issue there, or whether I missed a step? Thanks a lot.
By the way, I changed the default value of insert_count to be smaller than total_count, so that the number of bulk-loaded keys is greater than 0. My dataset is downloaded from the GRE project.
Here is a GDB snippet FYI.
```
Program received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x00007ffff70e37f1 in __GI_abort () at abort.c:79
#2 0x00007ffff712c837 in __libc_message (action=action@entry=do_abort, fmt=fmt@entry=0x7ffff7259a7b "%s\n") at ../sysdeps/posix/libc_fatal.c:181
#3 0x00007ffff71338ba in malloc_printerr (str=str@entry=0x7ffff725b7a8 "double free or corruption (!prev)") at malloc.c:5342
#4 0x00007ffff713ae5c in _int_free (have_lock=0, p=0x555555798f90, av=0x7ffff748ec40 <main_arena>) at malloc.c:4311
#5 __GI___libc_free (mem=0x555555798fa0) at malloc.c:3134
#6 0x0000555555573962 in alex::Alex<unsigned long, unsigned long, alex::AlexCompare, std::allocator<std::pair<unsigned long, unsigned long> >, true>::insert_disk_all(unsigned long, unsigned long, long long*, long long*, long long*, long long*, int*) ()
#7 0x000055555555e28a in test_insert(int, char*, char*, char*, int, int, int) ()
#8 0x0000555555557548 in main ()
```
My command is:
```
./benchmark --keys_file=/path/to/GRE/datasets/books --op_type=insert --index_file=index_file --total_count=400000 --has_size=1
```