test_sdb.c:54:16: warning: incompatible integer to pointer conversion passing 'int' to parameter of type 'const char *' [-Wint-conversion]
sdb_set (db, i-1, i, 0);
^~~
../../src/sdb.h:146:31: note: passing argument to parameter 'key' here
int sdb_set(Sdb*, const char *key, const char *data, ut32 cas);
^
test_sdb.c:54:21: warning: incompatible integer to pointer conversion passing 'int' to parameter of type 'const char *' [-Wint-conversion]
sdb_set (db, i-1, i, 0);
^
../../src/sdb.h:146:48: note: passing argument to parameter 'data' here
int sdb_set(Sdb*, const char *key, const char *data, ut32 cas);
^
2 warnings generated.
that's why it segfaults :P
I fixed your test case and it's not crashing at all, so this is not a valid test for your issue.
Also, you are not checking whether the resulting list is actually sorted.
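(For reference, a minimal type-correct version of that loop could look like the sketch below; fill_db is just a hypothetical helper name, and sdb_set/sdb_fmt are used the same way as in the patch further down.)

#include <stdio.h>
#include "sdb.h" /* as included by test_sdb.c */

/* Keys and data must be NUL-terminated strings: passing a raw int makes
 * sdb_set treat the integer as a pointer and dereference it, which is
 * exactly the segfault reported above. */
static void fill_db (Sdb *db, int n) {
	int i;
	for (i = 0; i < n; i++) {
		char key[32];
		snprintf (key, sizeof (key), "%d", i);
		sdb_set (db, key, sdb_fmt (0, "%d", i + 1), 0);
	}
}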
I thought we didn't have to in that case, as long as it didn't crash. Argh, back to reproducing then, sorry >_>
I am sorting 10 million items in this SdbList without any stack exhaustion or crash problem.
diff --git a/test/unit/test_sdb.c b/test/unit/test_sdb.c
index 95f29f6..4d76430 100644
--- a/test/unit/test_sdb.c
+++ b/test/unit/test_sdb.c
@@ -50,11 +50,12 @@ static int __cmp_asc(const void *a, const void *b) {
 bool test_sdb_list_big(void) {
 	Sdb *db = sdb_new (NULL, NULL, false);
 	int i;
-	for ( i=1; i < 100000; i++) {
-		sdb_set (db, i-1, i, 0);
+	for ( i=0; i < 10000; i++) {
+		sdb_num_set (db, sdb_fmt (0, "%d", i), i + 1, 0);
 	}
 	SdbList *list = sdb_foreach_list (db, true);
-	ls_sort(list, __cmp_asc);
+	ls_sort (list, __cmp_asc);
+// TODO: verify if its sorted
 	sdb_free (db);
 	mu_end;
 }
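(A possible way to fill in that TODO is sketched below; it assumes the list elements are the SdbKv pointers returned by sdb_foreach_list, that __cmp_asc orders two of them ascending as in the test, that ls_foreach/SdbListIter come from ls.h, and that mu_assert takes a (message, condition) pair like the other minunit macros.)

	/* walk the sorted list and check that every adjacent pair is in order */
	SdbListIter *iter;
	void *kv, *prev = NULL; /* elements are SdbKv*, compared opaquely */
	bool sorted = true;
	ls_foreach (list, iter, kv) {
		if (prev && __cmp_asc (prev, kv) > 0) {
			sorted = false;
			break;
		}
		prev = kv;
	}
	mu_assert ("sdb_foreach_list + ls_sort should give a sorted list", sorted);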
Patch applied; any idea how to reproduce the issue then?
Also confirming it works here too, even though that was to be expected: test_sdb_list_big OK
Got a segfault at 10M here, will try to reproduce at a lower count and add it here. It took a lot of time though.
10 million items. The hash is a table of lists; some stats on how many items end up in each bucket might help identify the issue.
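(Something like the sketch below could produce those stats; note that the SdbHash fields it touches (db->ht, size, table, and the list head/n links) are assumptions about the sdb internals of that time, not a stable API, and dump_bucket_stats is just an illustrative helper.)

/* rough histogram of bucket occupancy for db->ht */
static void dump_bucket_stats (Sdb *db) {
	ut32 i, used = 0, longest = 0;
	for (i = 0; i < db->ht->size; i++) {
		SdbList *bucket = db->ht->table[i]; /* assumed layout: SdbList** table */
		SdbListIter *it;
		ut32 len = 0;
		for (it = bucket ? bucket->head : NULL; it; it = it->n) {
			len++;
		}
		if (len > 0) {
			used++;
		}
		if (len > longest) {
			longest = len;
		}
	}
	printf ("buckets=%u used=%u longest=%u\n", db->ht->size, used, longest);
}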
But maybe we have reached a limit in mmap
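(If the mmap hypothesis is worth checking first, here is a quick Linux-only sketch: count the current mappings in /proc/self/maps and compare against the vm.max_map_count sysctl. dump_mmap_usage is just an illustrative helper, not part of sdb.)

#include <stdio.h>

/* Linux-only: /proc/self/maps has one line per mapping */
static void dump_mmap_usage (void) {
	FILE *f = fopen ("/proc/self/maps", "r");
	int c, maps = 0, limit = 0;
	if (f) {
		while ((c = fgetc (f)) != EOF) {
			if (c == '\n') {
				maps++;
			}
		}
		fclose (f);
	}
	f = fopen ("/proc/sys/vm/max_map_count", "r");
	if (f) {
		if (fscanf (f, "%d", &limit) != 1) {
			limit = 0;
		}
		fclose (f);
	}
	printf ("mappings=%d max_map_count=%d\n", maps, limit);
}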
Test is pretty slow in its current form but works™
There is a chance it's mmap; I'll check that later, thanks for pointing it out.
Working on this btw
Fixed in master. I lowered the amount to sort to 500000 just to save time when testing.
Please test again and tell me.
For now this unit test triggers a crash, related to radare/radare2#7079