knighton / sparsehash

Automatically exported from code.google.com/p/sparsehash
BSD 3-Clause "New" or "Revised" License

sparse_hash_set: too much memory consumption #72

Closed. GoogleCodeExporter closed this issue 9 years ago.

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
Code:

#include "google/dense_hash_set"
#include "google/sparse_hash_set"

int main()
{
//  typedef google::dense_hash_set<size_t> TSet;   // dense variant for comparison
  typedef google::sparse_hash_set<size_t> TSet;
  const int size = 1000000;
  TSet set(size);                 // pre-size the table for 1,000,000 entries
//  set.set_empty_key(-1);        // needed only by the dense variant
  set.set_deleted_key(-2);
  for (int i = 0; i < size; ++i) set.insert(i);
  return 0;
}

What is the expected output? What do you see instead?
valgrind ./a.out (dense)
total heap usage: 1 allocs, 1 frees, 16,777,216 bytes allocated

valgrind ./a.out (sparse)
total heap usage: 1,000,001 allocs, 1,000,001 frees, 196,697,008 bytes allocated

More than 10 times as much memory!

What version of the product are you using? On what operating system?
sparsehash 1.11, amd64
Linux DE3UNP-503726 2.6.32-26-generic #48-Ubuntu SMP Wed Nov 24 10:14:11 UTC 2010 x86_64 GNU/Linux
gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5)
Intel Core 2 Duo 3 GHz
g++ -O3

Original issue reported on code.google.com by bulovya...@gmail.com on 30 Jun 2011 at 5:01
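
For context on the allocation counts above, here is a toy sketch of the pattern that can produce one alloc plus one free per insert. It assumes sparse_hash_set's documented space-saving design, in which each small group of buckets keeps its occupied slots in an array that is grown element by element on insertion; the code below is only a model for illustration, not the library's implementation.

#include <cstdlib>
#include <cstring>

int main() {
  // Grow an array by exactly one slot per "insert", the way a sparse
  // group grows.  Every step costs one malloc and one free, so a
  // cumulative counter such as memcheck's "total heap usage" adds up
  // all of them even though the live memory stays small.
  unsigned char* slots = NULL;
  std::size_t count = 0;
  for (int i = 0; i < 1000; ++i) {
    unsigned char* grown = static_cast<unsigned char*>(
        std::malloc((count + 1) * sizeof(std::size_t)));
    if (count != 0) std::memcpy(grown, slots, count * sizeof(std::size_t));
    std::free(slots);   // free(NULL) is a no-op on the first pass
    slots = grown;
    ++count;
  }
  std::free(slots);
  return 0;  // roughly 1,000 allocs and 1,000 frees for 1,000 inserts
}

Run under valgrind, a pattern like this reports about one allocation and one free per element, which lines up with the 1,000,001 / 1,000,001 figures in the report while saying nothing about peak memory use.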

GoogleCodeExporter commented 9 years ago
I find it very suspicious that the dense_hash_set implementation only had one 
allocation.  It must have had to realloc when growing the hashtable!  Can you 
double-check your test?

Also, I don't know what valgrind means by "196,697,008 bytes allocated", but I 
assume it has no relationship to the high-water memory mark (if you allocate 
and free 100 bytes 10,000,000 times, how much memory will valgrind say is 
allocated?).  So this is not necessarily a problematic result.  Also, I don't 
know what valgrind does if you say realloc(100) and libc is able to reallocate 
without actually allocating more memory.  That would affect the numbers 
reported for sparse_hash_map a lot.

Original comment by csilv...@gmail.com on 30 Jun 2011 at 10:25
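
A minimal sketch of the allocate-and-free scenario described in the comment above. It assumes valgrind's default memcheck tool, whose HEAP SUMMARY "total heap usage" line sums every allocation made over the program's lifetime rather than the peak amount held at once.

#include <cstdlib>

int main() {
  // Allocate and immediately release 100 bytes, ten million times.
  // The live heap never exceeds 100 bytes, yet memcheck should report
  // roughly 10,000,000 allocs, 10,000,000 frees and about
  // 1,000,000,000 bytes allocated, because its summary is cumulative.
  for (int i = 0; i < 10000000; ++i) {
    void* p = std::malloc(100);
    std::free(p);
  }
  return 0;
}

To compare the two sets on peak heap usage rather than a running total, valgrind's massif tool (valgrind --tool=massif ./a.out, then ms_print on the generated massif.out file) reports the high-water mark instead.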

GoogleCodeExporter commented 9 years ago
>>I find it very suspicious that the dense_hash_set implementation only had one allocation.
That's because of TSet set(size); the table is pre-sized up front, so it never has to grow (the sketch after this comment shows the arithmetic).

>>196,697,008
Yes, you're right, it's due to the alloc/dealloc cycles. Please close the issue. :-) Sorry.

Original comment by bulovya...@gmail.com on 30 Jun 2011 at 10:51
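
A small sketch of the arithmetic behind that single pre-sized allocation of 16,777,216 bytes. It assumes dense_hash_set's default maximum occupancy of 80% and power-of-two bucket counts; both are assumptions about the 1.11 defaults rather than something stated in the thread.

#include <cstddef>
#include <cstdio>

int main() {
  const std::size_t requested = 1000000;
  // Smallest power-of-two bucket count that keeps occupancy at or
  // below 80% for 1,000,000 entries.
  std::size_t buckets = 1;
  while (buckets * 0.80 < requested) buckets *= 2;   // ends at 2,097,152
  std::printf("%zu buckets * %zu bytes = %zu bytes\n",
              buckets, sizeof(std::size_t), buckets * sizeof(std::size_t));
  // On a 64-bit build this prints 2097152 * 8 = 16777216, matching the
  // one allocation valgrind reported for the pre-sized dense table.
  return 0;
}

Since TSet set(size) reserves that table in the constructor, the million inserts never trigger a rehash, which is why memcheck saw exactly one allocation for the dense case.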

GoogleCodeExporter commented 9 years ago

Original comment by csilv...@gmail.com on 1 Jul 2011 at 12:29