Magdi / sparsehash

Automatically exported from code.google.com/p/sparsehash
BSD 3-Clause "New" or "Revised" License

Is it possible to use memory mapping to load / save a dense hash map? #78

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?

If a dense hash map contains a large number of key-value pairs (both POD), is it possible to use a memory-mapped file to load / save it?

The data eats a lot of memory, and I need to load it in multiple processes. If memory mapping were supported, the footprint could be shared between processes, which would save a lot of memory.

I'm not familiar with dense hash map's implementation; I'm just asking whether this is possible, or whether someone has already tried it.

Thank you in advance.

Original issue reported on code.google.com by huas...@gmail.com on 14 Jan 2012 at 3:45

GoogleCodeExporter commented 9 years ago
An interesting idea!  You'd have a read-only hashtable then, is that right?

This is not possible right now. I wonder if you could do something similar using shared memory (which I recognize is a lot more finicky to work with), but even that would be pretty tricky, I think, and would involve writing a custom allocator.

If you end up trying to write something like this, I'd be happy to look at a patch! I don't know how intrusive such a change would be. In practice, you'd probably just want to mmap the table array (in densehashtable.h:dense_hashtable) and have each process own its own copy of all the other data (everything else is small). Then you'd want to write your own versions of Serialize() and Unserialize() that mmap the table in, or something along those lines.
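
A rough sketch of the mmap direction, purely for illustration: the dump_table()/map_table() names below are invented and are not part of sparsehash, the bucket array is assumed to be a flat array of POD key/value pairs, and all of the table's metadata (size, empty/deleted keys, resize policy) is left out.

```cpp
// Hypothetical sketch: persist a flat array of POD buckets to a file, then
// map it read-only into any number of processes.  Error handling is minimal
// and none of the hashtable's bookkeeping data is handled here.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>
#include <utility>

typedef std::pair<int, int> Bucket;   // stands in for a POD key/value pair

// "Serialize": write the raw bucket array out to a file.
bool dump_table(const char* path, const Bucket* table, std::size_t num_buckets) {
  std::FILE* fp = std::fopen(path, "wb");
  if (!fp) return false;
  std::size_t written = std::fwrite(table, sizeof(Bucket), num_buckets, fp);
  std::fclose(fp);
  return written == num_buckets;
}

// "Unserialize": map the file read-only.  The pages are shared by every
// process that maps the same file, so the big array is paid for only once.
const Bucket* map_table(const char* path, std::size_t num_buckets) {
  int fd = open(path, O_RDONLY);
  if (fd < 0) return NULL;
  void* addr = mmap(NULL, num_buckets * sizeof(Bucket),
                    PROT_READ, MAP_SHARED, fd, 0);
  close(fd);  // the mapping stays valid after the descriptor is closed
  return addr == MAP_FAILED ? NULL : static_cast<const Bucket*>(addr);
}
```

Each process would still keep its own copy of the small bookkeeping data, as described above; only the big bucket array would live in the mapping.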

Original comment by csilv...@gmail.com on 17 Jan 2012 at 7:24

GoogleCodeExporter commented 9 years ago
If the table array's value type is POD (with no pointer members), it's straightforward to share the table array's memory across processes using either mmap or shared memory.

I'll try to implement Serialize() and Unserialize() with mmap first and, if that works, I'll post the patch here. Please help me review it then.
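
For the shared-memory variant, only the way the region is obtained changes: it comes from shm_open() instead of a regular file. A minimal, hypothetical sketch (the publish_table()/open_table() names and the use of POSIX shared memory are my own choices, not anything in sparsehash):

```cpp
// Hypothetical sketch: put the POD bucket array into POSIX shared memory so
// several processes can map the same region.  One process creates and fills
// it; the others open it read-only.  (On older Linux, link with -lrt.)
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstring>
#include <utility>

typedef std::pair<int, int> Bucket;   // stands in for a POD key/value pair

// Writer: create the region and copy the table's bucket array into it.
bool publish_table(const char* name, const Bucket* table, std::size_t num_buckets) {
  std::size_t bytes = num_buckets * sizeof(Bucket);
  int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
  if (fd < 0 || ftruncate(fd, bytes) != 0) return false;
  void* addr = mmap(NULL, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  close(fd);
  if (addr == MAP_FAILED) return false;
  std::memcpy(addr, table, bytes);
  munmap(addr, bytes);
  return true;
}

// Readers: map the same region read-only; all of them share the pages.
const Bucket* open_table(const char* name, std::size_t num_buckets) {
  int fd = shm_open(name, O_RDONLY, 0);
  if (fd < 0) return NULL;
  void* addr = mmap(NULL, num_buckets * sizeof(Bucket),
                    PROT_READ, MAP_SHARED, fd, 0);
  close(fd);
  return addr == MAP_FAILED ? NULL : static_cast<const Bucket*>(addr);
}
```

As with the file-backed version, this only covers a table that is built once and then read.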

Original comment by huas...@gmail.com on 18 Jan 2012 at 4:16

GoogleCodeExporter commented 9 years ago
Hi,
Have you considered how to handle concurrent writes, or does this only support reads?

Original comment by baibaic...@gmail.com on 18 Jan 2012 at 5:43

GoogleCodeExporter commented 9 years ago
Why does it go wrong when iter is larger than 10000000?
My English is bad; I hope someone can help me.

Original comment by pw...@sina.com on 13 Jul 2012 at 9:44

Attachments: