RJ / www.metabrew.com

Static site generation for my blog

On bulk loading data into Mnesia #5

Open RJ opened 3 years ago

RJ commented 3 years ago

Written on 10/13/2008 20:08:49

URL: http://www.metabrew.com/article/on-bulk-loading-data-into-mnesia

RJ commented 3 years ago

Comment written by Jacob Perkins on 10/14/2008 01:46:36

You can tweak some settings to get rid of most of those overload messages. I used to get them all the time before I set the following system-level config options:

{mnesia, [{dc_dump_limit, 40}, {dump_log_write_threshold, 50000}]}

Increasing the dump_log_write_threshold means the transaction log is dumped less often, and increasing the dc_dump_limit means the disk table is dumped from the log more often.
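
These are mnesia application environment parameters, so a minimal sketch of where they might live (assuming a standard OTP setup with a sys.config) is:

    %% sys.config (sketch): mnesia application environment
    [
     {mnesia, [{dc_dump_limit, 40},
               {dump_log_write_threshold, 50000}]}
    ].

They can also be passed on the command line before Mnesia starts, e.g. erl -mnesia dc_dump_limit 40 -mnesia dump_log_write_threshold 50000.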

YMMV,
Jacob

RJ commented 3 years ago

Comment written by fred flintstone on 04/02/2009 17:36:24

Maybe it's something to do with table locking (LockKind)? The mnesia reference book mentions this somewhere.
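
For reference, LockKind is the third argument to functions like mnesia:write/3; a minimal sketch of switching it to sticky_write (table and record here are made up):

    %% sticky_write keeps the write lock on the writing node between
    %% transactions, which reduces lock overhead when a single node
    %% does all the writing.
    mnesia:transaction(fun() ->
        mnesia:write(my_table, {my_table, some_key, some_value}, sticky_write)
    end).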

RJ commented 3 years ago

Comment written by Per Melin on 05/12/2009 23:38:35

Are you doing all this from the shell?

Handling large datasets in the shell is known to be bad for performance.

I tried the ets trickery with 1.5 million records on my MacBook Pro. It took me 30 seconds from the shell and 19 seconds when I put it in a module (not including the conversion to disc_copies).
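
For anyone trying to reproduce this, a minimal sketch of the module version of the ets trick (table, record and field names are invented, and it assumes Mnesia is already started with a disc schema): create the table as ram_copies, write through mnesia:ets/1 so there is no transaction or logging overhead, then convert to disc_copies at the end.

    -module(bulk_load).
    -export([load/1]).

    -record(item, {key, value}).

    load(Pairs) ->
        {atomic, ok} = mnesia:create_table(item,
            [{ram_copies, [node()]},
             {attributes, record_info(fields, item)}]),
        %% Raw writes straight into the local ets table, no transactions.
        ok = mnesia:ets(fun() ->
                 lists:foreach(fun({K, V}) ->
                                   mnesia:write(#item{key = K, value = V})
                               end, Pairs)
             end),
        %% Only now pay the cost of turning it into a disc table.
        {atomic, ok} = mnesia:change_table_copy_type(item, node(), disc_copies).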

RJ commented 3 years ago

Comment written by Guest on 09/09/2010 20:19:45

Any followup?