Closed. mayankgaur019 closed this issue 6 years ago.
Yes, call unqlite_commit() periodically, say after every 100000 insertions, so that the working buffers can be released.
Thanks for replying. I am reading data from a file that contains millions of keys. Do I need to maintain a counter for lines, so that once it reaches a maximum value, say 100000, I call unqlite_commit()? Is that correct? If not, please tell me the exact way.
Yes, a simple loop counter should be fine. Once it reaches the threshold value (i.e. 100000 in our case), commit everything to disk and reset the counter to zero, as in the sketch below.
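Here is a minimal sketch of that loop, assuming the keys and values are generated in memory (the database name big.db, the record count, and the snprintf-built keys are placeholders; adapt them to your file-reading code):

```c
#include <stdio.h>
#include <string.h>
#include "unqlite.h"

int main(void)
{
    unqlite *pDb;
    int rc = unqlite_open(&pDb, "big.db", UNQLITE_OPEN_CREATE);
    if (rc != UNQLITE_OK) {
        return 1;
    }

    const long total = 1000000;    /* total records to insert (placeholder) */
    const long threshold = 100000; /* commit every 100000 insertions */
    long counter = 0;
    char key[64], val[64];

    for (long i = 0; i < total; i++) {
        snprintf(key, sizeof(key), "key_%ld", i);
        snprintf(val, sizeof(val), "value_%ld", i);

        rc = unqlite_kv_store(pDb, key, -1, val, (unqlite_int64)strlen(val));
        if (rc != UNQLITE_OK) {
            break;
        }

        /* Flush working buffers to disk every `threshold` records,
         * then reset the counter to zero. */
        if (++counter >= threshold) {
            rc = unqlite_commit(pDb);
            if (rc != UNQLITE_OK) {
                break;
            }
            counter = 0;
        }
    }

    /* Closing the database commits any remaining records. */
    unqlite_close(pDb);
    return 0;
}
```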
I did the same thing. I have a file with more than a million keys, but it is still taking approximately 400 MB of memory.
400 MB seems about right for a million keys. If you disable transactions, memory usage should decrease; see the sketch below.
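As a rough sketch of what disabling transactions could look like: I believe unqlite_open() accepts an UNQLITE_OPEN_OMIT_JOURNALING flag for this, but please verify it against the unqlite.h shipped with your build before relying on it.

```c
#include <stdio.h>
#include "unqlite.h"

int main(void)
{
    unqlite *pDb;
    /* Assumption: UNQLITE_OPEN_OMIT_JOURNALING is defined in your unqlite.h.
     * Opening with it skips the journal/transaction layer, which lowers
     * memory usage but removes rollback safety if the process crashes
     * mid-insert. */
    int rc = unqlite_open(&pDb, "big.db",
                          UNQLITE_OPEN_CREATE | UNQLITE_OPEN_OMIT_JOURNALING);
    if (rc != UNQLITE_OK) {
        fprintf(stderr, "unqlite_open failed: %d\n", rc);
        return 1;
    }

    /* ... insert records as in the earlier sketch ... */

    unqlite_close(pDb);
    return 0;
}
```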
I am new to UnQLite. I don't know the exact way to store millions of records in one go. When I try to insert millions of records, it takes too much memory until I call "unqlite_close". Any help or sample code for inserting millions of records would be helpful.