warp-contracts / warp-contracts-lmdb

An LMDB-based cache implementation for the Warp SDK
MIT License

cache size too big #15

Open kevin-zhangzh opened 1 year ago

kevin-zhangzh commented 1 year ago

When I upgraded from leveldb to lmdb, the cache file became very large, about 9 GB (file: data.mdb). What could be the reason for this?

kevin-zhangzh commented 1 year ago

Here is my test code. The contract is bAR; ./redstone/state/data.mdb gradually grows larger and larger.

import { WarpFactory } from 'warp-contracts';
import { LmdbCache } from 'warp-contracts-lmdb';
import path from 'path';

const smartweave = WarpFactory
    .forMainnet()
    .useStateCache(new LmdbCache({ inMemory: false, dbLocation: path.join(__dirname, 'redstone/state') }))
    .useContractCache(new LmdbCache({ inMemory: false, dbLocation: path.join(__dirname, 'redstone/contracts') }));

const contractTxId = 'VFr3Bk-uM-motpNNkkFg4lNW1BMmSfzqsVO551Ho4hA'; //bAR
async function updateState() {

  try {
    const result = await smartweave.contract(contractTxId)
        .setEvaluationOptions({
          allowBigInt: true,
          allowUnsafeClient: true,
          internalWrites:false
        })
        .readState();
    console.log('res: ', 'get success');
  } catch (error) {
    console.log('readState error:', error, 'contractId:', contractTxId);
  }
}

// re-evaluate the state every minute
const delay = 60000; // 1 minute
updateState().then(() => {
  setTimeout(function run() {
    updateState().then(() => {
      setTimeout(run, delay);
    });
  }, delay);
});
ppedziwiatr commented 1 year ago

@janekolszak, could you please guide us through the process of

  1. removing the entries
  2. rewriting the db
  3. using the new settings (max/min entries)

I believe this should be added to the README.
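The pruning idea behind the max/min-entries settings can be sketched independently of LMDB itself: cache keys pair a contract id with a sort key, and compaction keeps only the newest N entries per contract. A minimal sketch under those assumptions — the `contractTxId|sortKey` key format, the `pruneCache` name, and the lexicographic-equals-chronological ordering are illustrative assumptions, not the library's actual API:

```typescript
// Keep only the newest `maxEntries` cache keys per contract.
// Keys are assumed to look like `${contractTxId}|${sortKey}`, where
// sort keys order lexicographically from oldest to newest.
function pruneCache(keys: string[], maxEntries: number): string[] {
  // Group keys by contract id.
  const byContract = new Map<string, string[]>();
  for (const key of keys) {
    const contractId = key.split('|')[0];
    const list = byContract.get(contractId) ?? [];
    list.push(key);
    byContract.set(contractId, list);
  }
  // For each contract, keep only the last `maxEntries` keys.
  const kept: string[] = [];
  for (const list of byContract.values()) {
    list.sort();
    kept.push(...list.slice(-maxEntries));
  }
  return kept;
}
```

"Rewriting the db" would then amount to copying only the kept entries into a fresh database directory and swapping it in, since LMDB does not shrink data.mdb when entries are deleted in place.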

janekolszak commented 1 year ago

Hi @kevin-zhangzh! So, as @ppedziwiatr said, there are a couple of things that you can do:

balthazar commented 1 year ago

Experiencing the same issue: a single run of the state evaluation yields about 20 MB in db size. Re-evaluating the state (after waiting a bit) essentially seems to duplicate that data, even with min & max entries set to 1. The rewrite script does work in bringing the size down to where it should be, but it would be great if that could happen without having to run such a process.

@ppedziwiatr mentioned possibly replacing lmdb with better-sqlite3, so I'll keep an eye out for that.

ppedziwiatr commented 1 year ago

We're (i.e. @Tadeuchi) working on the better-sqlite3 implementation. I guess fighting with lmdb makes no sense. Will let you know when it's ready!

nicolasembleton commented 12 months ago

@ppedziwiatr is this still being developed? I'd like to try this feature but am wondering whether it is going to survive.