kevin-zhangzh opened 1 year ago
Here is my test code. The contract is bAR; `./redstone/state/data.mdb` gradually grows larger and larger:
```ts
import { WarpFactory } from 'warp-contracts';
import path from 'path';
import { LmdbCache } from 'warp-contracts-lmdb';

const smartweave = WarpFactory
  .forMainnet()
  .useStateCache(new LmdbCache({ inMemory: false, dbLocation: path.join(__dirname, 'redstone/state') }))
  .useContractCache(new LmdbCache({ inMemory: false, dbLocation: path.join(__dirname, 'redstone/contracts') }));

const contractTxId = 'VFr3Bk-uM-motpNNkkFg4lNW1BMmSfzqsVO551Ho4hA'; // bAR

async function updateState() {
  try {
    await smartweave.contract(contractTxId)
      .setEvaluationOptions({
        allowBigInt: true,
        allowUnsafeClient: true,
        internalWrites: false
      })
      .readState();
    console.log('res: ', 'get success');
  } catch (error) {
    console.log('readState error:', error, 'contractId:', contractTxId);
  }
}

const delay = 60000; // 1 minute
updateState().then(() => {
  setTimeout(function run() {
    updateState().then(() => {
      setTimeout(run, delay);
    });
  }, delay);
});
```
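As an aside, the recursive `setTimeout` above (rather than `setInterval`) ensures a new `readState` never starts before the previous evaluation has finished. The same pattern in a self-contained form, with the contract call replaced by an arbitrary async task (`pollForever` is an illustrative helper, not part of warp-contracts):

```typescript
// Repeatedly run an async task, waiting `delayMs` AFTER each completion,
// so a slow run can never overlap the next one (unlike setInterval).
// `maxRuns` exists only so this sketch can terminate; a long-running
// process would leave it at Infinity.
async function pollForever(
  task: () => Promise<void>,
  delayMs: number,
  maxRuns: number = Infinity
): Promise<number> {
  let runs = 0;
  while (runs < maxRuns) {
    try {
      await task();
    } catch (error) {
      console.error('task failed:', error);
    }
    runs += 1;
    if (runs < maxRuns) {
      await new Promise<void>((resolve) => setTimeout(resolve, delayMs));
    }
  }
  return runs;
}

// Usage: pollForever(() => updateState(), 60000);
```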
@janekolszak, could you please guide us through the process of shrinking the cache?
I believe this should be added to the README.
Hi @kevin-zhangzh! As @ppedziwiatr said, there are a couple of things you can do:

- `purge()`: this removes old interactions tied to the contract. The underlying cache file won't get smaller, though, because of how LMDB works.
- Then run the script `./tools/rewrite.sh --input --output`: this rewrites (duplicates) the cache file. Afterwards you can replace the old cache with the rewritten copy, and the underlying file will be much smaller.

Experiencing the same issue: a single run of the state evaluation yields about 20 MB in DB size. Re-evaluating the state (after waiting a bit) essentially seems to duplicate that data, even with min and max entries set to 1. The rewrite script does work in bringing the size down to where it should be, but it would be great if that could happen without having to run such a process.
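For context, LMDB never returns freed pages to the filesystem; a purge only marks space for reuse, so shrinking `data.mdb` on disk always requires a compacting copy into a fresh file. If the LMDB command-line tools are installed, the stock `mdb_copy` utility with its `-c` (compact) flag achieves the same effect as the rewrite script; the paths below are illustrative, and the process using the cache should be stopped first:

```shell
# Compacting copy: rewrites the environment into a fresh data.mdb,
# dropping free pages along the way. Paths are examples only.
mkdir -p ./redstone/state-compacted
mdb_copy -c ./redstone/state ./redstone/state-compacted

# Swap the compacted file in place of the bloated one.
mv ./redstone/state-compacted/data.mdb ./redstone/state/data.mdb
```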
@ppedziwiatr mentioned possibly using better-sqlite3 to replace LMDB, so I'll keep an eye out for that.
We're (i.e. @Tadeuchi) working on the better-sqlite3 implementation. I guess fighting with LMDB makes no sense. We'll let you know when it's ready!
@ppedziwiatr, is this still being developed? I'd like to try this feature, but I'm wondering whether it is going to survive.
When I upgraded from leveldb to lmdb, the cache file became very large: 9 GB (file: `data.mdb`). What could be the reason for this?