CodeHowlerMonkey / hitfactor.info

HitFactor.info -- Howler Monkey Classifiers
https://www.hitfactor.info

Optimize In-Memory JSON Array/Objects · Issue #33

Closed · CodeHowlerMonkey closed this 5 months ago

CodeHowlerMonkey commented 7 months ago

Currently everything is loaded and pre-hydrated in memory on server start, often duplicating the same data multiple times for easier access (e.g. classifier runs indexed both by shooter and by division/classifier).
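
To illustrate the duplication, here is a minimal sketch only (file path and field names are hypothetical, not the actual model code): the same run objects end up referenced from several indexes, and each index is another large structure held for the lifetime of the process.

```js
// Minimal sketch of the current pattern (hypothetical names): parse everything
// once on startup, then build several indexes over the same objects for fast lookups.
const fs = require("fs");

const runs = JSON.parse(fs.readFileSync("data/classifier_runs.json", "utf8"));

const runsByShooter = new Map();
const runsByDivisionClassifier = new Map();

for (const run of runs) {
  // Index by shooter.
  const byShooter = runsByShooter.get(run.memberNumber) || [];
  byShooter.push(run);
  runsByShooter.set(run.memberNumber, byShooter);

  // Index by division + classifier.
  const key = `${run.division}:${run.classifier}`;
  const byKey = runsByDivisionClassifier.get(key) || [];
  byKey.push(run);
  runsByDivisionClassifier.set(key, byKey);
}
```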

This approach causes two main problems:

  1. Server requires quite a lot of RAM to run
  2. It takes a noticeable amount of time to hydrate the data (multiple minutes on an M1 Pro MacBook), which is a serious productivity loss when constantly changing the API and reloading the server.

We need to find a way to optimize both, but the second one is more important, at least until we hit really high RAM usage that forces us to change hosting (right now the app idles slightly above 4 GB with legacy scores, and around 2 GB without).

Some ideas for how to achieve this:

  1. Sequelize + SQLite in memory, with a proper DB table design and basically a rewrite of the whole model layer. HUGE EFFORT, HIGH CHANCE OF REGRESSIONS, if you can do this -- you're basically a legit senior backend dev.
  2. RxDB: has an in-memory storage, is basically NoSQL, but has schemas; maybe it can be hacked together moderately easy-ish. Upper-middle dev level.
  3. Just make the GB+ JSON files work: JSONStream, await-sync, etc. Anything that will simply save the data to disk and cache it there (see the sketch after this list). Won't help with memory, probably will make it worse, but should make development faster.
  4. Any other ideas?
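
For idea 3, a minimal sketch of the streaming approach, assuming the data lives as a single top-level JSON array in a file (the path and callback are hypothetical): JSONStream emits one element at a time, so the raw multi-GB file never has to be parsed in one shot.

```js
// Hypothetical sketch of idea 3: stream a huge JSON array from disk
// element by element instead of JSON.parse-ing the whole file at once.
const fs = require("fs");
const JSONStream = require("JSONStream");

function streamRuns(path, onRun) {
  return new Promise((resolve, reject) => {
    fs.createReadStream(path)
      .pipe(JSONStream.parse("*")) // emits each top-level array element
      .on("data", onRun)
      .on("end", resolve)
      .on("error", reject);
  });
}

// Usage: build only what a given endpoint actually needs.
streamRuns("data/classifier_runs.json", (run) => {
  // aggregate or index `run` here; the full raw file is never held in memory
});
```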
CodeHowlerMonkey commented 5 months ago

Not an issue anymore with Mongo. Koyeb was downgraded to a 512 MB instance, and the app sits pretty comfortably below 256 MB there:

[screenshot: memory usage on the Koyeb instance]
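
For context, roughly what the Mongo-backed approach replaces the in-memory indexes with (a sketch only; the connection string, database, collection, and field names are hypothetical, not the project's actual schema): each request queries just the documents it needs instead of pre-hydrating everything at startup.

```js
// Hypothetical sketch: query runs on demand from MongoDB instead of
// holding pre-hydrated indexes in RAM.
const { MongoClient } = require("mongodb");

const client = new MongoClient(process.env.MONGO_URL || "mongodb://localhost:27017");

async function runsByShooter(memberNumber) {
  const runs = client.db("hitfactor").collection("classifierRuns");
  // With an index on { memberNumber: 1 } this stays fast with near-zero resident memory.
  return runs.find({ memberNumber }).toArray();
}

async function main() {
  await client.connect();
  console.log(await runsByShooter("A12345"));
  await client.close();
}

main().catch(console.error);
```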