dhasenan / damask

MUD server in D, with procedural generation
Apache License 2.0

User persistence #3

Closed dhasenan closed 8 years ago

dhasenan commented 8 years ago

I want to be able to create a character, restart the MUD, and then log in as the character I previously created.

dhasenan commented 8 years ago

Option 1: flat files, one per user. The flat file contains the user's entire state.

This is convenient for debugging. For backups and maintenance it means a lot of files to deal with -- though at least they're all confined to one folder.
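For a sense of the shape of this, here's a minimal sketch of the one-file-per-user layout. The `User` type, the helper names, and the `users/` directory are made up for illustration, and it assumes the component state is already serialized to JSON:

```d
// Minimal sketch: one JSON file per user under a users/ directory.
// `User` and the helper names are hypothetical, for illustration only.
import std.file : mkdirRecurse, readText, write;
import std.json : JSONValue, parseJSON;
import std.path : buildPath;

struct User
{
    string name;
    JSONValue components;  // the user's serialized entity/component state
}

enum userDir = "users";

void saveUser(const User u)
{
    mkdirRecurse(userDir);
    write(buildPath(userDir, u.name ~ ".json"), u.components.toString());
}

User loadUser(string name)
{
    return User(name, parseJSON(readText(buildPath(userDir, name ~ ".json"))));
}
```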

dhasenan commented 8 years ago

Option 2: use a SQL database.

SQL is battle-hardened. I can maintain two tables: one containing username, password hash, and entity id; the other containing components indexed by entity id. Then logging in is a simple matter of querying the users table, validating credentials, and running one scan on the components table.

The components table would also have a type field with the class name, making it easy to deserialize.

SQLite has the fewest dependencies. Postgres has a couple of bindings and JSON support in case I need to query against entity data -- though SQLite has similar JSON extensions.
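As a sketch of that layout -- column names, the JSON-blob representation for component data, and the example `Health` component are assumptions, not decisions -- the tables and queries might look like this:

```d
// Sketch of the two-table layout, as SQL kept in D string constants so it
// can be handed to whatever SQLite binding ends up getting used.
enum schema = `
CREATE TABLE users (
    name          TEXT PRIMARY KEY,
    password_hash TEXT NOT NULL,
    entity_id     INTEGER NOT NULL
);

CREATE TABLE components (
    entity_id INTEGER NOT NULL,
    type      TEXT NOT NULL,   -- class name, used to pick the deserializer
    data      TEXT NOT NULL,   -- JSON blob of the component's fields
    PRIMARY KEY (entity_id, type)
);
`;

// Logging in is one indexed lookup plus one scan of that entity's components.
enum loginQuery      = `SELECT password_hash, entity_id FROM users WHERE name = ?;`;
enum componentsQuery = `SELECT type, data FROM components WHERE entity_id = ?;`;

// If SQLite is built with the JSON1 extension, the blobs stay queryable,
// much like Postgres' JSON operators (hypothetical Health component shown):
enum deadThingsQuery = `
    SELECT entity_id FROM components
    WHERE type = 'Health' AND json_extract(data, '$.current') <= 0;`;
```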

dhasenan commented 8 years ago

Option 3: use a document database, like Mongo or Couch.

This makes it simpler to work with JSON, since these systems store and query JSON documents natively.

The downside is that it's much harder for most people to install a database server in addition to the MUD -- think of a random Windows user looking at Mongo's install instructions. Couch is a lot easier to install on Windows.

dhasenan commented 8 years ago

Option 4: use a key/value store. Simpler but no querying. Might result in multiple files.
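Roughly what that would look like -- an associative array stands in for the real store (leveldb, lmdb, whatever), and the keys and values are illustrative only:

```d
// Rough sketch of a key/value layout: one key per component, namespaced by
// entity id, plus one key per user. All keys and payloads are made up.
import std.algorithm : startsWith;
import std.stdio : writeln;

void main()
{
    string[string] store;  // stand-in for the actual key/value store

    store["entity/42/Position"] = `{"room": 1031}`;
    store["entity/42/Health"]   = `{"current": 10, "max": 10}`;
    store["user/dhasenan"]      = `{"entity_id": 42, "password_hash": "..."}`;

    // No query language: loading an entity means a prefix scan over the keys.
    foreach (key, value; store)
    {
        if (key.startsWith("entity/42/"))
            writeln(key, " -> ", value);  // deserialize based on the key's suffix
    }
}
```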

dhasenan commented 8 years ago

Tried leveldb a while ago and didn't like it. I don't want to hand-wrap JSON everywhere. SQLite's JSON support is as good as Postgres's. I feel out of control of database performance with Mongo. Flat files are mostly inferior to SQLite.

That leaves CouchDB and SQLite.

CouchDB doesn't yet have good bindings -- the ones that exist aren't entirely crud, but they aren't great either. It's also another thing to install. I want as turnkey an operation as possible.

There are mixed reports about SQLite's handling of large tables.

dhasenan commented 8 years ago

How much data am I handling?

I need to persist up to the full world. I want to support 1 million rooms, with an average of 1 NPC / 3 rooms, 10 items/NPC. Each room needs unique text -- about 1KB. Each room needs doodads -- about 500 bytes. Each NPC needs a description, but we can probably use fixed templates and drop it down to like 100 bytes on average. Same with items. So we're talking a couple gigs right here.

We'll likely have a bunch of state data for each, but that won't more than double it. So maybe 5GB.

Players -- each one accounts for a decent chunk of data, maybe as much as 50KB. To match rooms and NPCs, we're talking a hundred thousand registered users.
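Back-of-the-envelope check of those numbers (rough estimates from this thread, not measurements):

```d
// Totals implied by the figures above.
enum long rooms = 1_000_000;
enum long npcs  = rooms / 3;        // 1 NPC per 3 rooms
enum long items = npcs * 10;        // 10 items per NPC

enum long contentBytes =
      rooms * (1_000 + 500)         // ~1KB unique text + ~500B doodads per room
    + npcs  * 100                   // templated NPC descriptions
    + items * 100;                  // templated item descriptions
// ~1.9 GB of content; state data roughly doubles it, so call it ~5 GB.

enum long playerBytes = 100_000 * 50_000L;
// ~5 GB more if 100k registered users really average 50KB apiece.
```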

Reports claim that SQLite tends to slow down once a table reaches ~7GB and has indexes. We have three obvious options: shard by zone, save less state, or index less. As a practical matter, I'm only going to index by entity id and maybe type, and that's the primary key.

I think SQLite will be fine.

dhasenan commented 8 years ago

Fixed with the merge of the sqlite branch.