apollo-rsps / apollo

An open-source Java game server suite designed to be lightweight, fast, and secure.
ISC License

A PlayerSerializer that uses a database #175

Open Major- opened 8 years ago

Major- commented 8 years ago

We want a PlayerSerializer that uses a database (postgres would be good, but making it implementation-independent would be nicer, if feasible). This is a must for good performance, and because the BinarySerializer is really just a hack left over from the original release.

Promises commented 7 years ago

Do you want a dedicated settings file?

garyttierney commented 7 years ago

I think we do want some kind of centralized configuration file to hold all server settings.

Promises commented 7 years ago

I have been doing some work on a Postgres serializer (it stores its data in data/sql.xml), but I think serializers should be plugins. That would make more sense for server owners who want forum integration.

Do you guys agree, or am I not seeing something?

garyttierney commented 7 years ago

@Major-

Thoughts on moving all data/*.xml files to one namespaced server.toml file?

@Promises

I dislike this idea. Plugins are meant for game logic, not arbitrary server extensions. We already have a means of configuring serializers: put an implementation on the classpath and edit the config file. What does forum integration require that isn't solved by that? Granted, at the moment we have no way to pass serializer-specific configuration to the implementation; that is an issue which should be solved.

On the topic of your Postgres serializer: how are you actually querying the database? I.e., are you writing raw SQL queries or using an ORM/JPA? I'm not sure what we're going to need here, and we'll need to think about the schema to allow plugins to store attribute data.

Major- commented 7 years ago

Thoughts on moving all data/*.xml files to one namespaced server.toml file?

ya

Promises commented 7 years ago

@garyttierney Following the documentation on the Postgres site, I'm using plain SQL commands.

garyttierney commented 7 years ago

@Major- thoughts on that? Do we want this Postgres-specific, or do we want a generic JDBC serializer?

Major- commented 7 years ago

@Promises

Following the documentation on the Postgres site, I'm using plain SQL commands.

Don't forget to do this safely (i.e. beware SQL injection) as an attacker could craft e.g. a malicious username.
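
For example, a parameterized PreparedStatement keeps the username out of the SQL text entirely. A quick sketch (the connection and table/column names are just placeholders):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Unsafe: a username like "x' OR '1'='1" would change the query's meaning.
// String sql = "SELECT password FROM users WHERE username = '" + username + "'";

// Safe: the driver sends the username as a bound value, never as SQL text.
static String loadPassword(Connection connection, String username) throws SQLException {
    try (PreparedStatement stmt = connection.prepareStatement(
            "SELECT password FROM users WHERE username = ?")) {
        stmt.setString(1, username);
        try (ResultSet rs = stmt.executeQuery()) {
            return rs.next() ? rs.getString("password") : null;
        }
    }
}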

Do we want this Postgres-specific, or do we want a generic JDBC serializer?

Hm. A db-agnostic implementation is nice for obvious reasons, but a postgres one would allow us to use postgres-specific features that aren't in the SQL standard, which might allow for better performance. Maybe the eventual goal would be both (assuming there are useful non-standard features in postgres).

Regardless of this I'm not sure if just writing raw SQL to a postgres connection is the way to go.

Promises commented 7 years ago

@Major- I completely agree with you, and as soon as I get this fully working, I will find a better way to speak to postgres.

Promises commented 7 years ago

A connection to the Postgres server should be made on startup; it wouldn't be smart to open a new connection on every save/load.
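
Something like this at startup, for instance (a sketch; the JDBC URL and credentials are placeholders, and a connection pool such as HikariCP would be the more robust option if saves can run concurrently):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Open one connection when the server boots and reuse it for every
// save/load, instead of paying the connection-setup cost per request.
Connection connection = DriverManager.getConnection(
    "jdbc:postgresql://localhost:5432/apollo", "apollo", "secret");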

Also, here are the tables I am making:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

private void createTables(Connection c) throws SQLException {
    // Postgres folds unquoted identifiers to lower case, so the tables created
    // below as USERS/DETAILS/ITEMS/SETTINGS are matched by their folded names.
    DatabaseMetaData md = c.getMetaData();
    Statement stmt;
    String sql;

    ResultSet rs = md.getTables(null, null, "users", null);
    if (!rs.next()) {
        stmt = c.createStatement();
        sql = "CREATE TABLE USERS" +
            "(USERNAME CHAR(50) PRIMARY KEY NOT NULL," +
            "PASSWORD TEXT NOT NULL)";
        stmt.executeUpdate(sql);
        stmt.close();
        System.out.println("Table USERS Created");
    } else {
        System.out.println("Table USERS Exists");
    }
    rs.close();

    rs = md.getTables(null, null, "details", null);
    if (!rs.next()) {
        stmt = c.createStatement();
        sql = "CREATE TABLE DETAILS" +
            "(USERNAME CHAR(50) PRIMARY KEY NOT NULL," +
            "MEMBERS INT NOT NULL," +
            "POS_X INT NOT NULL," +
            "POS_Y INT NOT NULL," +
            "POS_Z INT NOT NULL," +
            "SKILLS TEXT NOT NULL," +
            "APPEARANCE TEXT NOT NULL," +
            "ENERGY INT NOT NULL," +
            "PRAYER_POINTS INT NOT NULL," +
            "RIGHTS INT NOT NULL)";
        stmt.executeUpdate(sql);
        stmt.close();
        System.out.println("Table DETAILS Created");
    } else {
        System.out.println("Table DETAILS Exists");
    }
    rs.close();

    rs = md.getTables(null, null, "items", null);
    if (!rs.next()) {
        stmt = c.createStatement();
        sql = "CREATE TABLE ITEMS" +
            "(USERNAME CHAR(50) PRIMARY KEY NOT NULL," +
            "EQUIPMENT TEXT NOT NULL," +
            "INVENTORY TEXT NOT NULL," +
            "BANK TEXT NOT NULL)";
        stmt.executeUpdate(sql);
        stmt.close();
        System.out.println("Table ITEMS Created");
    } else {
        System.out.println("Table ITEMS Exists");
    }
    rs.close();

    rs = md.getTables(null, null, "settings", null);
    if (!rs.next()) {
        stmt = c.createStatement();
        sql = "CREATE TABLE SETTINGS" +
            "(USERNAME CHAR(50) PRIMARY KEY NOT NULL," +
            "AUTO_RETALIATE INT NOT NULL," +
            "FIGHT_MODE INT NOT NULL," +
            "MOUSE_BUTTONS INT NOT NULL," +
            "CHAT_EFFECTS INT NOT NULL," +
            "SPLIT_PM INT NOT NULL," +
            "ACCEPT_AID INT NOT NULL," +
            "RUN_TOGGLED INT NOT NULL," +
            "SCREEN_BRIGHTNESS INT NOT NULL," +
            "MUSIC_VOLUME INT NOT NULL," +
            "EFFECT_VOLUME INT NOT NULL)";
        stmt.executeUpdate(sql);
        stmt.close();
        System.out.println("Table SETTINGS Created");
    } else {
        System.out.println("Table SETTINGS Exists");
    }
    rs.close();
}
cubeee commented 7 years ago

This doesn't look very flexible; JPA with proper migrations would be easier and safer, even though you might lose some pgsql-specific features.
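
For illustration, a minimal JPA entity for the core player row could look something like this (an untested sketch; the table and column names just mirror the discussion here):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "players")
public class PlayerEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(unique = true, nullable = false)
    private String username;

    @Column(nullable = false)
    private String password;

    @Column(name = "privilege_level", nullable = false)
    private int privilegeLevel;

    // getters/setters and the related mappings (skills, items, ...) omitted
}

Flyway or Liquibase could cover the "proper migrations" part.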

Promises commented 7 years ago

Moving it to JPA.

garyttierney commented 7 years ago

I agree with @cubeee; the schema presented here isn't very flexible. That said, I don't think that JPA is 100% the way to go. Something a bit more suitable in terms of schema, in my opinion, would be:

CREATE TYPE inventory_type AS ENUM ('equipment', 'inventory', 'bank');
CREATE TYPE sex_type AS ENUM ('male', 'female');

CREATE TABLE players (
  id SERIAL PRIMARY KEY,
  username VARCHAR(255),
  password VARCHAR(255),
  privilege_level INTEGER,
  CONSTRAINT uniq_player_username UNIQUE(username)
);

CREATE TABLE player_appearances (
  id SERIAL PRIMARY KEY,
  player_id INTEGER references players(id),
  sex sex_type,
  styles INTEGER ARRAY[7],
  colors INTEGER ARRAY[5],
  CONSTRAINT uniq_player_appearance UNIQUE (player_id)
);

CREATE TABLE player_coordinates (
  id SERIAL PRIMARY KEY,
  player_id INTEGER references players(id),
  x INTEGER,
  y INTEGER,
  z INTEGER,
  CONSTRAINT uniq_player_coords UNIQUE (player_id)
);

CREATE TABLE player_inventories (
  id SERIAL PRIMARY KEY,
  player_id INTEGER references players(id),
  type inventory_type,
  CONSTRAINT uniq_inventory_type UNIQUE(player_id, type)
);

CREATE TABLE player_inventory_items (
  id SERIAL PRIMARY KEY,
  inventory_id INTEGER references player_inventories(id),
  slot INTEGER,
  item_id INTEGER,
  quantity INTEGER,
  CONSTRAINT uniq_inventory_slot UNIQUE(inventory_id, slot)  
);

CREATE TABLE player_skills (
  id SERIAL PRIMARY KEY,
  player_id INTEGER references players(id),
  skill_id INTEGER,
  experience FLOAT,
  current_level INTEGER,
  CONSTRAINT uniq_player_skill UNIQUE (player_id, skill_id)
);

BEGIN TRANSACTION;
INSERT INTO players (id, username, password, privilege_level) VALUES (1, 'test', 'test', 1);
INSERT INTO player_coordinates (player_id, x, y, z) VALUES (1, 3200, 3200, 0);
INSERT INTO player_appearances (player_id, sex, styles, colors) VALUES (
  1, 'male', '{0, 0, 0, 0, 0, 0, 0}', '{0, 0, 0, 0, 0}'
);
INSERT INTO player_skills (player_id, skill_id, experience, current_level) VALUES(1, 0, 50, 1);
INSERT INTO player_inventories (id, player_id, type) VALUES (1, 1, 'inventory');
INSERT INTO player_inventory_items (inventory_id, slot, item_id, quantity) VALUES (1, 0, 995, 5000);
INSERT INTO player_inventories (id, player_id, type) VALUES (2, 1, 'bank');
INSERT INTO player_inventory_items (inventory_id, slot, item_id, quantity) VALUES (2, 0, 995, 30000);
COMMIT;
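
Loading a player under this layout is then a handful of straightforward lookups, e.g. pulling the skills (an untested sketch):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

static void loadSkills(Connection connection, int playerId) throws SQLException {
    try (PreparedStatement stmt = connection.prepareStatement(
            "SELECT skill_id, experience, current_level " +
            "FROM player_skills WHERE player_id = ?")) {
        stmt.setInt(1, playerId);
        try (ResultSet rs = stmt.executeQuery()) {
            while (rs.next()) {
                // Populate the player's SkillSet from each row.
                int skill = rs.getInt("skill_id");
                double experience = rs.getDouble("experience");
                int level = rs.getInt("current_level");
            }
        }
    }
}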

Attributes are a hard problem to tackle. We can use an EAV style schema to deal with those.

Joshua-F commented 7 years ago

@garyttierney I use MySQL, so I'm not too sure how different it would be with Postgres, but I personally have a table that stores every attribute (variables, as I call them), with the key being a string and the value just a BLOB. I only support saving the main data types (int, long, string, etc.) to simplify things a bit: an int is converted to 4 bytes, and the 4-byte array is converted back on login. I'd be curious how this would be implemented here, to see if it would be an improvement over how I do things currently.
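
The int round-trip is just fixed-width big-endian packing, something like:

import java.nio.ByteBuffer;

// Pack an int into the 4 bytes stored in the BLOB column...
byte[] packed = ByteBuffer.allocate(4).putInt(2147000000).array();

// ...and unpack it again on login (ByteBuffer is big-endian by default).
int value = ByteBuffer.wrap(packed).getInt();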

My original idea, which from briefly looking up what EAV is sounds similar, was also to use a string to store the value. I decided against that since storing a number like 2147000000 as a string takes 10 bytes, compared to only 4 as raw bytes.

I will point out that even though I use BLOB as my storage type, I'm still able to run queries against the values. For example, if I wanted to store boss kill counts and have highscores for them, I could still do so. The selection part just isn't insanely pretty, but it's doable.
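
With 4-byte big-endian values, that query can look something like this in MySQL (table and key names are hypothetical):

-- Boss-kill highscores over BLOB-encoded ints: decode via HEX, then sort.
SELECT player_id, CAST(CONV(HEX(value), 16, 10) AS UNSIGNED) AS kills
FROM player_attributes
WHERE `key` = 'boss_kill_count'
ORDER BY kills DESC
LIMIT 10;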

garyttierney commented 7 years ago

@Joshua-F

I use MySQL, so I'm not too sure how different it would be with Postgres, but I personally have a table that stores every attribute (variables, as I call them), with the key being a string and the value just a BLOB. I only support saving the main data types (int, long, string, etc.) to simplify things a bit: an int is converted to 4 bytes, and the 4-byte array is converted back on login. I'd be curious how this would be implemented here, to see if it would be an improvement over how I do things currently.

So you have a schema like the following, I assume:

CREATE TABLE player_attributes (
  id SERIAL PRIMARY KEY,
  player_id INTEGER REFERENCES players(id),
  key VARCHAR(255), -- VARCHAR vs TEXT so we can put indexes on this?
  value BYTEA, -- Postgres has no BLOB type; BYTEA is the equivalent
  CONSTRAINT uniq_player_attribute UNIQUE (player_id, key)
);

I'm not enthused about the BLOB, but it's a trivial implementation detail which I'm not entirely opposed to. What I would be happier with is a schema like this:

CREATE TYPE attribute_type AS ENUM ('text', 'integer', 'float');

CREATE TABLE attribute (
   id SERIAL PRIMARY KEY,
   name TEXT UNIQUE,
   type attribute_type,
   CONSTRAINT uniq_attrib_name UNIQUE (name)
);

CREATE TABLE player_attribute (
   attribute_id INTEGER REFERENCES attribute(id),
   player_id INTEGER REFERENCES players(id),
   value BYTEA,
   PRIMARY KEY (attribute_id, player_id)
);

This way we can catalogue the varying types of attributes up front, without a hard dependency on attribute names. It isn't really something we can do at the moment with the way we register new attributes.
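
Registering an attribute when a plugin declares one could then be an idempotent upsert, e.g. (a sketch, assuming Postgres 9.5+ for ON CONFLICT; the attribute name below is hypothetical):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

static void registerAttribute(Connection connection, String name, String type)
        throws SQLException {
    try (PreparedStatement stmt = connection.prepareStatement(
            "INSERT INTO attribute (name, type) VALUES (?, ?::attribute_type) " +
            "ON CONFLICT (name) DO NOTHING")) {
        stmt.setString(1, name); // e.g. "boss_kill_count"
        stmt.setString(2, type); // 'text', 'integer', or 'float'
        stmt.executeUpdate();
    }
}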

I will point out that even though I use BLOB as my storage type, I'm still able to run queries against the values. For example, if I wanted to store boss kill counts and have highscores for them, I could still do so. The selection part just isn't insanely pretty, but it's doable.

This is a much bigger problem than the schema itself. How should we allow 3rd party clients to interface with game data in the first place? Should we include some API that can be called, or should we leave it up to users to decide if they want to parse binary files / interface with the SQL server in use? I feel like the former is the better idea since we'd have one standard output format to work with which 3rd party apps could consume.

My original idea, which from briefly looking up what EAV is sounds similar, was also to use a string to store the value. I decided against that since storing a number like 2147000000 as a string takes 10 bytes, compared to only 4 as raw bytes.

While disk utilization is something we should certainly be aware of, I don't think it's significant enough to worry about 6 wasted bytes at this stage. If/when we have a nicely designed and well thought out solution we can think of the resource implications then.

Joshua-F commented 7 years ago

@garyttierney

So you have a schema like follows, I assume:

Yup, that's pretty much how my schema looks.

I'm interested to see how splitting them into different tables would go as well. Currently I load all attributes into a Map<String, Attribute> and look the information up every time I call Player#getAttribute(key). That is also how I get the data type of each attribute when a player logs in, since the Attribute itself holds the data-type information and the player has a Map<String, Object> that Player#getAttribute(key) fetches the values from. I use a generic return type, but it can only do so much. I also load all of my attributes from TOML files, which kind of takes the place of that other table, I suppose.
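
Roughly this shape, in code (a simplified sketch of what I described; Attribute here just stands in for my definition class):

import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for the definition class that holds each attribute's type.
static final class Attribute {
    final Class<?> type;
    Attribute(Class<?> type) { this.type = type; }
}

// Definitions (name -> type metadata), loaded from the TOML files at startup.
private final Map<String, Attribute> definitions = new HashMap<>();

// Per-player values, keyed by the same attribute names.
private final Map<String, Object> values = new HashMap<>();

@SuppressWarnings("unchecked")
public <T> T getAttribute(String key) {
    return (T) values.get(key); // generic return type, but unchecked in practice
}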

While disk utilization is something we should certainly be aware of, I don't think it's significant enough to worry about 6 wasted bytes at this stage. If/when we have a nicely designed and well thought out solution we can think of the resource implications then.

I'd agree with this too at the current stage, but I feel like it is something that needs to be kept in mind so people don't use more storage than they need.

garyttierney commented 7 years ago

I use a generic return type, but it can only do so much.

I'm not too worried about type-safety at this granularity at the moment.

I also load all of my attributes from TOML files, which is kinda of taking place for that other table I suppose.

No can do. Plugins can define new attributes rather arbitrarily. However, it's something I'd like to evaluate (and how plugins could interface with it).

Joshua-F commented 7 years ago

No can do. Plugins can define new attributes rather arbitrarily. However, it's something I'd like to evaluate (and how plugins could interface with it).

My plugins are able to have configs for attributes; it's just a matter of loading those while also loading the plugin.

OmarAssadi commented 7 years ago

Added a (very) small bounty of $5.00 to this issue. Maybe someone else who would like to see this will contribute.
