lykahb / groundhog

This library maps datatypes to a relational model, in a way similar to what ORM libraries do in OOP. See the tutorial https://www.schoolofhaskell.com/user/lykahb/groundhog for an introduction.
http://hackage.haskell.org/package/groundhog

Revamp lists implementation #55

Open lykahb opened 8 years ago

lykahb commented 8 years ago

Description

Consider a simple datatype

```haskell
data Entity = Entity { somevalue :: String, somelist :: [String] }
```

Current schema

The entity references the list table, and the list values table references the list table as well. Triggers remove the corresponding list rows on entity update/delete.

```sql
CREATE TABLE entity (id INTEGER, somevalue VARCHAR, somelist INTEGER);
CREATE TABLE List#String (id INTEGER PRIMARY KEY);
CREATE TABLE List#String#values (id INTEGER REFERENCES List#String(id), ord INTEGER, value VARCHAR);
CREATE TRIGGER ... ON DELETE ...
CREATE TRIGGER ... ON UPDATE ...
```

Proposed schema

The list table is eliminated. Each list gets its own table that references the entity id or one of its unique keys.

```sql
CREATE TABLE entity (id INTEGER, somevalue VARCHAR);
CREATE TABLE entity#somelist (key INTEGER REFERENCES entity (id) ON DELETE CASCADE, ord INTEGER, value VARCHAR);
```

This opens the way for maps and other complex data structures.
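For example, a hypothetical field somemap :: Map String Int could follow the same per-field table pattern, replacing the ord column with a map-key column. This is only a sketch; the table and column names below are illustrative, not part of the proposal above:

```sql
-- Sketch: a hypothetical Map String Int field "somemap" stored the same
-- way as entity#somelist, with a map-key column instead of ord.
CREATE TABLE entity#somemap (
  key    INTEGER REFERENCES entity (id) ON DELETE CASCADE,
  mapkey VARCHAR,
  value  INTEGER
);
```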

Questions

How can the new schema help solve the N+1 query problem? What is the behavior if the list is inside an embedded datatype?
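On the first question, one possible benefit is that the per-field table is keyed directly by the entity id, so the lists for a whole batch of entities can be loaded in a single query and grouped in memory. A sketch against the proposed schema:

```sql
-- Sketch: load somelist for several entities at once instead of one
-- query per entity; rows are grouped by key on the Haskell side.
SELECT key, ord, value
FROM entity#somelist
WHERE key IN (1, 2, 3)
ORDER BY key, ord;
```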

Roadmap

The newly added converters may help with this if they perform side effects and take the entity and its autokey as arguments. A simple way to distinguish between pure and effectful converters at compile time (during TH generation) would be to create two newtypes.
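A minimal sketch of that two-newtype idea (the names and signatures here are illustrative assumptions, not Groundhog's actual converter API): a pure converter is just a pair of functions, while the effectful one runs in some monad and also receives the owning entity and its autokey, so it could touch the per-field tables.

```haskell
-- Sketch only: illustrative types, not Groundhog's actual API.

-- A pure converter is a pair of functions between the Haskell field type
-- and its database representation; TH can treat it like today's converters.
newtype PureConverter a b = PureConverter (a -> b, b -> a)

-- An effectful converter runs in some monad m and also receives the owning
-- entity and its autokey, so it can insert into or read from per-field
-- tables such as entity#somelist.
newtype EffectfulConverter m entity key a b = EffectfulConverter
  ( entity -> key -> a -> m b
  , entity -> key -> b -> m a
  )
```

TH generation could then pick the schema based on which newtype the user supplies.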

To avoid breaking changes, Groundhog needs to support both schemas for a while. The new schema will be used only if a converter is specified. There also needs to be a way to migrate automatically from the old schema.
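A rough sketch of what that automatic migration could look like for the example above (identifier quoting and trigger cleanup are omitted; this is not a finalized migration plan):

```sql
-- Sketch: move list values from the old shared tables into the new
-- per-field table, then drop the old structures.
CREATE TABLE entity#somelist (
  key   INTEGER REFERENCES entity (id) ON DELETE CASCADE,
  ord   INTEGER,
  value VARCHAR
);

INSERT INTO entity#somelist (key, ord, value)
SELECT e.id, v.ord, v.value
FROM entity e
JOIN List#String#values v ON v.id = e.somelist;

ALTER TABLE entity DROP COLUMN somelist;
DROP TABLE List#String#values;
DROP TABLE List#String;
```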