Closed: martyychang closed this issue 6 years ago
Expanding the scope of this a bit, I think three tables should be created in this case.
| Table Name | Description |
|---|---|
| `public.users` | Commonwealth users, who will someday log into a web app portal |
| `public.entities` | Entities whose finances are managed within Commonwealth. One user may manage finances for multiple entities, and one entity may be managed by multiple users. |
| `fidelityrewards.transactions` | Transactions downloaded from the Fidelity Rewards site and imported into the database, with a link back to the entity to which the transactions apply and a link back to the user who imported them |
`users` table created.
```sql
-- Table: public.users

-- DROP TABLE public.users;

CREATE TABLE public.users
(
    id uuid NOT NULL,
    first_name character varying(40) COLLATE pg_catalog."default",
    last_name character varying(40) COLLATE pg_catalog."default" NOT NULL,
    username character varying(255) COLLATE pg_catalog."default" NOT NULL,
    CONSTRAINT users_pkey PRIMARY KEY (id),
    CONSTRAINT username UNIQUE (username)
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;

ALTER TABLE public.users
    OWNER to postgres;
```
Looks like I'd messed up. The `uuid` type doesn't actually auto-generate an ID for me. I found a blog post that mentioned the `bigserial` data type. After dropping the old `id` column and creating a new one with the `bigserial` type, I was able to insert a record into the `users` table by supplying just the three text values.
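The fix was roughly the following (a reconstructed sketch of the change, not the exact statements as run in pgAdmin):

```sql
-- Replace the uuid id with an auto-incrementing bigserial id
ALTER TABLE public.users DROP CONSTRAINT users_pkey;
ALTER TABLE public.users DROP COLUMN id;
ALTER TABLE public.users ADD COLUMN id bigserial;
ALTER TABLE public.users ADD CONSTRAINT users_pkey PRIMARY KEY (id);
```

`bigserial` creates a backing sequence automatically, so inserts that omit `id` get the next sequence value.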
I created an `entities` table with a foreign key relationship for the `owner_id` column against `users.id`, noting that every entity must be owned by a user, with the intent of preventing entities from ever falling into a state where no user has access to the entity.
```sql
-- Table: public.entities

-- DROP TABLE public.entities;

CREATE TABLE public.entities
(
    name character varying(255) COLLATE pg_catalog."default" NOT NULL,
    id bigint NOT NULL DEFAULT nextval('entities_id_seq'::regclass),
    owner_id bigint NOT NULL,
    CONSTRAINT entities_pkey PRIMARY KEY (id),
    CONSTRAINT owner_id FOREIGN KEY (owner_id)
        REFERENCES public.users (id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE RESTRICT
        NOT VALID
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;

ALTER TABLE public.entities
    OWNER to postgres;
```
Looks like the file downloaded from Fidelity Rewards has just five columns.
| Source CSV Column | Target `transactions` Column | Comments |
|---|---|---|
| Date | `transaction_date` | See Investopedia |
| Transaction | `type` | Only two expected values, "DEBIT" and "CREDIT" |
| Name | `name` | Something descriptive |
| Memo | `memo` | No idea what this is |
| Amount | `amount` | What's interesting here is that when `type` is "DEBIT" the amount is consistently negative, whereas when `type` is "CREDIT" the amount is positive. Based on this it seems `type` is really just a reflection of whether the amount in the transaction is positive or negative. |
Below are a few sample rows showing the raw downloaded data.
```
Date,Transaction,Name,Memo,Amount
6/28/2018,DEBIT,WOLFGANG PIER CFC BOS EAST BOST,24692168178100768670947; 05812;,-14.76
6/22/2018,CREDIT,TARGET 00028225 BOSTON,74164078172091012639468; 05411;,26.33
```
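The sample rows above can be used to sanity-check the observation that DEBIT rows carry negative amounts and CREDIT rows positive ones. A quick Python sketch (not part of the import pipeline, just a check):

```python
import csv
import io

# Sample rows copied from the Fidelity Rewards download above
raw = """Date,Transaction,Name,Memo,Amount
6/28/2018,DEBIT,WOLFGANG PIER CFC BOS EAST BOST,24692168178100768670947; 05812;,-14.76
6/22/2018,CREDIT,TARGET 00028225 BOSTON,74164078172091012639468; 05411;,26.33
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# A row is DEBIT exactly when its amount is negative
for row in rows:
    amount = float(row["Amount"])
    assert (row["Transaction"] == "DEBIT") == (amount < 0)
```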
As a note, rather than deal with multi-currency right now, I'm going to assume all transactions are in USD.
Looks like arbitrary precision numbers are the preferred type for dealing with money.
> The type `numeric` can store numbers with a very large number of digits. It is especially recommended for storing monetary amounts and other quantities where exactness is required. Calculations with `numeric` values yield exact results where possible, e.g. addition, subtraction, multiplication. However, calculations on `numeric` values are very slow compared to the integer types, or to the floating-point types described in the next section.
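A quick way to see the difference (a minimal illustration I'm adding here, not from the Postgres docs):

```sql
-- numeric arithmetic is exact; float8 arithmetic is binary floating point
-- and can drift (the float8 sum may display as 0.30000000000000004,
-- depending on the extra_float_digits setting)
SELECT 0.10::numeric + 0.20::numeric AS numeric_sum,
       0.10::float8  + 0.20::float8  AS float8_sum;
```

For a money column like `amount`, that exactness is the deciding factor.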
Awesome! I was able to successfully import the CSV file into the PostgreSQL database using the Import/Export wizard, after modifying the sequence of columns to match exactly the columns in order that were in the CSV file. The resulting command executed as shown by pgAdmin is as follows.
```shell
/Library/PostgreSQL/10/bin/psql --command "\copy fidelityrewards.transactions (transaction_date, type, name, memo, amount) FROM '/Users/martyc/Downloads/download.csv' CSV HEADER QUOTE '\"' ESCAPE '''';"
```
As a practical matter, since for testing right now I don't care to populate the `entity_id` or `created_by_id` columns manually, I created the columns but allowed them to be null.
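For reference, the `transactions` table currently looks roughly like this (a sketch reconstructed from the column mapping above; the exact lengths and constraint names are assumptions):

```sql
-- Sketch of fidelityrewards.transactions, reconstructed from the column
-- mapping above; entity_id and created_by_id are nullable for now
CREATE TABLE fidelityrewards.transactions
(
    id bigserial NOT NULL,
    transaction_date date NOT NULL,
    type character varying(10) NOT NULL,    -- "DEBIT" or "CREDIT"
    name character varying(255),
    memo character varying(255),
    amount numeric NOT NULL,
    entity_id bigint,
    created_by_id bigint,
    CONSTRAINT transactions_pkey PRIMARY KEY (id),
    CONSTRAINT entity_id FOREIGN KEY (entity_id)
        REFERENCES public.entities (id),
    CONSTRAINT created_by_id FOREIGN KEY (created_by_id)
        REFERENCES public.users (id)
);
```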
Until I figure out a better way to track the database config, for now I'm just going to track the SQL scripts to create the tables as they currently exist in my local database.
To get started, we need a basic schema set up in the database to support the import of transactions downloaded from Fidelity Rewards Visa Signature Credit Card.
Acceptance criteria
Transactions downloaded in CSV format from the Fidelity Rewards Visa Signature Credit Card website can be imported into a table in the PostgreSQL database, linked back to a user.