The database has been growing table by table, but now it's a little unwieldy. There are 24 tables of raw data. These can't be used out of the box without running a query to connect Research ID to the data. There are 11 queries that can be used out of the box. They all start with q_.
To make things more user-friendly, I'm going to split the database into an l2t database with ready-to-use queries for each test type and study. So the EVT table in this database will have Study, ResearchID, and the usual EVT scores ready to go. I will make a backend database with the raw data that populates the queries for the l2t database. This will basically break all my previous scripts and queries, but it'll be worth it.
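To make the frontend/backend idea concrete, here's a minimal sketch of the kind of ready-to-use query the l2t database would expose. All table and column names here are hypothetical, and I'm using sqlite3 purely for illustration; the real database's schema will differ.

```python
import sqlite3

# In-memory database standing in for the backend of raw tables.
con = sqlite3.connect(":memory:")
cur = con.cursor()

# Hypothetical raw tables: one mapping participants to Research IDs,
# one holding raw EVT scores keyed by participant.
cur.execute(
    "CREATE TABLE Participants (ParticipantID INTEGER, Study TEXT, ResearchID TEXT)"
)
cur.execute(
    "CREATE TABLE EVT_Raw (ParticipantID INTEGER, EVT_Raw INTEGER, EVT_Standard INTEGER)"
)
cur.execute("INSERT INTO Participants VALUES (1, 'TimePoint1', '001L')")
cur.execute("INSERT INTO EVT_Raw VALUES (1, 60, 110)")

# A q_-style query: join Research ID onto the raw scores so the
# result is usable out of the box, no extra lookup needed.
rows = cur.execute("""
    SELECT p.Study, p.ResearchID, e.EVT_Raw, e.EVT_Standard
    FROM Participants AS p
    JOIN EVT_Raw AS e ON e.ParticipantID = p.ParticipantID
""").fetchall()
print(rows)  # [('TimePoint1', '001L', 60, 110)]
```

The point of the split is that users of the l2t database only ever see the joined result, never the raw tables or the join itself.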