features/: files that return bundles of features according to the feature protocol
Protocols
Statement
CSVs with:
statement, elicitation, committer
Feature
Files that produce ratings with:
name, version, value, type
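A minimal sketch of the two protocols as CSV schemas. The column names come from the notes above; the reader/validator helper and its name are hypothetical, just to make the shapes concrete:

```python
import csv
import io

# Column sets taken from the protocol notes above.
STATEMENT_COLUMNS = ["statement", "elicitation", "committer"]
FEATURE_COLUMNS = ["name", "version", "value", "type"]

def read_protocol_csv(text, required_columns):
    """Parse a CSV and check every row has the protocol's columns.

    Hypothetical helper: the real loader may differ.
    """
    rows = list(csv.DictReader(io.StringIO(text)))
    for row in rows:
        missing = [c for c in required_columns if c not in row]
        if missing:
            raise ValueError(f"row missing protocol columns: {missing}")
    return rows

example = "statement,elicitation,committer\nThe sky is blue,prompt_v1,alice\n"
print(read_protocol_csv(example, STATEMENT_COLUMNS))
```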
Checks
Statements
[ ] Are any features missing or expired?
[x] Is the source information valid?
[x] Are there duplicate statements?
[ ] Are the properties added correctly?
[ ] Is the table valid? Are all feature types correct?
[ ] Do the output files match what they should be? If so, merge: add a commit with the correct file (or fail)
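The one check already marked done, duplicate statements, could be sketched like this (function name and row shape are assumptions based on the statement protocol above):

```python
from collections import Counter

def check_duplicate_statements(rows):
    """Return statements that appear more than once.

    `rows` are dicts following the statement protocol
    (statement, elicitation, committer).
    """
    counts = Counter(r["statement"].strip() for r in rows)
    return [s for s, n in counts.items() if n > 1]

rows = [
    {"statement": "A", "elicitation": "e1", "committer": "alice"},
    {"statement": "B", "elicitation": "e1", "committer": "bob"},
    {"statement": "A", "elicitation": "e2", "committer": "carol"},
]
print(check_duplicate_statements(rows))  # -> ["A"]
```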
Processing
Create a branch; it gets a new CSV named after the branch (fancy)
User adds a new row to an existing CSV, or a new CSV, in raw_statements
Get all files in raw_statements
Run checks
Compile output files
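The processing steps above could look roughly like this. All paths, the duplicate check used as a placeholder, and the function name are assumptions, not a fixed design:

```python
import csv
from pathlib import Path

def process(repo_root):
    """Sketch of the pipeline: gather raw_statements, run checks, compile output."""
    repo_root = Path(repo_root)
    all_rows = []

    # 1. Get all files in raw_statements.
    for path in sorted((repo_root / "raw_statements").glob("*.csv")):
        with path.open(newline="") as f:
            all_rows.extend(csv.DictReader(f))

    # 2. Run checks (placeholder: fail on duplicate statements).
    seen = set()
    for row in all_rows:
        if row["statement"] in seen:
            raise ValueError(f"duplicate statement: {row['statement']!r}")
        seen.add(row["statement"])

    # 3. Compile the output file.
    out = repo_root / "statements.csv"
    with out.open("w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["statement", "elicitation", "committer"]
        )
        writer.writeheader()
        writer.writerows(all_rows)
    return out
```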
Cases
New feature: go through every existing statement and compute the new feature for it
Updated feature: go through every existing statement and recompute the updated feature for it
New statement: go through every feature and compute it for that statement
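The first and last cases above reduce to two small loops. Everything here (function names, a feature as a plain callable) is an assumption to illustrate the shape, not the actual run mechanism:

```python
def apply_feature_to_statements(feature_fn, statements):
    """New (or updated) feature: run it over every existing statement."""
    return {s: feature_fn(s) for s in statements}

def apply_features_to_statement(statement, feature_fns):
    """New statement: run every feature for that statement."""
    return {name: fn(statement) for name, fn in feature_fns.items()}

length = lambda s: len(s)  # toy stand-in for a real feature
print(apply_feature_to_statements(length, ["a", "abc"]))   # {'a': 1, 'abc': 3}
print(apply_features_to_statement("hi", {"length": length}))  # {'length': 2}
```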
Ignoring for now
We won't deal with validating new features or running new features dynamically for now. Let's assume we trust how features are designed, and we will set up their run mechanism manually on a per-feature basis.
Files
statements.csv
features/statement_group_#.csv
raw_statements/GPT_statements_from_tuesday.csv
Tests
@amirrr — think about this please.