This PR defines the data with proper data models and adds a conversion tool with tests.
The main features of this PR are:

- Define data models for the allocation tables.
- Add methods to read and write these models in:
  - CSV format: used for data entry; this is the source of truth.
  - JSON format: used for the REST endpoint.
- Add a conversion tool that converts the CSV format directly into JSON.
- Add tests and a GitHub workflow to ensure the conversion keeps working.
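For concreteness, here is roughly what a single table could look like in the JSON output (all field values here are invented for illustration; the shape follows the data models defined later in this description):

```json
{
  "name": "us",
  "allocations": [
    {
      "band": {"lower_frequency": 137000000.0, "upper_frequency": 138000000.0},
      "service": {
        "region": "us",
        "service": "space research",
        "applications": null,
        "category": null,
        "footnotes": null,
        "sub_table": null
      }
    }
  ],
  "metadata": null
}
```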
I changed the definition of the JSON files slightly, mainly because the structure was not obvious from reading them. For example, each allocation table has the name "-", which seems possibly unintentional. Each table also holds a list of services, and it is unclear whether we want to group services that share the same allocation or leave them separate. We can revisit this when we get to the visualizations.
## How to review this PR
The only substantial change here is the addition of the data models. The rest is fairly straightforward conversion code plus CI/CD that can always be changed later. I suggest focusing review on the data models (next section) and on the procedure for running the conversion, and, as long as the tests pass, trusting that the rest works fine.
## Data Models
```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# DataClassJSONMixin (from the mashumaro library) provides to_json()/from_json().
from mashumaro.mixins.json import DataClassJSONMixin


@dataclass
class FrequencyBand(DataClassJSONMixin):
    lower_frequency: float
    upper_frequency: float


@dataclass
class AllocationService(DataClassJSONMixin):
    applications: Optional[List[str]]
    category: Optional[str]
    # Do we need this for now?
    footnotes: Optional[List[str]]
    region: str
    service: Optional[str]
    sub_table: Optional[str]


@dataclass
class Allocation(DataClassJSONMixin):
    band: FrequencyBand
    service: AllocationService


@dataclass
class AllocationTable(DataClassJSONMixin):
    name: str
    allocations: List[Allocation]
    metadata: Optional[Dict[str, str]]
```

(Classes are listed so each name is defined before it is used.)
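As a rough illustration of the intended round-trip, here is a standard-library-only sketch (using `dataclasses.asdict` plus `json` as a stand-in for the `to_json()`/`from_json()` methods the real models inherit from mashumaro; all field values are invented):

```python
import json
from dataclasses import asdict, dataclass
from typing import Dict, List, Optional


@dataclass
class FrequencyBand:
    lower_frequency: float
    upper_frequency: float


@dataclass
class AllocationService:
    region: str
    applications: Optional[List[str]] = None
    category: Optional[str] = None
    footnotes: Optional[List[str]] = None
    service: Optional[str] = None
    sub_table: Optional[str] = None


@dataclass
class Allocation:
    band: FrequencyBand
    service: AllocationService


@dataclass
class AllocationTable:
    name: str
    allocations: List[Allocation]
    metadata: Optional[Dict[str, str]] = None


# Build a tiny table and serialize it to JSON (hypothetical values).
table = AllocationTable(
    name="us",
    allocations=[
        Allocation(
            band=FrequencyBand(lower_frequency=1.37e8, upper_frequency=1.38e8),
            service=AllocationService(region="us", service="space research"),
        )
    ],
)
print(json.dumps(asdict(table)))
```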
## Running the conversion

To run the conversion, just call:

```sh
./scripts/run_converter.sh
```
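Under the hood, the conversion amounts to parsing tab-separated rows into the models and dumping them as JSON. A minimal sketch of that step, using only the standard library (the column names here are guesses for illustration, not the repository's actual schema):

```python
import csv
import io
import json

# Hypothetical tab-separated input (the "CSV" files are actually tab-separated).
TSV = (
    "lower_frequency\tupper_frequency\tregion\tservice\n"
    "137000000\t138000000\tus\tspace research\n"
)


def tsv_to_allocations(text: str) -> list:
    """Parse tab-separated rows into allocation-shaped dicts (sketch only)."""
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    allocations = []
    for row in reader:
        allocations.append(
            {
                "band": {
                    "lower_frequency": float(row["lower_frequency"]),
                    "upper_frequency": float(row["upper_frequency"]),
                },
                "service": {"region": row["region"], "service": row["service"]},
            }
        )
    return allocations


allocations = tsv_to_allocations(TSV)
print(json.dumps({"name": "us", "allocations": allocations}))
```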
## Checklist

- [x] Test out reading one file
- [x] Write the `AllocationTable` class
- [x] Walk the whole directory tree and read in all files
- [x] Add tests
- [x] Add converter to JSON
- [x] More tests
## For the Future

- Consider adding a GitHub workflow that automatically opens a pull request against earth frequencies viewer whenever a change to the CSV data is detected.
- Add the footnotes; they are ignored for now.
- Add conversion to HTML files.
- Rename CSV references to TSV (tab-separated values), since the files are actually tab-separated.