Maps and example resources for use in FHIR version transformation
The FHIR Mapping Language (FML) is an informational standard that specifies an interpreted language for transforming a resource from one version to another.
This repository will provide maps and example inputs and outputs to be used in the transformation of UK NHS FHIR extensions using FML.
The following sections describe the structure and naming conventions used for the maps and inputs.
The top level of this folder will contain multiple folders named using general project names, following the convention <profile>-to-<profile>.
e.g. ./resources/careconnect-to-ukcore
NOTE: The tutorial folder is an exception to this rule.
Within each of these folders, there will be multiple folders named using FHIR resource names.
e.g. ./resources/careconnect-to-ukcore/medicationrequest
At the next layer, each resource folder contains three further folders, described below: one for the inputs to transform, one for the expected outputs, and one for the maps.
The first of these folders will contain the examples to transform using a map. The naming of these files MUST follow this convention
<Type name>-<Type>-<[Version number]to[Version number]>_<Padded sequential number>.<json | xml>
e.g. MedicationStatusReason-Extension-3to4_000.json
NOTE: The tests use the file names to match each input with its map, and each transformed output with its expected output; a parsing sketch follows.
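Purely as an illustration of the convention, a regular expression along the following lines could be used to split a file name into its parts. The pattern and function name are hypothetical and are not taken from the repository's test code.

```python
import re

# Hypothetical helper: split a file name that follows the
# <Type name>-<Type>-<[Version]to[Version]>_<Padded sequential number>.<json | xml> convention.
FILENAME_PATTERN = re.compile(
    r"^(?P<type_name>[A-Za-z]+)-(?P<type>[A-Za-z]+)-"
    r"(?P<from_version>\d+)to(?P<to_version>\d+)_(?P<sequence>\d{3})\.(?P<ext>json|xml)$"
)

def parse_example_name(file_name):
    """Return the parts of a conforming file name, or raise if it does not conform."""
    match = FILENAME_PATTERN.match(file_name)
    if match is None:
        raise ValueError(f"{file_name} does not follow the naming convention")
    return match.groupdict()

# parse_example_name("MedicationStatusReason-Extension-3to4_000.json") returns
# {'type_name': 'MedicationStatusReason', 'type': 'Extension',
#  'from_version': '3', 'to_version': '4', 'sequence': '000', 'ext': 'json'}
```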
The second folder will contain the expected output of each transform. When the workflow pipeline performs a transformation, it will compare the result with the corresponding expected file. The naming convention that MUST be followed is the same as for the input, i.e.
<Type name>-<Type>-<[Version number]to[Version number]>_<Padded sequential number>.<json | xml>
e.g. MedicationStatusReason-Extension-3to4_000.json
The third folder will contain the maps for the transformations. The file name convention for these is
<Base input file name>.map
where
<Base input file name> = <Type name>-<Type>-<[Version number]to[Version number]>
i.e. the input file name without the padded sequential number or file extension
e.g. MedicationStatusReason-Extension-3to4.map
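As a sketch of how that derivation might look in code (not taken from the repository's tests), the map file name can be obtained from an input file name as follows.

```python
def map_name_for_input(input_file_name):
    """Derive the map file name from an input example file name (illustrative sketch)."""
    stem = input_file_name.rsplit(".", 1)[0]   # drop the .json / .xml extension
    base = stem.rsplit("_", 1)[0]              # drop the padded sequential number
    return f"{base}.map"

# map_name_for_input("MedicationStatusReason-Extension-3to4_000.json")
# returns "MedicationStatusReason-Extension-3to4.map"
```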
Following these naming conventions simplifies the test code that picks up and processes the files, and also makes it clear from the test output which files succeeded or failed.
The top level of this folder will contain tests written using pytest. A lib folder has been added for common functions. The naming convention to follow here is
test-<test name>.py
and the tests should be implemented using pytest.
The ./tests/requirements.txt file handles the Python dependencies in the workflow pipeline.
The test in ./tests/test-resource-transforms.py will do the following (a rough sketch of both checks follows the list)
(1) Run the transform and assert that an output file was created. The validator_cli appears to exit with a non-zero code in some of its threads, so asserting on the return code when using subprocess was not reliable; testing the contents of stderr also did not give consistent results. The output file, however, is only ever created on success.
(2) The JSON of the expected file and of the transform output is sorted by key and tested for equality.
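The two checks could be sketched roughly as below. This is an illustration only, not the repository's test code; the validator_cli invocation is deliberately passed in as an opaque command, since its exact arguments are not described here.

```python
import json
import subprocess
from pathlib import Path

def run_transform_and_compare(command, output_file: Path, expected_file: Path):
    """Illustrative sketch of the two checks described above.

    `command` is whichever validator_cli invocation the test builds; its return
    code and stderr are intentionally not asserted on.
    """
    # (1) Run the transform and rely on the presence of the output file rather
    #     than the return code or stderr, which proved unreliable.
    subprocess.run(command, capture_output=True, check=False)
    assert output_file.exists(), "validator_cli did not produce an output file"

    # (2) Compare the transform output with the expected file, with keys sorted
    #     so that key order cannot affect the equality check.
    produced = json.loads(output_file.read_text())
    expected = json.loads(expected_file.read_text())
    assert json.dumps(produced, sort_keys=True) == json.dumps(expected, sort_keys=True)
```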
TODO: Currently the expected data has only been checked by eye. A script to verify that all data that should have been carried over was in fact carried over, and to indicate any data that was lost, would be a useful check; a rough sketch of the idea follows. At that point it might make sense to distinguish different types of failure, i.e. expected and unexpected.
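One possible shape for such a check is sketched below. It collects every leaf value from the source resource and reports those that do not appear anywhere in the transformed output; it is an illustration of the TODO rather than existing code, and it does not account for values that a map legitimately converts or restructures.

```python
def leaf_values(node, path=""):
    """Yield (path, value) pairs for every leaf in a parsed JSON structure."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield from leaf_values(value, f"{path}.{key}" if path else key)
    elif isinstance(node, list):
        for index, value in enumerate(node):
            yield from leaf_values(value, f"{path}[{index}]")
    else:
        yield path, node

def values_lost_in_transform(source, transformed):
    """Return the source leaf values that do not appear anywhere in the transformed output."""
    transformed_values = {value for _, value in leaf_values(transformed)}
    return [(path, value) for path, value in leaf_values(source)
            if value not in transformed_values]
```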
Using GitHub Actions, it will be possible to drop in new input and output examples that are automatically checked on a push or PR using the latest version of the validator_cli (as mentioned in the previous section).
This will be useful when updating or refactoring maps, and also for ensuring that the latest version of validator_cli.jar is used to validate them.
The workflow defined in .github/workflows/validate-transforms.yml will do the following
A simple tutorial, along with details of the community documentation, can be found here.