This project provides the source for the SQL on FHIR v2.0 Implementation Guide.
SQL on FHIR is a specification that defines a standard way to create portable, tabular projections of FHIR data.
The FHIR® standard is a great fit for RESTful and JSON-based systems, helping make healthcare data liquidity real. This spec aims to take FHIR usage a step further, making FHIR work well with familiar and efficient SQL engines and surrounding ecosystems.
We do this by creating simple, tabular views of the underlying FHIR data that are tailored to specific needs. Views are defined with FHIRPath expressions in a logical structure to specify things like column names and unnested items.
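As an illustration, here is a sketch of what such a view might look like, written as a TypeScript object. The `select`, `column`, `name`, and `path` fields appear in this repository's own test examples; `resource` and `forEach` are ViewDefinition fields as described in the IG, so verify the details against the published specification.

```ts
// Illustrative ViewDefinition sketch: a flat table of patient names.
const patientNames = {
  resource: "Patient", // one input resource type per view
  select: [
    { column: [{ name: "id", path: "id" }] }, // patient id on every row
    {
      forEach: "name", // unnest the repeating `name` element: one row per name
      column: [
        { name: "family", path: "family" },
        { name: "given", path: "given.first()" }, // FHIRPath expression per column
      ],
    },
  ],
};
```

At runtime, an engine evaluates each `path` expression against the resource (or each `forEach` item) to fill one column per row.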
Check the existing implementations page or register your own.
Check out the interactive playground.
Content as markdown is now found in `input/pagecontent`. Also see `sushi-config.yaml` for additional settings, including configuration for the menu.
This is a Sushi project and can use the HL7 IG Publisher to build locally:

- Run `./scripts/_updatePublisher.sh` to get the latest IG publisher.
- Install sushi if you don't have it already with: `npm i fsh-sushi`.
- Run `./scripts/_genonce.sh` to generate the IG.
- Run `open output/index.html` to view the IG website, or serve the `output` directory locally with `http-server`.
- For building and running the tests, see the test README below.
This specification contains a set of tests in the `/tests` directory: a set of test case files, each covering one aspect of the implementation. A test case is represented as a JSON document with `title` and `description` attributes, a set of fixtures (FHIR resources) in the `resources` attribute, and an array of test objects. A test object has a unique `title`, a ViewDefinition in the `view` attribute, and an expected set of resulting rows in the `expect` attribute.
Test cases are organized as individual JSON documents within the `/tests` directory. Each test case file is structured to include a combination of attributes that define the scope and expectations of the test. The main components of a test case file are:

- Title (`title` attribute): A brief, descriptive title that summarizes the aspect of the implementation being tested.
- Description (`description` attribute): A detailed explanation of what the test case aims to validate, including any relevant context or specifications that the test is based on.
- FHIR version (`fhirVersion` attribute): The FHIR versions the test applies to, e.g. `['4.0.1', '5.0.0']`. This applies to all FHIR resources in the test suite. The version numbers come from this ValueSet and can only include "Release" versions.
- Fixtures (`resources` attribute): A set of FHIR resources that serve as input data for the test. These fixtures are essential for setting up the test environment and conditions.
- Tests (`tests` attribute): An array of objects, each representing a unique test scenario within the case. Every test object includes:
  - Title (`title` attribute): A unique, descriptive title for the test object, differentiating it from others in the same test case.
  - Tags (`tags` attribute): A list of strings that categorize the test case into relevant groups. This attribute helps in organizing and filtering test cases based on their scope or focus. See Reserved Tags.
  - View (`view` attribute): Specifies the ViewDefinition being tested. This attribute outlines the expected data view or transformation applied to the input fixtures.
  - Expectation (`expect` attribute): An array of rows that represent the expected outcome of the test. This attribute is crucial for validating the correctness of the implementation against the defined expectations.

Below is an abstract representation of what a test case file might look like:
```js
{
  // unique name of test
  'title': 'title',
  'description': '...',
  'fhirVersion': ['5.0.0', '4.0.1'],
  // fixtures
  'resources': [
    {'resourceType': 'Patient', 'id': 'pt-1'},
    {'resourceType': 'Patient', 'id': 'pt-2'}
  ],
  'tests': [
    ...
    {
      'title': 'title of test case',
      'tags': ['shareable'],
      // ViewDefinition
      'view': {
        'select': [
          {'column': [{'name': 'id', 'path': 'id'}]}
        ]},
      // expected result
      'expect': [
        {'id': 'pt-1'},
        {'id': 'pt-2'}
      ]
    }
    ...
  ]
}
```
Certain tags (such as `shareable`, used in the example above) are reserved for categorizing test cases based on their applicability to profiles within the core specification; see the Reserved Tags list in the specification for the full set.
To ensure comprehensive validation and interoperability, implementers are encouraged to integrate the test suite contained in this repository directly into their projects. This can be achieved efficiently by adding this repository as a git submodule to your project.
Implementers should then develop a test runner, based on the following guidelines, to execute the test cases and generate a test report. This process is essential for verifying an implementation against the specified test cases.
The test runner should be designed to automate the execution of test cases and generate a comprehensive test report. Follow these steps to implement your test runner:
- Load each test case file from the `/tests` directory.
- For every test in a test case, evaluate its ViewDefinition against the case's fixtures: `evaluate(test.view, testcase.resources)`.
- Compare the resulting rows with the `expect` attribute of the test object.

The test runner should produce a `test_report.json` file containing the results of the test executions (a minimal runner sketch follows the example report below). The structure of the test report is a map where:

- keys are the names of the test case files,
- values are objects with a `tests` list,
- each entry in the `tests` list has a `name` and a `result` field, reporting whether the named test passed or not. Besides `passed`, the `result` map may also have a `reason` text field describing why the test did not pass.
Here is an example:

```js
// example test_report.json
{
  "logic.json": {
    "tests": [
      {
        "name": "filtering with 'and'",
        "result": {
          "passed": true
        }
      },
      {
        "name": "filtering with 'or'",
        "result": {
          "passed": false,
          "reason": "skipped"
        }
      },
      ...
    ]
  },
  ...
}
```
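To make the guidelines above concrete, here is a minimal runner sketch in TypeScript. The `evaluate` import is a placeholder for your implementation's entry point, the tests directory path assumes the git submodule layout, and the naive JSON-string comparison assumes row order matches `expect`; a real runner may need an order-insensitive comparison.

```ts
import { readdirSync, readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";
// Placeholder: wire this to your implementation's ViewDefinition evaluator.
import { evaluate } from "./your-implementation";

type Result = { passed: boolean; reason?: string };

const testsDir = "sql-on-fhir-v2/tests"; // e.g. the path to the git submodule
const report: Record<string, { tests: { name: string; result: Result }[] }> = {};

for (const file of readdirSync(testsDir).filter((f) => f.endsWith(".json"))) {
  const testcase = JSON.parse(readFileSync(join(testsDir, file), "utf8"));
  const tests: { name: string; result: Result }[] = [];
  for (const test of testcase.tests) {
    let result: Result;
    try {
      const rows = evaluate(test.view, testcase.resources);
      // Naive, order-sensitive comparison; refine as needed.
      result =
        JSON.stringify(rows) === JSON.stringify(test.expect)
          ? { passed: true }
          : { passed: false, reason: "rows did not match expect" };
    } catch (e) {
      result = { passed: false, reason: String(e) };
    }
    // The test object's `title` becomes the report entry's `name`.
    tests.push({ name: test.title, result });
  }
  report[file] = { tests };
}

writeFileSync("test_report.json", JSON.stringify(report, null, 2));
```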
After running the test suite and generating a `test_report.json` file with the outcomes of your implementation's test runs, the next step is to make these results accessible for review and validation. Publishing your test report to a publicly accessible HTTP server enables broader visibility and verification of your implementation's compliance with the specification. This guide outlines the process of publishing your test report and registering your implementation.
You can validate the structure of your test report file using the test report JSON schema.
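For instance, you could use a generic JSON Schema validator such as Ajv. This is a sketch; the schema path below is a placeholder for wherever the test report schema is published.

```ts
import Ajv from "ajv";
import { readFileSync } from "node:fs";

// Placeholder paths: point these at the published test report schema
// and at your generated report.
const schema = JSON.parse(readFileSync("test_report.schema.json", "utf8"));
const report = JSON.parse(readFileSync("test_report.json", "utf8"));

const ajv = new Ajv();
const validate = ajv.compile(schema);
if (!validate(report)) {
  console.error(validate.errors); // structural problems, per the schema
}
```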
1. Choose a Hosting Service: Select an HTTP server or a cloud storage service (such as AWS S3, Google Cloud Storage, or Microsoft Azure Blob Storage) that supports setting CORS (Cross-Origin Resource Sharing) policies. This is crucial for enabling the test report to be accessed from different origins.
2. Upload Your Test Report: Ensure that your `test_report.json` is ready for publication and upload it to the chosen service.
3. Enable CORS: Configure the server or bucket to allow cross-origin requests from `https://fhir.github.io`. This typically involves setting a CORS policy that includes this origin, for example:

```json
[
  {
    "AllowedOrigins": ["https://fhir.github.io"],
    "AllowedMethods": ["GET"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```
4. Verify Access: Confirm that your `test_report.json` can be accessed from a browser without encountering CORS errors. You can do this by attempting to fetch the report from a webpage hosted on `https://fhir.github.io` or by using your browser's developer tools.
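One way to check this, sketched below: from the developer console of a page served on `https://fhir.github.io`, fetch your report URL (the URL below is a placeholder); a missing CORS policy will surface as a blocked request before any response is available.

```ts
// Run in the browser console of a page on https://fhir.github.io.
// The URL is a placeholder for wherever you published your report.
const res = await fetch("https://your-host.example/test_report.json");
if (!res.ok) throw new Error(`HTTP ${res.status}`); // reachable, but server error
console.log(await res.json()); // a CORS failure would have thrown at fetch()
```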
Once your test report is published and accessible, the final step is to register your implementation in the `test_report/public/implementations.json` file. This file serves as a registry of available implementations and their test results, facilitating discovery and comparison.
Format of `implementations.json`: the `implementations.json` file is a JSON document that lists implementations along with URLs to their test reports; the `testResultsUrl` of each entry should point to your published `test_report.json`.

Add Your Implementation: add an entry for your implementation to `implementations.json`, updating it if necessary:

```json
{
  "name": "YourImplName",
  "description": "<description>",
  "url": "<link-to-the-site>",
  "testResultsUrl": "<link-to-test-results>"
},
```
Submit Your Changes: submit your updated `implementations.json` file to this repository (e.g. as a pull request).

By following these steps, you'll not only make your test results publicly available but also contribute to a collective resource that benefits the entire FHIR implementation community. Your participation helps demonstrate interoperability and compliance with the specifications, fostering trust and collaboration among developers and organizations.