Closed MothyTim closed 2 years ago
@MothyTim
Thank you for raising this issue to us.
Inferno test 5.3.01 uses the first output.url in the Status Complete Response of 5.2.05.
Please verify whether that URL is TLS 1.2 only.
~ Yunwei
It sounds like this is similar to #154 and https://github.com/onc-healthit/inferno-program/issues/304
Are you comfortable running an SSL test using an online third-party test tool (e.g. https://www.ssllabs.com/ssltest/)? Or putting the host here (note this is a public forum)? Perhaps your use of curl is technically allowing a 1.1 connection, but then AWS is just sending `AccessDenied` over that connection, instead of rejecting it per the best practices outlined in RFC 8996.
When I run that against our server, which is configured to outright reject < TLS 1.2, I get the following:
% curl https://inferno.healthit.gov/reference-server/r4 --tlsv1.1 --tls-max 1.1
curl: (35) error:1400442E:SSL routines:CONNECT_CR_SRVR_HELLO:tlsv1 alert protocol version
(to be honest though, I'm not that familiar with curl, and also haven't tried this outside of my network's firewall)
Also, since we didn't answer this question in the previous 2 comments, your use of the Bulk Data spec sounds correct. This is a TLS-level issue, not an issue with the application-level spec.
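For reference, here is a minimal sketch of the same protocol-level check using Python's standard ssl module. The host is just the reference server from the curl example above (substitute the host of the output.url under test), and whether a TLS 1.1 handshake can even be attempted client-side depends on your local OpenSSL build; against a server that denies < TLS 1.2, the expected result is a rejected handshake:

import socket
import ssl

HOST = "inferno.healthit.gov"  # host from the curl example above; substitute your output.url host

# Build a client context pinned to TLS 1.1 only.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_1
ctx.maximum_version = ssl.TLSVersion.TLSv1_1
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # only the protocol negotiation matters for this check

try:
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("Handshake accepted, negotiated", tls.version())
except ssl.SSLError as exc:
    print("Handshake rejected:", exc)  # expected for a server that denies < TLS 1.2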
I ran https://www.ssllabs.com/ssltest/analyze.html against the AWS S3 bucket URL (scan results not reproduced here). This shows that the AWS S3 service itself allows TLS 1.0 and 1.1, but the bucket policy does not permit such access. Can the Inferno tool be improved to check the bucket itself?
@arscan Per your comment: "Perhaps your use of curl is technically allowing a 1.1 connection, but then AWS is just sending `AccessDenied` over that connection, instead of rejecting it per the best practices outlined in RFC 8996." Doesn't that raise the question of whether AWS is less secure than advertised when one sets up a bucket policy that is only supposed to allow TLS 1.2?
Been working with @MothyTim on this. Indeed, AWS S3 - at the grand service level - currently supports TLS 1.0, 1.1, and 1.2 connectivity, and they will continue to do so until they deem one of those protocols too insecure and no longer useful for their entire customer base - which means it's not likely they'll change for any given customer: https://docs.aws.amazon.com/AmazonS3/latest/userguide/network-isolation.html
So S3 supports connections with those TLS versions - i.e. it will allow you to negotiate a connection using any of those protocols. The bucket policy comes into play once that connection is established and a method is called - say GetObject, to download data. The policy evaluates various parameters surrounding that request, one of which is the TLS version being used, and responds accordingly. The bit of policy we have in place to police this looks something like:
{
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::BUCKET_NAME",
    "arn:aws:s3:::BUCKET_NAME/*"
  ],
  "Condition": {
    "NumericNotEquals": {
      "s3:TlsVersion": "1.2"
    }
  }
}
With this policy in place, given a properly signed and valid URL, we can get at our data with curl - it defaults to the highest possible TLS version, so it's using 1.2 in this case (which can be validated with --verbose if needed):
curl 'https://mybucket.s3.amazonaws.com/bulkfhir/GUIDHERE/objectname?keysandsuch'
{"resourceType":"Organization","id":"12345","meta":{"profile":["http://hl7.org/fhir/us/core/StructureDefinition/us-core-organization"]},"identifier":[{"system":"http://hl7.org/fhir/sid/us-npi","value":"1111111112"}],"active":true,"name":"test_header","telecom":[{"system":"phone","use":"work"}],"address":[{"line":["123 Main St."],"city":"MyCity","state":"CA","postalCode":"92001-1234","country":"USA"}]}
A follow-on test with curl, configured not to use anything higher than TLS 1.1 (and verified with --verbose that it is indeed connecting with TLS 1.1):
curl 'https://mybucket.s3.amazonaws.com/bulkfhir/GUIDHERE/objectname?keysandsuch' --tlsv1.1 --tls-max 1.1
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>KNHPQ4B1DGFXXXX</RequestId><HostId>LGGIbp5mFmZqzkLBIBlcXXXXX/RSuP/1U40XhEkDKPwHHdBIO1J3dUUvXXXX/pc5qsenXXXX=</HostId></Error>
So here you can see that the connection via TLS 1.1 did happen, curl issued a GET request, and then per the bucket policy S3 responded with a 403 status and some detail. Again, yes, TLS 1.1 is allowed from a connection standpoint, but S3 protects against actually getting any data based on the bucket policy. So the question is: as written, is the connectivity piece the key, meaning we cannot leverage S3 in this way, or is the key outcome preventing data access over TLS 1.0/1.1, in which case the testing tool should be updated to reflect that?
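To make the distinction concrete, here is a rough Python sketch of the two outcomes being discussed. The bucket host and object path are the placeholders from the curl examples above, and, as before, forcing TLS 1.1 client-side assumes your local OpenSSL build still permits it:

import http.client
import ssl

HOST = "mybucket.s3.amazonaws.com"                  # placeholder bucket host
PATH = "/bulkfhir/GUIDHERE/objectname?keysandsuch"  # placeholder signed-URL path

# Cap the client at TLS 1.1 to mirror the curl --tls-max 1.1 test.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.maximum_version = ssl.TLSVersion.TLSv1_1
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

try:
    conn = http.client.HTTPSConnection(HOST, context=ctx, timeout=10)
    conn.request("GET", PATH)
    resp = conn.getresponse()
    # S3 completes the TLS 1.1 handshake, then the bucket policy denies the request.
    print("Handshake accepted; HTTP status:", resp.status)  # expect 403 AccessDenied
except ssl.SSLError as exc:
    # A host that rejects old protocol versions outright (per RFC 8996) lands here.
    print("Handshake rejected:", exc)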
Hi @MothyTim & @stevier5000 --
Thanks for providing additional details. I suspected this is how S3 handles this case. And you are saying there is no way that an AWS S3 customer can configure the behavior of mybucket.s3.amazonaws.com to reject TLS 1.0/1.1 connections altogether?
I'm surprised this hasn't come up yet, because there was a good amount of attention put towards ensuring storage services (and S3 specifically) are allowed to be used for these bulk data download files without a proxy for authorization, so needing a proxy anyhow to terminate TLS would make no sense. But perhaps you are the first to get this far using this method for bulk data files in the (g)(10) context.
As far as our justification for our approach, ONC provided a clarification to the (g)(10) Standardized API criterion in the CCG regarding support for clients that use < TLS 1.2:
Technical outcome – Establish a secure and trusted connection with an application that requests data for system scopes in accordance with the implementation specification adopted in § 170.215(a)(4).
Clarifications:
- Connections below TLS version 1.2 must be denied.
Our interpretation is that 'Connection' here means a protocol-level connection (and not an abstract connection), and thus it made sense to follow the requirements in the referenced current best practice RFC, which describes how servers MUST close those connections instead of accepting them. Note that ONC has not referenced that RFC, but it seems like the logical place for us to look for implementation guidance when writing tests.
It is also a lot easier to check that the connection is immediately closed by the host than it is to evaluate whether the content being sent over TLS 1.1 is real content or just a message indicating a failure. Ease of testing shouldn't have a bearing on how we constrain systems in this context, but it certainly makes testing cheaper and faster, which is a really nice attribute. I also suspect that eventually it will be hard to find maintained client libraries that still allow use of these old protocol versions.
Regardless, it sounds like we could be overly strict here, as you have pointed out, and perhaps all that matters is that none of the content is sent over the TLS 1.1 connection.
Could you submit a Question to ONC's feedback portal asking for further clarification, and feel free to reference this GitHub issue?
If anyone else has experience successfully configuring AWS S3 to pass these tests (or alternatively have had similar problems failing on other commonly used infrastructure providers, forcing a workaround), it would be helpful to hear about in this thread.
@arscan Thanks for your response.
Q: "And you are saying there is no way that an AWS S3 customer can configure the behavior of mybucket.s3.amazonaws.com to reject TLS1.0/1.1 connections altogether?" Answer: No not the we can find - AWS allows at the bucket "layer" with bucket policy. Ironically just last week AWS decided to take action on this for all services: https://aws.amazon.com/blogs/security/tls-1-2-required-for-aws-endpoints/ Unfortunately we must wait a year. Sigh.
Q: "needing a proxy anyhow to terminate TLS would make no sense." I agree it is suboptimal. That being said we have a companion product in our company the uses AWS API gateway to service the requests for the resource URLs and it did not hit this issue That was our original design and we went the more efficient route of signed URLs.
I will submit this information to ONC feedback portal.
I am unclear on the business relationships here. Are you in an organization that maintains responsibility for Inferno? My management wants me to pursue all paths since we are preparing the entire company for 20C Cures certification. I believe you clearly understand the issue at hand. If you are playing the role of responsible party for Inferno, would a call/meeting be helpful in this process?
Thanks, Tim
Ironically just last week AWS decided to take action on this for all services: https://aws.amazon.com/blogs/security/tls-1-2-required-for-aws-endpoints/ Unfortunately we must wait a year. Sigh.
Wow, very timely. "Now is the right time to retire TLS 1.0 and 1.1, because increasing numbers of customers have requested this change to help simplify part of their regulatory compliance..." -- yup! It would be a lot easier for us if this didn't take a year to do.
I am unclear on the business relationships here. Are you in an organization that maintains responsibility for Inferno? My management wants me to pursue all paths since we are preparing the entire company for 20C Cures certification. I believe you clearly understand the issue at hand. If you are playing the role of responsible party for Inferno, would a call/meeting be helpful in this process?
Yes, Inferno is developed and maintained by MITRE, and is sponsored/funded by ONC. When an issue with a test failure is raised, we determine whether it is something that is clearly wrong with the test tool implementation, and if that is the case we fix the tool. In cases where there is ambiguity within a spec, we raise that with the appropriate standards body, which would hopefully get it authoritatively clarified. For times like this, where there is a question about the validity of the test approach that depends on interpretation of policy and the intended outcomes of the regulation, we ask ONC to weigh in.
If you are available, we are hosting our monthly public call for Inferno this Wednesday at 1PM ET; GoToMeeting link here: https://global.gotomeeting.com/join/774619365 (link also in the 'Announcements' box on https://inferno.healthit.gov). We usually cover updates we've made in the most recent release, provide a preview of what can be expected in the next release, and highlight any new issues that have come up that we would like feedback on. I'll put this on the agenda.
@arscan A colleague and I are in the waiting room for the monthly meeting (we believe) and are not able to join, or there is some other issue. We will wait some more and see if it starts up.
Hi @MothyTim -- having technical difficulties over here getting in, thanks for your patience!
k - we will continue to try.
FYI - I tried a bunch of times....then redownloaded and eventually it worked.
Hi @MothyTim --
ONC has provided further clarification on this in the ONC (g)(10) CCG:
TLS version 1.2, or above must be enforced for the appropriate connections
and
Health IT developers are encouraged but not required to follow TLS Best Current Practice (BCP 195) for TLS version enforcement, referenced in section 6.1.0.3 of the HL7® 4.0.1 Fast Healthcare Interoperability Resources Specification (FHIR®) Release 4, October 30, 2019, which recommends TLS 1.2, or higher to be used for all production data exchange and limits support for lower versions of TLS. To meet ONC Certification requirements, Health IT developers must document how the Health IT Module enforces TLS version 1.2, or above to meet the API documentation requirements at § 170.315(g)(10)(viii) and API Transparency Conditions at 45 CFR 170.404(a)(2).
So denying at the connection level is encouraged but not required. We'll update our tests as soon as we can. Thanks for raising the issue to ONC & your patience!
Hi, we are having an issue with passing one test case, 5.3.01, in the Bulk Data Inferno tool.
The implementation we have set up returns AWS S3 signed URLs for the resource links in the Bulk Data Status request response, which are then used for the subsequent fetching of the resources. The AWS S3 bucket has a policy that only permits TLS 1.2. As demonstrated with curl, it properly rejects 1.1 access:
curl '' --tlsv1.1 --tls-max 1.1
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message>...</Error>
Using curl with 1.2 properly downloads the resource data.
But the inferno tool fails the test:
error: | Server incorrectly allowed TLS 1.0 connection.
error: | Server incorrectly allowed TLS 1.1 connection.
Is Inferno only checking the AWS host and not the bucket and its policy?
The HL7 spec clearly states that AWS signed URLs are compliant, since our implementation sets requiresAccessToken to false.
From specs: https://hl7.org/fhir/uv/bulkdata/STU1.0.1/export/index.html
requiresAccessToken: indicates whether downloading the generated files requires a bearer access token. Value SHALL be true if both the file server and the FHIR API server control access using OAuth 2.0 bearer tokens. Value MAY be false for file servers that use access-control schemes other than OAuth 2.0, such as downloads from Amazon S3 bucket URLs or verifiable file servers within an organization's firewall.
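For context, a complete-status response that hands out signed S3 URLs might look like the following sketch; the field names come from the spec quoted above, while the timestamp, request URL, and output URL are placeholders:

{
  "transactionTime": "2021-01-01T00:00:00Z",
  "request": "https://example.com/fhir/Group/123/$export",
  "requiresAccessToken": false,
  "output": [
    {
      "type": "Organization",
      "url": "https://mybucket.s3.amazonaws.com/bulkfhir/GUIDHERE/objectname?keysandsuch"
    }
  ],
  "error": []
}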
Is there an issue with Inferno certification tool not properly handling the AWS S3 scenario for 5.3.01?
All the other tests are passing and the data is properly returned in the subsequent tests using the signed URLs.
Regards, Tim