oasis-tcs / csaf

OASIS CSAF TC: Supporting version control for Work Product artifacts developed by members of TC, including prose specifications and secondary artifacts like meeting minutes and productivity code
https://github.com/oasis-tcs/csaf

Proposal: Guidance on how and where to find CSAF documents #152

Closed tschmidtb51 closed 2 years ago

tschmidtb51 commented 3 years ago

We should provide some guidance on how and where a publisher should provide security advisories. This will also help users find the security advisories at a publisher's / vendor's website. Moreover, it aids in making the format machine-processable.

Therefore, I suggest defining several ~conformance targets~ levels:

Update: I removed CSAF translator and CSAF modifier from here and pasted them in #140 since they are based on the definition of CSAF producers. Therefore, they are not related to the question of where to find CSAF documents.

Edit: I removed the term conformance targets as there was some confusion. To clarify: This is independent from #140 - it does not specify anything at the document level.

tschmidtb51 commented 3 years ago

This should also be linked to #140 .

tschmidtb51 commented 3 years ago

Here are some ideas for the requirements in general (which should be divided across the levels as stated above):

  1. A publisher must provide valid CSAF documents. The file name shall be the value of document/tracking/id in lower case, where non-printable characters and non-ASCII characters have been replaced with the underscore character. The file extension must be .json. (See also: 2. Example: File names; a sketch of the conversion follows after this list.)

  2. The URL https://[www.]domain.tld/.well-known/security/data/csaf/latest/ must link to a folder where CSAF documents which use the latest version of CSAF are located. CSAF documents of version <version> should also be accessible at https://[www.]domain.tld/.well-known/security/data/csaf/<version>/. The CSAF documents must be located within folders named <YYYY>/ where <YYYY> is the year given in the value of document/tracking/initial_release_date. The index.txt file within .well-known/security/data/csaf/latest/ must provide a list of the file names of all CSAF documents located in the sub-directories. Directory listing should be used. Within .well-known/security/data/csaf/latest/ there should also be an rss.xml file which provides an RSS feed with the latest changes of the CSAF documents located in the sub-directories. The file changes.csv must contain the file name as well as the value of document/tracking/current_release_date without a heading; lines must be sorted by the latter, newest first. Example: content of changes.csv

    2020/example_company_-_2020-yh4711.json, "2020-07-01T10:09:07Z"
    2018/example_company_-_2018-yh2312.json, "2020-07-01T10:09:01Z"
    2019/example_company_-_2019-yh3234.json, "2019-04-17T15:08:41Z"
    2018/example_company_-_2018-yh2312.json, "2019-03-01T06:01:00Z"

    All files shall have a cryptographic hash (e.g. SHA-512 or SHA-3) as well as an OpenPGP signature file, both provided under the same file name extended by the appropriate extension. Example: File names

    Value of document/tracking/id: "Example Company - 2019-YH3234"
    File name of CSAF document: example_company_-_2019-yh3234.json
    File name of hash file: example_company_-_2019-yh3234.json.sha512
    File name of signature file: example_company_-_2019-yh3234.json.asc
  3. The publisher should locate other formats of the advisories in adjacent folders where csaf in the path is replaced by the corresponding format name:

    CSAF document: https://[www.]domain.tld/.well-known/security/data/csaf/latest/2019/example_company_-_2019-yh3234.json
    Text file: https://[www.]domain.tld/.well-known/security/data/txt/2019/example_company_-_2019-yh3234.txt
    HTML file: https://[www.]domain.tld/.well-known/security/data/html/2019/example_company_-_2019-yh3234.html
    PDF file: https://[www.]domain.tld/.well-known/security/data/pdf/2019/example_company_-_2019-yh3234.pdf
    CVRF 1.2 document: https://[www.]domain.tld/.well-known/security/data/cvrf/1.2/2019/example_company_-_2019-yh3234.xml

    This applies only to those formats which are provided by the publisher. Edit: For HTML files, the URL might be an alias or redirection.

  4. The publisher may include a DNS record csaf.data.security.domain.tld where the path /latest/ on that server provides the same result as described above.

  5. The publisher must provide its latest OpenPGP keys used to sign advisories in a directory named after their fingerprint. The URL https://[www.]domain.tld/.well-known/security/data/pgp/latest must link to the most recent one. It should be signed by the previous one to establish a chain of trust. The OpenPGP keys may also be included in an appropriate DNS record.

  6. CSAF documents must not be available at sites which do not provide transport layer security.

  7. CSAF providers shall provide some meta information (publisher, contact details, PGP key for contact, API support, agreement to collection by an aggregator, ... maybe also CVD policy) at https://[www.]domain.tld/.well-known/security/data/metadata.json.

  8. CSAF documents which are labeled TLP:WHITE must be freely accessible.

  9. CSAF documents which are labeled TLP:AMBER or TLP:RED must be protected from unauthorized access and should not be stored under https://[www.]domain.tld/.well-known/security/data/csaf/latest/ but under https://[www.]domain.tld/.well-known/security/tlp/data/csaf/latest/.
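
A minimal sketch (Python) of the file name conversion from requirement 1 and the companion file names from requirement 2. The exact character handling is an assumption: the rule above only names non-printable and non-ASCII characters, while the example file names suggest that whitespace is replaced as well.

    import hashlib

    def csaf_filename(tracking_id: str) -> str:
        """Derive the CSAF file name from document/tracking/id (requirement 1).

        Assumption: besides non-printable and non-ASCII characters, whitespace
        and other special characters are replaced, as the example file names
        suggest.
        """
        allowed = set("abcdefghijklmnopqrstuvwxyz0123456789.-")
        stem = "".join(ch if ch in allowed else "_" for ch in tracking_id.lower())
        return stem + ".json"

    def companion_file_names(csaf_name: str) -> dict:
        """File names of the hash and signature files (requirement 2)."""
        return {"hash": csaf_name + ".sha512", "signature": csaf_name + ".asc"}

    def sha512_hex(path: str) -> str:
        """Hex digest for the .sha512 companion file, over the raw file bytes."""
        with open(path, "rb") as f:
            return hashlib.sha512(f.read()).hexdigest()

    # csaf_filename("Example Company - 2019-YH3234")
    # -> "example_company_-_2019-yh3234.json"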

tschmidtb51 commented 3 years ago

To provide better insights please find below a table with the requirements and the roles:

| Requirement | CSAF publisher | CSAF provider | CSAF trusted provider | CSAF aggregator |
| --- | --- | --- | --- | --- |
| publish valid CSAF documents (1a) | must | must | must | must |
| file name restriction document/tracking/id.json (1b) | must | must | must | must |
| URL .well-known/security/data/csaf/latest/ (2a) | may | must | must | must (only own advisories, must not others) |
| URL with <version> (2b) | may | should | should | should (only own advisories, must not others) |
| <YYYY> folder per year (2c) | may | must | must | must |
| list as index.txt file (2d) | should | must | must | must |
| directory listing (2e) | may | should | should | should |
| RSS feed (2f) | may | may | should | should |
| changes.csv (2g) | may | must | must | must |
| integrity & signature (2h) of own | may | may | must | must |
| corresponding path (3) | may | should | should | should |
| DNS path (4) | may | may | may | may |
| provide own PGP keys (5a) | may | may | must | must |
| own PGP key URL (5b) | may | may | must | must |
| chain of trust (5c) | may | may | should | should |
| DNS own PGP (5d) | may | may | may | may |
| TLS enforced (6) | must | must | must | must |
| own metadata.json (7) | may | may | must | must |
| TLP:WHITE (8) | must | must | must | must |
| >= TLP:AMBER (9) at different URL | should | must | must | must |
| >= TLP:AMBER (9) at specified URL | may | must | must | must |
| other advisories under .well-known/security/aggregator/<issuer>/ | must not | must not | should not | must |
| own advisories under .well-known/security/aggregator/<aggregator>/ | must not | must not | should not | must |
| provide API | may | may | may | may (was: ~must~) |
| integrity & signature of others | should not | should not | should not | should |
| provide keys and metadata for others | should not | should not | should not | should |
| callback & push to inform about new advisories | may | may | may | should |
| provide list of items changed | may | may | should | must |
| provide list of all known issuers of advisories (name + url) | must not | must not | must not | must |

Edit: The numbers in brackets refer to the description in https://github.com/oasis-tcs/csaf/issues/152#issuecomment-733804674

Edit: Removed the mandatory API and made it optional. Edit: The aggregator now must provide a list of all advisory issuers it knows.

tschmidtb51 commented 3 years ago

Please also see the email thread at the OASIS Open mailing list.

tschmidtb51 commented 3 years ago

I think there is a misunderstanding, which I would like to clear up:

The different levels "CSAF publisher", "CSAF provider" and "CSAF trusted provider" were not meant to dictate to the issuer of a CSAF document what rules they have to follow. They should provide some guidance on what we think is the easiest way to help the customer use the CSAF documents. They are not meant to put an additional burden on issuers just to add convenience for the CSAF aggregators. The CSAF aggregators are only there to support the adoption of the standard: they provide information in a standardized way so that it is usable for the end users. However, end users can always collect the advisories from the issuers directly.

The requirement sets given in "CSAF publisher", "CSAF provider" and "CSAF trusted provider" should not hinder any company in adopting the standard. They won't be referenced in the document-level conformance target section, so you can always provide valid CSAF documents without fulfilling any of the requirement levels below.

Please find the reasoning below:

CSAF publisher: This should be an easy step for anyone wanting to adopt the standard (and it could be used as a marketing instrument as well). So I think the requirements are low - correct me if I'm wrong:

I think anyone who wants to adopt the standard can fulfil these requirements. If you already have an infrastructure for publishing advisories, you will most likely easily tick all the boxes.

CSAF provider: This set of requirements is meant to aid in automation. I know that many IT and some OT (Operational Technology) vendors have been publishing advisories for years and have their own structure and URLs. However, I've seen during CVD processes that not all vendors are prepared to deal with vulnerabilities in their products. Therefore, I think we should take the opportunity to lead them directly to a set of requirements that allows native automation. Moreover, when I look at the German community of asset owners, they would like to have such a URL format to be able to download the advisories automatically and check them against their asset lists. At the moment, this is mostly a manual process, which is resource- and time-consuming.

CSAF trusted provider: This set of requirements is meant to add security to the automation. It places the end users in a position where they no longer have to trust a third party that the advisory wasn't changed - they can check it themselves. Edit: Since this is about signatures and checksum verification, "Verifiable" may be more appropriate than "trusted".

The CSAF aggregator is a special case: You might use it to collect all advisories from one place - a one-stop shop. Nevertheless, what I think is more important: the aggregator does all the management around finding new CSAF sources and providing them. It also provides the CSAF documents from CSAF publishers at a standardized location, which makes it easier for the end users to adopt the standard.

I strongly disagree with the idea of a central aggregator of CSAF documents because it comes with all sorts of problems: First of all: We add a single point of failure into the system. Then: Who is responsible? Who decides which vendor gets listed? Who maintains it? Is the content legal in all states? What about takedown-requests? What about censorship? I just named a few. Therefore, I think we should stick to a decentralized concept.

I totally agree that we need to make the standard approachable to new vendors and projects. However, I think we should not lose sight of the end users. If the standard requires a lot of work on their side, it will not be adopted either, since the market would be missing. BSI is willing to support adoption by making proof-of-concepts and tools available.

As mentioned on slide 16, these are open questions that, I think, we should not address in CSAF 2.0. SBOM might be able to solve the unique product identifier problem at some point. However, that problem is a completely different story.

sthagen commented 3 years ago

CSAF trusted provider: This set of requirements is meant to add security to the automation. It places the end users in a position where they don't have to trust a third-party anymore that the advisory wasn't changed. They can check it themselves. Edit: Since it’s with regards to signatures and checksum verifications, probably “Verifiable” may be more appropriate than “trusted“.

I still wonder if we need some document level specification for that „role“.

  1. the CSAF JSON is hashed and signed - this requires at least a convention to be automatable on the consumer side - do we need a new MIME type?
  2. the signature „means“ something trustworthy for the consumer - there I understand the role adjectives trusted and verifiable
  3. edit: do we want to provide hashes and signatures on canonicalized JSON, or do we assume mere frozen accidentals (I guess the latter)? Transport of text crossing end-of-line convention boundaries might invalidate the signature ...

In summary, I think that for this additional end-to-end „authenticated integrity capability" we require changes to the document schema, or at least some processing specification to which we can then attach the conformance clauses.
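
To illustrate the difference raised in point 3, here is a minimal Python sketch (not part of any proposal) comparing a digest over the raw file bytes ("frozen accidentals") with a digest over a canonicalized serialization; json.dumps with sorted keys only stands in for a real canonicalization scheme, and the file name is taken from the example above.

    import hashlib
    import json

    # File name taken from the example above.
    with open("example_company_-_2019-yh3234.json", "rb") as f:
        raw = f.read()

    # Digest over the frozen accidentals: any change in whitespace or line
    # endings during transport changes the digest and breaks the signature.
    digest_raw = hashlib.sha512(raw).hexdigest()

    # Digest over a canonicalized form: whitespace and key order no longer
    # matter, but producer and consumer must agree on the exact rules.
    canonical = json.dumps(json.loads(raw), sort_keys=True,
                           separators=(",", ":")).encode("utf-8")
    digest_canonical = hashlib.sha512(canonical).hexdigest()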

mprpic commented 3 years ago

@tschmidtb51 Thank you for the additional details (both here and in our conversation yesterday)! As long as the different roles are communicated as guidance for any CSAF adopters, this will be a nice way to get them up to speed on best practices. The proposal for an aggregator also makes sense and I hope some national CERTs will pick this up.

I still have some reservations about URL requirements (https://[www.]domain.tld/.well-known/security/data/csaf/latest/) because I know how hard it is to change existing infrastructure of a website at a large company. This may be easier for adopters who are just now starting to publish security metadata but not so much for vendors who have been doing it for over a decade and have troves of customers expecting certain URL patterns. I wonder if that requirement could be relaxed to supporting redirects to existing locations (obviously any aggregators would have to account for that and follow redirects in their HTTP requests).

Related to this is also the fact that some vendors may implement serving CSAFs not as documents but as REST API endpoints. Red Hat too publishes CVRF as both individual files and API responses:

Expecting specific API endpoints to conform to a well-known URL would then make them inconsistent with the rest of the API URLs and most likely difficult to implement.

tlimmer commented 3 years ago

I still have some reservations about URL requirements (https://[www.]domain.tld/.well-known/security/data/csaf/latest/) because I know how hard it is to change existing infrastructure of a website at a large company. This may be easier for adopters who are just now starting to publish security metadata but not so much for vendors who have been doing it for over a decade and have troves of customers expecting certain URL patterns. I wonder if that requirements could be relaxed to supporting redirects to existing locations (obviously any aggregators would have to account for that and follow redirects in their HTTP requests).

I agree here. I would go even further, as I do not see the advantage of having a fixed path inside the URL - the main domain of a company is usually not clear. So it will always need to be kept in a database, and the difference between storing domain.tld or https://psirt.domain.tld/advisories/csaf/ is insignificant. The only important aspect, from my point of view, is that this URL never or very rarely changes.

tschmidtb51 commented 3 years ago

I still have some reservations about URL requirements (https://[www.]domain.tld/.well-known/security/data/csaf/latest/) because I know how hard it is to change existing infrastructure of a website at a large company. This may be easier for adopters who are just now starting to publish security metadata but not so much for vendors who have been doing it for over a decade and have troves of customers expecting certain URL patterns. I wonder if that requirements could be relaxed to supporting redirects to existing locations (obviously any aggregators would have to account for that and follow redirects in their HTTP requests).

@mprpic: I see your point. However, if we specify that this could be a redirect, then we need to make sure that it works the same way as the URL which does not redirect. Otherwise, we make it nearly impossible for the end user to automate things. What do you think about using a rewrite instead? See an example below (NGINX syntax):

location /.well-known/security/data/csaf/latest/ {
    # Map the well-known path onto the existing advisory location internally;
    # the client-facing URL does not change (no redirect is sent).
    rewrite ^/.well-known/security/data/csaf/latest/(.*)$ /path/to/advisories-and-listings/$1 break;
    # Deny any request in this location that was not rewritten.
    return  403;
}

If we add this to the standard, companies can still use their original patterns. For Red Hat it would look like this (assuming that you publish the security advisories under https://www.redhat.com/security/data/csaf/<YYYY>/<document/tracking/id>.json)

location /.well-known/security/data/csaf/latest/ {
    rewrite ^/.well-known/security/data/csaf/latest/(.*)$ /security/data/csaf/$1 break;
    return  403;
}

A request to https://www.redhat.com/.well-known/security/data/csaf/latest/2021/rhsa-2021-0250.json would be internally rewritten to https://www.redhat.com/security/data/csaf/2021/rhsa-2021-0250.json.

Would that be a reasonable compromise?

tschmidtb51 commented 3 years ago

@tlimmer: Please see the suggested compromise above.

I agree here. I would go even further, as I do not see the advantage of having a fixed path inside the URL - the main domain of a company is usually not clear. So it always will need to be kept in a database, and the difference between storing domain.tld or https://psirt.domain.tld/advisories/csaf/ is insignificant. The only important aspect from my point of view should be, that this URL is never or very rarely changed.

I disagree. The whole point of the URL question is to aid in automation. As an end user, I can simply scan all domains to see where security advisories are published as CSAF documents. Therefore, you can put in a list of your vendors and, once they start to support CSAF, you get the files as soon as you request the URL. No one has to wait for the vendor to announce it (sometimes the relevant people won't see the announcement at all) or until the vendor gets listed at a 3rd party website. Moreover, it adds trust: if you request that URL, you can be sure that it was actually set up by the issuer of the advisory (or at least by someone with access to the issuer's web server - and if you can't trust them, how could you trust the advisory you download from a "non-standard" URL at that website?). How can an end user be sure that a URL found on a 3rd party website is actually legit? With the well-known URL approach, there is no need to trust a 3rd party.
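
As a rough illustration of that scan (not part of the proposal), an end user could probe the well-known location for every vendor on their list; the domains below are placeholders:

    import urllib.request

    # Hypothetical vendor list maintained by the end user.
    vendors = ["example.com", "psirt.example.org"]

    for domain in vendors:
        url = f"https://{domain}/.well-known/security/data/csaf/latest/index.txt"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                names = resp.read().decode("utf-8").splitlines()
            print(f"{domain}: {len(names)} CSAF documents listed")
        except OSError:
            # Covers connection errors, timeouts and HTTP errors
            # (URLError is a subclass of OSError).
            print(f"{domain}: nothing found at the well-known location")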

sthagen commented 3 years ago

Trust via URL path parts makes me feel dizzy 😜 If I am Mallory, helping small business software vendors / open source projects on, say, GitLab maintain their web presence with a valid cert because I can do HTML, CSS, and JS just fine, I could set up that trust path without telling them. When the owners do not know of the convention, what does the presence of such a fixed URL path mean for the consumers of advisories they retrieve from there?

I see this offering as guidance for implementers, making it easier for consumers during the discovery phase, but I would not associate trust with it.

tlimmer commented 3 years ago

Some answers to the discussion about the fixed URL path:

tlimmer commented 3 years ago

I would suggest another alternative that might fit quite well: making use of the IETF draft for security.txt, which already offers a standard way of retrieving vulnerability-related information from a vendor's website (https://tools.ietf.org/html/draft-foudil-securitytxt-10). The file must be placed by web-based services at https://example.com/.well-known/security.txt, and the format is extensible. We could add an additional field there, such as:

CSAF: https://psirt.domain.tld/advisories/csaf/

What do you think?
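
A minimal sketch of how a consumer could pick the CSAF location out of security.txt, assuming the proposed (not yet registered) CSAF field and an illustrative domain:

    import urllib.request

    # The "CSAF" field is only proposed in this thread; the domain is illustrative.
    url = "https://example.com/.well-known/security.txt"

    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")

    csaf_locations = [
        line.split(":", 1)[1].strip()
        for line in text.splitlines()
        if line.lower().startswith("csaf:")
    ]
    print(csaf_locations)  # e.g. ["https://psirt.domain.tld/advisories/csaf/"]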

sthagen commented 3 years ago

I would suggest another alternative that might fit quite well: making use of the IETF Draft for security.txt, that already offers a standard way of retrieving vulnerability-related information from a vendor's web site (https://tools.ietf.org/html/draft-foudil-securitytxt-10). The file must be placed by web-based services in the location https://example.com/.well-known/security.txt, and the format is extensible. We could add an additional field in there, such as:

CSAF: https://psirt.domain.tld/advisories/csaf/

What do you think?

This is Stefan: I second this proposal.

tschmidtb51 commented 3 years ago

We could add an additional field in there, such as:

CSAF: https://psirt.domain.tld/advisories/csaf/

In general, I like the idea - we should not reinvent the wheel. However, at the moment I struggle to identify the implications:

@tlimmer: Did you already check whether the people specifying the "security.txt" are open to this idea? How would that impact our schedule? The draft you mentioned expired today...

nightwatchcyber commented 3 years ago

The draft won't expire while it is under IESG review. However, we cannot add it to the draft as it is today, this would have to wait a few weeks/months until the IANA registry is created and the draft is published as an RFC.

tlimmer commented 3 years ago

The draft won't expire while it is under IESG review. However, we cannot add it to the draft as it is today, this would have to wait a few weeks/months until the IANA registry is created and the draft is published as an RFC.

@nightwatchcyber : Thanks for answering directly!

It's important to know that you are open to the idea, and we will get the opportunity to add our own field to the document. So we will wait until the IANA registry is created!

mprpic commented 3 years ago

One alternative solution for a lot of the requirements noted in https://github.com/oasis-tcs/csaf/issues/152#issuecomment-734960060 is to recommend the use of the ROLIE protocol for the discovery and publication of CSAF documents:

https://tools.ietf.org/html/rfc8322

In simple words, the protocol defines a way to publish and consume a feed of all security-relevant documents from a vendor. The discovery of the feed file itself is still an unsolved problem, but a proposal exists to use DNS Service Discovery for this:

https://tools.ietf.org/html/draft-banghart-mile-rolie-discovery-00

The following tool exists to generate and consume ROLIE feeds:

https://github.com/rolieup/golie

Red Hat currently publishes a ROLIE feed (in both XML and JSON) for all OVAL definition files and SCAP content:

https://www.redhat.com/security/data/oval/v2/ https://www.redhat.com/en/blog/red-hat-adopts-rolie-protocol-automated-exchange-security-compliance-assets
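
A minimal sketch (not part of the proposal) of how a consumer might list the entries of a ROLIE feed, assuming the XML/Atom serialization defined in RFC 8322; the feed URL is illustrative:

    import urllib.request
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"

    # Illustrative URL; a real consumer would obtain it via discovery.
    feed_url = "https://example.com/security/rolie/csaf/feed.xml"

    with urllib.request.urlopen(feed_url) as resp:
        root = ET.fromstring(resp.read())

    # A ROLIE feed is an Atom feed: each entry has a title and a link
    # whose href points at the actual document (here: a CSAF advisory).
    for entry in root.findall(ATOM + "entry"):
        title = entry.findtext(ATOM + "title")
        link = entry.find(ATOM + "link")
        print(title, link.get("href") if link is not None else None)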

I'd be happy to prepare a demo of the protocol and the tooling that exists to support it if there is interest in this approach. @santosomar, could you add this to the agenda for the next TC meeting?

tschmidtb51 commented 2 years ago

I suggest closing the ticket.