GSA / fedramp-automation

FedRAMP Automation
https://www.fedramp.gov/using-the-fedramp-oscal-resources-and-templates/

Check for CUI marking on all digital authorization package content #836

Closed: aj-stein-gsa closed this issue 2 weeks ago

aj-stein-gsa commented 1 month ago

Constraint Task

As a digital authorization package maintainer, I want to know that every document in my package has the correct data classification selected, or else receive a passback error, so that I can meet FedRAMP requirements, avoid passbacks, and properly identify data security requirements while doing so.

Intended Outcome

Goal

Identify package documents that require a CUI marking (that is, all document types except catalogs and profiles, which are inherently not controlled and are meant to be distributed freely)

Syntax

VALID:

<system-security-plan>
  <metadata>
    <prop name="marking" value="cui"/>
  </metadata>   
</system-security-plan>
<assessment-plan>
  <metadata>
    <prop name="marking" value="cui"/>
  </metadata>   
</assessment-plan>

INVALID:

<system-security-plan>
  <metadata>
    <!-- no marking prop at all -->
  </metadata>   
</system-security-plan>
<assessment-plan>
  <metadata>
    <!-- no marking prop at all -->
  </metadata>   
</assessment-plan>

Syntax Type

This is optional core OSCAL syntax.

Allowed Values

There are only NIST-defined allowed values.

Metapath(s) to Content

/(assessment-plan|assessment-results|plan-of-action-and-milestones|system-security-plan)/metadata/prop[@name="marking"]/@value
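
For illustration, an external constraint over this metapath could look roughly like the following sketch. The metaschema-meta-constraints structure mirrors the format used for this repository's external constraints (see #875), but the namespace, test expression, and message wording here are illustrative, not the shipped constraint:

<metaschema-meta-constraints xmlns="http://csrc.nist.gov/ns/oscal/metaschema/1.0">
  <context>
    <!-- Target the metadata of each document type that requires the marking;
         catalogs and profiles are deliberately excluded. -->
    <metapath target="/assessment-plan/metadata"/>
    <metapath target="/assessment-results/metadata"/>
    <metapath target="/plan-of-action-and-milestones/metadata"/>
    <metapath target="/system-security-plan/metadata"/>
    <constraints>
      <!-- Illustrative check: a marking prop with the value "cui" must be present. -->
      <expect id="marking" target="." level="ERROR"
              test="exists(prop[@name='marking' and @value='cui'])">
        <message>Every digital authorization package document must declare a CUI marking in its metadata.</message>
      </expect>
    </constraints>
  </context>
</metaschema-meta-constraints>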

Purpose of the OSCAL Content

Dependencies

N/A

Acceptance Criteria

Other information

Part of epic #804.

aj-stein-gsa commented 2 weeks ago

@DimitriZhurkin please review the checklist tomorrow and be ready to confirm whether this issue, now that its PRs are closed, is ready to ship.

DimitriZhurkin commented 2 weeks ago

Technically, unit tests exist only for the SSP. Work on the SAP, SAR, and POA&M hasn't started yet.

Rene2mt commented 2 weeks ago

@aj-stein-gsa and @DimitriZhurkin the constraint itself is the same (https://github.com/GSA/fedramp-automation/pull/875/files#diff-d8b799dfbfbf5f5ac64efe9def7e0cf21d38ded8ebc763b4c266aaa59060f63e) across the models, but as Dimitri noted, we'd have to update the unit tests and create additional sample test files for the SAP, SAR, and POA&M. The unit test file would look something like this for marking-PASS.yaml:

# Driver for the valid marking constraint unit test.
test-case:
  name: The valid marking constraint unit test.
  description: Test that the SSP, SAP, SAR, and POA&M metadata contains the "marking" property.
  content: 
    - ../content/ssp-all-VALID.xml
    - ../content/sap-all-VALID.xml
    - ../content/sar-all-VALID.xml
    - ../content/poam-all-VALID.xml
  expectations:
    - constraint-id: marking
      result: pass

And we'd have to do something similar for marking-FAIL.yaml, sketched below. Should we do that now, or scope this issue to just the SSP, which is already done, and create separate issues for adding unit test vectors for the SAP, SAR, and POA&M?
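
For concreteness, the failing driver would presumably mirror the passing one, with INVALID sample files and a fail expectation; the content file names below are assumed, not existing files:

# Driver for the invalid marking constraint unit test (sketch mirroring marking-PASS.yaml).
test-case:
  name: The invalid marking constraint unit test.
  description: Test that SSP, SAP, SAR, and POA&M metadata missing the "marking" property fails the constraint.
  content:
    - ../content/ssp-all-INVALID.xml  # assumed file name
    - ../content/sap-all-INVALID.xml  # assumed file name
    - ../content/sar-all-INVALID.xml  # assumed file name
    - ../content/poam-all-INVALID.xml # assumed file name
  expectations:
    - constraint-id: marking
      result: fail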

aj-stein-gsa commented 2 weeks ago

> Should we do that now, or scope this issue to just the SSP, which is already done, and create separate issues for adding unit test vectors for the SAP, SAR, and POA&M?

What if we consider the scope of work to be everything but the constraint @test logic, that is, docs and examples, which makes the work effort clearer and faster? I don't have a strong opinion on this one, so I can just pick if others do not. 😄

aj-stein-gsa commented 2 weeks ago

Per conversation in a backlog meeting at the time of this writing: for low-risk cross-model //metadata constraints whose requirements are also covered in the generic documentation for cross-model requirements, we can accept not explicitly adding per-model tests. I am therefore happy to accept the included tests and examples as-is, and the work is ready to release.