opencontrol / atos

Introduction to ATOs
https://atos.open-control.org
Creative Commons Zero v1.0 Universal

Questions (FAQs?) after a first read #21

Closed: cmoscardi closed this issue 5 years ago

cmoscardi commented 5 years ago

Hi, I took a look over the https://atos.open-control.org/ page. It was super helpful! Here are the main open questions I still have. I think they fall on the really basic/high-level overview side of things... I actually feel like I have some understanding of the SSP and the mechanics of the process itself.

For context: I came across this guide by way of @afeld ... my program team at Census is confused about exactly when/how we need an ATO for a couple of projects. We hadn't gotten a straight answer on this question, though we were recently linked to a policy document that described some of the ATO types in Commerce. It helped clarify 3. below, because Commerce has something called a "time-bound" ATO, potentially for pilot purposes.

  1. At what point in the development of a system do you need an ATO? You can develop/prototype internally without one? Are you relying on some other ATO (or perhaps a platform-level ATO) during that time? Does a system prototype that involves confidential/sensitive data in the prototyping phase (e.g. a machine learning model's specs derived from sensitive data) change these parameters?
  2. I didn't fully understand the "when to start" tip (tip 1 under "General"). Specifically, what is a "security boundary"? The NIST glossary definition (which is currently down for me, so can't link) is hilariously unclear. This confusion may also be due to my not understanding when exactly you need an ATO.
  3. How does the "Impact" of a system change as it scales? For example, at Census we're thinking of initially piloting a new data ingestion tool in a low-stakes way. Availability/Integrity almost don't matter in that case, since the site could go down or we could completely lose the data and be fine, as long as we don't cause a data breach. However, when we go to "production" with this (in the sense of full-scale survey production), obviously those parameters change significantly.
  4. Someone here at Census mentioned a cost in the tens of thousands of dollars to go through the ATO process. I'd say that detail on that cost (unless it's too agency-specific) would be helpful.
brasky commented 5 years ago

Here are some responses to your questions, but hopefully someone can add more based on your specific context:

  1. In general, you need to authorize a system when it is going to be relied on to process, store, or interact with federal data. So if you're developing a prototype with non-production data, then an authorization isn't required; before you accept federal data into the system, you need an ATO. Starting to prototype with sensitive data is generally not a good idea.

  2. A security boundary is also called an authorization boundary or accreditation boundary. It defines the edge of your product and where your responsibility to protect federal data starts/ends. Any system outside of that which you communicate with would be a "System interconnection" and is generally called out in the system security plan (SSP). https://csrc.nist.gov/glossary/term/accreditation-boundary

  3. I don't have a ton of experience in your space with this particular aspect, but I imagine it's a similar idea: you should assess your system at the impact level you calculate it to be when it's fully operational. See https://csrc.nist.gov/glossary/term/impact and specifically FIPS 199. Your impact will be the high water mark of your CIA (confidentiality, integrity, availability) impact. So if you have low confidentiality, low integrity, and moderate availability impact, it should be tested at a moderate impact level (see the sketch after this list). Take this all with a grain of salt, as this is how it works in the cloud service provider world.

  4. I have no experience with the cost of the ATO process for an agency; hopefully someone else can chime in on 3 and 4. For a CSP it can be way more than tens of thousands, but I imagine it's significantly cheaper for an agency.
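
To make the high water mark idea in point 3 concrete, here's a minimal sketch (my own illustration, not from the guide; the function and variable names are made up) of how the overall FIPS 199 impact level falls out of the individual CIA ratings:

```python
# Minimal sketch of the FIPS 199 "high water mark": the overall impact level
# is simply the highest of the confidentiality, integrity, and availability
# (CIA) ratings. Names here are illustrative, not official terminology.

LEVELS = {"low": 1, "moderate": 2, "high": 3}

def overall_impact(confidentiality: str, integrity: str, availability: str) -> str:
    """Return the high-water-mark impact level for a system."""
    return max((confidentiality, integrity, availability),
               key=lambda level: LEVELS[level])

# Example from point 3: low C, low I, moderate A -> assess at moderate.
print(overall_impact("low", "low", "moderate"))  # moderate
```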

trevorbryant commented 5 years ago

Hey Christian, there's a lot of information below; I have to break this down for myself to understand it better. Your agency should have training on how the Risk Management Framework (RMF) and Assessment and Authorization (A&A; the ATO process) are implemented. They're required to provide it.

Remember that the ATO exists to meet legal and regulatory requirements: it shows that the project meets the minimum security controls for that system's categorization (FISMA low/moderate/high).

At what point in the development of a system do you need an ATO?

Determine whether an ATO is needed during the management and planning phase of the project. If the system is planned to go to production, the various assessment requirements (FIPS 199/200, NIST SP 800-37) need to be added to its planning and timeline.

You can develop/prototype internally without one?

If the system is a prototype with no plans to go to or be used in production, then you can argue for avoiding the requirements of the ATO. If it's prototyping something to be used in the future, though, that's a tough argument. It's always an argument. Most Information System Security Officers (ISSOs) are risk averse and will tell you to do the ATO. Even if you can get the business unit to state otherwise, it'll still be an argument.

Are you relying on some other ATO (or perhaps a platform-level ATO) during that time?

Generally, all systems rely on other ATOs of some sort. These are common with General Support Systems (GSS), i.e. systems that manage the enterprise: VMware infrastructure, the Microsoft ecosystem, PKI systems, etc. Now, with FedRAMP, application-specific solutions can rely on the CSP's ATO.

Does a system prototype that involves confidential/sensitive data in the prototyping phase (e.g. a machine learning model's specs derived from sensitive data) change these parameters?

To provide emphasis, DOD and members of the intelligence community prototype all the time. If it's in production, where the sensitive data is located, then somebody has to accept the risk, and that's what the ATO is for. If there's only sanitized data in test, it's back to the argument detailed above; sometimes an ATO isn't needed.

I didn't fully understand the "when to start" tip (tip 1 under "General"). Specifically, what is a "security boundary"? The NIST glossary definition (which is currently down for me, so can't link) is hilariously unclear. This confusion may also be due to my not understanding when exactly you need an ATO.

800-37 details when to begin, and it's back to the start of the SDLC with management and planning. In the past the term was "system boundary"; now it seems to be "authorization boundary". It's the parts, bits, and components of the system to be authorized to operate in production. This doesn't include existing systems that are already authorized, even if there's interconnectivity somewhere.

How does the "Impact" of a system change as it scales? For example, at Census we're thinking of initially piloting a new data ingestion tool in a low-stakes way. Availability/Integrity almost don't matter in that case, since the site could go down or we could completely lose the data and be fine, as long as we don't cause a data breach. However, when we go to "production" with this (in the sense of full-scale survey production), obviously those parameters change significantly.

Without knowing much about the project, I'd suggest talking to the architecture team(s) and working with them to determine the impact of the system as it scales. The Configuration Management teams are also required to have this knowledge as part of their role expectations. Hopefully capacity planning is a requirement. Also, take a look at your agency's Privacy Impact Assessment (PIA) documents.

Someone here at Census mentioned a cost in the tens of thousands of dollars to go through the ATO process. I'd say that detail on that cost (unless it's too agency-specific) would be helpful.

Unfortunately, this is true almost everywhere. I've seen agencies budget a small amount for Assessment & Authorization and go way over budget, and others budget a flat rate enterprise-wide. I've seen ATOs cost as much as $100,000 USD. If an agency has 92 systems, that's $9.2 million annually, assuming each system only gets authorized for 12 months. Some military branches have well over 3,000 information systems. As a former auditor, assessor, and certifier, I've always referred to Information Assurance as a mandatory sunk cost. That's why including security requirements early in the project is important. The #1 rule of FISMA is cost-effective security.
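
As a quick back-of-the-envelope check of those numbers (using only the example figures from this comment, not real budget data):

```python
# Back-of-the-envelope check of the example figures above; these are the
# numbers cited in the comment, not actual agency budget data.
cost_per_ato = 100_000        # upper-end cost of one ATO, in USD
systems = 92                  # example portfolio size
authorization_months = 12     # each authorization assumed to last one year

annual_cost = cost_per_ato * systems * (12 / authorization_months)
print(f"${annual_cost:,.0f} per year")  # $9,200,000 per year
```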

trevorbryant commented 5 years ago

Closing, but we can re-open if there are further questions.