Open bact opened 9 months ago
We took our definitions of the risk levels from https://ec.europa.eu/docsroom/documents/17107/attachments/1/translations/en/renditions/pdf, where they are fairly precise about what they mean.
The terminology section (2.1) introduces the risk level terms we've used. Table 2 describes the abstract level definitions, which correspond to the risk levels defined in 2.1. Table 4 makes it explicit when each of the defined risk levels should be used.
In the EU AI act is there such a table for defining when unacceptable, high, limited, and minimal should be used?
My guess at this point is:

- Unacceptable == serious
- High == high
- Limited == medium
- Minimal == low
Not sure why they didn't align with the EU risk definitions and instead created their own terms.
That being said, we need to clean up our definitions in the specification to be closer to those in Table 2, I think, so it's not so ambiguous to just have keywords on their own.
Thanks Kate. I will try to provide some further information here so people can give more of their thoughts.
There is no table in the EU AI Act comparable to the one in the EU General Risk Assessment Methodology (Figure 4).
The risk level in the EU General Risk Assessment Methodology is a combination of 1) the severity of harm and 2) the probability (likelihood) of harm.
The EU AI Act draft takes a slightly different approach to the "calculation" of risk.
The EU AI Act, like some other EU legislation, is based on the precautionary principle. Under this principle, even if the likelihood of harm is low (or unknown), the risk can still be considered unacceptable if the severity of harm is high enough (in the view of EU values).
For example, negative-effect social scoring (severity "4") of 448 people in a 448-million-people EU (likelihood "1/1,000,000") would sit at the bottom of a probability-based risk table, yet the EU AI Act prohibits the practice outright as an unacceptable risk.
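The severity-probability combination in the General Risk Assessment Methodology can be pictured as a lookup matrix. Below is an illustrative Python sketch; the probability thresholds and cell values are invented for the example and are NOT the actual Figure 4 table:

```python
# Illustrative severity x probability risk matrix, in the spirit of the
# EU General Risk Assessment Methodology. Thresholds and cell values are
# made up for this sketch; they are NOT the actual Figure 4 table.
MATRIX = [
    # severity:  1        2         3          4
    ["low",     "low",    "low",    "low"],       # probability <= 1/1,000,000
    ["low",     "low",    "medium", "high"],      # <= 1/10,000
    ["low",     "medium", "high",   "serious"],   # <= 1/100
    ["medium",  "high",   "serious","serious"],   # otherwise
]

def probability_bucket(p: float) -> int:
    """Map a probability of harm to a row of the matrix (rarest first)."""
    for i, threshold in enumerate([1e-6, 1e-4, 1e-2]):
        if p <= threshold:
            return i
    return 3

def risk_level(severity: int, probability: float) -> str:
    """Combine severity (1..4) and probability of harm into a risk level."""
    return MATRIX[probability_bucket(probability)][severity - 1]

# The social-scoring example: severity 4, probability 1/1,000,000.
# A probability-based combination yields "low" even at maximum severity,
# whereas the EU AI Act prohibits the practice regardless of likelihood.
print(risk_level(4, 1e-6))  # -> low
```

This is exactly the gap the precautionary principle closes: no cell of a severity-probability matrix can express "prohibited regardless of likelihood".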
Risk levels in the EU AI Act are based on 1) its use [for example, Article 5], 2) its intended purpose [Article 6], or 3) its design [Article 52a(2)].
Some of the categorisations are list-based; the others are criteria-based. (See the "Risk level categorisation" section below.)
An AI system or an AI model will automatically fall into one of the risk levels based on the list or the criteria.
A summary of risk level categorisation is shown in the section below.
(Page numbers in this section are based on the most recent draft [dated 26 Jan 2024] of the EU AI Act, available publicly at https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf )
Discussed in the AI Profile WG meeting 2024-03-06. No conclusion yet, but the meeting agreed that the AI Profile should be generic, and that if there's a need for something jurisdiction-specific, a subprofile may be possible.
Let's discuss this in the meeting. Possibly we should adjust 3.0's risk to be "General Risk", so we leave a spot for "AI Risk" to emerge in the future without it being a breaking change? Thoughts?
Agree.
We can keep the 4 risk types (levels) as they are now, and probably rename the property to `generalRiskAssessment` for 3.0.
@bact @kestewart After re-reading Arthit's detailed explanation, I can see an issue for obtaining EU AI Act compliance in an easy manner, since there isn't a direct mapping. If I wanted to scan an AI BOM to audit against a specific country's regulations, then a generic risk level isn't going to help with that process. I'm going to raise this issue with the EU Project Office; ideally we need them to unify the definitions. But for the short term, maybe we have two fields in the SPDX AI Profile, one with the name `useRiskAssessment` to capture the EU AI Act categorisation (risk levels in the EU AI Act are based on 1) its use [for example, Article 5], 2) intended purpose [Article 6], or 3) its design [Article 52a(2)]). Or we could have different types of risk options, i.e. `AIAct_medium`, `AIAct_restricted`. Or does anyone else have an idea?
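The "two fields" idea above could look something like this sketch. The `useRiskAssessment` name comes from the comment; the dataclass and the example value strings are hypothetical, not part of the SPDX model:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the "two fields" idea: keep the existing generic
# safetyRiskAssessment and add a jurisdiction-specific useRiskAssessment.
# The class and value strings are illustrative only, not the SPDX model.
@dataclass
class AIPackageRisk:
    safety_risk_assessment: str                # "serious" | "high" | "medium" | "low"
    use_risk_assessment: Optional[str] = None  # e.g. "AIAct_medium", "AIAct_restricted"

pkg = AIPackageRisk(safety_risk_assessment="high",
                    use_risk_assessment="AIAct_restricted")
print(pkg.use_risk_assessment)  # -> AIAct_restricted
```

Keeping the second field optional means tools that only understand the generic level keep working, while jurisdiction-aware auditing tools can read the extra field.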
PR #675 is open to make it more explicit in the description of the `safetyRiskAssessment` property that the current categorization is according to the EU General Risk Assessment Methodology, and not the EU AI Act, as agreed in the 20 March 2024 AI Team meeting.
SPDX 3.0 AI Profile has `safetyRiskAssessment` [1] for the level of risk posed by an AI software. Its type is `safetyRiskAssessmentType` [2], which can have one of these values:

- `serious`: The highest level of risk posed by an AI software.
- `high`: The second-highest level of risk posed by an AI software.
- `medium`: The third-highest level of risk posed by an AI software.
- `low`: Low/no risk is posed by the AI software.

These values are from the EU General Risk Assessment Methodology [3].
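For a compact view of the vocabulary, here is a Python sketch; the four values come from `SafetyRiskAssessmentType` [2], while the enum class itself is illustrative:

```python
from enum import Enum

class SafetyRiskAssessmentType(Enum):
    """Sketch of the SPDX 3.0 SafetyRiskAssessmentType vocabulary [2]."""
    SERIOUS = "serious"  # highest level of risk
    HIGH = "high"        # second-highest level of risk
    MEDIUM = "medium"    # third-highest level of risk
    LOW = "low"          # low/no risk

# The four levels, highest risk first (enum members iterate in
# definition order):
print([level.value for level in SafetyRiskAssessmentType])
# -> ['serious', 'high', 'medium', 'low']
```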
EU AI Act (Draft 26 Jan 2024) [4] has four levels of risk:

- Unacceptable
- High
- Limited
- Minimal

Each risk level comes with different obligations. An AI system that poses an unacceptable risk is prohibited in the EU. See the summary in [5].
While there are similarities between the risk levels in SPDX 3.0 and the EU AI Act, they are not exactly the same:

- EU AI Act Minimal may use SPDX 3.0 `low`
- SPDX 3.0 `serious` and `high` could fall into EU AI Act High
- EU AI Act Unacceptable and Limited have no direct equivalent in SPDX 3.0

In order to accommodate EU AI Act risk levels, we may need to either:

1) Extend the enumeration in `safetyRiskAssessmentType`; or
2) Allow `safetyRiskAssessment` to have another type (in addition to `safetyRiskAssessmentType`), where that new type will have a list of the EU AI Act's four levels of risk/obligations.

Other possibilities?
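Option 2 could be sketched as a second vocabulary plus a partial mapping. This is illustrative Python: `EUAIActRiskLevel` and the mapping table are hypothetical; only the level names come from the Act and the SPDX model:

```python
from enum import Enum

class SafetyRiskAssessmentType(Enum):
    """Existing SPDX 3.0 values."""
    SERIOUS = "serious"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

class EUAIActRiskLevel(Enum):
    """Hypothetical new vocabulary for option 2."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Partial mapping per the comparison above: only some levels correspond,
# which is why a single generic risk level cannot capture the Act.
EU_TO_SPDX = {
    EUAIActRiskLevel.MINIMAL: SafetyRiskAssessmentType.LOW,
    # EU AI Act "High" covers both SPDX "serious" and "high";
    # "Unacceptable" and "Limited" have no direct SPDX equivalent.
}

def to_spdx(level: EUAIActRiskLevel):
    """Return the SPDX level if a direct equivalent exists, else None."""
    return EU_TO_SPDX.get(level)

print(to_spdx(EUAIActRiskLevel.MINIMAL))       # SafetyRiskAssessmentType.LOW
print(to_spdx(EUAIActRiskLevel.UNACCEPTABLE))  # None
```

The `None` results make the gap explicit: a tool mapping between the two vocabularies has to handle levels with no counterpart rather than silently coercing them.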
References
[1] https://github.com/spdx/spdx-3-model/blob/main/model/AI/Properties/safetyRiskAssessment.md
[2] https://github.com/spdx/spdx-3-model/blob/main/model/AI/Vocabularies/SafetyRiskAssessmentType.md
[3] Page 5, https://ec.europa.eu/docsroom/documents/17107/attachments/1/translations/en/renditions/pdf
[4] https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf
[5] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai