Open ColinEberhardt opened 3 months ago
At the moment we are drafting a governance framework that sets out the controls required to safely develop and deploy AI applications. Ideally this would be accompanied by an operating model (i.e. more detailed guidance about tools, processes and approaches) for the "safe" development of AI applications, which would then (most likely) adhere to the governance framework.
NCSC and CISA have put together comprehensive guidelines for this purpose, so perhaps we could lean on some of the standards set there. Just a quick thought: instead of developing an entirely new model, we could predicate our "safe/secure" AI practices on the Secure Design, Development, Deployment, and Operation/Maintenance framework they use in those guidelines.
https://www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development