finos / devops-automation

Provide a continuous compliance and assurance approach to DevOps that mutually benefits banks, auditors and regulators whilst accelerating DevOps adoption in engineering and fintech IT departments.
http://devops.finos.org
Apache License 2.0

How are FINOS members approaching adoption of AI Code Assistants? #181

Open ashukla13 opened 7 months ago

ashukla13 commented 7 months ago

AI Code Adoption Approach at Banks

There is a lot of interest in the adoption of AI code assistants like GitHub Copilot.

How are FINOS member organizations approaching adoption of these tools?

In particular, how are they thinking about the following when it comes to AI code assistants:

ashukla13 commented 6 months ago

@cnygardtw, will you be able to provide a view on what ThoughtWorks is seeing across regulated organizations regarding adoption of AI code assistants like GitHub Copilot?

cnygardtw commented 6 months ago

> @cnygardtw, will you be able to provide a view on what ThoughtWorks is seeing across regulated organizations regarding adoption of AI code assistants like GitHub Copilot?

Sure, I'll be able to provide some background on our experiences.

ashukla13 commented 6 months ago

Minutes from discussions with DevOps SIG SMEs from the March SIG meeting (#182):

karlmoll commented 6 months ago

Minutes from discussions with DevOps SIG SMEs from the March SIG meeting (#182):

  • Such tools are code assistants; they do not replace the developer. Developers remain responsible for the generated code, including reviewing suggestions from AI code assistants for accuracy and SDLC policy conformance, no different from code they write without the use of AI code assistants
  • All code generated by such tools will be subject to current SDLC controls and processes, including multiple human-in-the-loop controls for requirements, code review (four-eyes check), code scanning (including for security vulnerabilities), and testing (see the illustrative sketch after this list)
  • Consider the following from a risk perspective:

    • data flow/information security requirements
    • legal
    • privacy
    • model governance
    • misuse/abuse of chat features beyond technical questions
  • Other things to consider in discussions with AI code assistant vendors, especially from a legal and contractual perspective:

    • Input (prompt/context) retention or use for training models (beyond custom models that you specifically want to train)
    • Code suggestions that violate license/copyright agreements; some vendors offer indemnification and/or filtering of certain types of output. It would help if vendors shared more information on how they implement some of these risk mitigations, as risk groups may want this level of detail
    • Consider having a session with AI code assistant tool vendors so they can explain how they mitigate the risks outlined above.
  • The group did not get to the discussion of metrics for tracking benefits and potential issues (e.g., defects); this will be covered in a separate meeting (likely April)
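
As a rough illustration of the point above about applying existing SDLC controls uniformly, here is a minimal Python sketch of a merge gate that treats AI-assisted and hand-written changes identically. The ChangeSet fields, gate names, and thresholds are hypothetical examples for discussion, not a description of any member's actual tooling.

```python
from dataclasses import dataclass, field


@dataclass
class ChangeSet:
    """A proposed change, whether or not an AI code assistant helped write it."""
    author: str
    approvals: set[str] = field(default_factory=set)  # reviewers who approved
    security_scan_passed: bool = False                # e.g. SAST / dependency scan result
    tests_passed: bool = False                        # CI test suite result
    linked_requirement: str | None = None             # traceability to a requirement/ticket


def merge_gates(change: ChangeSet) -> list[str]:
    """Return the list of failed SDLC gates; an empty list means the change may merge.

    Nothing here inspects whether the code was AI-generated: the same
    human-in-the-loop controls apply either way.
    """
    failures = []
    if change.linked_requirement is None:
        failures.append("missing linked requirement")
    if not (change.approvals - {change.author}):      # "four eyes": at least one independent reviewer
        failures.append("missing independent code review")
    if not change.security_scan_passed:
        failures.append("security/code scan not passed")
    if not change.tests_passed:
        failures.append("tests not passed")
    return failures


if __name__ == "__main__":
    change = ChangeSet(
        author="alice",
        approvals={"alice", "bob"},
        security_scan_passed=True,
        tests_passed=True,
        linked_requirement="TICKET-1234",  # hypothetical ticket ID
    )
    print(merge_gates(change) or "all gates passed")
```

The key design point is that the gate never asks whether an AI assistant produced the code; the same independent review, scanning, testing, and traceability checks apply in all cases.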

Code Generation Indemnity: Further Reading (referenced on today's call)

Existing policies:

  • https://blogs.microsoft.com/on-the-issues/2023/09/07/copilot-copyright-commitment-ai-legal-concerns/
  • https://cloud.google.com/blog/products/ai-machine-learning/protecting-customers-with-generative-ai-indemnification

Reporting coverage:

  • https://www.runtime.news/ai-vendors-promised-indemnification-against-copyright-lawsuits-the-details-are-messy/
  • https://learn.g2.com/ai-code-generators-legal-considerations