COSO Releases Guidance on Generative AI

Everyone is asking the same question right now: “How do we control generative AI without slowing down innovation?”

Generative AI is rapidly transforming how organizations analyze information, automate work and support decision-making. But with that opportunity comes new risks — from hallucinations and data leakage to opaque reasoning and uncontrolled “shadow AI.”

Recognizing this shift, the Committee of Sponsoring Organizations of the Treadway Commission (COSO) recently released new guidance, Achieving Effective Internal Control Over Generative AI. The publication provides what COSO describes as a practical, audit-ready roadmap for applying the COSO Internal Control – Integrated Framework to generative AI technologies.  Rather than introducing a new governance model, the guidance shows organizations how to apply COSO’s familiar control principles to the unique risks introduced by GenAI.

That sounds simple — but it’s a big shift in how organizations think about AI risk. Here are 5 takeaways:

1. AI Introduces New Risks, But Not New Control Principles

Generative AI creates risks we’ve never dealt with before:

  • Hallucinated outputs
  • Prompt injection attacks
  • Bias and misinformation
  • Model drift
  • Shadow AI usage

However, COSO makes clear that these risks can be managed by applying the existing five components and 17 principles of the COSO internal control framework. The framework still works; it just needs to be applied differently.

2. Governance Should Focus on AI Capabilities, Not Just Tools

One of the most practical insights in the paper is COSO’s capability-based approach to GenAI governance. Instead of focusing on specific AI tools or vendors, COSO recommends governing how AI is used across business operations, because each capability introduces different control considerations and risk exposures. COSO suggests governing AI across capabilities such as:

  • Data extraction
  • Transaction processing
  • Workflow automation
  • Knowledge retrieval
  • Decision support

Governing at the capability level puts controls where risk actually lives: inside the process.

3. AI Outputs Can’t Simply Be Trusted Yet

Traditional systems are deterministic; generative AI is probabilistic, meaning the same input can generate different outputs. GenAI also has characteristics that demand different thinking about controls:

  • Probabilistic outputs that may vary each time
  • Rapid changes to prompts and configurations, which create risks around model parameters
  • Highly scalable automation, where small errors can quickly expand
  • Low barriers to adoption, increasing the risk of unmanaged use
  • Lack of source data acknowledgements or references

As a result, organizations must shift from assuming outputs are correct to designing controls that validate and monitor AI outputs.
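As an illustration of what such a control could look like in practice (this sketch is not from the COSO guidance, and every function and field name below is hypothetical), a process-level check might automatically hold any output that lacks source references for human review rather than passing it straight into a business process:

```python
# Illustrative sketch only: a simple gating control for GenAI outputs.
# Function and field names are hypothetical, not from the COSO guidance.

def validate_genai_output(output: str, source_refs: list) -> dict:
    """Run basic automated checks before an output enters a business process."""
    issues = []
    if not output.strip():
        issues.append("empty output")
    if not source_refs:
        # Missing source references is one of the GenAI-specific risks noted above
        issues.append("no source references provided")
    return {
        "auto_approved": not issues,          # passed all automated checks
        "needs_human_review": bool(issues),   # route to a reviewer instead
        "issues": issues,
    }

# An output with no citations is held for review rather than trusted outright
result = validate_genai_output("Q3 revenue rose 4% year over year.", [])
```

A real control would add checks suited to the specific capability (for example, reconciliation against source data for transaction processing), but the principle is the same: validate before relying on the output.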

4.  Transparency and Traceability Are Critical Control Elements

Why?  Because when AI becomes part of business processes, someone will eventually ask how the answer was produced. Organizations should capture information such as:

  • Prompts used
  • Input data sources
  • Model versions and configuration settings
  • Outputs generated by the model
  • Source references and citations

This information creates the audit trail necessary for oversight, investigation and regulatory scrutiny, helping organizations demonstrate that AI-generated outputs can be trusted.
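As a minimal sketch, the fields above could be captured as one structured record per model interaction and appended to a log; the record layout and JSON-lines format here are assumptions for illustration, not prescribed by the guidance:

```python
# Illustrative sketch: one audit-trail record per GenAI interaction.
# Field names and the JSON-lines storage format are assumptions, not
# prescribed by the COSO guidance.
import json
from dataclasses import dataclass, asdict

@dataclass
class GenAIAuditRecord:
    prompt: str          # prompt used
    input_sources: list  # input data sources
    model_version: str   # model version
    config: dict         # configuration settings (e.g. temperature)
    output: str          # output generated by the model
    citations: list      # source references and citations

def to_log_line(record: GenAIAuditRecord) -> str:
    """Serialize a record as one JSON line, ready to append to an audit log."""
    return json.dumps(asdict(record))

record = GenAIAuditRecord(
    prompt="Summarize Q3 budget variances",
    input_sources=["gl_extract_q3.csv"],
    model_version="example-model-1.0",
    config={"temperature": 0.2},
    output="Travel spend exceeded budget by 8%.",
    citations=["gl_extract_q3.csv, rows 112-140"],
)
log_line = to_log_line(record)
```

Appending records like this gives auditors a reconstructable trail: for any output, they can see which prompt, data sources and model configuration produced it.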

5.  COSO Provides a Practical Implementation Roadmap

COSO describes a practical six-step cycle for implementing GenAI controls:

  1. Establish governance and accountability for AI
  2. Inventory AI use cases across the organization
  3. Assess risks using the COSO internal control components
  4. Design controls
  5. Communicate policies and expectations
  6. Monitor performance and continuously adapt controls

COSO emphasizes that GenAI governance is not a one-time exercise. Because AI models, tools and use cases evolve rapidly, organizations must treat AI oversight as an ongoing internal control discipline.

Final Thoughts

This COSO guidance is important because it bridges the worlds of emerging AI technology and proven internal control frameworks. For internal audit, finance and risk leaders, it sends a clear message: generative AI does not eliminate the need for internal controls; it makes them more important than ever.

Contributors

Brandon Sherman, Nashville Office Managing Partner, Frazier & Deeter Advisory, LLC
