
COSO Releases Roadmap for Governing Generative AI


In February 2026, the Committee of Sponsoring Organizations (COSO) released Achieving Effective Internal Control Over Generative AI (GenAI), translating the COSO Internal Control – Integrated Framework (ICIF) into practical, audit-ready guidance for governing generative artificial intelligence (AI). 

For banks and credit unions, this publication arrives at a critical moment. GenAI is already being used to: 

  • Draft member/customer communications.
  • Assist with credit and underwriting analysis.
  • Summarize regulatory updates.
  • Support Bank Secrecy Act (BSA)/anti-money laundering (AML) and fraud investigations.
  • Automate reconciliations and transaction processing.
  • Enhance forecasting and liquidity analysis.
  • Improve operational workflows and reporting. 

The opportunity is real. So is the risk. COSO’s central message is straightforward: generative AI does not replace internal control. It must operate within it. 

To help banks and credit unions interpret the newly released publication, Doeren Mayhew has summarized a unified roadmap for management, while highlighting key takeaways for those charged with governance. 

Why Generative AI Is Different 

Unlike traditional rule-based automation, generative AI is: 

  • Probability-based: It can be confidently wrong.
  • Dynamic: Models, prompts and vendor configurations change frequently.
  • Highly scalable: It scales errors as quickly as it scales efficiency.
  • Accessible: Ease of access increases the risk of informal or “shadow AI” deployment.  

For regulated banks and credit unions, those characteristics intersect directly with: 

  • Internal control over financial reporting (ICFR).
  • Model risk management expectations.
  • Consumer compliance and fair lending obligations.
  • Safety and soundness standards.
  • Third-party/vendor risk management.
  • Data privacy and cybersecurity requirements. 

Where governance is weak, AI risk escalates quickly. 

A Practical Oversight Lens: COSO’s Capability-First Approach 

One of the most valuable aspects of COSO’s guidance is its “capability-first” taxonomy. Instead of focusing on vendors or product names, it groups GenAI into eight capability types across the data-to-decision lifecycle: 

  1. Data ingestion and extraction.
  2. Data transformation and integration.
  3. Automated transaction processing and reconciliation.
  4. Workflow orchestration and autonomous task execution.
  5. Judgment, forecasting and insight generation.
  6. AI-powered monitoring and continuous review.
  7. Knowledge retrieval and summarization.
  8. Human–AI collaboration. 

This structure is highly practical for management because it mirrors how risk propagates through core financial institution processes. 

Examples Across Banks and Credit Unions 

  • Ingestion: Extracting borrower financial data from uploaded statements.
  • Transformation: Normalizing loan or deposit data across legacy systems.
  • Automated processing: AI-assisted reconciliations in general ledger or shared services.
  • Judgment, forecast and insight: Drafting credit memos, allowance for credit loss analyses or liquidity forecasts.
  • Monitoring: Detecting suspicious transactions or anomalous activity.
  • Retrieval and summarization: Condensing large bodies of regulatory text or retrieving information to support automated responses to call center questions.
  • Human–AI collaboration: Assisting frontline staff in drafting customer/member communications. 

Management should maintain a formal inventory of AI use cases, classified by capability type, with clearly identified business owners. If leadership cannot articulate where AI touches data, influences decisions or executes tasks autonomously, governance has not kept pace with deployment. 
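As one illustration, such an inventory could be maintained as a simple structured record. The sketch below is hypothetical: the capability labels mirror COSO’s eight types listed above, but the field names, class design and example entries are illustrative assumptions, not part of the COSO publication.

```python
from dataclasses import dataclass

# COSO's eight capability types, abbreviated as labels.
CAPABILITY_TYPES = {
    "data_ingestion", "data_transformation", "automated_processing",
    "workflow_orchestration", "judgment_and_forecasting",
    "monitoring", "knowledge_retrieval", "human_ai_collaboration",
}

@dataclass
class AIUseCase:
    name: str
    capability_type: str        # one of CAPABILITY_TYPES
    business_owner: str         # named, accountable owner
    touches_customer_data: bool
    influences_decisions: bool
    acts_autonomously: bool

    def __post_init__(self):
        if self.capability_type not in CAPABILITY_TYPES:
            raise ValueError(f"Unknown capability type: {self.capability_type}")

# Hypothetical inventory entries.
inventory = [
    AIUseCase("Borrower statement extraction", "data_ingestion",
              "VP Lending Ops", True, False, False),
    AIUseCase("GL reconciliation assistant", "automated_processing",
              "Controller", False, True, True),
]

# The governance question made answerable: where does AI act autonomously?
autonomous = [u.name for u in inventory if u.acts_autonomously]
```

Even a lightweight structure like this forces the three questions leadership must be able to answer: what data AI touches, which decisions it influences and where it executes tasks on its own.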

Applying COSO’s Five Components to Generative AI 

COSO’s framework remains durable. What changes is the level of rigor required to manage GenAI’s unique characteristics. Below is a practical translation for credit unions and banks. 

1. Control Environment: Tone, Ownership and Accountability 

A strong control environment establishes: 

  • Clear ownership of each AI use case.
  • Defined acceptable use boundaries.
  • Formal oversight structures.
  • Accountability for outcomes. 

What Management Should Have in Place 

  • A documented AI governance structure (cross-functional: risk, compliance, IT, operations and business lines).
  • A formal Acceptable Use Policy prohibiting unauthorized entry of customer/member nonpublic information into unsecured tools.
  • Named owners for each AI capability.
  • Defined escalation paths for AI-related incidents. 

Governance Takeaway 

Boards and Supervisory Committees (audit, risk, technology or compliance committees) should expect regular reporting on: 

  • Material AI use cases.
  • Key risk indicators.
  • Incidents and remediation efforts.
  • High-impact changes to AI-enabled processes. 

2. Risk Assessment: Dynamic, Not Annual 

COSO emphasizes GenAI risk assessment must account for rapid model updates, configuration changes, prompt modifications and vendor releases. Below are some key risk domains to consider for credit unions and banks: 

  • Bias in AI-assisted underwriting or credit insights.
  • Hallucinations in regulatory summaries.
  • Prompt injection and data leakage.
  • Deepfake or synthetic identity fraud.
  • Vendor model update cadence.
  • Over-reliance on AI outputs in financial reporting. 

COSO introduces the concept of “AI reliance” in an ICFR context, where management depends on AI output as evidence supporting control operation. If AI is relied upon in any of the following areas, evidentiary standards must mirror those applied to other ICFR-relevant controls: 

  • Allowance for credit losses analysis.
  • Liquidity forecasting.
  • Automated reconciliations.
  • Regulatory reporting.
  • Financial close processes. 

Governance Takeaway 

Those charged with governance should ask the following questions: 

  • “Where are we relying on AI output as part of our financial reporting or regulatory compliance control structure?”
  • “How do we evaluate AI-specific fraud scenarios?” 

3. Control Activities: Guardrails Over Automation 

COSO makes clear that AI outputs must be treated as assertions requiring validation, not facts. The following are practical guardrails to consider: 

  • Confidence thresholds for automated postings.
  • Human-in-the-loop review for customer/member-facing communications.
  • Segregation of duties between AI configuration and approval authority.
  • Formal change management over prompts and retrieval datasets.
  • Version logging for models and configurations. 

The reconciliation auto-posting example in the COSO publication demonstrates layered safeguards: threshold controls, routing rules, multi-party approval and post-change sampling. 
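A minimal sketch of that layered pattern follows. The threshold values, function name and routing labels are hypothetical illustrations, not figures from the COSO publication: the point is that a confidence threshold gates automation, high-value items require multi-party approval and everything else falls to human review.

```python
# Hypothetical layered guardrail for AI-proposed reconciliation postings.
CONFIDENCE_THRESHOLD = 0.95   # below this, the AI output is never auto-posted
DUAL_APPROVAL_LIMIT = 10_000  # amounts above this require two approvers

def route_posting(amount: float, ai_confidence: float) -> str:
    """Decide how an AI-proposed reconciliation entry is handled."""
    if ai_confidence < CONFIDENCE_THRESHOLD:
        return "human_review"            # output treated as an assertion, not fact
    if amount > DUAL_APPROVAL_LIMIT:
        return "dual_approval"           # multi-party approval routing
    return "auto_post_with_sampling"     # auto-post, subject to post-change sampling
```

The routing decision itself should be logged, so post-change sampling can verify the guardrail operated as designed rather than merely existing on paper.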

Governance Takeaway 

Oversight committees should ensure: 

  • Guardrails are operational, not merely documented.
  • AI cannot take action beyond approved risk tolerance.
  • Configuration changes are controlled and logged.

4. Information and Communication: Traceability Under Supervision 

COSO emphasizes capturing sufficient information to validate AI outputs, including prompts, inputs, model versions and confidence indicators. For banks and credit unions, this supports: 

  • Examination defensibility.
  • Internal audit testing.
  • Model risk management review.
  • Incident investigation.
  • Vendor oversight. 

If regulators inquire how an AI-assisted decision was reached, management should be able to reconstruct: 

  • What data was used.
  • What model version was active.
  • What configuration settings applied.
  • What human review occurred. 

Governance Takeaway 

If traceability does not exist, the institution is assuming risk it cannot defend. 

5. Monitoring Activities: Multi-Metric Tolerances 

COSO notes GenAI systems are probability-based and may require tolerance ranges across multiple performance dimensions rather than binary pass/fail measures. Your credit union or bank’s monitoring activities should include:  

  • Drift analysis in forecasting models.
  • Sampling of AI-generated outputs.
  • Exception trend monitoring.
  • Hallucination and accuracy metrics.
  • Data leakage and misuse incidents.
  • Backtesting AI-supported forecasts against actual results. 
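The multi-metric tolerance idea can be sketched simply. The metrics and tolerance ranges below are hypothetical placeholders; the pattern is that each dimension is checked against its own acceptable range, and any breach feeds the escalation or rollback triggers discussed below.

```python
# Hypothetical tolerance ranges per metric, as (min, max) pairs.
TOLERANCES = {
    "forecast_drift_pct": (0.0, 5.0),
    "hallucination_rate": (0.0, 0.02),
    "exception_rate":     (0.0, 0.10),
}

def breaches(observed: dict) -> list[str]:
    """Return the metrics that fall outside their tolerance range."""
    out = []
    for metric, value in observed.items():
        lo, hi = TOLERANCES[metric]
        if not (lo <= value <= hi):
            out.append(metric)
    return out  # a non-empty list triggers escalation or rollback
```

Because vendor-driven model updates can shift several metrics at once, monitoring all dimensions together (rather than a single pass/fail score) is what makes silent behavior changes visible.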

Monitoring must also address vendor-driven changes that alter model behavior without obvious system redesign. 

Governance Takeaway 

Oversight bodies should understand: 

  • What key performance indicators and key risk indicators measure AI performance.
  • What triggers escalation or rollback.
  • How deficiencies are logged and remediated. 

A Unified Implementation Roadmap 

COSO outlines a six-step implementation cycle for credit unions and banks: 

  1. Establish AI governance structure.
  2. Inventory GenAI use cases.
  3. Assess risks by COSO component.
  4. Design and map controls.
  5. Implement and communicate.
  6. Monitor and adapt. 

For management, this roadmap provides operational discipline. For governance, it becomes a checklist. Leadership should be able to articulate: 

  • Where the institution is in this cycle.
  • Which AI use cases are highest risk.
  • How controls map to COSO.
  • How monitoring is performed and reported. 

Strong Value, Disciplined Governance 

Generative AI will continue to reshape how financial institutions analyze data, serve customers/members, monitor fraud and manage regulatory change. The credit unions and banks that benefit most will not be those that move fastest without controls, but those that move deliberately with discipline. 

COSO’s guidance makes clear that the five components of internal control remain durable. What changes is the rigor required to govern a technology that can be confidently wrong, rapidly updated and widely deployed. 

Here to Guide You 

You do not need to fully understand the algorithm, but you do need to ensure the internal control system is strong enough to govern it. Have questions about this update or need assistance in evaluating your institution’s internal control environment? Doeren Mayhew’s credit union and bank pros stand ready to assist. 

Brad Atkin
Brad Atkin is a Shareholder/Principal at Doeren Mayhew, where he is the Practice Leader of the firm's Cybersecurity and IT Advisory Group.
