Viewpoints

AI Governance in Michigan: What the DIFS Bulletin Signals for Financial Institutions


For credit unions and banks, the most important development in artificial intelligence (AI) regulation is no longer whether a regulator will issue a new AI rule. It is whether examiners now expect institutions to prove their existing governance, compliance, model risk, consumer protection and vendor oversight programs actually work when AI is involved. The Michigan Department of Insurance and Financial Services (DIFS) released Bulletin 2026-03-BT/CF/CU on Jan. 14, 2026, addressing just that.

At first glance, the bulletin reads like routine guidance. The DIFS did not create a new AI-specific rule or prescribe a single governance framework. It stated, however, that “when AI systems are used to make decisions or take actions that affect consumers, those decisions must comply with existing law, and the department may request governance and documentation during examinations and investigations.” In other words, AI is no longer a future policy discussion; it is now an operational, legal and examination issue.

Bigger Than Michigan

Although the bulletin comes from a state regulator, the posture is broader than one state and one charter type. The DIFS is not creating a novel compliance regime from scratch. It is translating a growing market consensus into supervisory expectations. AI must be governed with the same seriousness institutions already apply to models, consumer protection, third-party risk, information security and operational resilience.

For banks, credit unions, mortgage lenders and other regulated financial service providers, the message is increasingly consistent: regulators may tolerate different frameworks, but they will not tolerate unclear accountability, undocumented controls, weak validation or vague vendor assurances.

The DIFS Sets Expectations for Documentation

The most important feature of the bulletin is not that it encourages responsible AI use – most institutions already know that. The real headline is that the DIFS lays out the types of information and documentation it may request in an examination or investigation, including the written AI Systems Program, supporting policies and procedures, scope decisions, training materials, governance records, monitoring evidence and third-party oversight materials. This changes AI from an innovation topic into a preparedness one.

This is where many institutions are exposed.

In many organizations, AI adoption has moved faster than formal governance. Use cases often emerge in marketing, fraud operations, customer/member service, compliance productivity, underwriting support, internal analytics and software development before a centralized control structure is in place. The DIFS is signaling that this gap is no longer benign. If AI affects consumers, the institution should be prepared to show who approved it, how it was tested, what data it relies on, how bias and performance are monitored, how third parties are governed and how leadership oversees the risk.

Why Institutions Should Pay Attention Now

The bulletin expressly contemplates AI use across the business cycle, including product development, marketing, sales and distribution, deposits, lending, account servicing, management and fraud detection. That scope matters because it moves the discussion beyond classic credit underwriting models. Institutions should assume AI-enabled targeting, servicing workflows, fraud tools, decision support, generative AI assistants and vendor-embedded features can all fall within the governance perimeter if they influence consumer outcomes or key control environments.

That makes this bulletin relevant not only to compliance officers and model risk teams, but also to: 

  • Boards and risk committees
  • Business-line executives
  • Chief information officers
  • Chief information security officers
  • Data leaders, legal and compliance functions
  • Internal audit
  • Procurement and third-party risk teams
  • Product and digital channel leaders

The operational question is no longer “are we using AI?” The better question is, “where has AI already entered our decisions, workflows, controls or customer/member interactions without being fully inventoried and governed?”

The DIFS’ Expectations 

The DIFS centers its expectations on a written AI systems program tailored to the institution’s use and reliance on AI, with senior management accountable to the board or an appropriate board committee. The bulletin says the program should address governance, risk management controls and internal audit, and it may be incorporated into enterprise risk management or aligned to recognized frameworks, such as the National Institute of Standards and Technology AI Risk Management Framework.

This is significant for two reasons.

  1. It gives banks and credit unions flexibility in design. Regulators are not demanding one exact template.
  2. It eliminates excuses. If an institution already has mature structures for model risk, third-party risk, information security, compliance and internal audit, the DIFS expects those disciplines to extend into AI in a coordinated, documented way. The institution does not need an AI policy alone; it needs a working control environment.

Institutions that can inventory AI use cases, classify risk, document ownership, validate outcomes, govern vendors and demonstrate monitoring will be able to adopt AI faster and with less friction. The successful ones will not be the institutions that deploy AI most aggressively; they will be those that can deploy it repeatedly, defensibly and at examination standard.

The Highest-Risk Blind Spot: Third-Party AI

One of the strongest aspects of the bulletin is its clarity on third-party accountability. The DIFS states, “a financial service provider cannot outsource its fundamental risk management responsibility, even when a third party performs the service.”

This is particularly important because many credit unions and banks are not building AI from scratch. They are consuming it through core platforms, fraud vendors, CRM tools, service providers, underwriting technologies, call-center tools and employee productivity platforms. The vendor-driven model can create a dangerous misconception that embedded AI is someone else’s problem.

If a third-party model contributes to unfair outcomes, weak explainability, data leakage or poorly controlled decisions, regulators are likely to start with the institution, not the software provider.

Fairness, Discrimination and Explainability Are Not Side Issues

Another key component of the bulletin is that the DIFS ties AI use directly back to existing legal obligations, including discrimination and consumer protection concerns. The bulletin highlights risks such as inaccuracy, unfair discrimination, data vulnerability and lack of transparency or explainability. It also expects institutions to use verification and testing methods to identify errors and bias.

For banks and credit unions, that means fair lending, consumer protection concerns, adverse action quality, marketing segmentation, fraud interventions, account restrictions and servicing decisions all deserve renewed scrutiny where AI is involved. That does not mean every institution must eliminate all AI components before proceeding. It means they should be able to justify where explainability is necessary, where human review is required, how potential bias is tested and what escalation path exists when outputs do not align with policy, law or customer/member expectations.

What Boards and Executives Should Do Now

The most practical response is not to pause all AI activity. It is to bring AI under enterprise discipline quickly.

For many credit unions and banks, the most defensible near-term agenda is straightforward:

  • Inventory AI use cases now. Include internally developed models, generative AI use, decision-support tools, vendor-embedded AI and employee use of external AI platforms.
  • Classify by consumer impact and control impact. The most urgent items are not always the most technically sophisticated. They are the ones affecting customer/member outcomes, regulatory obligations or key controls.
  • Formalize an AI governance structure. The DIFS expects clear accountability with senior management answerable to the board or an appropriate committee.
  • Map AI into existing risk frameworks. Link AI to model risk, compliance, third-party risk, information security, records retention, change management and internal audit rather than treating it as a separate experiment.
  • Prioritize third-party contract remediation. Audit rights, transparency, incident notification, data usage restrictions, regulator cooperation and validation support should be assessed now, not after an issue arises.
  • Prepare an examination-ready evidence trail. Policies are not enough. Institutions should expect to produce inventories, approvals, testing results, monitoring evidence, training records, governance materials and vendor oversight documentation.

Staying Proactive and Diligent

Michigan’s DIFS issued a practical warning to the market. If AI is shaping decisions, supporting actions or influencing consumer outcomes, regulators expect banks and credit unions to govern it like a real source of enterprise risk.

For credit unions, banks and other financial institutions, the lesson is immediate. The question is no longer whether AI can create value; it is whether the institution can demonstrate its use of AI is governed, tested, documented, explainable where needed, legally compliant and resilient under examination.

Institutions that act now will not only be better prepared for regulatory scrutiny but also positioned to scale AI with confidence while competitors are still treating governance as an afterthought.

Doeren Mayhew’s credit union and bank pros continue to monitor regulatory updates surrounding AI and their impact on institutions. Should you have questions regarding AI governance or overall risk management, contact our pros today.


Brad Atkin
Brad Atkin is a Shareholder/Principal at Doeren Mayhew, where he is the Practice Leader of the firm's Cybersecurity and IT Advisory Group.