Creating a Responsible AI Policy that Flexes with the Future
June 17, 2025
By Kayley Grant, Regulatory Compliance Officer, NCCO
Whether you're running a credit union or managing everyday life, encounters with artificial intelligence (AI) have become a near-daily occurrence. As AI becomes embedded in everything from workflows to personal routines, credit unions can no longer think of the technology only in the future tense. More than likely, staff are actively engaging with AI, making it essential for compliance teams to get involved.
Developing a clear, forward-looking AI policy is one of the most significant ways a compliance department can contribute to its credit union's exploration and integration of the technology. Although the subject matter is still emerging, following established best practices for policy development can keep compliance leaders on the right track, producing a policy solid enough to guide today's decisions yet flexible enough to adapt to tomorrow's.
What follows are three of those best practices, considered through the lens of AI’s potential to both enhance and disrupt the industry.
1. Begin With the Board
The integration of AI creates risk, and board members need to be apprised of any new areas of risk exposure for the credit union. Directors' risk awareness is not only a best practice; it's mandated by the NCUA. Before developing an organizational AI policy, the board should be briefed on compliance considerations and potential risks, particularly known and emerging regulatory requirements, as well as areas where AI may already be in use. In an ideal world, an AI policy would predate any AI use, but in practice, many institutions are already contending with shadow AI: the use of AI tools or systems without formal approval or oversight. To provide the most thorough briefing possible, the compliance team may want to conduct a cross-functional review to surface formal implementations as well as any shadow AI activity taking place within the cooperative.
In addition to sharing information, compliance will also want to gather feedback. Board input at this stage helps shape the scope, tone and priorities of the policy. Directors' perspective on acceptable risk, mission-driven goals and member impact will help ensure the resulting AI framework aligns with the credit union's values and strategic priorities.
Keep in mind, this likely won’t be the last tech policy the board must implement. As AI and other advanced technologies continue to evolve rapidly, it’s increasingly important to have IT expertise represented at the board level.
2. Draft the Policy
Core components of a strong AI policy include purpose and policy statements, board responsibilities, an overview of the subject matter and a commitment to staff training. While each credit union's language will vary, purpose and policy statements clarify the "why" and "what" behind AI use, defining objectives, integration boundaries, benefits and risks.
In addition to reaffirming the board’s ultimate oversight of all AI-related activities, the policy should name, by title, the individual accountable for enforcing the policy. It should also identify the departments and senior managers, again by title, responsible for reviewing and approving any AI deployment.
The policy overview depends on whether the document is serving as a framework for one specific AI tool or as a broader guide for AI use across the organization. In either case, this section should include definitions, a statement of intent and rules for authorized use.
The policy will also need to include language that addresses how AI fits into risk management, vendor due diligence and cybersecurity programs, along with a commitment to staff training (which may need to be conducted by third-party AI providers). One additional component to consider including is guidance on copyright compliance, especially as generative AI tools make it easier to unintentionally reproduce protected content.
The policy should also outline how the credit union will respond to noncompliant AI use, including procedures for assessing whether that use introduced immediate risk and, if so, activating a mitigation plan. Instead of starting from scratch, compliance teams may want to source an AI policy template from their credit union league or a compliance consultant with credit union domain expertise.
3. Accelerate Updates
For any credit union to operate successfully, the right hand must know what the left is doing at all times. Nowhere is this truer than with technology. While annual reviews are adequate for most policies, an AI policy should be treated as a living document. Depending on the credit union and how rapidly it is enhancing its tech stack, a quarterly review cycle may be the most effective way to keep pace with shifting risks. This review should include cross-departmental check-ins on both major AI implementations and day-to-day AI use.
As new tools emerge, use cases expand and regulations evolve, credit unions must be prepared to revise their AI policies frequently. Nearly every U.S. state has an AI bill of some sort pending, and several entities are lobbying the federal government for a standard framework. Unless a credit union’s policy actually reflects the current environment and associated risk, it’s unlikely to hold up under examiner scrutiny or guide leaders to the right decisions.
Embedding Agility Into AI Governance
The integration of AI into credit union operations is no longer a question of if, but how. A collaborative, flexible and forward-thinking policy gives compliance teams the structure and agility they need to take advantage of advanced tools as they come on the scene. By treating the AI policy as both a guardrail and a strategic tool, credit union compliance teams position their cooperative to move into the future confidently, soundly and always in service of their members.
Originally published in Finopotamus on June 9, 2025.