AI is changing the business landscape. It can process data at incredible speed, automate complex tasks, and replicate aspects of human communication, all of which can boost efficiency and drive innovation.
AI can also create risk. Employees might misuse the technology, whether by accident or on purpose, leading to ethical or legal problems. Unfortunately, many companies still don't have a clear policy in place to manage how AI is used.
Few policies in place
In August 2025, software platform provider Genesys released the results of an independent survey of 4,000 consumers and 1,600 enterprise customer experience and information technology (IT) leaders in more than 10 countries. It found that over a third (35%) of tech-leader respondents said their organizations have “little to no formal [AI] governance policies in place.”
This is a pressing problem, the survey notes, because many businesses are gearing up to deploy agentic AI. This latest iteration of the technology can make decisions autonomously and act independently to achieve specific goals without depending on user commands or predefined inputs. The survey found that while 81% of tech leaders trust agentic AI with sensitive customer data, only 36% of consumers do.
7 steps to consider
Whether or not you’re eyeing agentic AI, its growing popularity is creating a trust-building imperative for today’s businesses. That’s why you should consider writing and implementing an AI governance policy.
Formally defined, an AI governance policy is a written framework that establishes how a company may use AI responsibly, transparently, ethically and legally. It outlines the decision-making processes, accountability measures, ethical standards and legal requirements that must guide the development, purchase and deployment of AI tools.
Creating an AI governance policy should be a collaborative effort involving your company’s leadership team, knowledgeable employees (such as IT staff) and professional advisors (such as a technology consultant and attorney). Here are seven steps your team should consider:
1. Audit usage. Identify where and how your business is using AI. For instance, do you use automated marketing tools, AI-assisted job applicant screening, auto-generated financial reports or customer service chatbots? Inventory everything and note who's using each tool, what data it relies on and which decisions it influences.
2. Assign ownership for AI oversight. This may mean appointing a small internal team or naming (or hiring) an AI compliance manager or executive. Your oversight team or compliance leader will be responsible for maintaining the policy, reviewing new tools and handling concerns that arise.
3. Establish core principles. Ground your policy in ethical and legal principles — such as fairness, transparency, accountability, privacy and safety. The policy should reflect your company’s mission, vision and values.
4. Set standards for data and vendor use. Include guidelines on how data used by AI tools is collected, stored and shared. Pay particular attention to intellectual property issues. If you use third-party vendors, define review and approval steps to verify that their systems meet your privacy and compliance standards.
5. Require human oversight. Clearly state that employees must remain in control of AI-assisted work. Human judgment should always be part of the process, including approving AI-generated content and reviewing automated financial reports.
6. Include a mandatory review-and-update clause. Schedule regular reviews — at least annually — to assess whether your policy remains relevant. This is especially important as innovations, such as agentic AI, come online and new regulations emerge.
7. Communicate with and train staff. Incorporate AI governance into onboarding for new employees and follow up with regular training and reminder sessions thereafter. Ask staff members to sign an acknowledgment that they’ve read the policy and perhaps another to confirm they’ve completed the required training. Encourage everyone to ask questions and report potential issues.
Financial impact
Writing an AI governance policy is just one part of preparing your business for the future. Understanding its financial impact is another. Let us help you analyze the costs, tax implications and return on investment of AI tools so you can make informed decisions that balance innovation with sound financial management and robust compliance practices. Contact your Rudler advisor if you would like to discuss how we can help you with the design and implementation of your AI policy.
RUDLER, PSC CPAs and Business Advisors
This week's Rudler Review is presented by Heather Pillard, Client Accounting Specialist, and Karen Daugherty, CPA.
If you would like to discuss your particular situation, contact Heather or Karen at 859-331-1717.
As part of Rudler, PSC's commitment to true proactive client partnerships, we have encouraged our professionals to specialize in their areas of interest, providing clients with specialized knowledge and strategic relationships. To be sure you receive future Rudler Reviews with advice from our experts, sign up today!