Key Points
- The University of Kentucky plans to create centralised guidance for artificial intelligence use across the institution.
- Internal auditors want AI to help examine financial health and employee compliance with university policies.
- Martin Anibaba, UK’s deputy accountability officer and audit executive, said AI could help identify risks including misappropriation of money, fraud, policy violations and actions that do not align with the university’s values.
- Anibaba told the university’s Board of Trustees’ audit and compliance committee that current AI use is “unmonitored or decentralized”.
- UK launched the Commonwealth AI Transdisciplinary Strategy in November to coordinate and expand automated programmes across academics and healthcare.
- The university may use AI for grading tools, tutoring, personalised learning, clinician transcriptions, patient scheduling, job applicant screening and budget assessments.
- Internal auditors plan to develop overarching guidance for “safe, consistent AI use across the university” by June.
- A pilot AI programme is expected to attempt to match human audit results by December, testing accuracy, efficiency and consistency before broader deployment.
- UK expects to begin integrating AI into day-to-day workflows in January.
- Risks of AI use cited by the university include inaccurate or biased learning outcomes, lack of fairness, misclassification of patient groups and decisions that do not align with institutional values.
- Kentucky has limited state-mandated AI guidance, although Senate Bill 4 requires the Commonwealth Office of Technology to create AI policies for state institutions.
- The story was reported by Jesse Fraga of the Lexington Herald-Leader.
What is UK planning with AI?
The University of Kentucky is drawing up centralised rules for artificial intelligence as it prepares to widen the technology’s use in auditing, academic work and administrative processes, according to Jesse Fraga of the Lexington Herald-Leader.
The university plans to use AI to audit its financial health and employees’ compliance with policy. Martin Anibaba, UK’s deputy accountability officer and audit executive, said AI is expected to help identify risks such as misappropriation of money, fraud, policy violations and conduct that does not align with the university’s values. He told UK’s Board of Trustees’ audit and compliance committee that AI is currently being used in an “unmonitored or decentralized” way across the university.
Why is the university moving now?
UK’s leadership says the aim is to expand oversight without adding resources at the same rate. Anibaba said: “The goal here is very straightforward. We want to expand the depth and breadth of our work without a proportional increase in resources.” The university has already launched the Commonwealth AI Transdisciplinary Strategy, a university-wide initiative introduced in November to coordinate and expand automated programmes across academics and healthcare.
That strategy could cover several uses: rubric and grading tools for faculty, tutoring and personalised learning for students, clinician transcriptions and patient scheduling in healthcare, job applicant screening and university budget assessments. According to the report, the institution already has different standards for AI use in instructional and clinical settings, but it does not yet have a university-wide policy covering AI in auditing or in assessing employee and departmental actions.
How will the new rules work?
Internal auditors plan to create broad guidance for “safe, consistent AI use across the university” by June, according to Anibaba. In the meantime, auditors are carrying out shadow AI assessments, experiments designed to test the effectiveness and oversight of AI use at the university. By December, human auditors are expected to continue their normal duties while a pilot AI programme attempts to produce the same results, validating accuracy, efficiency and consistency before wider deployment.
The university then plans to begin integrating AI into day-to-day workflows in January. Anibaba said he believes AI will help humans identify and respond to financial and employee risks. The approach is being framed as a controlled rollout rather than a full replacement of existing human checks.
What risks were highlighted?
The university also acknowledged that automated systems can create problems of their own. Risks mentioned in a slideshow presented at the board’s audit and compliance committee meeting on 24 April 2025 included inaccurate or biased learning outcomes, lack of fairness, misclassification of patient groups and decisions that do not align with the university’s values. The report said it was unclear whether those issues have already occurred at UK because the AI implementation is still underway.
That caution reflects the wider point made by the university: AI may improve reach and speed, but it also needs controls, human review and regular checking. Anibaba said the controls must be “appropriately designed and upgraded”, and that the audit approach must keep pace as the technology evolves.
Is there wider state oversight?
The report says Kentucky has limited state-mandated guidance on AI use. Senate Bill 4, enacted on 24 March 2025, requires the Commonwealth Office of Technology to create and implement AI policies for state institutions such as UK. That creates a broader legal backdrop, but the article notes there is still no detailed statewide framework covering every possible use case.
Jackson Hurst-Sanders, a Louisville-based business attorney, said that if AI use is challenged in Kentucky courts, judges may look for guidance to cases in other states where proper human oversight was not in place. He pointed to a complaint in Michigan against Sirius XM Radio, in which the plaintiff alleged that an AI hiring tool discriminated against Black job applicants; that case is still pending. Hurst-Sanders said Kentucky courts are likely to use precedent from other courts as a guide, meaning outcomes elsewhere could shape future Kentucky AI disputes.
What does the legal angle mean?
The article says legal issues often arise when employers use AI to assess potential hires. It was not clear whether UK would use AI for hiring, although the university listed automated job candidate screening and benefits guidance as possible uses in 2025. Hurst-Sanders recommended that institutions create a task force or committee to implement AI usage policies, and the report says UK’s auditors are doing exactly that.
Hurst-Sanders and Anibaba both stressed the need for human judgement over AI output. Their comments reflect a central theme in the story: the university wants to adopt AI, but not to let it run without accountability. As Anibaba put it, the system has to evolve with the technology.
What does this mean for universities?
The UK story shows how universities are increasingly using AI not only in classrooms and clinics, but also in governance, compliance and internal audit. That makes policy design crucial because the same technology that speeds up detection of risk can also amplify bias or error if it is not closely supervised. For higher-education institutions, the challenge is to decide where automation adds value and where human review must remain dominant.