Critical AI Data Governance Gap in Higher Education: What Institutions Must Do Now
The numbers tell a troubling story. Ninety-four percent of higher education workers now use AI tools in their daily work, but only 54 percent know whether their institution even has policies governing that use. This disconnect, revealed in new research published January 13, 2026, represents one of the most significant governance failures facing colleges and universities today.
Key Takeaways
- The Policy-Practice Gap Is Massive. Nearly all higher education employees (94%) now use AI tools for work, yet only 54% are aware of their institution's AI use policies. This disconnect creates significant exposure for data privacy violations, security breaches, and regulatory noncompliance.
- Shadow AI Poses Immediate Risks. More than half of higher education workers (56%) use AI tools not provided by their institutions. Sensitive student data flowing through unapproved third-party systems bypasses institutional security controls and may violate FERPA, COPPA, and other regulatory requirements.
- Leadership Awareness Is Surprisingly Low. Even decision-makers lack clarity on AI governance, with 38% of executive leaders, 43% of managers, and 30% of cybersecurity professionals unaware of existing AI policies. This suggests many institutions simply lack formal policies rather than failing to communicate existing ones.
- Education Trails Other Sectors in Critical Controls. The education sector shows a 19-point gap in privacy impact assessments compared to global benchmarks, with only 6% of institutions conducting systematic privacy evaluations for AI systems. Red-teaming and adversarial testing similarly lag at just 6% adoption.
- Third-Party EdTech Multiplies Governance Complexity. Only 18% of educational institutions have established AI-specific policies for vendors processing student data. The explosion of AI-enabled EdTech products—from adaptive learning platforms to automated proctoring—means student information flows through systems with minimal institutional oversight.
The findings come from a landmark collaborative study by Educause, the National Association of College and University Business Officers (NACUBO), the College and University Professional Association for Human Resources (CUPA-HR), and the Association for Institutional Research (AIR). Researchers surveyed nearly 2,000 staff members, administrators, and faculty across more than 1,800 institutions. What they discovered should concern every higher education leader: a policy-practice gap that creates substantial risks for data privacy, security, and institutional compliance.
Disconnect Between AI Use and AI Awareness
The gap between what employees are doing and what they understand about institutional expectations is staggering. More than half of respondents reported using AI tools not provided by their institutions for work-related tasks. This means faculty are drafting communications with ChatGPT, staff are analyzing spreadsheets with AI assistants, and administrators are automating workflows through third-party tools—all without institutional oversight or data governance controls.
Perhaps more alarming is what the research reveals about leadership awareness. Thirty-eight percent of executive leaders, 43 percent of managers and directors, and 35 percent of technology professionals reported they were unaware of policies designed to guide their AI use. Even 30 percent of cybersecurity and privacy professionals—the very people responsible for protecting institutional data—said they didn’t know about existing AI policies.
Jenay Robert, senior researcher at Educause and author of the report, noted that this disconnect “could have implications for things like data privacy and security and other data governance issues that protect the institution and [its] data users.” The observation points to a fundamental problem: Many institutions likely lack formal policies entirely, rather than simply failing to communicate existing ones.
Why This Matters for Student Data Protection
Higher education handles extraordinarily sensitive information. Student records contain academic performance data, disciplinary histories, financial aid details, and health information. Under FERPA, institutions bear legal responsibility for protecting student education records. Under COPPA, strict requirements govern the collection of data from children under 13—directly relevant to institutions serving younger populations or K-12 partnerships.
A separate analysis from Kiteworks examining AI data governance across sectors found that education trails global benchmarks by double digits across critical controls. The education sector shows a 19-point gap in privacy impact assessments, with only 6 percent of educational institutions conducting systematic privacy evaluations for AI systems compared to 25 percent globally. This represents the second-largest capability gap identified across all sectors and metrics in the study.
The implications are direct: AI systems analyzing student performance, predicting outcomes, and personalizing learning paths interact directly with protected data categories. When 94 percent of institutions deploy these systems without systematic privacy evaluation, student information flows through tools and processes that were never formally assessed for compliance or risk.
Resource Reality Facing Institutions
Understanding why this gap exists requires acknowledging the unique constraints facing higher education. Unlike financial services or healthcare organizations with dedicated compliance teams, most colleges and universities operate with severely limited IT and security staff. The Kiteworks analysis found that zero percent of education respondents reported board-level attention to skills gaps and workforce issues, compared to 14 percent globally.
This isn’t a failure of awareness. Education shows strong board attention to overall cyber risk at 65 percent—tied for highest globally. Leaders understand the stakes. But operational capabilities tell a different story. The sector knows what it should do but lacks the resources to implement proper governance frameworks.
Consider the paradox revealed in bias and fairness controls. Education institutions report 35 percent adoption of bias and fairness audits—exceeding global averages by 6 points. Yet red-teaming and active bias testing trail dramatically at just 6 percent. Institutions are documenting policies without testing whether AI systems produce biased outcomes. Audits review documentation; testing reveals real-world behavior.
Shadow AI: The Invisible Risk
The Educause findings highlight a phenomenon familiar to IT security professionals across all sectors: shadow AI. When 56 percent of higher education workers use AI tools not provided by their institutions, sensitive data flows through systems that may store, train on, or share information in ways that violate institutional policies, regulatory requirements, or contractual obligations.
Shadow AI creates several specific risks in educational contexts. First, student data entered into public AI tools may be used to train models, potentially exposing protected information. Second, faculty using AI for grading or assessment may unknowingly violate student privacy protections. Third, administrators automating processes through third-party tools may create data export pathways that bypass institutional security controls.
The research found that 92 percent of institutions have some form of AI strategy in place, including piloting tools, evaluating opportunities and risks, and encouraging use. But strategy without enforcement leaves institutions exposed. When nearly half of workers remain unaware of whatever policies exist, the strategy exists only on paper.
Third-Party EdTech Compounds the Challenge
Higher education’s heavy reliance on third-party educational technology vendors multiplies governance complexity. The Kiteworks analysis found that only 18 percent of educational institutions have established AI-specific policies and attestation requirements for vendors processing student data—a 15-point gap compared to global benchmarks.
The EdTech market has exploded with AI-enabled products: adaptive learning platforms, automated essay scoring systems, proctoring software, student engagement monitors, and early warning tools. Many educational institutions lack the technical expertise to evaluate these systems’ AI data governance practices. Without vendor attestation requirements, institutions accept vendor assurances without verification, leaving student data protection largely to EdTech companies’ discretion.
This creates liability exposure that many institutions may not fully appreciate. When a vendor’s AI system produces biased outcomes or experiences a data breach, the institution remains accountable to students, families, and regulators. Vendor contracts that don’t address AI data governance leave institutions bearing risks they haven’t assessed and can’t control.
Transparency and Trust at Stake
Education exists within a web of accountability relationships unlike most other sectors. Parents expect to understand how technology affects their children’s education. School boards require explanations they can communicate to communities. Accreditation bodies increasingly ask about technology governance. Alumni, donors, and legislators all maintain interests in institutional operations.
The Kiteworks data reveals a 16-point transparency gap, with only 24 percent of educational institutions implementing transparency practices compared to 40 percent globally. Model explainability documentation trails by 14 points at just 12 percent. For a sector accountable to communities in ways commercial organizations aren’t, this creates vulnerabilities that technical security measures alone can’t address.
When AI systems influence course placements, flag behavioral concerns, or personalize learning paths, families want to know how decisions are made. Parents will accept educational AI they understand. They will resist—and potentially litigate against—AI that operates as a black box making consequential decisions about their children.
Measuring What Matters
The Educause research surfaced another significant gap: Only 13 percent of institutions are measuring return on investment for work-related AI tools. This means the vast majority of colleges and universities are deploying AI without systematic assessment of whether these tools actually deliver value.
The measurement gap matters beyond efficiency concerns. Without data on AI system performance, institutions cannot identify when tools produce problematic outcomes, detect bias in automated decisions, or justify continued investment. When budget constraints force difficult choices, AI initiatives without demonstrated ROI become vulnerable to cuts—even if they’re providing genuine value that simply wasn’t measured.
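Measurement does not have to be elaborate to start. The sketch below shows a minimal before-and-after comparison for a single process touched by an AI tool; the metric names and values are hypothetical placeholders, not data from either study, and an institution would substitute its own baselines.

```python
# A minimal sketch of tracking before/after metrics for one AI-assisted process.
# The metric names and values are hypothetical examples, not study data.

baseline = {"avg_processing_days": 6.0, "error_rate": 0.08, "staff_hours_per_week": 30}
with_ai = {"avg_processing_days": 4.5, "error_rate": 0.05, "staff_hours_per_week": 22}

# Compare each metric and report the percentage change after AI adoption.
for metric, before in baseline.items():
    after = with_ai[metric]
    change = (after - before) / before * 100
    print(f"{metric}: {before} -> {after} ({change:+.1f}%)")
```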
Institutions that can demonstrate AI effectiveness will be positioned to make informed decisions about expansion, modification, or discontinuation. Those operating without measurement will make these decisions based on anecdote, politics, or the loudest voices rather than evidence.
Five Actions Institutions Should Take Now
The research points toward specific steps every institution should prioritize, regardless of resource constraints.
Establish Clear AI Governance Frameworks. This doesn’t require extensive documentation or committee structures. It requires clear statements about what AI tools are approved, what data can and cannot be processed through AI systems, and who holds accountability for compliance. Even a two-page policy statement is better than the current vacuum at many institutions.
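For institutions looking for a concrete starting point, the sketch below shows one way such a short policy could also be captured as a machine-readable record so tool and data approvals can be checked consistently. The tool names, data categories, and accountable role are hypothetical illustrations, and the written policy document remains the primary artifact.

```python
# A minimal sketch of a short AI use policy expressed as a machine-readable record.
# All tool names, data categories, and owner roles here are hypothetical examples.

from dataclasses import dataclass


@dataclass
class AIUsePolicy:
    approved_tools: set[str]      # tools the institution has vetted
    prohibited_data: set[str]     # data categories that must never enter AI tools
    accountable_owner: str        # role responsible for compliance

    def is_permitted(self, tool: str, data_category: str) -> bool:
        """Return True only if the tool is approved and the data category is not restricted."""
        return tool in self.approved_tools and data_category not in self.prohibited_data


# Hypothetical example values for illustration only.
policy = AIUsePolicy(
    approved_tools={"institution-licensed-assistant"},
    prohibited_data={"student_records", "financial_aid", "health_information"},
    accountable_owner="Chief Information Security Officer",
)

print(policy.is_permitted("institution-licensed-assistant", "course_catalog"))  # True
print(policy.is_permitted("public-chatbot", "student_records"))                 # False
```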
Inventory Shadow AI Deployments. Institutions cannot govern what they don’t know exists. Survey departments about AI tools in use. Identify data flows through unofficial channels. Bring shadow AI into the light where it can be assessed and either approved with appropriate controls or discontinued.
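One lightweight way to start is to tally survey responses into a simple inventory that flags unapproved tools for review. The sketch below assumes hypothetical survey fields, tool names, and an approved-tool list; an institution would substitute its own survey instrument and approved catalog.

```python
# A minimal sketch of turning departmental survey responses into a shadow AI inventory.
# Field names, tool names, and the approved list are hypothetical examples.

from collections import Counter

APPROVED_TOOLS = {"institution-licensed-assistant"}  # assumed approved list

# Each response records the department, the tool in use, and the data it touches.
survey_responses = [
    {"department": "Admissions", "tool": "public-chatbot", "data": "applicant essays"},
    {"department": "Registrar", "tool": "institution-licensed-assistant", "data": "course catalog"},
    {"department": "Financial Aid", "tool": "spreadsheet-ai-plugin", "data": "award letters"},
]

# Flag any tool that is not on the approved list.
shadow_ai = [r for r in survey_responses if r["tool"] not in APPROVED_TOOLS]

# Summarize which unapproved tools appear most often, to prioritize review.
tool_counts = Counter(r["tool"] for r in shadow_ai)

for record in shadow_ai:
    print(f"Unapproved: {record['tool']} in {record['department']} (handles {record['data']})")
print("Most common unapproved tools:", tool_counts.most_common())
```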
Implement Data Classification Schemes. Not all data carries equal sensitivity. Student social security numbers require different protection than course catalog information. Data classification enables proportionate controls—rigorous governance for high-sensitivity data, streamlined processes for lower-risk information.
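A workable scheme can be small. The sketch below illustrates a possible four-tier classification mapped to AI handling rules, defaulting to the most restrictive treatment for unclassified data; the tier names and rules are illustrative assumptions, not a regulatory standard.

```python
# A minimal sketch of a data classification scheme mapped to AI handling rules.
# The tier names, rules, and examples are hypothetical, not a compliance standard.

CLASSIFICATION_RULES = {
    "public":       {"ai_use": "permitted",                             "example": "course catalog"},
    "internal":     {"ai_use": "approved tools only",                   "example": "budget drafts"},
    "confidential": {"ai_use": "approved tools with data agreement",    "example": "student records (FERPA)"},
    "restricted":   {"ai_use": "prohibited",                            "example": "social security numbers, health data"},
}


def ai_handling_rule(classification: str) -> str:
    """Look up the AI handling rule for a given classification tier."""
    tier = CLASSIFICATION_RULES.get(classification)
    if tier is None:
        # Unclassified data defaults to the most restrictive treatment.
        return "prohibited until classified"
    return tier["ai_use"]


print(ai_handling_rule("confidential"))  # approved tools with data agreement
print(ai_handling_rule("unknown"))       # prohibited until classified
```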
Provide Comprehensive Training. The 40-point gap between AI use (94 percent) and policy awareness (54 percent) represents a communication failure, not just a policy failure. Training should help faculty and staff understand what data can be entered into AI tools, how to evaluate AI outputs, and when to escalate concerns. This training need not be elaborate; short, specific guidance often works better than lengthy compliance modules.
Develop EdTech Vendor Requirements. Institutions have collective purchasing power they rarely exercise. Develop standard contract language addressing AI data governance, join consortium purchasing programs with shared accountability standards, and require attestations before deploying EdTech products that process student data.
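As one illustration of how attestation requirements could be operationalized, the sketch below checks a vendor's stated attestations against a baseline list before deployment. The attestation items and vendor details are hypothetical examples, not language drawn from either study or from any standard contract.

```python
# A minimal sketch of a pre-deployment check against vendor attestations.
# The attestation items and vendor details are hypothetical examples.

REQUIRED_ATTESTATIONS = {
    "no_training_on_student_data",   # vendor will not train models on institutional data
    "ferpa_school_official_terms",   # contract designates vendor as a FERPA school official
    "timely_breach_notification",    # vendor commits to prompt breach notification
}


def missing_attestations(vendor_attestations: set[str]) -> set[str]:
    """Return the required attestations the vendor has not provided."""
    return REQUIRED_ATTESTATIONS - vendor_attestations


vendor = {"name": "Example Adaptive Learning Co.",
          "attestations": {"no_training_on_student_data"}}

gaps = missing_attestations(vendor["attestations"])
if gaps:
    print(f"Hold deployment of {vendor['name']}; missing: {sorted(gaps)}")
else:
    print(f"{vendor['name']} meets baseline attestation requirements.")
```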
Looking Ahead: The Stakes for 2026 and Beyond
The Educause report captures a workforce simultaneously enthusiastic and cautious about AI. Thirty-three percent of respondents described themselves as “very enthusiastic” or “enthusiastic” about AI, while 48 percent reported a mix of caution and enthusiasm. Only 17 percent expressed pure caution.
This mixed sentiment reflects appropriate complexity. AI offers genuine potential to reduce administrative burden, personalize learning experiences, and improve institutional operations. The risks are equally real: privacy violations, biased outcomes, security breaches, and erosion of human oversight in consequential decisions.
The institutions that will thrive are those that channel enthusiasm through appropriate governance rather than attempting to suppress AI use entirely. Workers clearly want these tools—86 percent said they plan to continue using AI in the future regardless of current policies. The question is whether that use will occur within governance frameworks that protect students and institutions, or outside them.
Consequences of Inaction
Higher education has weathered previous technology transitions with varying degrees of success. The shift to online learning during the pandemic revealed which institutions had invested in digital infrastructure and which had not. The current AI data governance gap will produce similar sorting.
Institutions that establish clear frameworks, train their workforces, and implement appropriate controls will be positioned to realize AI’s benefits while managing its risks. Those that allow the current policy-practice gap to persist face regulatory exposure, reputational damage, and potential harm to the students they serve.
The Kiteworks analysis offered a stark assessment: Education enters 2026 “stretched between competing realities: stewardship of the most sensitive data about society’s most vulnerable population, deployed with governance capabilities that would be unacceptable in sectors handling far less consequential information.” The resource constraints are real. The gaps are documented. The consequences fall on students.
Education built its mission around student welfare. Extending that mission to AI data governance isn’t optional—it represents the same commitment applied to new technology. The research is clear about where the gaps exist. The question now is whether institutions will act to close them.
Frequently Asked Questions
What is the AI data governance gap in higher education?
The AI governance gap refers to the disconnect between widespread AI tool usage among higher education employees and their awareness of institutional policies governing that use. Research shows 94% of higher ed workers use AI tools, but only 54% know whether their institution has AI policies. This gap creates risks for data privacy, security, and regulatory compliance because employees may unknowingly violate FERPA, COPPA, or other requirements when using AI to process student information.
Why does AI data governance matter for higher education?
Higher education institutions handle extraordinarily sensitive information about students, including academic records, financial aid details, health information, and behavioral assessments. AI systems that analyze student performance, predict outcomes, or personalize learning interact directly with this protected data. Without proper governance, institutions risk regulatory violations, data breaches, biased algorithmic decisions affecting students, and erosion of trust with parents and communities who expect transparency about how technology affects their children’s education.
What is shadow AI, and why does it matter?
Shadow AI refers to AI tools that employees use for work purposes without institutional approval or oversight. The Educause research found that 56% of higher education workers use AI tools not provided by their institutions. This matters because sensitive student data entered into public AI tools may be used to train models, potentially exposing protected information. Shadow AI also creates data export pathways that bypass institutional security controls and may violate vendor contracts, accreditation requirements, or federal regulations.
How does FERPA apply to AI tools in higher education?
FERPA (Family Educational Rights and Privacy Act) requires educational institutions to protect student education records from unauthorized disclosure. When AI tools process student data—whether for grading assistance, learning analytics, or administrative automation—institutions must ensure that data handling complies with FERPA requirements. This includes verifying that AI vendors qualify as “school officials” under FERPA, that appropriate data use agreements are in place, and that student information isn’t retained or used by AI systems in ways that violate student privacy rights.
What should an AI data governance policy include?
Effective AI data governance policies should specify which AI tools are approved for institutional use, what categories of data can and cannot be processed through AI systems, and who holds accountability for compliance. Policies should address both internally deployed AI and third-party EdTech products, establish data classification requirements, define transparency obligations to students and families, and outline training requirements for faculty and staff. Even a simple two-page policy document provides better protection than the policy vacuum many institutions currently maintain.
How should institutions measure the value of AI tools?
Only 13% of institutions currently measure ROI for AI tools, leaving most without evidence of whether these investments deliver value. Effective measurement should track efficiency gains in specific processes, error rates before and after AI implementation, user satisfaction among faculty and staff, and any impacts on student outcomes. Institutions should also monitor for unintended consequences including bias in automated decisions, data privacy incidents, and compliance violations. Without systematic measurement, institutions cannot make informed decisions about continuing, expanding, or discontinuing AI initiatives.
What are the biggest risks for institutions that fail to act?
The most significant risks include regulatory compliance enforcement and litigation arising from AI systems deployed without adequate privacy impact assessments, particularly those processing data about minors under FERPA and COPPA protections. Additional risks include biased AI outcomes in student-facing systems like course recommendations and early warning tools, erosion of parent and community trust due to lack of transparency about how AI affects students, and security breaches through shadow AI deployments that bypass institutional controls. Institutions that fail to address these risks face potential regulatory penalties, reputational damage, and most importantly, harm to the students they serve.