AI adoption in higher ed faces budget and policy hurdles, new survey reveals

The adoption of artificial intelligence in higher education has risen significantly since 2024, with 57% of institutions now considering it a strategic priority, according to EDUCAUSE's 2025 AI Landscape Study. The study, which surveyed nearly 800 higher education institutions, reveals both progress and persistent challenges in implementing AI across campuses of different sizes.

Current Implementation Status: AI has gained substantial traction in specific educational functions, with institutions actively deploying the technology in key areas.

  • More than half of institutions are using AI for curriculum design (54%) and administrative workflow automation (52%)
  • Students use AI more than faculty do, primarily for problem-solving, proofreading, and content summarization

Financial Constraints: Limited funding mechanisms are creating significant barriers to comprehensive AI implementation across institutions.

  • Only 2% of institutions have secured new funding sources for AI initiatives
  • Executive leadership often underestimates the financial requirements for successful AI deployment
  • Larger institutions tend to view AI as an investment opportunity, while smaller schools struggle with resource allocation

Policy Development Progress: While institutions are making strides in creating AI governance frameworks, significant gaps remain in comprehensive policy coverage.

  • AI-related acceptable use policies have increased from 23% to 39% year-over-year
  • Only 9% of institutions report having adequate cybersecurity and privacy policies addressing AI risks
  • 55% of respondents indicate AI strategy implementation occurs in isolated pockets rather than through unified institutional approaches

Digital Divide Implications: A clear disparity exists between large and small institutions in their ability to implement and support AI initiatives.

  • Larger institutions demonstrate more robust infrastructure and resource allocation for AI implementation
  • Smaller schools predominantly rely on upskilling existing staff rather than new hiring
  • The gap is most evident in areas requiring substantial internal resources, including AI licensing and IT support

Resource Sharing Recommendations: The study provides specific guidance for institutions based on their size and capabilities.

  • Larger institutions are encouraged to document and share their AI implementation experiences
  • Smaller schools should focus on building peer institution networks for resource and knowledge sharing
  • Cross-institutional collaboration is emphasized as a key strategy for addressing resource disparities

Future Outlook: While AI adoption in higher education shows promising growth, the widening resource gap between large and small institutions could have long-term consequences for educational equity and technological advancement in the sector. Bridging this divide will likely depend on increased funding allocation and more robust inter-institutional collaboration networks.

