For our March “What the CV-HEC is Happening” blog, we once again present Fresno attorney Ashley Emerzian of Emerzian Law Group, who has specialized in education law for more than fifteen years, providing transactional and advisory services to California public agencies, schools, nonprofits and private businesses. She earned a Juris Doctor at the University of California, Davis School of Law (Order of the Coif); a bachelor’s in Psychology with a minor in Global Studies at UC Los Angeles (cum laude); and a Certificate in Diversity, Equity & Inclusion at Cornell University. Here, in her second CVHEC blog, she offers a legal perspective on the emergence of AI in higher education, which is reshaping how colleges and universities teach, support students, conduct research and run operations, even as it promises to scale high-quality education while freeing educators to focus on mentorship and innovation. (CVHEC blog submissions are welcome for consideration: Tom Uribes, cvheccommunications@mail.fresnostate.edu.)

Compliance in the Age of AI

From innovation to obligation, AI is changing the playbook for higher education

BY ASHLEY N. EMERZIAN, ESQ.
Emerzian Law Group
March 16, 2026

When the internet first came online to the public circa 1993, few could have predicted its transformational impact on how humans communicate, learn and engage with each other.

As an elder millennial, I recall how my parents, grandparents, and teachers regarded it with caution – sometimes optimistically and sometimes with outright disdain – as they navigated the challenges of adapting to such a seismic shift in the foundation of American life, including education. 

Wasn’t it cheating to use the “internet” to “research” the Challenger explosion for that report?  Shouldn’t we go to the library instead? 

As an amusing side note, I once had a college English professor require library research as a condition of passing his class: no online “research” allowed, as the manual work of navigating the library stacks was seen not only as a valuable skill, but as something that enhanced one’s capacity in the academic space. While this anecdote may elicit a laugh today, my professor’s approach feels oddly reminiscent of the apprehension educational institutions and businesses feel about today’s technological landscape.

Looking to the present day, artificial intelligence is producing that same sense of seismic shift, but on an even larger scale. As many have already made clear, AI is here to stay; there is no use resisting the shift.

If the internet gave us access to information, AI gives us the ability to interpret, generate and act on that information with unprecedented speed and sophistication. Just as the early web ushered humanity into a digital era, AI is pushing us into an era of intelligent systems — one where machines aren’t just tools, but creators and collaborators working alongside us in ways that, once again, few could have predicted.

The official California Education Code definition of AI is as follows: “An engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer, from the input it receives, how to generate outputs that can influence physical or virtual environments.”  (Education Code section 33328.5.)

While this definition may sound like it’s describing a Cylon from Battlestar Galactica or my favorite AI sci-fi character, Mr. Data from Star Trek, as far as higher education goes, AI is no longer a futuristic concept — it’s reshaping how colleges and universities teach, support students, conduct research and run operations. From personalized learning paths to smarter administrative systems, AI holds the promise of scaling high-quality education while freeing educators to focus on mentorship and innovation. But AI also brings serious institutional challenges for students, faculty and administrators alike – not to mention the campus lawyer.

Last month The Chronicle of Higher Education released its special publication “AI in the Classroom – Adjusting to the tool that changes everything.” A worthwhile read, the publication includes real-life examples of the successes and failures of AI on campus (and, if you are short on time, pop it into your favorite generative summary platform for a quick skim of the highlights). Generally, it quite rightly focuses on adjusting to, rather than resisting, AI’s newly prominent place in the education landscape.

Admittedly, this is easier said than done – particularly in the realm of legal compliance. A litany of real-life examples pulled from just the past few months of legal practice illustrates why. For example:

  • Do you really have to respond to that 20-page California Public Records Act request that is obviously AI-generated and came from an “anonymous” email address purporting to belong to a “concerned citizen”? What if it instead came on letterhead from an attorney’s office?
  • How do you respond to an upset student bringing a grade challenge on the basis that her professor used AI to grade her coursework without disclosing it in advance? What if the professor has – subsequent to the grading at issue – sought a reasonable accommodation from HR to use AI for grading in the course?
  • Can you accept a faculty member’s AI-generated statement, submitted in lieu of an interview, in the course of a pending Title IX investigation? What if the statement looks identical to a submission from Witness A, the faculty member’s best friend? Are these actions subject to discipline under the Code of Conduct or a professionalism policy?
  • Is it an unfair labor practice to use AI to generate graphical representations of the institution’s budget if they are circulated to union members during pending negotiations instead of being shared at the bargaining table? What if the disclosure was inadvertent, due to a lack of permission settings within Google folders?
  • Can a disgruntled employee bring a whistleblower complaint based on AI-generated legal summaries that actually get the applicable law wrong?
  • What if HR used ChatGPT to decide how to apply the law to the college’s policies, but interpreted the results incorrectly? What if that leads to a discrimination complaint against the HR department?

The list goes on. In any event, these situations are good reminders that AI is a tool used by humans – not a replacement for critical thinking and prudent risk management. They are also a reminder that administrators and the professionals who support them must invest in AI literacy, including training staff to understand both the benefits and the limits of AI’s reliability and accuracy.

As we all forge ahead – willingly or not – into the AI generation, realizing its potential while mitigating institution-wide risks will require thoughtful strategy, strong governance and continuous collaboration among educators, students and administrators.

For more information, or to request education law compliance help when the robots from the Matrix come knocking, contact your legal teams or feel free to reach out to me at aemerzian@emerzianlaw.com.