Denied by Design: How AI is Undermining the Public's Right to Know

The Request That Changed Everything

I used to work on the IT team at a California County Office of Education. I won’t name the agency here, not out of fear, but because this isn’t about one office. It’s about a growing pattern in public institutions where internal expertise is ignored, decisions are made in silos, and transparency is treated like a threat rather than a public right.

For months, I had been noticing a troubling dynamic: core IT work was regularly outsourced to external vendors while members of our in-house team, many of whom had the expertise and institutional knowledge to do the job, were bypassed. Procurement decisions were made without consulting the very people responsible for maintaining and supporting those systems.

That came to a head when I walked into my own building and found brand-new network switches already installed, without warning, without a heads-up, and certainly without any input from our team. I hadn’t seen the purchase request. No ticket. No internal discussion. Just hardware, physically installed in my workspace, as if we didn’t exist.

When I tried to ask how and why this happened, my questions went nowhere. No one seemed to have a straight answer, or if they did, they weren’t willing to share it.

So I submitted a California Public Records Act (CPRA) request. I simply wanted to understand what had been purchased, from whom, and why. It wasn’t an attack. It was a request for clarity.

The response revealed something staggering: over $90,000 had been spent on new switches, completely off the radar of the internal IT team. Even more surprisingly, the equipment purchased wasn’t from the vendor we had been using for over two decades. The agency had changed brands, an infrastructure-wide shift, with no plan communicated to the people tasked with managing it.

But perhaps the most bizarre part? You wouldn’t know that from the documents they gave me. The brand names, model numbers, and the name of the third-party company that sold and installed the equipment were all redacted. It was as if the agency was trying to hide the identity of the switch itself.

That was the moment I realized this wasn’t just an internal communication failure. It was a structural problem, one that extended into the legal logic used to justify secrecy. And it was only the beginning.

What Concealing Revealed

When the documents finally came in, I expected invoices and line items, just basic procurement information. Instead, what I got felt more like a redacted intelligence report than a simple purchase order.

The total spent on switches was there, over $90,000, but critical details were blacked out. The vendor name was redacted. So was the make and model of the switches. Even the name of the MSP (Managed Service Provider) that installed the equipment was hidden.

To put that into context: a public education agency used taxpayer money to purchase network hardware, installed it in an operational facility, and then claimed that disclosing the brand of that switch would somehow jeopardize cybersecurity.

It wasn’t just excessive, it was illogical.

The justification cited California Government Code § 7929.210(a), which allows agencies to withhold records if releasing them would “reveal vulnerabilities to, or otherwise increase the potential for an attack on” an IT system. But that law was designed to protect actual security configurations, penetration test results, or vulnerability reports, not general product names.

And the inconsistency was obvious. In fact, within the same batch of documents, I found “9300” listed as the switch model. Any IT professional would recognize that as part of Cisco’s Catalyst line, yet the word “Cisco” was redacted. So in that case, the model number could be disclosed, but not the brand?

Digging deeper, I realized this wasn’t the first time this agency had disclosed brand information. Public surplus declarations and job descriptions on their own website listed Cisco, SonicWall, Dell, and Fortinet by name. If these brands were previously safe to mention in public documents, why were they suddenly considered security risks?

The agency's stance not only contradicted its own past behavior, it violated the spirit of the California Public Records Act, which is rooted in maximum feasible disclosure.

When I followed up and challenged the redactions, some vendor names were eventually restored. But the brand and model information remained censored. The logic behind that decision didn’t hold up, and as I would soon discover, this pattern wasn’t isolated to one agency.

Rebuttals and Red Flags

I didn’t let it go. Not because I enjoy a fight, but because the justifications being offered were detached from both technical reality and legal precedent. I spent years working in public education IT. I knew what was typical, what was sensitive, and what was plainly public.

So I drafted a response. Several, actually, each one more detailed than the last. I cited California’s Government Code § 7922.525(a), which requires agencies to release all reasonably segregable portions of a record, even if other parts are exempt. I cited the fact that the agency’s own surplus declarations had previously listed the exact same vendor and model names now being withheld.

I highlighted the absurdity of hiding product names like “Cisco Catalyst” when the same model numbers were:

  • Publicly auctioned off by the county,

  • Listed in board minutes, and

  • Visibly labeled on devices installed in unlocked network closets.

I explained how job postings from the agency itself referenced Cisco and SonicWall by name, as part of the expected technical skill set. If those brands are okay for public job ads and surplus reports, how can they suddenly become cybersecurity secrets in a purchase order?

The more I pushed, the more it became clear: this wasn’t about risk. It was about control.

And that’s when something even more troubling started to emerge.

Something Strange Starts to Emerge

At this point, I assumed the issue was limited to one agency taking an overly defensive stance. So, I expanded my inquiry. Using the PublicEdTech.org project, I submitted identical records requests to other County Offices of Education across California, asking for basic procurement details on their network switches, firewalls, and access points.

That’s when something strange began to happen.

I started receiving denial letters that looked… familiar. Too familiar.

Several were structurally identical, with the same blocks of language, the same interpretation of Government Code § 7929.210, and even the same phrasing choices down to the word. In some cases, entire sentences were copied verbatim across different agencies in different counties, each with their own leadership, counsel, and technical environments.

So I tested a hunch.

I ran these denial letters through multiple AI content detectors, tools designed to identify whether a piece of writing was likely authored by a large language model (LLM) like ChatGPT. The results were striking: high probabilities of AI authorship across the board.

Some returned scores in the 98-100% range for AI generation. The tone, structure, and syntax were all textbook LLM: overly formal, vaguely reasoned, and filled with generalizations that mimic legal logic without actually applying it.
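Detector scores are probabilistic, but the word-for-word reuse itself is easy to check. Here is a rough sketch of one way to flag sentences that recur verbatim across a set of letters; the `denial_letters` directory and plain-text files are hypothetical placeholders, not the actual records:

```python
# Rough sketch: flag sentences that appear verbatim (after light normalization)
# in more than one denial letter. File names and paths are placeholders.
import re
from collections import defaultdict
from pathlib import Path

def sentences(text: str) -> set[str]:
    """Split text into normalized sentences (lowercased, whitespace collapsed)."""
    parts = re.split(r"(?<=[.!?])\s+", text)
    return {re.sub(r"\s+", " ", p).strip().lower() for p in parts if len(p.split()) >= 8}

# Load each letter and record which letters contain each sentence.
letters = {p.name: sentences(p.read_text()) for p in Path("denial_letters").glob("*.txt")}
seen = defaultdict(set)
for name, sents in letters.items():
    for s in sents:
        seen[s].add(name)

# Report any sentence shared by two or more agencies' letters.
for sentence, sources in sorted(seen.items(), key=lambda kv: -len(kv[1])):
    if len(sources) > 1:
        print(f"Shared by {sorted(sources)}:\n  {sentence}\n")
```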

This raised an entirely new concern.

Public agencies were now using (or being advised by) AI systems to automate legal determinations, including the rejection of lawful records requests under CPRA, California’s cornerstone transparency law.

These weren’t policy memos. They were legal denials, in some cases coming directly from legal counsel. The difference matters.

Even more alarming, these AI outputs were being treated as authoritative. No case law cited. No site-specific risk analysis. Just a blanket reliance on generic, predictive text, text that was never meant to serve as legal reasoning in the first place.

This wasn’t just a transparency issue anymore. It was a systems problem. The human review layer had been replaced by something that sounded authoritative but lacked understanding of both the law and its context.

Why This Is a Problem

Let me be clear: I’m not against AI. In fact, I’ve worked with it, studied it, and even use it in parts of my own projects (like this one!). AI is here to stay, and it has the potential to enhance public service, streamline operations, and improve access to information.

But only if we understand what it actually is, and what it’s not.

The denials I encountered weren’t reviewed by legal experts tailoring thoughtful exemptions to nuanced requests. They were stitched together by language models, tools like ChatGPT, designed not to interpret law, but to generate plausible-sounding language based on probability. These tools don’t reason. They don’t verify facts. They don’t apply precedent. Their goal is not truth, but fluency.

That distinction is everything.

A large language model (LLM) like ChatGPT is trained to predict the next word in a sentence based on patterns in a massive dataset. It doesn’t “know” the law. It imitates the structure of legal writing. It cannot distinguish between a constitutional right and a policy preference. It doesn’t understand the burden of proof, or how to weigh public interest against disclosure risk under the California Public Records Act.
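To make that concrete, here is a minimal sketch of what “predicting the next word” looks like in practice, using a small open model (GPT-2) rather than ChatGPT itself, and a prompt I made up for illustration. The model simply ranks likely continuations; nothing in the process checks whether any of them are legally sound:

```python
# Minimal sketch of next-token prediction with a small open model (GPT-2).
# The model ranks plausible continuations; it performs no legal analysis.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The requested records are exempt from disclosure because"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Probability distribution over the very next token -- pure pattern completion.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}  p={prob.item():.3f}")
```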

So when a public agency uses LLM-generated language to deny access to public records, here’s what happens:

  • The agency offloads its legal judgment to a tool incapable of legal judgment.

  • The public receives decisions cloaked in legal-sounding language with no foundation.

  • And accountability becomes automated out of existence.

This hurts both sides.

For the agency, it creates legal exposure, because the denials lack the case-specific justification that CPRA requires. If challenged in court, many would fall apart under scrutiny. The “security risk” arguments being used to block disclosure of model numbers and vendor names? They’re not only inconsistent with industry norms, they’re legally unsupported and in many cases contradicted by the agency’s own public documents.

For the public, it raises the barrier to access even higher. You're no longer negotiating with a person. You're being stonewalled by a faceless machine, or worse, by a person hiding behind one.

And if that sounds dystopian, it’s because it is. The promise of AI in government was to make services better, not to erode transparency under the guise of efficiency.

Who Gets Hurt

The consequences of this AI-driven denial strategy aren’t theoretical, they’re real, and they’re already playing out in ways that hurt both public agencies and the people they serve.

The Public Loses Transparency

When record requests are denied without proper justification, the public is left in the dark. Parents, educators, journalists, and advocates lose access to the information they need to hold institutions accountable. These aren’t niche interests, they’re foundational rights protected under California law.

If someone can’t find out what hardware is installed at their child’s school, how funds were spent, or what vendors were used, oversight becomes impossible. These questions aren’t about curiosity, they’re about equity, safety, and responsible governance.

And when the denials come wrapped in the neutral, passive language of AI, they feel final. Cold. Detached. The implicit message becomes: There’s no one to talk to. There’s no one to appeal to. This is just the way it is.

Agencies Lose Credibility

On the other side, public institutions are damaging their own standing. When agencies issue denials based on generic, AI-generated text, they open themselves up to serious legal and reputational risks.

The California Public Records Act requires reasoned, case-specific decisions. That means redactions must be narrowly tailored, with clear explanations. Substituting legal-sounding AI responses in place of actual legal review violates both the letter and spirit of the law.

Worse, it signals a posture of secrecy and defensiveness. That erodes trust, not just in the IT department or legal office, but in the entire institution. When transparency is replaced with algorithmic opacity, the public assumes the worst. And who could blame them?

Staff Are Silenced

This is personal, too. I was an employee who tried to ask questions through the proper channels. I filed internal tickets, sent emails, requested context, and was ignored. The moment I turned to public records law, my agency shifted from ignoring me to actively redacting information that had been public just months before.

It was no longer about policy. It became about control. Not security, not safety, but control of the narrative and insulation from accountability.

So who gets hurt?

  • Students, when funding is misallocated.

  • Educators, when they’re left out of the decision-making process.

  • Technologists, when their expertise is bypassed.

  • Communities, when their right to know is denied by default.

And ultimately, public institutions themselves, when they forget that accountability isn’t the enemy, it’s the mission.

What Needs to Happen Next

This is fixable, but only if we act with clarity and urgency.

AI is not the villain here. The problem is how it’s being used. When legal denials are generated by models that mimic, rather than apply, the law, public agencies abandon their responsibility to think, to assess, and to engage. That failure isn't technological. It's ethical. And it’s administrative.

So what needs to change?

1. Human Review Must Be Mandatory

Public records decisions are legal decisions. Whether handled internally or by counsel, they must involve someone trained to apply the law, not just format it.

If an LLM is used to generate a draft or template, that must be disclosed internally, and it must be reviewed and approved by someone with legal authority. No final decision should be automated.

2. Agencies Must Disclose Use of AI in Legal Contexts

If AI is being used in legal or policy-facing communications, especially those that impact public rights, agencies should clearly state that in writing.

Just like agencies disclose the use of outside legal counsel, or note the chain of approval, they should disclose when AI has played a role in shaping determinations.

Transparency here protects trust. Secrecy undermines it.

3. Public Records Responses Must Meet Legal Standards

No matter how they’re written, PRA denials must still:

  • Be narrowly tailored

  • Cite specific code sections

  • Demonstrate a public interest balancing test where applicable

  • And provide segregable portions of records where possible

Boilerplate language, human or machine-generated, doesn’t meet that standard.

4. Oversight Bodies Must Get Involved

This issue isn’t going away on its own. It’s time for:

  • The State Bar of California to examine whether AI-generated legal reasoning violates ethical duties of competence and diligence when used by or under the supervision of attorneys

  • The California Attorney General to investigate the use of LLMs in obstructing CPRA compliance

  • And the Legislature to consider guidance or disclosure rules on AI use in public agency decision-making

This isn’t about banning AI. It’s about ensuring the law remains grounded in human accountability.

5. The Public Must Stay Engaged

If you're reading this and you’ve received a denial that feels robotic or overly vague, challenge it. Ask questions. File appeals. Submit your own rebuttals. The CPRA was built for public use, not just journalists and lawyers.

We can’t let automation become an excuse for silence.

A Story Worth Watching

This started as a simple question: Why were switches installed in our building without anyone being told? It became a journey into redacted invoices, vendor secrecy, and, unexpectedly, AI-generated legal denials.

Along the way, it stopped being about just one purchase, one office, or one mistake. It became a cautionary tale about what happens when language models replace legal reasoning, and transparency is treated like a vulnerability instead of a value.

The use of AI in public institutions isn’t inherently wrong. But when AI becomes a shield to avoid accountability, when denial letters are cut from the same digital cloth, devoid of context or case-specific analysis, we’re no longer protecting the public. We’re protecting power.

This issue isn’t going away. In fact, it’s just beginning. That’s why PublicEdTech.org will continue monitoring this space, documenting patterns, and helping agencies, technologists, and advocates navigate the ethical use of emerging tools. Because automation without accountability isn’t innovation, it’s abdication.

If you’ve received a questionable public records denial, we want to hear from you. If you work in a public agency and feel pressure to use AI to handle sensitive requests, reach out. This isn’t just about criticizing misuse. It’s about building better practices.

Transparency isn’t just about access. It’s about trust.
