Multiple things can be true at the same time

Dear reader, I am sure you have read a lot of blog posts about AI in the past weeks or months. And now I, too, am writing one. Mostly to help me cope with what my kind of hacker people would call out as hypocrisy or cognitive dissonance.

There are various reasons to criticize AI because of its many, many externalities, so here is a bit of a rant. If you keep reading, you will find a perspective on security at the end, but that is not the main goal of this blog post. The main goal is for me to cope. It is in no particular order, and I will allow myself to add paragraphs or links at the bottom. This post was last updated 2026-04-23 and will use "LLM" and "AI" interchangeably.

The summary of this blog post is likely: Multiple things can be true at the same time.

I am using LLMs and yet I am utterly unhappy with how they are built, trained, and run. AI is biased and reproduces harmful stereotypes about people who are non-conforming in one of many ways.

I also see AI being used for decision-making, without question. It's unclear whether people are naive or intentionally using it to diffuse responsibility. Regardless, when they allow AI to make decisions, the people behind these systems imply that "their subjects" do not matter.

Even worse, all of modern society treats long-form writing as proof of deep thought, care, and analysis. All of history, all of human knowledge is in written form. AI is essentially sowing distrust in writing. I fear it may lead to a complete loss of what was previously considered objective and socially accepted Truths. What this means for education and academia is still largely unclear. Not only is an industry killing its career path for juniors, I fear we are killing the path for growth and thought of all coming generations.

AI is also bad for the environment, due to its excessive electrical power requirements. AI is making hardware expensive: memory and storage are no longer affordable, with most of it going straight to people who are buying it to fill a datacenter that has not yet been constructed, with money they don't have.

AI has plagiarized all the works that are (publicly) available on the internet. It destroys and devalues creative work. AI is also unpredictable. Given the stochastic nature of these systems, there is no way to make them reliable.

AI is also intensifying everything (thank you, BenVdS, for the link). People are overwhelmed and exhausted, perceiving a pressure and restlessness because the AI must be fed a prompt. People succumb to the feeling that an untyped prompt or an unread response is wasted time. Even if the generated text isn't fully read, it must be replied to.

And yet, despite all of this, I am using LLMs.

While all of that appears true and important to me, I am also very enthusiastic about software security. My whole career is built on the analysis, composition and architecture of secure software. From time to time, working in security requires you to accept some harsh truths. As Felix "FX" Lindner used to say: You can’t argue with a root shell. Given an exploit - as a "proof by construction" - you can only admit there is a bug and face reality.

To be more specific, there was not just one single bug. We were given 14 bugs. Then we applied our in-house Firefox expertise, which led to us finding and fixing another 271 bugs, many of which were sandbox escapes.

So, yes, as Graydon wrote, the capability increase of AI was "very sudden and very severe", and it caught us all by surprise. But if you are building secure software, protecting hundreds of millions of users, you can't take the moral high ground and sneer at those who touch the AI. As long as there are tools out there that give us a significant advantage over the current attackers, we need to adopt them.

Looking back, my head has been hurting for the past months because of this tension - because of this paradox.

But in the end: Multiple things can be true at the same time.


If you find a mistake in this article, you can submit a pull request on GitHub.
