虎嗅 (Huxiu) · 2026-03-30

“Slow LLM” plugin deliberately makes AI chats feel like an outage to force people to think

What happened

A New York–based designer-academic has released an intentionally “inhumane” tool to slow down large language models and force users to confront how smoothly produced AI answers have reshaped everyday cognition. Sam Lavigne, an assistant professor at the University of Texas at Austin, has open‑sourced a project called Slow LLM that deliberately drags out responses from services such as ChatGPT (OpenAI), Claude (Anthropic), Grok (xAI) and Gemini (Google). Is it a prank, a provocation, or a piece of civic design? Lavigne frames it as an experiment in restoring “friction” to everyday digital life.

How it works

Slow LLM offers two simple deployment modes. The first is a Chrome extension that replaces the browser’s fetch function with a version that releases already‑received data extremely slowly: the remote model and servers respond at full speed, but the user experiences a drawn‑out reply. The second is a DNS‑level option that reportedly can slow every device on a home network and cover models whose responses stream differently (the project’s README and code are on GitHub: https://github.com/antiboredom/slow-llm). The effect is genuine waiting, not an actual backend outage.
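The fetch‑patching approach the extension reportedly uses can be sketched in a few lines. This is not the project’s actual code (see the GitHub repository for that); it is a minimal illustration of the general technique, assuming a streaming response body and an arbitrary per‑chunk delay of 500 ms:

```javascript
// A minimal sketch of fetch-throttling, not Slow LLM's actual implementation.
// Data still arrives from the server at full speed; the page just sees it
// trickle out chunk by chunk.

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Wrap a ReadableStream so each already-received chunk is held back
// for `delayMs` before being passed along to the consumer.
function throttleStream(stream, delayMs) {
  const reader = stream.getReader();
  return new ReadableStream({
    async pull(controller) {
      const { done, value } = await reader.read();
      if (done) {
        controller.close();
        return;
      }
      await sleep(delayMs); // the data is in hand; we simply refuse to release it yet
      controller.enqueue(value);
    },
  });
}

// Hypothetical global patch: swap fetch for a version that throttles the body.
const originalFetch = globalThis.fetch;
globalThis.fetch = async (...args) => {
  const response = await originalFetch(...args);
  if (!response.body) return response; // nothing to throttle
  return new Response(throttleStream(response.body, 500), {
    status: response.status,
    headers: response.headers,
  });
};
```

Because only delivery to the page is delayed, a chat UI that renders tokens as they stream will appear to crawl, while the underlying API call completes normally.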

Why he made it

Lavigne has a track record of tools that reintroduce disruption into digital workflows — think Zoom Escaper, which injected disruptive noises into calls to help people escape meetings, and SlopAvoider, which filtered post‑ChatGPT search results. His argument: product design has spent decades removing friction, and generative AI has accelerated the outsourcing of basic cognitive tasks. By slowing responses, Slow LLM aims to prompt users to pause and ask “Can I do this myself?” rather than habitually offload decisions and thinking to an LLM.

Questions and risks

The project raises ethical and legal questions. It has been reported that Lavigne has not yet tested the DNS option on unwitting people but is considering the idea — a move that would raise consent concerns. More broadly, experiments that tamper with network behavior intersect with debates over AI governance, platform responsibility and digital sovereignty as governments and companies across the U.S., China and Europe wrestle with how to regulate powerful models. Slow LLM is intended as a design provocation; whether it becomes a useful nudge or an invasive prank will depend on how and by whom it is deployed.
