The AI burnout is real

April 15, 2026 · Eric Dodds Weblog

My former co-founder recently visited San Francisco for a conference and caught up with a handful of friends he hadn’t seen in several years.

When I asked him how the meetings went, he thought for a moment and said:

The AI burnout is real.

He went on to explain that everyone he met with, all 10+ year veterans of tech in the Valley, voluntarily lamented the impact of AI on their daily work and the industry at large. On a personal level, they are working harder than they ever have, trying to meet both implicit and explicit expectations for higher output. On a macro level, the playbooks for tech are being rewritten in real time, which has caused job loss, cratering valuations and no certain way to map out a long-term career.

Everyone felt the power of AI (“it’s truly amazing”), but instead of excitement, they live in a malaise.

That sentiment seems illogical on the surface. We’re living through a technological transformation on the scale of the Industrial Revolution, and these people have a front row seat. Disruption has negative consequences, but also presents nearly immeasurable opportunity.

Why are knowledge workers in tech struggling with burnout?

A practical look at how AI is changing work

The first frontier of the new revolution has been software engineering. Six months ago, Anthropic’s CEO claimed that in the near future, 90% or more of code will be written by AI. This is because software code is a near-perfect use case for LLMs and because the economics of scarce, high-wage labor could mean big ROI for the frontier model providers.

But disruption of knowledge work across almost every discipline is quickly following.

I’ve worked remotely for tech companies in the Valley for the last 6 years and use AI at an advanced level in my daily work. At Vercel, expectations for using AI to increase output are explicit1. I’m thankful for the challenge because it is forcing me and my peers to rapidly learn (and shape) what work will look like in the future.

Having navigated this disruption daily for some time, I’m not surprised by the burnout. Here are the ways that I see AI changing work.

“The hardest parts of my job, all day”

I recently saw the Anthropic CEO’s prediction play out in real time: a seasoned full-stack engineer told me that after several months of learning to wield Claude Code, he barely hand-writes any software anymore. When I asked how that changed his daily work, he said, “I feel more tired because now I’m doing the hardest parts of my job all day long.”

Simon Willison, a renowned software engineer and AI thinker, recently echoed that sentiment on Lenny’s podcast2:

I’m finding that using coding agents well is taking every inch of my 25 years of experience as a software engineer, and it is mentally exhausting.

I’ve experienced this first-hand. Before AI, the ebb and flow of work on a daily and weekly basis had enough variety built in to give my brain a break. For example, I might dig deep on a piece of writing for a few hours, then switch to planning, looking at my team’s work for the week and prioritizing projects.

Today, Claude is connected to Slack, Linear and Notion, and can give me an overview of work in progress, upcoming tasks and the schedule for the week in less than 10 minutes.

It’s always been healthy (and more productive) to take breaks throughout the day, but I’ve found them to be increasingly mandatory because with AI, it’s possible to run my brain at redline on the hard problems around the clock.

Longer hours: dopamine, constant learning curves and resumption cost

AI makes us more productive, but most knowledge workers I know in tech haven’t become margin-rich; they’re putting in more hours.

I’ve observed three reasons for this:

  1. Using AI successfully is a massive dopamine hit
  2. The pace of change requires constant re-learning
  3. The costs of multi-tasking and context switching are harder to perceive

The AI dopamine machine

I recently put off a significant project until the day it was due because I knew that with AI, I could complete it in a few hours (and did). I marveled at the fact that waiting until the last minute wasn’t stressful, and even told my wife that sometimes using AI to do days of work in hours feels like loading a program into the matrix: the productivity is exhilarating.

Historically, time has been a ceiling on knowledge worker ambition. AI has raised the ceiling, and discovering its altitude is almost addictive because the extra hours don’t feel like the slog they once did.

Constant learning curves

The entire AI ecosystem is advancing at a rate most of us have never seen. New models are released at least every few months, new ways to build and use agents are being discovered3 and the app layer on top is in a constant state of churn as Anthropic and OpenAI launch competing features.

Getting the most out of AI means constantly testing and learning, which requires time and energy. Ironically, climbing these learning curves can sometimes mean it takes longer to complete a task using AI.

The unfelt cost of multi-tasking and context switching

Distractions are the silent killer of productivity4, but AI numbs us to the real cost because LLMs make us feel like we can finally multi-task without the consequences.

Any type of distraction incurs resumption cost, which is the amount of time it takes your brain to get back up to speed on any given task after an interruption. The more distractions you let in, the longer it will take you to complete a task.

AI isn’t a fix for this. If anything, running multiple chats or agents simultaneously can make the problem worse, but studies show that distractions with a similar form factor hide the impact5:

...interruptions that share a context with the main task may be perceived as being beneficial but the actual disruption cost is the same as with a different context.

Because we can do more with AI, increased output further obfuscates the consequences, but when the bill comes due, we pay with time.

More parallel paths to manage

AI increases your capacity, meaning you can take on more tasks and work on more projects, but that work doesn’t happen in a vacuum where you have complete control over sequence and priority.

In a previous post about AI and focus4, I wrote:

If you normally work on 3 projects in a week, AI augmentation may increase your capacity to 5. That’s a 67% increase in project load, but it’s not just work. Projects require stakeholder management, meetings, due-date coordination and the inevitable interruptions that come with them. Resumption costs can pile up quickly.

Re-learning how to estimate timelines

Agents not only make us faster at a given task, they present entirely new ways of accomplishing that task. In the past week, I’ve edited similar documents in different ways with AI (this is part of the learning curve). Each methodology made editing faster than it would have been in the past, but time to complete the task varied significantly among them.

The practical problem I face is that it’s hard to estimate how long it will take me to review a document, which makes it hard to set expectations well with the person who needs it reviewed.

When the learning curve and tools stabilize, this will get better, but even then we can’t control how long it takes models to respond.

Unpredictable downtime waiting on model responses

Working with AI means waiting on models to respond. As agents take on more complex work, the latency increases and becomes more unpredictable. Will the task take 30 seconds or 3 minutes?

Simon Willison again2:

I’ve got 25 years of experience in how long it takes to build something. And that’s all completely gone—it doesn’t work anymore because I can no longer look at a problem and say that this is going to take two weeks, so it’s not worth it.

Estimating timelines accurately in knowledge work was already difficult, and now we are re-learning that skill entirely using technology that changes by the week.

Unstable identities

Sahaj Garg, co-founder and CTO at Wispr, wrote an essay about AI displacing cognitive labor. While I don’t agree with all of his points, he rightly pinpoints our identities as one of the most high-consequence vulnerabilities AI can disrupt:

A software engineer’s identity is as tied to their cognitive ability as a steelworker’s was to their trade. “I’m smart, I solve hard problems, I build things” is not just a job description. It’s a self-concept. When AI can solve harder problems and build things faster, that self-concept shatters.

When you are re-learning not only how to do your job, but what your job is, confidence doesn’t provide the scaffolding it used to and the value of experience is fragmented. These changes strike at the heart of who we are, not just the work we do.

You’re doing a great job

If you know someone working in the midst of this chaos, take a moment to encourage them. This transition is difficult, fraught with uncertainty and often feels lonely.

If that’s you, keep going. Figuring this out is simple: show up and learn every day. You’re doing a great job.



Footnotes

  1. I recently wrote a post about Vercel’s Chat SDK, which briefly recounts the origin of the product: a challenge for the entire company to multiply output.

  2. Simon Willison collected highlights from his excellent interview on Lenny’s podcast.

  3. In early AI, RAG systems and vector search were all the rage, but it turns out that markdown and the command line are cheaper and more effective. Vercel discovered this by re-architecting an internal agent and removing 80% of the tools it originally used.

  4. This post was inspired in part by my thoughts on Fragmented focus in the age of AI.

  5. The more I’ve researched focus, the more I’ve learned that the consequences are worse than we think. A study called [The Cost of Interrupted Work: More Speed and Stress](https://isr.uci.edu/content/cost-interrupted-work-more-speed-more-stress.html) analyzes the cost of distractions and worker perceptions of those costs.