Despite the misleading hype cycle1, AI is fundamentally changing how people work.
Across the large engineering team I work with, there is a spectrum of viewpoints about using AI, ranging (roughly) from “let’s try handing it the keys” to “I’m inherently skeptical of non-deterministic changes to our codebase.”
No matter the level of skepticism, though, no engineer I’ve talked to wants to go back to a world where AI isn’t at their disposal. One of our best engineers, who is on the more skeptical end of the spectrum, told me recently, “many people are thinking about and using AI the wrong way, but if you learn how to use the tools correctly, it can seem like magic.” I agree.
On a team level, I’ve seen material productivity gains. Almost a year ago, I was tasked with rebuilding a function and initially reduced headcount by 60%, planning to rehire. Instead of backfilling, though, I kept the team small and velocity increased. A key part of that outcome was having the right people in place and motivating them, but using AI to research, rapidly prototype, accelerate our processes and build new ones dramatically increased the output of everyone on the team, acting as a measurable force multiplier on existing talent.
Personally, I use multiple AI tools as a core part of my job, side projects2 and personal life. Some workflows are complex, especially those that are agentic in nature, spanning multiple tools to accomplish some task.
AI has become an integral part of how I interact with and use technology, but more importantly, it is reshaping aspects of how I work and even the way I think about how to produce work.
Flyover country
My friend is a product leader and we often talk about how we use AI. In a recent discussion, he made the following statement in passing:
AI is normal for us, but there are a huge number of people who haven’t used it, don’t know what it is, or haven’t even heard of it.
The history AI has already made3 with blinding speed makes it easy to forget that outside of Silicon Valley, most of the world is continuing on just like it was before.
Since that conversation, I’ve stepped back to more intentionally observe how other people in my life interact with AI. My goal has been to understand the behavior of people outside the heat of Silicon Valley, people who don’t spend their days figuring out how to better use AI, how it is shaping the broader market landscape or how it can shape product experiences in software. My friend Barry McCardle4 has called this paying attention to flyover country.
A one-word summary of my anecdotal research would be contrast.
One of the primary differences I’ve noticed is creation versus optimization. My team and I have invented new ways of doing our work, including many things that weren’t practically possible before. The people around me who don’t work in tech use AI to do things they were already doing, but faster, more comprehensively and more conveniently.
A few examples:
My brother, an avid mountain biker, used AI to help him tune the suspension on his race bike. It worked incredibly well and was ten times faster than his previous method, which included reading owner’s manuals, searching online for advanced guides and multiple rounds of testing and learning.
My uncle, a writer, uses ChatGPT to help him write. He asks AI to edit his work and help him find just the right word or phrase.
My wife has used AI within existing platforms and workflows. Canva, Squarespace and a multitude of other tools have added AI functionality that makes using their tools faster and in some cases more powerful (though in my experience many of those features feel half-baked).
Local maximums and asymmetric value
These people are getting material leverage from AI, but in a way that feels strikingly similar to the problems Amazon Alexa and other voice assistants face: wide adoption of a short list of simple use cases that make existing activities more convenient5—in other words, dramatic underutilization of the technology relative to its capability.
One core reason for these local maximums is the interface itself. The power is obscured by an inanimate physical speaker or a blank chat input. People can ask AI anything, but without an interface to help them understand what is possible, they will default to the familiar. This problem will likely be solved over time6, but I’m not alone in thinking that the chat interface is one of the great tragedies of early AI7.
The other core reason is that wielding AI in a way that creates true asymmetric leverage for your work requires deep pre-existing knowledge of and skill in what you are working on.
If you don’t have foundational knowledge, the gains will always have a logarithmic limit because AI is non-deterministic. You can make a certain amount of progress, but at some point, you don’t know enough to understand and articulate progress in the right direction, so you hit a local maximum—even though LLMs can help you learn about new subjects8. (This is why, despite a world of possibilities with open-ended AI, closed systems will be the big winners in the near-term9.)
Almost twenty years ago, when I first entered the market as a knowledge worker, I noticed a similar limitation watching people use Google search. Google was a massive leap forward in information retrieval, but the vast majority of users executed search in basic ways that used a fraction of the tool’s potential.
I clearly remember that a common characteristic of abnormally productive peers was being “really good at Googling.” Technical personas over-indexed for this skill, which isn’t surprising, because those who generated asymmetric value from search were almost always the ones who had a deeper understanding of how the tool itself worked, both in terms of advanced user features and, more importantly, the algorithm itself.
In the AI age, app creation has exploded. The statistics shared through the Neon acquisition are astounding: 80% of databases were created by AI agents, not humans10. Vercel’s CEO estimates that v0 has contributed to quadrupling application creation on their platform11. Having used these tools extensively without a true foundation in software development, I have experienced the local maximum first-hand. There's a big difference between building a basic blog and running a revenue-producing app in production.
The rich will get richer
A good friend of mine used to work in product at Linear. I was blown away when he told me, in the "early days" of AI, how many companies were wiring LLMs into the product in order to automate all kinds of tasks, including writing code for features.
This is one example of many where both the pre-existing foundational knowledge and specific AI skills are required in order to break past the local maximum and get repeatable asymmetric leverage.
Commenting on a study claiming AI actually slowed developers down, Simon Willison summarized the challenge well12:
My personal theory is that getting a significant productivity boost from LLM assistance and AI tools has a much steeper learning curve than most people expect... One of the top performers [in the study] for AI was also someone with the most previous Cursor experience... My intuition here is that this study mainly demonstrated that the learning curve on AI-assisted development is high enough that asking developers to bake it into their existing workflows reduces their performance while they climb that learning curve.
The people capable of realizing real gains are knowledge workers who have already over-indexed on their ability to create leverage with technology (and increasingly with AI specifically), which is why an immense amount of money and effort is being put into building tools to benefit that group (Cursor, Glean, Clay, etc.).
This is the most logical initial frontier in terms of market economics. Highly effective knowledge workers are expensive. If you can get the same or more output from half the headcount, the entire cost structure of a business can change. This boom in productivity will be good for business, and ultimately the consumer, if more efficient resource allocation results in better products and lower prices, but it will also significantly raise the barrier to entry for jobs in knowledge work.
The gap
Hiring and retaining good talent is already hard, but the skillsets required to provide value on a lean, hyper-productive team using AI are becoming increasingly specialized.
There's already an industry for learning AI, but, as Simon pointed out, and as I know from first-hand experience, it isn't as easy as giving people access to AI. The learning curve is steep, highly context-specific and changing at an incredible rate. There is also an incredible amount of snake oil.
I recently attended an AI symposium at a nationally known university13. After my panel discussion, several professors approached me to talk about how they might incorporate AI into their curricula. Their questions raised serious concerns for me about the ability of traditional education to prepare students for what they will face in the job market.
Friends with children who just graduated college have told me that the professional landscape does seem to be growing increasingly volatile. Required skillsets are changing and companies are figuring out how to do more with less + AI, exacerbating the struggle in a soft job market.
The increasing demand for specialized knowledge workers will far outstrip the market’s ability to supply them, and there isn’t a clear pathway for closing the gap. We are in the early stages, but as adoption of advanced AI workflows becomes widespread and mandatory, reducing the need for headcount, it will feel to many as if a socioeconomic wall has suddenly materialized.
Thank you to John Wessel for reading an early draft and giving me feedback.
Footnotes
1. Dimitri Dimandt wrote a concise, helpful post about AI hype feeling similar to crypto and why it’s hard to make comparisons between people’s experiences.
2. I recently used Cursor to migrate this 13-year-old WordPress blog to Next.js and Vercel.
3. ChatGPT reached 100 million users 2 months after launch. OpenAI’s revenue almost doubled from $5.5 billion to $10 billion in the first 6 months of 2025. Cursor set a record for the fastest time to $100M in ARR.
4. Every time I talk with Barry, I walk away smarter. He's thought through so many critical questions on a deep level and articulates them clearly.
5. Research shows that Alexa, Siri and Google Home have all struggled to help users break past basic use cases like searching the internet, checking the weather, getting directions and playing music.
6. The blank screen problem is a big issue for AI, but I believe it will likely be solved over time. AI is delivered as software, which doesn’t face the same user experience limitations as hardware like smart speakers. Also, the amount of money and mindshare being poured into the space is eye-watering. Maybe I’m overestimating the major players, though—the big money is in the API business, so perhaps they will continue to deliver consumer apps with a primitive interface.
7. Earlier this year I attended the Data Council conference, where Naveen Rao, VP of AI at Databricks, made a public appeal that we move past the chat interface. Here's the full quote:

   "The other one is UI innovation. A lot of the stuff people are delivering is a chatbot. That is sh!t. It’s the worst f*cking interface I’ve ever seen for most applications. I’m so tired of seeing chatbots. Please fix this. Give people the insights, intelligence to the right user at the right time. This is why IDEs and coding interfaces have taken off because they deliver value where I need it. They don’t make me cut and paste something into a chatbot.

   It’s the stupidest interface I’ve ever seen. So let’s fix those interfaces and actually deliver intelligence the right way. I think there’s so many opportunities for people to come out and have creativity here. The hard problems are still the hard problems and you can go found that company, but there’s so much here that everyone has focused on the hard problems, and probably too many people have been focused on the hard problems, that we’re not seeing this thing - the user interface in front of us."
8. I sent this post to a few peers for review, and one of the first comments was that LLMs are different than Google in that they are a much better way to learn about new subjects and create foundational knowledge. That’s true in theory, but the learning path is still non-deterministic and as a learner you don’t have heuristics to understand whether you are learning the right things. That’s fine for some subjects and projects, but I’ve seen it cause real problems when producing knowledge work for production. For the record, I’m extremely bullish on the ways that LLMs can improve education, but based on the current state, I think it will take some radical innovation to see those breakthroughs.
9. AI requires a significant amount of quality context in order to work well. It’s extremely hard to give AI that context manually, but closed systems like Notion are building platforms where all of the parts are connected. It’s a brilliant strategy and if executed well, it won’t feel like AI, it will feel like abnormally awesome software (sadly, their marketing is following the AI hype cycle). Grammarly seems to be pursuing a similar play with their acquisition of Superhuman. This is the same cycle of bundling we’ve seen historically, but significantly accelerated because of how much more powerful AI is in a high-context, bundled environment.
10. The database creation statistic Databricks published as part of the Neon acquisition announcement was fascinating. A major source of database creation seems to be Replit, v0 and similar tools—to the point that Neon launched their own AI app builder. I use Vercel’s v0 extensively, but I think there’s still a gigantic gap between building a hobby project with an AI tool and shipping production software.
11. v0 seems to have significant adoption and coupled with Vercel’s incredible deployment platform, it’s not a surprise that they are seeing massive growth in app creation.
12. Simon Willison has thought deeply and written extensively about AI on his blog. He has also built tools for using LLMs. You can read his full comment on the developer productivity study on Hacker News.
13. The AI symposium was at Clemson University, my alma mater. I participated in a panel discussion about the future of product development with AI. It was recorded as a podcast episode, which you can listen to on Spotify (or wherever else you listen to podcasts).