Over the past few months, two questions have come up in nearly every conversation I have about AI and work. Parents ask what their children should study. Colleagues ask what skills they should be building. Everyone seems to be grappling with the same underlying thing: what do you actually do in a world where AI can increasingly do what you were trained to do?
The public discourse tends to split into two camps. On one side, you have the optimists: AI will usher in a kind of techno-utopia, automating away the drudge work, freeing humans for higher-order pursuits, and ultimately creating more jobs and more prosperity than it displaces. History, they point out, is on their side. Every major wave of automation has eventually produced net employment gains. On the other side, you have the pessimists: this time is genuinely different. AI targets knowledge work specifically, the pace of displacement will outrun our ability to adapt, and mass structural unemployment among white-collar workers is a real possibility, not a science fiction scenario.
After spending a couple of weeks synthesizing the key recent research on AI's labor market impact since ChatGPT launched in November 2022, my honest read is that neither camp is quite right. You can read my full research paper here (download PDF) if you want the extended version. This article distills the key insights and practical tips.
The overall tone I want to set is one of cautious optimism. There is no need to panic. But there is every reason to be deliberate. The unemployment numbers do not, at least right now, show the kind of dramatic and undeniable disruption we saw during COVID. The diffusion of AI across labor markets is happening more slowly than headlines suggest, held back by structural, legal, and organizational friction. But that window will not stay open indefinitely. We have time to catch up. We just cannot assume the status quo will hold.
1. AI is hitting the most educated, highest-paid workers hardest, not the most vulnerable. This completely inverts the historical pattern of automation, which primarily disrupted low-wage routine work. If you are a knowledge worker, AI is in your lane.
2. Within AI-exposed occupations, early-career workers (22 to 25) have already seen a 6 to 16% employment decline since ChatGPT launched, while experienced workers in those same roles saw stable or growing employment. This is specifically about early-career workers in high-exposure, high-wage fields getting squeezed. Why is this surprising? Because most people assume the most educated workers are the most protected. The nuance is that within those fields, junior workers are bearing the brunt because they primarily supply codified, textbook-style knowledge that AI replicates most easily. Senior workers in the same roles have accumulated situated judgment and decision-making authority, which AI does not yet reliably replace.
3. If your job is primarily about executing a well-defined set of tasks, AI is a direct substitution risk. If your job is primarily about judgment, domain expertise, and accountability for outcomes, AI is far more likely to augment you than replace you. The tasks AI can perform are not the same as the job itself. This distinction is what determines which side of the labor market you end up on.
4. Based on the Anthropic Labor Market Report, actual AI deployment currently covers only about 13% of what frontier models could theoretically do. Even though the pace of change already feels fast, we are at the very tip of the iceberg. This is double-edged: the disruption we have seen so far significantly understates what is coming, but it also means we have more runway to adapt than the most alarming headlines suggest. The change that feels rapid is, in structural terms, still early.
What Current Research Tells Us
The aggregate picture is calmer than you might expect
The most important thing to establish upfront is that, at the aggregate level, the data does not bear out a catastrophic disruption narrative. The Anthropic Labor Market Report, one of the most rigorous studies on this topic, used a difference-in-differences framework comparing workers in the top quartile of observed AI exposure to those with zero exposure. Their finding: the average change in the unemployment gap since ChatGPT's release is small and statistically insignificant, at plus 0.2 percentage points. A scenario comparable to a "Great Recession for white-collar workers" would have been detectable in their data. It has not appeared.
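The difference-in-differences logic is simple enough to sketch. The numbers below are invented for illustration (they are not the report's underlying data, though they are chosen to mirror its +0.2 point finding): you compare the change in unemployment for an AI-exposed group against the change for an unexposed group over the same window, so macro shocks that hit both groups equally cancel out.

```python
# Illustrative difference-in-differences calculation with hypothetical
# numbers, NOT the Anthropic report's actual data. The estimator subtracts
# the control group's change from the treated group's change, netting out
# shocks common to both groups.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Return the difference-in-differences estimate in percentage points."""
    treated_change = treated_post - treated_pre
    control_change = control_post - control_pre
    return treated_change - control_change

# Hypothetical unemployment rates (%) before and after ChatGPT's release:
# top-quartile AI-exposed workers vs. zero-exposure workers.
effect = did_estimate(
    treated_pre=3.5, treated_post=4.1,   # exposed group: +0.6 pp
    control_pre=3.4, control_post=3.8,   # unexposed group: +0.4 pp
)
print(f"DiD estimate: {effect:+.1f} percentage points")
```

The design's appeal is exactly this cancellation: if post-pandemic normalization raised unemployment for everyone, it drops out of the estimate, leaving only the exposure-specific gap.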
The broader unemployment picture confirms this. The U.S. unemployment rate has risen only modestly from 3.4% in early 2023 to around 4.2% by late 2025, well within historical norms. Overall wage growth has actually decelerated, from 5.1% annually in 2021 to 3.7% in 2025, which is more consistent with macroeconomic normalization after the pandemic than with AI-driven disruption. The predicted 21% average wage increase from some models has not emerged.
So if you have been anxious about the immediate employment cliff, the data says: take a breath. What we are seeing is not a COVID-scale event. The disruption is more slow-moving, more structural, and more targeted than that.
But the disruption is real and concentrated in specific places
That said, sector-level data tells a more uneven story. The information sector, which includes software publishing, data processing, telecommunications, and media, peaked in January 2023, just two months after ChatGPT's launch, and has since contracted by approximately 3.3%, losing roughly 148,000 jobs. Professional and business services plateaued in late 2022 and have declined modestly since. Education and health services, by contrast, have grown by approximately 12%, adding over 2.8 million jobs.
The most striking early signal comes from granular payroll microdata. Workers aged 22 to 25 in the most AI-exposed occupations experienced a 6 to 16% relative employment decline between late 2022 and September 2025. Experienced workers aged 26 and older in those same occupations saw stable or growing employment over the same period. The divergence began precisely at the ChatGPT inflection point, and it is concentrated specifically in AI-exposed roles, not among young workers broadly.
Higher-paid jobs face more AI exposure, not less
One of the most counterintuitive findings from the research is the relationship between wages and AI exposure. In every prior wave of automation, disruption disproportionately hit lower-wage, lower-skill work. Generative AI has inverted this. Computer and mathematical occupations, legal roles, business and financial operations: these sit at the top of the AI exposure distribution. Cooks, motorcycle mechanics, and bartenders have near-zero exposure.
Workers in the top quartile of observed AI exposure earn 47% more than unexposed workers and are nearly four times as likely to hold graduate degrees. The most exposed workers are, on the whole, among the most educated and highest-paid in the economy. The narrative of AI primarily threatening low-wage, vulnerable workers does not match the data.
High Exposure Does Not Mean You Are Getting Replaced
It would be intuitive to assume that the jobs most highly exposed to AI are those most at risk of displacement. The current research points to a reality that is considerably more nuanced.
First, it is worth being precise about what "high exposure" means. A job is considered highly AI-exposed when a significant portion of its component tasks could theoretically be performed or substantially accelerated by frontier language models, based on assessments of task-by-task AI capability. Exposure is a technical measure of overlap between what workers do and what AI can do. It says nothing, by itself, about whether those workers will actually be displaced.
The reason: AI operates at the task level, but jobs are not just bundles of tasks. Jobs are bundles of tasks organized around a core of judgment and accountability.
Jensen Huang made a sharp observation about this: if you reduced his job to its component tasks, you might conclude he is a typist, since he spends most of his day at a desk with a keyboard. The point is that conflating the tasks someone performs with the nature of their job leads to badly wrong conclusions about displacement risk. If your job is fundamentally about executing a well-defined series of tasks, and those tasks can be automated, then you should be genuinely concerned. But if your job is primarily about exercising sound judgment and domain expertise alongside a set of tasks that can now be automated, then AI is more likely to make you more effective, not redundant.
Even in highly AI-exposed occupations, these factors reliably buffer against displacement. The first three are the most durable because they reflect genuine current limitations of AI:
1. Jobs defined by outcomes and decisions, not task execution. The more a job is about what to decide rather than what to do, the more resilient it is. A radiologist who decides whether an anomaly requires follow-up is more protected than a radiologist who only generates reports.
2. Experience that translates into decision-making authority. The relevant question is not how many years of experience you have, but whether those years have resulted in others genuinely depending on your judgment. Consider two consultants at the same firm with identical tenure. One has built a reputation as a reliable executor: polished deliverables, on-time projects, efficient throughput. The other has developed a specialty where clients call them first when facing a genuinely ambiguous problem. Same experience on paper, very different displacement risk. Experience alone is not a moat. Decision-making authority is.
3. Human accountability legally or socially required. Judicial decisions, certain medical diagnoses, child welfare determinations: social and legal norms prevent outsourcing these to AI regardless of technical capability.
4. Institutional protections. Professional licensing, unionization, and regulatory barriers slow disruption regardless of technical feasibility. Doctors, lawyers, and licensed engineers benefit from structural buffers.
5. Substantial physical components. Surgery, skilled construction, nursing, hands-on education: these remain protected by AI's current inability to operate reliably in unstructured physical environments. (More on why this protection may be temporary in my piece on world models and physical AI, if you are interested in where this is heading.)
6. Unfavorable automation economics. Where AI deployment costs exceed labor savings, automation incentives are weak even if technically feasible. This buffer is real but declining as AI costs fall, making it the least durable of the six.
Consider programmers. They have among the highest measured AI exposure scores of any occupation. This case is worth being honest about: the early-career employment data in Figure 2 is consistent with what is already playing out in tech. Meta has publicly discussed flattening its internal engineering structure and having engineering managers oversee AI agents rather than expanding junior engineering headcount. The entry-level software engineering job market has tightened meaningfully since 2022. The threat at that layer is real, and anyone dismissing it should reckon with the actual numbers.
But the distinction the research points to matters. The disruption is concentrated where programming means executing to a specification: writing code to a known pattern, implementing a defined feature, producing output that a more senior engineer will review. What has not been displaced is the architectural layer: system design, problem framing, evaluating tradeoffs across competing constraints, and being accountable for whether what ships actually works in production. AI can write the code. It cannot yet reliably decide which code to write, or recognize when the framing of the problem is wrong in the first place. The protected layer is not the codified knowledge of how to program. It is the situated judgment of what to build and why.
The same pattern holds for radiologists, lawyers, and financial analysts. All are highly exposed. None are experiencing rapid displacement. The tasks AI can do are not the same as what makes the job valuable.
This is a much longer discussion than a single article can contain. If you want the full treatment of how each framework explains these dynamics, the full paper is here. What follows is the practical application.
What the Shape of an AI-Native Worker Could Look Like
Here is what the research suggests about the shape of a worker who will be augmented rather than replaced.
1. Get genuinely proficient with AI tools. Not just in the sense of knowing how to write a prompt well (which is important), but in the sense of understanding what these systems can and cannot do, where they tend to fail, and how to evaluate their outputs against your own domain knowledge. Demis Hassabis said at Davos earlier this year that young workers should become "unbelievably proficient" with AI tools. This is now as foundational as computer literacy was in the 1990s. The workers who will be most valuable are those who can use AI to do what previously required a team.
A related hazard is the "workslop" phenomenon: AI-generated output that appears complete and useful but lacks actual substance. Research estimates this accounts for approximately 15% of AI-assisted content. The workers most at risk from it are those who accept AI outputs without the domain knowledge to evaluate them. Proficiency with AI tools means knowing when the output is hollow, and that requires the domain foundation to recognize it.
Research on this point is striking: when the same AI tools were given to students with differing levels of domain expertise, AI did not level the playing field. It widened the gap. Students with stronger domain knowledge used AI as a genuine lever, because they could recognize what good output looked like, evaluate what the tool was getting wrong, and direct it toward better outcomes. Students with weaker foundations could not guide the tool effectively and were less able to catch errors. The implication: building genuine domain expertise is not something you can skip on the assumption that AI will compensate for it. Domain depth is precisely what determines whether you can use AI well.
2. Invest in situated expertise, not just breadth. Codified knowledge, the material you learn from textbooks and can document in a how-to guide, is what AI replicates most readily. Situated expertise is context-dependent knowledge that emerges from practice: understanding why something works in your specific organization, what the history of a situation means, how to navigate the institutional realities of your field. This is genuinely hard for AI to replicate because it requires actually being embedded in a domain over time. Domain depth is also what allows you to catch AI when it is wrong, which it often is in ways that are difficult to spot without background knowledge.
3. Deliberately move towards roles where you are accountable for decisions, not just deliverables. The most important career investment right now is getting into positions where your judgment is what others rely on. This might mean seeking roles with more scope and fewer resources, pushing for ownership of decisions rather than completion of tasks, or developing a specialty that organizations depend on for consequential choices. The critical buffer against displacement is not years of experience per se, but whether that experience has translated into decision-making authority.
4. Position yourself within the AI value chain. The AI economy is generating new occupational categories: AI agent managers, AI safety and alignment specialists, AI-human interaction designers, AI ethics and governance officers. Beyond the application layer, the infrastructure buildout is real. Data centers, electrical grid expansion, semiconductor manufacturing, and energy infrastructure are generating significant demand for skilled trades and engineering roles. Being part of the wave that is reshaping the economy provides job security and optionality that roles tangential to AI do not.
5. Develop cross-domain fluency at the AI intersection. The highest-value augmentation opportunities lie where AI proficiency meets deep domain expertise. A lawyer who uses AI legal research tools effectively and can critically evaluate their outputs against deep knowledge of case strategy is more valuable than either a pure lawyer or a pure technologist. The combination creates value that neither component does alone.
A note on sequencing: not all of this is immediately actionable for everyone. If you are early in your career or mid-transition, points 1 and 2 are the most universally applicable starting points. The moves towards situated expertise and decision-making authority take time to build. The point is to be oriented toward them, not to expect them overnight.
When Knowledge Becomes a Commodity: Is Your Degree Still Worth It?
This is, I think, the genuinely existential question underneath the career advice. The traditional rationale for education is that knowledge is power. You go to school, you acquire knowledge and credentials, and those credentials signal your value in the labor market. But a frontier language model is now, in some meaningful sense, a body of knowledge accessible to anyone with an internet connection. If knowledge itself has become a commodity, what are you actually paying for when you go to university?
The honest answer is that I am not sure, and I think we should be willing to sit with that uncertainty rather than paper over it. The challenge to the value of universities is not new: it arguably started with Coursera and the rise of massive open online courses, which gave anyone with an internet connection access to curricula from top institutions. LLMs feel like a continuation of that trajectory rather than a rupture with it. If the primary reason to attend university is to acquire knowledge, that value proposition has contracted substantially, and I think it is worth saying so plainly.
What I do think a university education offers that an LLM cannot: the experience of building real relationships, of developing emotional intelligence through years of being embedded with people different from you, of learning to navigate disagreement in person. I also believe we still need people who develop genuine deep expertise to push the frontier of human knowledge. Frontier models can synthesize and reason over what already exists; they are not yet capable of originating research in the way a scientist at the edge of their field does. And I remain, perhaps unsurprisingly given my own background, a strong advocate for liberal arts and the humanities, not because they are economically optimal, but because learning to think from first principles, to form and defend an opinion, to grapple with the human condition in conversation with other humans, matters in ways that are difficult to price. The ideal setting for that is not a lecture hall. It is a seminar room where you argue.
I hope that educators and academic institutions are thinking seriously about what they are actually for. Because the answer is no longer simply the transmission of knowledge.
Math and quantitative reasoning retain their value, and this is more counterintuitive than it sounds. The Anthropic Labor Market Report shows that computer and mathematical tasks represent 34% of total Claude usage, and AI models can now solve math olympiad-level problems. By the naive exposure logic, math skills should be among the most threatened. The research points in the opposite direction. Majors in atmospheric sciences, engineering, chemistry, and quantitative fields show the strongest predicted gains in AI-era returns, and enrollment is already responding: a one percentage point increase in a major's predicted AI-era returns was associated with a 30% increase in enrollment by spring 2025.
Why does math retain its value despite high AI exposure? Because the relevant skill is not computation. It is the ability to formulate problems correctly, design experiments, evaluate competing hypotheses under genuine uncertainty, and recognize when a mathematical model is making assumptions that do not hold. These are the things AI remains genuinely weak at. Models can execute math. They cannot yet reliably determine which math to do.
Verbally intensive fields face structural headwinds. Majors where the primary intellectual product is written argumentation face a real squeeze. French, theology, political science, philosophy: these are fields where AI can now produce competent written output, which compresses the wage premium on verbal skill production specifically. This does not make these programs worthless. The judgment, critical reasoning, and domain knowledge developed in these fields remain genuinely valuable. But students in these programs should pair that foundation with something more technically distinctive.
Study something deeply, then layer AI proficiency on top. The research consistently shows that domain knowledge is what enables effective use of AI, not the ability to prompt well in isolation. There is a well-documented "GenAI wall effect": AI can close performance gaps between adjacent occupational groups but hits a hard limit when trying to bridge distant ones. What allows you to use AI effectively in a domain is having enough foundation to recognize when it is right, when it is plausibly wrong, and when it is missing the point entirely. Study medicine, materials science, clinical psychology, or civil engineering, and learn to use AI exceptionally well within that domain.
Be careful about studying AI as a field in isolation from a domain you want to apply it in. This advice is not aimed at aspiring machine learning researchers or AI PhD students. The world genuinely needs people who can advance the frontier of AI research, and that is a clearly valuable path. The caution is directed at students who are defaulting to "AI" or "data science" degrees as a general hedge, without a domain to apply them to. Pure AI credentials without domain depth are becoming increasingly commoditized. The workers who will command premiums are those who bring AI capabilities to bear on hard domain problems in medicine, law, engineering, or science. The intersection is where the value is.
Consider fields with institutional and physical protection. Medicine, law, licensed engineering, and accounting benefit from regulatory frameworks that slow AI disruption regardless of technical capability. Fields with substantial physical components, nursing, physical therapy, skilled construction, early childhood education, remain protected by AI's current inability to operate reliably in unstructured physical environments.
The Honest Uncertainty
This might not be the most satisfying conclusion, but it is the honest one: the analysis suggests we are approximately 13% of the way through the long-run labor market effects that quantitative models predict. Technology diffusion follows S-curves. Electricity took roughly 30 years from commercial availability to peak productivity impact. The three-year post-ChatGPT window captures only the early phase of what may be a multi-decade adjustment.
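The shape of an S-curve is easy to make concrete. The logistic model below is a toy, not a calibrated forecast: the midpoint and steepness parameters are invented purely to show how sitting at roughly 13% of long-run diffusion places you before the steep middle of the curve, where most of the change is still ahead.

```python
import math

def logistic(t, midpoint, steepness):
    """Fraction of long-run diffusion reached at time t (toy logistic model)."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# Invented parameters: diffusion midpoint 10 years after launch, steepness
# chosen so that ~13% adoption lands around year 3 (the post-ChatGPT window).
MIDPOINT, STEEPNESS = 10.0, 0.27

for year in (3, 10, 20, 30):
    share = logistic(year, MIDPOINT, STEEPNESS)
    print(f"year {year:2d}: {share:.0%} of long-run effect")
```

On any parameterization like this, the early-adoption phase looks deceptively flat: the years with the fastest absolute change come after the point we are at now, which is the structural case for treating the current calm as runway rather than resolution.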
This means two things simultaneously. First, the disruption we have seen so far significantly understates what is coming. The gap between theoretical AI capability and actual deployed usage is large, driven by organizational inertia, legal constraints, and workflow integration challenges. As those constraints erode, the impact will accelerate. Second, the pace may be slower than the most alarming forecasts suggest. Amodei's prediction that 50% of entry-level white-collar jobs could be eliminated within five years has not materialized at scale. The directional trends are consistent, but magnitude and timeline remain genuinely uncertain.
The scenario most consistent with the available evidence is what the research calls "bifurcated": AI simultaneously augments high-expertise work and automates entry-level work, producing strong outcomes for experienced workers and persistent difficulty for new labor market entrants. This is what Amodei described as something society "has almost never seen before": high GDP growth alongside structural unemployment concentrated among young workers. Whether that scenario fully materializes depends on factors including AI capability development, institutional responses, and the pace of workforce adaptation.