Center for AI and Ethics
Topic · AI and work · As of mid-2026

Will AI take our jobs?

AI and work.

What is really happening in 2026 — between doomer rhetoric and Silicon Valley euphoria. A sober stocktake with a clear eye for what can go right, without talking down what is hard right now.

Since the end of 2022, the debate around AI and work has been louder than has done the subject any good. One camp saw the end of everything: office work, writing professions, consulting, accounting — a world economy that would have to reorganise itself within months. The other promised a golden age: a productivity revolution, shorter working weeks, an economy of abundance. In between sat the majority of working people, who simply kept working and wondered how much of any of it was true.

By mid-2026 the fog has lifted a little. We now have solid data from more than three years of AI use in everyday work. We know more about retraining rates, productivity effects, customer satisfaction under automated service, about the roles that have emerged and the ones that have genuinely gone. And we know this: neither the doom rhetoric of 2023 nor the euphoria holds up against the evidence.

This page is an attempt at a sober reading. It shows the serious disruptions under way — but also the economic and social openings that many observers miss. It is written from a stance we take to be our remit: hope with a clear eye. Not the naive kind that talks the dark side away. The kind that assumes, on good grounds, that societies can handle this transition — provided they shape it rather than suffer it.

01

Where we stand in 2026

Begin with numbers, not opinions.

  • 76 per cent of working people, according to McKinsey 2025, use AI in some form at work — up from 30 per cent in 2023. Adoption has run faster than almost any forecast expected.
  • US productivity rose by around 2.7 per cent in 2025 — nearly twice the average of the previous decade (1.4 per cent). The Stanford economist Erik Brynjolfsson describes this as the shift from the investment phase to the "harvest phase": the dollars that have flowed into AI in recent years are now showing up in the statistics.
  • At the same time, the entry-level labour market has taken a hit. A Harvard study from February 2026 covering 62 million workers finds that junior roles in AI-adopting firms fell by 9 to 10 per cent, while senior roles held steady. In some sub-segments — entry-level developers in the United States under the age of 25 — the drop reaches as much as 20 per cent against the 2022 peak.
  • Global unemployment sits at 4.9 per cent on the ILO's Employment Trends 2026 data — in line with recent years. The feared mass unemployment driven by AI does not show up in the macro figures.
  • Roughly one in four workers worldwide, the ILO reports, is in a role with material exposure to generative AI — but only 3.3 per cent of global employment falls into the highest-risk category. The effect is real, but spread out, not concentrated.

The short story: AI has arrived across the board, productivity is rising, the labour market is stable — but at certain edges, above all at the entry to knowledge-based professions, real and painful disruptions are taking place.

02

The four futures — what may come

No one knows where this transition ends. But the possible paths can be described. The Future of Jobs Report 2026 from the World Economic Forum sketches four scenarios that we find the most useful frame for this debate — because they take both ends of the narrative seriously without committing to either.

1 · Supercharged Progress

AI breakthroughs accelerate productivity and innovation sharply, but outrun the capacity of societies, education and the labour market to adapt. The result: large gains for a few, visible displacement for many, social strain. Possible, but neither desirable nor inevitable.

2 · The Age of Displacement

AI develops faster than workforces can retool. The pessimistic scenario: genuine spikes in unemployment, social instability, political backlash. This is the scenario the alarmists of 2023 and 2024 described — and it is a possible scenario, not an automatic one.

3 · Co-Pilot Economy

Adoption that is deliberate and shaped. AI augments human roles rather than replacing them. Job profiles are redrawn, retraining becomes the norm, labour policy, social partners and education systems move with it. The result: balanced growth, prosperity spread more widely. The shapeable scenario — and the one CAIE is working towards.

4 · Stalled Progress

The scenario often forgotten: the technology does not live up to its promises. Skill gaps, patchy adoption and unresolved reliability issues (see our page on Agentic AI) hold back the expected productivity gains. The result: disappointed expectations, capital moves on, inequality still widens — but more slowly.

Three of these four scenarios are not catastrophic. One (Displacement) would be. Little suggests that this one scenario arrives automatically — every serious forecast of recent months (WEF, Gartner, McKinsey, ILO), read soberly, points to redistribution rather than extinction. That is no ground for euphoria. But it is solid ground for not reproducing the pure fear narrative any further.

03

The myth of the jobs apocalypse

2023 and 2024 were loud years. Goldman Sachs warned that 300 million jobs were at risk. High-profile AI figures forecast the end of entire professions. Influencers urged haste: "Reskill now, jump on board at once, or you are finished." The rhetoric worked — it produced fear, clicks and full course-subscription baskets. But it was not backed by the data then, and it is backed by the data even less now.

Gartner itself put its position pointedly in a widely read paper in early 2026: "AI won't cause a jobs apocalypse, but it will unleash job chaos." No extinction. But disorder — shifts, new roles, fault lines, pressure to retrain. That is taxing, and in places painful. But it is not the end of the world the loud narrative has been drawing.

The Klarna case has become a turning point in this debate. Between 2022 and 2024 the Swedish fintech replaced around 700 customer service positions with an AI chatbot, built in collaboration with OpenAI. The company saved some 60 million dollars in the short term and celebrated the move as a blueprint for the industry. The reality in 2025 and 2026: repeat customer contacts rose by 25 per cent — customers got back in touch because their issue had not really been resolved, merely closed quickly. Complaints about robotic answers, inflexible scripts and useless escalations piled up. In early 2026 Klarna drew the obvious conclusion and began bringing people back. Service staff who had been let go were rehired, alongside a hybrid model: AI handles routine work, humans take over everything that requires nuance, discretion or emotional labour. The point of the story is this: it was not automation that was rolled back — humans were returned to the spot where they had been missing.

Klarna is not an exception but the most visible example of a pattern. Gartner expects that by 2027 roughly half of all companies that cut staff because of AI will be hiring again — often under different job titles, but in comparable functions. The logic is simple: quality costs people. Those who want quality hire them back. Those who only want to cut costs lose customers.

04

Where the real disruptions sit

Correcting the alarmism must not tip into glossing over the costs. There are real, measurable disruptions — and they cluster in a particular part of the labour market: the entry to knowledge-based professions.

The figures from the first half of 2026 are sobering. Job postings for entry-level tech roles in the United States have fallen by around two-thirds compared with 2023; in the United Kingdom entry-level vacancies dropped 46 per cent in 2024, with projections of a 53 per cent decline by the end of 2026. The Harvard study from February 2026 covering 62 million workers finds that in firms which have introduced generative AI, junior employment fell by 9 to 10 per cent, while senior roles held steady. In specific age brackets — US software developers aged 22 to 25 — employment is down by almost 20 per cent from its late-2022 peak.

The mechanism is not mysterious. Precisely the tasks new entrants traditionally cut their teeth on — writing boilerplate, fixing bugs, drafting tests, simple summaries, minutes, standard text — are the ones AI now does more than passably well. The juniors who are still hired increasingly have to bring on day one what it previously took a year or two on the job to grow into.

This is a social problem that cannot be argued away. A whole cohort of young people is arriving at a labour market that works differently from the one their training prepared them for. It affects not only development but also copywriting, parts of translation, clerical entry roles and tier-1 support. Those affected are generally not the people who will fill the electrician shortage in data centres tomorrow.

That kind of honesty belongs in every debate about AI and work. Using macro-level hope to play down individual hardship misses the point. Hope is right — but it has to be translated into retraining programmes, bridging offers and reforms of vocational education. Otherwise it stays rhetoric.

05

Where new spaces are opening

At the same time — and this is the side the mainstream debate tends to miss — new spaces are opening at a pace the public discussion has not caught up with. The clearest of them: the infrastructure around AI itself.

Data centres as digital industrial cities

A modern hyperscale data centre is not a server hall. It is a campus — with administrative buildings, training centres, 24/7 canteens, parking for thousands of workers, dedicated security and medical facilities, in some cases on-site fire brigades, warehouses, maintenance bays. Around the campus: hotels for service engineers, housing, childcare, retail, restaurants, petrol stations, schools. An economic region grows up — not as a by-product, but as a precondition.

The underlying numbers are not small. The International Energy Agency forecasts that US data centres alone will draw more than 250 TWh of electricity in 2026, and that the sector will double globally by 2030 to around 945 TWh. According to IEA figures and US Bureau of Labor Statistics data, delivering that build-out physically will require around 300,000 additional electricians — while roughly 20,000 retire each year. Add roughly 17,500 electrical and electronics engineers, as well as network specialists, cooling technicians and building-security experts. Google, Microsoft, Amazon and other hyperscalers launched their own training programmes in 2025 and 2026, because the market is not delivering.

The second-ring economy

Around these builds, jobs appear that have nothing to do with programming AI — and that is precisely what makes them interesting for people crossing over:

  • Security services, upgraded digitally: biometric access control, drone surveillance, AI-assisted video analysis. The classic guarding job becomes more technical, more demanding, better paid.
  • Facility management: "smart building operator" rather than caretaker — AI-controlled climate systems, autonomous cleaning robots, app-driven maintenance planning.
  • Logistics: daily deliveries at hyperscale volumes, AI-optimised routing, autonomous transport robots, digital warehouse management.
  • Cleaning, catering, groundskeeping — all more digital, all learnable. Tablet-based checklists, QR codes on equipment, AI translation of technical language into plain instructions.
  • Trades: electrical, heating, ventilation, cooling, water engineering, fire protection — the physical layer without which no cloud runs. This is where the most acute labour shortage of the coming years sits.

When the tech fails — human-in-the-loop in the physical world

The more that is automated, the larger a group of roles grows that barely surfaces in the debate: the response teams for when something goes wrong. Drones crash, robots jam, server-room doors stick, cooling loops leak, cyberattacks strike physical infrastructure. Each of these situations needs people on site, making decisions and doing hands-on work — often with specialist training, always on stand-by, generally well paid. This is human-in-the-loop in its most physical form (see the details on our page on Agentic AI).

Regional structural shift

The geographic effect is especially striking. Decommissioned industrial sites, former lignite-mining regions, rural areas with plenty of land and access to cheap energy are turning into new employment hubs. Microsoft announced a 20-billion-euro investment in European infrastructure in 2023; the hyperscalers' investments have multiplied since. A former power-plant technician retrains over six months to become a data-centre specialist. A bank-branch employee moves into campus customer services. A car mechanic becomes a robotics maintenance technician. These are not theoretical paths — they are being walked right now.

And beyond the infrastructure

The new spaces also include: AI governance (the kind of role our pages on ISO 42001 and the EU AI Act describe), the evaluation and auditing of AI systems, compliance specialisms, specialist roles in ethically sensitive fields (health, justice, public administration), AI-supported education and retraining, human-machine interface design, and domain expertise in combined fields — lawyer-plus-AI, doctor-plus-AI, engineer-plus-AI. Many of these roles did not have a name three years ago.

06

The productivity paradox

One economic argument is missing from almost every alarmist piece: the historical record of productivity leaps. The Industrial Revolution, electrification, the spread of computing, the internet — each of these episodes reduced some occupations sharply or erased them outright. Each raised total employment, not lowered it.

The mechanism is well understood. When technology makes activities cheaper, the prices of the products those activities go into fall. When products become cheaper, more are bought. When more are bought, more production is needed — and with it new work, often in fields that did not exist before the leap. Economists call this the Jevons logic or, more broadly, the productivity-employment link.
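The demand-side arithmetic behind this link can be made explicit in a stylized sketch. The notation here is ours, for illustration, and not drawn from the literature the page cites: write labour demand as $L$, output as $Q$, productivity as $A$, price as $P$, and the price elasticity of demand as $\varepsilon$.

```latex
% Stylized productivity–employment link (illustrative, not from the cited sources):
% labour needed per unit of output falls with productivity,
% prices fall with unit costs, demand responds to prices.
\[
  L = \frac{Q}{A}, \qquad
  P \propto \frac{1}{A}, \qquad
  Q \propto P^{-\varepsilon}
  \;\Longrightarrow\;
  Q \propto A^{\varepsilon}, \quad
  L \propto A^{\varepsilon - 1}.
\]
```

On these assumptions, labour demand rises with productivity exactly when $\varepsilon > 1$: cheaper products expand consumption faster than each unit of output sheds labour. That is the condition under which the historical pattern — automation plus rising total employment — repeats, and the condition the alarmist reading implicitly assumes away.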

Erik Brynjolfsson, the Stanford economist and one of the most-cited researchers on the economics of AI, describes the current phase as a move from the "investment phase" into the "harvest phase" along what he calls a J-curve: first, dollars flow for years into infrastructure, training and reorganisation; then, with a lag, the gains register in the statistics. The jump in US productivity to 2.7 per cent in 2025 — nearly double the decade average — fits that reading exactly.

Important: Brynjolfsson himself is no cheerleader. In his recent writing he warns explicitly against treating the gains as automatically broad-based. "I'm fairly confident about the productivity gains and about the wealth that will be created. But I'm very worried that it will not be distributed evenly, and that many people will be seriously hurt along the way." That is the nuance the public debate lacks: the pie grows, but that says nothing about who gets how much of it. Distribution is a political question, not a technical one.

07

Why humans remain indispensable — and where they do not

The romantic line that "people always want to deal with people" is both true and untrue. It holds where nuance, discretion, judgement, emotional labour and responsibility are in play — in much of care work, counselling, teaching, medicine and crisis response. It does not hold where standardisation, speed and availability are what actually define quality: booking appointments, basic FAQs, translating everyday text, filling in forms. In those settings customers accept automation readily — as long as it works.

The Klarna lesson sharpens the point. People accept automation when it is good. They reject it — and, in case of doubt, switch providers — when automation pretends to be service without being it. A bot that closes a ticket without resolving the issue is cheaper for the company and more expensive for the customer. Gartner data show that customer satisfaction in hybrid setups (AI for routine plus immediate human escalation for anything complex) is markedly higher than under pure automation — and often higher than under purely human service, because waiting times drop.
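The hybrid setup described above — AI for routine, immediate human escalation for anything complex — is, at its core, a routing decision. A minimal sketch of that logic, in Python: the signals, threshold and field names here are illustrative assumptions of ours, not a description of Klarna's or any vendor's actual system.

```python
from dataclasses import dataclass


@dataclass
class Ticket:
    text: str
    is_repeat_contact: bool = False  # customer has already contacted us about this issue
    emotional: bool = False          # flagged as emotionally charged or high-stakes


def route(ticket: Ticket, bot_confidence: float) -> str:
    """Decide who handles a ticket in a hybrid service setup.

    The ordering encodes the Klarna lesson: a frustrated or repeat
    customer is never looped back to the bot, and low bot confidence
    triggers a human immediately instead of a dead end.
    """
    if ticket.is_repeat_contact or ticket.emotional:
        return "human"
    if bot_confidence < 0.8:  # illustrative threshold, tuned per deployment
        return "human"
    return "bot"
```

The design choice worth noting is that escalation criteria run *before* the confidence check: the failure mode in pure automation is not that the bot answers badly, but that it closes tickets it should never have owned in the first place.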

Another point rarely made precisely in the debate: responsibility. An AI can suggest. It can act. But it cannot answer for what it does. Who is liable when an AI system gets it wrong in a medical, legal, financial or safety-critical context? In the end a human is — individually, institutionally, legally. That is exactly why roles that carry responsibility will not disappear. They become denser, more demanding, often better paid. A radiologist working with AI support reads more images in the same time, and more accurately — and must stand behind every single one. That is not a dying profession but an upgraded one.

This logic of responsibility has a wider social consequence. Beyond pure automation, a growing field is emerging that one can call oversight work. People who watch, check, adjust, shut down and approve AI systems. Human-in-the-loop, not as a cost centre but as a core function. With every system that becomes more autonomous, the demand for humans to control it grows — paradoxically.

08

What this means for organisations and individuals

For organisations

The firms that have lost most so far are not the cautious ones but the hasty ones. Klarna is the most visible example — but not the only one. Gartner estimates that by 2027 around half of all companies that cut staff because of AI will be hiring again. That is not only commercially embarrassing; it is devastating in human terms — for those let go, for the team that stayed, for trust in leadership.

Three recommendations that the 2025 and 2026 data support well:

  • Complement rather than replace. Brynjolfsson's work shows the largest productivity gains among entry-level and lower-skilled workers (plus 34 per cent), not among high-end professionals. AI lifts the lower end rather than displacing the upper. That is an argument for training, not for cuts.
  • Invest in governance. An AI system without an oversight architecture is a liability. An AI system with clean governance under ISO/IEC 42001 (see our page on ISO 42001) and the requirements of the EU AI Act is more competitive — because it can be deployed in regulated markets in the first place.
  • Open up retraining paths. The Harvard numbers show plainly: anyone who pushes out juniors loses the senior generation of the day after tomorrow. Firms that deliberately keep entry routes open — with AI training, mentoring, reduced starter portfolios — secure talent that the rest of the market will no longer find.

For individuals

The good news first. The barrier to entry for AI is lower in 2026 than it was in 2023. Back then you needed prompt-engineering courses, background knowledge, jargon. Today, for most everyday tasks, a clear sentence to a good model is enough. Which means: no one needs to spend hundreds of euros on courses to keep up. One or two serious evenings a month with one of the large systems achieves more than most paid training.

More important than tool knowledge is judgement. An AI answer can sound good and still be wrong (the hard limits here are laid out on our page on Agentic AI). Anyone who learns to tell plausible statements from evidenced ones, to cross-check, to demand sources, stays relevant on any future path. This is not a tech skill but an old one: critical thinking, now in a new setting.

On career choices: anyone starting out today would do well to favour fields where responsibility, physical presence, complex human interaction or domain expertise combine. Skilled trades, care work, medicine, teaching, engineering, specialist advisory roles, AI governance — all more robust than pure knowledge work without a physical or accountability anchor. Not because AI will replace every knowledge job tomorrow, but because the hybrid, responsibility-carrying roles are the ones growing fastest.

09

How CAIE reads this

Our work on AI and work follows a clear line. AI is a tool. Used responsibly, it opens up possibilities — used without responsibility, it causes harm. Both sides belong in any serious debate.

This stance has a history. Co-founder David Mirga has been writing and publishing on AI since early 2023, through several books and over a hundred specialist articles. His through-line has two parts: taking away people's fear of AI — and putting tools into their hands. Concrete workflows for different industries, processes and job profiles, so that spectators become users. The conviction behind it is simple: anyone who can work productively with AI does not have to worry about their job — they become more valuable. Human plus AI, that is the lever. Not human against AI, not AI instead of human. This page stands in that line.

Our concrete work on AI and work runs on three levels:

  • Education and public understanding. Clear guidance for people who are not AI experts and do not have to become them — workshops, talks, publications, school formats. See our page on AI literacy.
  • Guidance for organisations on responsible AI deployment — with the premise that the workforce is part of the solution, not a line item. Governance under ISO/IEC 42001 and the EU AI Act provides the technical guard-rails.
  • Intervention in the public debate. Against alarmism as much as against uncritical euphoria. For an AI policy that shapes the transition rather than suffers it.

If you have questions — as a business, an educational institution, a public body or as an individual — write to us. We reply personally.

10

Frequently asked questions

Will AI really wipe out millions of jobs?

Serious forecasts point to redistribution, not extinction. The World Economic Forum's Future of Jobs Report 2026 projects around 170 million new roles worldwide by 2030, against 92 million displaced — a net gain of 78 million. Gartner goes further, expecting more than 500 million new AI-adjacent roles over the medium term. The caveat is real: the people whose work disappears are often not the ones who fill the new roles. The problem is less the absolute volume of jobs than the transition itself — and nobody should talk that down.

Which occupations are hit hardest in the short term?

Clerical work, certain entry-level roles in software development and copywriting, tier-1 support, routine translation, standardised bookkeeping. A Harvard study from February 2026, drawing on data covering 62 million workers, shows that in firms adopting generative AI, junior employment fell by 9 to 10 per cent, while senior roles held steady. The effect is real and hard on those affected — but it is not the whole labour market.

Why did Klarna bring people back?

Between 2022 and 2024, Klarna replaced around 700 customer service positions with an AI chatbot and saved roughly 60 million dollars in the short term. From 2025 onwards the company reversed course: repeat customer contacts had risen by 25 per cent, because the bot closed queries too quickly without actually resolving them. Since early 2026 Klarna has run a hybrid model — AI for routine matters, humans for complex, emotionally demanding or high-value cases. It is not an isolated case but a pattern: Gartner expects that by 2027 around half of all companies that cut staff because of AI will be hiring again.

Where are the new jobs appearing — and for whom?

Around AI infrastructure: data centres, power supply, grid expansion, cooling, maintenance, security. The IEA expects US data centres to draw more than 250 TWh of electricity in 2026, and estimates that around 300,000 additional electricians will be needed for this build-out alone. Add to that AI governance, evaluation, compliance roles, and human-machine interface work. Then a second ring of economic activity: logistics, trades, hospitality, education in the regions where this infrastructure is being built. Many of these roles can be learned — which makes them accessible to people crossing over from shrinking industries.

Does that mean the alarmism was wrong?

Partly. The panic rhetoric of 2023 and 2024 — "AI is about to wipe out every job, get on board now or you're finished" — was neither grounded in the data nor helpful. Gartner itself put its current position sharply in a recent paper: "AI won't cause a jobs apocalypse, but it will unleash job chaos." That is exactly where we are. There will be no great wiping-out, but there will be disorder: shifts, new roles, short-term dislocations, the need to retrain, people searching for their footing. It is taxing, but it can be shaped.

What should organisations do now?

Three things. First, an honest look at which tasks in-house will genuinely improve with AI — and which will only appear to. Second, invest in the workforce, not out of it. The most successful cases are not the firms that replaced fastest, but the ones that combined most intelligently. Third, governance — human-in-the-loop where it matters, clear lines of responsibility, oversight of automated decisions. Anyone who gets those three right is in a better position than anyone reaching for the headcount lever.

What should individuals do?

Learn to work with AI — but without panicking into action. The barrier to entry in 2026 is lower than it was in 2023: one or two serious hours with a large model replaces many paid courses. More important than tool knowledge is judgement: when is an AI answer plausible, when is it merely well phrased? Where do I need to check? Anyone who cultivates that kind of judgement is well placed on any future path.

How current is this page?

As of mid-2026. The studies cited come from recent months (WEF Future of Jobs 2026, Gartner 2025/2026, ILO Employment Trends 2026, Harvard February 2026, Brynjolfsson/Stanford 2025). We revise the page regularly and mark the date of record.