Wave 6: Economic Effects of AI

Wave 6 asked panelists to consider the economic effects of AI. Respondents were asked to forecast U.S. GDP growth, labor force participation, and wealth inequality through 2030 and 2050 under slow, moderate, and rapid AI progress scenarios, as well as to assess which scenario is most likely to best match reality by 2030.

First released on:
7 April 2026

The following report summarizes responses from 231 experts, 53 superforecasters, and 706 members of the public, collected between 9 February and 3 March 2026. Among the expert respondents were 51 computer scientists, 46 industry professionals, 54 economists, and 80 research staff at policy think tanks. For comparison with a survey of economists, AI industry and policy experts, and superforecasters that ran from October 2025 to February 2026, see Appendix C in Forecasting the Economic Effects of AI; Wave 6 distributed an abridged version of that survey's questions on scenario forecasts, GDP growth, labor force participation, and wealth inequality.

Our wider website contains more information about LEAP, our Panel, and our Methodology, as well as reports from other waves.

Insights

  1. The average (mean) expert, superforecaster, and public participant forecasts that the moderate AI progress scenario is most likely to occur by 2030.
    The mean forecaster in all three groups gives the "Moderate Progress" scenario the highest probability of being the best match (as judged by LEAP panelists in 2030). However, the public gives a 30% probability to Slow Progress—higher than experts (26%) and superforecasters (26%)—and gives lower weight to Moderate Progress (46%) than experts (52%) or superforecasters (54%). Both experts and the public assign more probability to the Rapid Progress scenario than superforecasters do (23% vs. 19%). In Wave 1 (Aug 2025), we asked participants a similar question about these AI progress scenarios.1 The mean forecaster then likewise expected most panelists to choose moderate AI progress as the best-matching scenario in 2030 (experts: 49%, public: 45%, superforecasters: 49%), but experts and superforecasters assigned a higher weight to the slow progress scenario in Wave 1 (28% and 37%, versus 26% each now), while the public's weight was unchanged at 30%—suggesting that over the last six months, experts and superforecasters have become less likely to anticipate slow AI progress. Relative to this wave, the Wave 1 weight on rapid AI progress was unchanged for experts (23%), higher for the public (26%), and lower for superforecasters (14%).

  2. Near-term economic forecasts are aligned with historical trends, but rapid AI progress would dramatically reshape growth, employment, and inequality by mid-century.
    All groups broadly agree on the near-term outlook: annualized real GDP growth of ~2.5–2.7% and labor force participation (LFPR) of ~62% by 2030, roughly in line with current trends. By 2050, however, which AI progress scenario occurs matters enormously. On GDP, the median expert forecasts 5.0% annualized growth under rapid progress versus 2.5% under slow progress. On employment, experts expect LFPR to fall to 52% under rapid progress, compared to 60% under slow progress. On inequality, experts forecast the top 10%'s wealth share rising from today's ~71% to 81% under rapid progress versus 75% under slow progress.

  3. Experts and superforecasters diverge most on the effects of rapid progress, suggesting disagreement about how tightly AI progress and economic change are linked.
    While near-term forecasts are broadly aligned, superforecasters are consistently more conservative than experts, and this gap widens under the rapid progress AI scenario. On GDP, superforecasters forecast just 3.4% annualized growth to 2050 under rapid progress versus the expert median of 5.0%. Superforecasters expect a 56% labor force participation rate in 2050 compared to experts' forecast of 52%. Experts forecast an 81% top-10% wealth share under rapid progress versus superforecasters' 76%. This may suggest that superforecasters expect other mechanisms to slow the pace of economic change despite increases in AI capability — at least to a greater extent than experts expect.

Questions

  • Baseline Scenarios: Imagine the LEAP panel is asked at the end of the year 2030 which of the three given scenarios best matches the world as it is then. What is the probability that a given scenario will be the most commonly selected option among the panel of experts? ⬇️

  • Change in Gross Domestic Product: What will be the annualized change, in percent, in the real Gross Domestic Product (GDP) of the U.S. between the following years? ⬇️

  • Labor Force Participation Rate: What will be the labor force participation rate in the U.S. at the beginning of the following years? ⬇️

  • Economic Inequality: What will be the fraction of the national wealth owned by the top 10% wealthiest individuals in the U.S. at the beginning of the following years? ⬇️

For full question details and resolution criteria, see below.

Results

In this section, we present each question and summarize the forecasts made and the reasoning behind them. More concretely, we present background material, historical baselines, and resolution criteria; graphs, results summaries, and results tables; and rationale analyses with example rationales. In the first three waves, experts and superforecasters wrote over 600,000 words supporting their beliefs. We analyze these rationales alongside the predictions to provide significantly more context on why experts believe what they believe, and on the drivers of disagreement, than the forecasts alone can.

Baseline Scenarios

Question. Imagine the LEAP panel is asked at the end of the year 2030 which of the three given scenarios best matches the world as it is then. What is the probability that a given scenario will be the most commonly selected option among the panel of experts?

Baseline Scenarios. The figure above shows the distribution of forecasts by participant group, illustrating the median (50th percentile) and interquartile range (25th–75th percentiles) of each forecast.

Results. All participant groups assign the highest probability to Moderate Progress being the most commonly selected scenario among LEAP panelists at the end of 2030. The mean expert assigns 52% probability to Moderate Progress, 26% to Slow Progress, and 23% to Rapid Progress. Superforecasters also place the most weight on Moderate Progress (54%) and the least on Rapid Progress (19%). The public gives meaningfully more weight to Slow Progress than either experts or superforecasters—30% versus 26% and 26% respectively—a difference that is statistically significant. Across all groups, Rapid Progress receives the lowest probability and Moderate Progress by far the largest, suggesting broad agreement that AI is unlikely either to stagnate or to transform the economy dramatically by 2030.

Rationale Analysis:

  • Pace of recent progress: Faster-progress respondents frequently cite recent advancements in agentic tools and coding as the primary reason for shifting their forecasts upward since the previous wave: "Until I started using Claude Opus 4.6, I was firmly in the 'Slow Progress' camp. Opus 4.6 is the first AI model I've used where the output is not only on par with something I could have created but is objectively better."; "There was a new, dramatic acceleration in AI capabilities toward the end of 2025 with the release of Claude Code and other similar agentic tools. These have drastically changed my perspective…" Many also argue that, as one writes, the "slow progress world of 2030 is basically already in existence, so it's nearly ruled out. AI would have to hit a wall right now for that scenario to come true." Other forecasters remain unconvinced, however, with one writing that "nothing in the past few months' development feels like it is so radically different to cause an update to previous estimates," and several pointing to ongoing reliability and long-task issues.
  • Physical world capabilities: Echoing points made in Wave 1, many slower-progress respondents continue to argue that physical-world tasks face fundamentally harder challenges than digital-only tasks, and that this will limit the pace of progress. "I think there's a BIG divide between what AI can do in digital space and in physical space," was a common sentiment. In particular, level-5 autonomous vehicles and versatile household robots were singled out as bottlenecks: "My expertise in autonomous cars guides me in setting probability for the progress scenarios. The [likelihood] of level 5 cars is very low by 2030."; "We keep hitting weird bottlenecks with AI like it can write code but robots still can't reliably fold laundry." Faster-progress respondents generally acknowledge robotics as a lagging domain but argue it will not anchor the panel's overall judgment. As one writes: "Maybe advances will lag in those areas, but maybe AI's other capabilities will be sufficiently advanced so that, on net, moderate or rapid progress simply can't be denied."
  • Input constraints and investment: Faster-progress respondents tend to view investment in AI as a tailwind that has grown stronger since Wave 1 and which will mitigate issues like constraints on compute and energy. One points to "evidence of big companies (Google, Amazon, Microsoft, Meta, etc) investing heavily into AI" as a reason to "shift upwards" the estimated pace of AI progress. Another makes a similar point: "There is enough economic incentive and capital for the leading labs to continue building larger models…" Other forecasters, however, express the sentiment that "slow progress still has a meaningful probability because AI development could face bottlenecks in compute scaling, energy, data quality, regulation, safety concerns, or diminishing returns from current architectures."
  • Capability vs. adoption: Faster-progress respondents tended to take a strict interpretation of the request (embedded in the question's parameters) to focus on "capability rather than adoption," noting that this makes it "more likely that progress will be considered to be at least moderate if not rapid," whereas slower-progress respondents were more likely to blur this line: "Physical systems, institutions, and adoption cycles simply do not move at software speed. Even very large capability gains diffuse unevenly across society."

Rationale examples, faster-progress respondents:

For scientific breakthroughs, AI systems seem poised to finally meet E.O. Wilson's predictions on consilience—allowing multidisciplinary research to flourish without needing human experts in all domains. Such multi-disciplinary research alone is likely to produce scientific breakthroughs that have been difficult to achieve in the modern, highly specialized world of academia. It would also allow a sort of virtual Bell Labs that also relied on multidisciplinary research and had amazing breakthroughs in the 1950s. This suggests that by the end of 2030 at least moderate progress will have been made in this area and possibly rapid progress.

The last two months have seen the explosion of Claude Code & Codex combined with incredibly increased capacities in the most recent releases of frontier models. I'm updating on the possibility that the speed of progress increases relative to what I thought when we began these forecasting exercises. I still think there's some possibility there will be real plateaus (hence the 20% for slow), but I'm becoming less convinced that will be the case.

Signifiers of more progress to come: current buildout of data centers for AI is very vast; the degree to which post-training, reinforcement learning and synthetic data seem to have expanded training opportunities; appearance of more hardware (i.e. chips) that promise to make "thinking" models more practical; impressive recent progress on agent models.

Progress is happening very quickly now and that is likely to continue, and a few more model iterations, with a focus on long-running agency and scaffolds to simulate continual learning, will get us to systems that can do a wide variety of expert work like programming and science, though perhaps not full automation of white collar work.

Rationale examples, slower-progress respondents:

I suspect physical and data bottlenecks will hinder the path to AGI. For AI to independently generate Pulitzer-caliber novels or create materials that revolutionize energy storage as described in this scenario, I think models would need to break through the "data wall" and the diminishing returns of scaling laws. Furthermore, I believe bridging the gap to the 99.9% reliability required for fully autonomous driving or general-purpose robotics presents a massive engineering challenge…expecting perfection in safety and physical interaction within a short five-year window is extremely optimistic, which is why I view this as a low-probability outcome.

I still believe that the rate of progress of AI has been slowing, based on…the latest models. Many of the milestones in the moderate progress are absolutely unachievable by 2030—such as domestic robots that can navigate any home, and carry out daily tasks as quickly as humans. Rapid progress is, in my opinion, absolutely unachievable by 2030.

"AI style" is still patent in AI generated images, texts or music. The creative outcomes of AI may look made by humans by 2030, but they will have this repetitive AI style that will make them not acceptable for humans. For human-shaped robots, the improvement in movement has always been slow and I do not expect overwhelming agility increases by 2030.

Change in Gross Domestic Product

Question. What will be the annualized change, in percent, in the real Gross Domestic Product (GDP) of the U.S. between the following years?

Change in Gross Domestic Product. The figure above shows the distribution of forecasts by participant group, illustrating the median (50th percentile) and interquartile range (25th–75th percentiles) of each forecast.

Results. When not told to condition on a progress scenario, experts, superforecasters, and the public broadly agree on near-term GDP growth, with median forecasts of 2.7%, 2.5%, and 2.5% annualized real GDP growth for 2025–2030, respectively—roughly in line with recent historical rates. Scenario assumptions make a substantial difference, however. Under Rapid Progress, the median expert expects 4.0% annualized growth to 2030 and 5.0% to 2050, while under Slow Progress those figures fall to 2.3% and 2.5%. Superforecasters are more conservative under Rapid Progress, forecasting only 3.5% annualized growth to 2030 and 3.4% to 2050—below expert estimates—suggesting superforecasters are more skeptical that AI-driven gains will translate into GDP growth even under optimistic capability scenarios. Differences between experts and the public are small and generally not statistically significant, with both groups expecting somewhat stronger long-run growth under the Moderate and Rapid Progress scenarios than superforecasters do.
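For readers interpreting these figures, the "annualized change" used throughout is a compound annual growth rate. A minimal sketch of the arithmetic (the GDP values below are hypothetical, chosen only to illustrate the calculation, not taken from the survey):

```python
def annualized_growth(start_value, end_value, years):
    """Compound annual growth rate between two values, in percent."""
    return 100 * ((end_value / start_value) ** (1 / years) - 1)

def cumulative_multiple(annual_pct, years):
    """Total multiple of the starting value implied by a constant annual rate."""
    return (1 + annual_pct / 100) ** years

# Hypothetical real GDP index values, for illustration only.
growth = annualized_growth(start_value=100.0, end_value=164.1, years=20)

# A constant 5.0% annualized rate (the expert median under Rapid Progress,
# 2030-2050) compounds to roughly a 2.65x increase over 20 years, versus
# roughly a 1.64x increase at the 2.5% rate forecast under Slow Progress.
rapid = cumulative_multiple(5.0, 20)
slow = cumulative_multiple(2.5, 20)
```

Compounding is why seemingly modest gaps in annualized rates imply very different economies by 2050: the 5.0% and 2.5% scenario medians differ by a factor of about 1.6 in total output over twenty years.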

Rationale Analysis:

Below, we refer to those who advance arguments for why GDP will remain near historical norms as "lower-growth" respondents and those who advance arguments for why GDP will significantly increase as "higher-growth" respondents.

  • Adoption lag: The most frequently cited reason for why even strong AI capabilities are unlikely to translate quickly into meaningful GDP growth is that institutional, societal, and regulatory frictions will slow diffusion: "Even if AI capabilities move quickly, the economy doesn't just flip a switch. Companies have to reorganize. Capital has to cycle through. Workers have to move into new roles."; "Software progress can move quickly, but GDP responds to capital replacement cycles, regulatory approval, and human trust, all of which move much more slowly."
  • Demographic and structural headwinds: Lower-growth respondents cite an aging population, a declining labor force participation rate, declining immigration, and a high U.S. debt load as drags on GDP that AI cannot easily offset. "Given slower population growth, an aging workforce, and moderate productivity gains, a reasonable central estimate for annual real GDP growth would be between 1.7% and 1.9%," writes one, and another that "without immigration, the U.S. population would start to shrink in 2033." Higher-growth respondents tend to emphasize the potential of AI to decouple GDP growth from population growth: "In 'Rapid,' AI substitutes for labor (robotics) and accelerates innovation (R&D), removing traditional bottlenecks."
  • The base rate and Fed/CBO projections: Lower-growth respondents anchor heavily on the stability of U.S. real GDP growth at roughly 2–2.5% annualized over recent decades. One points to the fact that "numerous technological breakouts have occurred over the past 100 years…(electrification, mass customer product manufacturing, global supply chains, radio and television rollouts, digitization, computer technology, an infinite set of advances based on the internet…)" as a reason to be skeptical that the emergence of AI will result in GDP growth "significantly different than [what] the historical record indicates." Another notes that "The Federal Reserve's FOMC published dot plot still projects the long term US GDP growth rate at slightly less than 2%," and another that the CBO is projecting 1.6% growth over the next 30 years. Higher-growth respondents often argue that AI's potential to automate cognitive labor and accelerate R&D makes it different: if AI "becomes a strong collaborator across sectors, automates large portions of knowledge work, and improves R&D productivity, leading to sustained productivity gains," this will "eventually put GDP growth above the base rate."
  • Job cuts and demand destruction: A significant subset of lower-growth respondents worry that rapid AI progress could paradoxically reduce GDP by destroying jobs and consumer purchasing power. One writes, "If managers can replace large fractions of employees with AI, with no obvious alternative employment opportunities, there will be less wealth to be spent among the general populace, which will reduce demand for goods/services and hurt GDP," and another that "if large swaths of the population are out of [a] job, consumption will suffer." Higher-growth respondents tend to assume displaced workers will be reabsorbed or view automation as a solution to labor scarcity: "The integration of autonomous systems in factories and homes overcomes traditional labor shortages and operational bottlenecks."
  • Non-AI related risks: Many lower-growth respondents emphasize that GDP over these horizons will be shaped more by tariffs, trade wars, kinetic wars, political instability, debt crises, and climate change than by AI progress: "Will there be a significant recession during the two dates or a major military conflict or another pandemic?"; "Tariffs and the U.S.-China rift…will reduce the overall number of buyers for any good or service produced in the United States."; "For 2050 I am also accounting for possible climate change shocks that reduce GDP growth." Higher-growth respondents tend either not to address these factors or to cite one or more of them as a reason to widen the lower end of their confidence intervals.
  • Timing mismatch between AI uplift and measurement windows: A number of respondents argue that even if AI delivers a meaningful GDP boost, it may manifest as a temporary bump that doesn't align neatly with the 2025–2030 or 2045–2050 measurement periods. "My 2045–2050 growth predictions are lower than my predictions for peak five year growth by 2050, because I think it is possible that there will be a temporary period of highly transformational effects and peak growth will have already passed by 2050," writes one forecaster. Another adds that "moderate progress might lead to elevated growth for a decade or so, before returning to the long-term average."
  • Existential risk: A minority of respondents—mostly those forecasting extreme variance rather than simply high or low growth—flag the possibility that human economic activity ceases entirely. One writes: "I think there's a >10% chance that 'human economic activity ceases' for all 2050 10th percentiles," and another that "I expect the rapid progress scenario to very likely lead to societal collapse hence the negative GDP forecast." Relatedly, a few respondents raise the concern that GDP may become a meaningless metric under rapid progress: "Does GDP mean anything for Moderate and Rapid scenarios by the year 2045? It seems similar to trying to measure the current Human GDP in terms of the Crow economy."

Rationale examples, lower-growth respondents:

I think most activities in the economy are about navigating equilibria, not raw optimization power. Therefore, and since I expect diffuse AI capabilities, I don't expect massively out of the ordinary gains in the rate of GDP growth, even in the very rapid progress scenarios.

Many of the most frequently cited breakthrough examples require physical engineering rather than purely cognitive advances. 'Cure cancer' is not a software problem alone. It requires wet lab automation, controlled experimentation, regulatory validation, and interaction with extremely expensive and sensitive hardware systems. Connecting autonomous agents directly to multimillion dollar laboratory equipment introduces both technical and existential risk. Granting systems access to reagents capable of producing biological hazards raises obvious safety concerns. These constraints slow deployment even when underlying intelligence is capable.

Social resistance may ultimately prove even more significant than integration complexity. Human reactions such as 'I do not trust it' are emotional rather than epistemic, but they still shape policy and adoption. Individuals and institutions are wired to protect status, expertise, and organization.

Given slower population growth, an aging workforce, and moderate productivity gains, a reasonable central estimate for annual real GDP growth would be between 1.7% and 1.9%. Under a moderate AI acceleration scenario, AI could significantly boost productivity, increase total factor productivity (TFP) growth and raise labor productivity, particularly in services and knowledge-intensive sectors. In a rapid progress scenario, TFP growth could exceed 2% annually, with broad automation of cognitive labor. In such a case, annualized GDP growth could reach up to 5%.

The one scenario I believe is exceedingly unlikely is a major acceleration of economic progress. Either the technology is not as successful as the hype predicts it will be, or it will be as successful as predicted in the optimistic scenarios, which will lead to a complete displacement of most of the society from economic activity. Unlike in previous technology iterations, there is no space for the displaced humans to be reintroduced even after re-training as the strong AI should affect all economic activities. Given the current utter lack of societal level preparation for such a scenario, I am deeply sceptical this will not lead to a full collapse.

Rationale examples, higher-growth respondents:

If AI systems can truly 'collapse years-long research timelines into days' and deploy Level-5 robo-taxis that go anywhere, the traditional constraints on economic output—specifically labor shortages and R&D bottlenecks—are effectively removed. In this world, the marginal cost of intelligence and physical logistics approaches zero. I believe this would trigger a productivity boom comparable to the electrification of the 1920s or the IT [information technology] boom of the 1990s, but compressed into a much shorter timeframe, leading to a massive, albeit potentially volatile, spike in real output.

By automating standard research, data analysis, and software tasks, AI frees human experts to focus exclusively on high-impact breakthroughs. The integration of autonomous systems in factories and homes overcomes traditional labor shortages and operational bottlenecks. AI significantly lowers the marginal cost of services, pushing growth far beyond historical 2-4% values.

The U.S. economy has strong momentum and structural advantages for AI adoption, justifying an above-consensus baseline. The big uncertainty is whether AI proves transformative enough to overcome serious long-term demographic and fiscal headwinds, which is why the near-term scenarios cluster tightly while the long-term ones diverge dramatically. The bullish lean reflects a belief that the U.S. tech ecosystem and flexible labor markets position it better than most to capture AI-driven productivity gains.

Currently it makes economic sense to offshore economic activity from wealthy countries to poorer ones to increase profits by benefitting from cheap labour. If AI delivers cheaper labour than developing countries then not only might the US stop offshoring that work, but it would also make sense for investors in developing countries to move their investments to the US too. You could have a scenario where global investment becomes extremely highly concentrated.

Labor Force Participation Rate

Question. What will be the labor force participation rate in the U.S. at the beginning of the following years?

Labor Force Participation Rate. The figure above shows the distribution of forecasts by participant group, illustrating the median (50th percentile) and interquartile range (25th–75th percentiles) of each forecast.

Results. Near-term, all groups expect labor force participation to remain roughly stable. Unconditionally, the median expert, public, and superforecaster all forecast a participation rate of approximately 62% by the beginning of 2030—close to its current level of around 62.4%. Scenario assumptions have a large effect in the longer run: under Rapid Progress, the median expert expects LFPR to fall to 52% by the beginning of 2050, 8 percentage points below the median Slow Progress forecast of 60%. This divergence between scenarios is much larger for experts (8 pp) than for the public (0 pp) or superforecasters (4 pp), indicating that experts foresee a substantially greater risk of AI-driven labor force displacement under optimistic capability scenarios. At the unconditional 2050 median, all three groups forecast approximately 60%, though the expert distribution is shifted slightly lower than the public's—a difference that is statistically significant—and under Rapid Progress experts forecast a substantially lower LFPR than the public does (52% vs. 60%, p<0.001), suggesting experts view AI-driven productivity gains as more labor-displacing than the public does.

Rationale analysis:

Below, we refer to those who advance arguments for why labor participation will remain near historical norms as "stable-LFPR" respondents and those who advance arguments for why labor participation will significantly decline as "low-LFPR" respondents.

  • Creative destruction vs. pure destruction: Most stable-LFPR respondents argue AI will create ample new jobs even as old ones are destroyed: "Even if AI replaces some existing human tasks, I expect labor markets to reallocate rather than permanently shrink: people and firms will adapt by shifting work toward tasks where humans retain an advantage, and new roles and occupations will emerge." Many invoke prior technological revolutions as evidence that participation rates will hold, with one noting that massive declines "never happened in any of the previous GPT [general purpose technology] transformations (steam engine, electricity, combustion engine, computers, ICT [information and communications technology]) or in the recent 30 years of automation." Low-LFPR respondents tend to emphasize that AI is different: "AI is inherently an automating technology so prospects for maintaining high labor force employment are low. New jobs are inevitable but these can easily be supplemented and eventually replaced by AI and robotics."; "Even though AI may create some new types of jobs, [there will be] far fewer than the traditional jobs being replaced."
  • Demographics: Both stable- and low-LFPR respondents cite an aging population as likely to put downward pressure on labor participation, but stable-LFPR respondents tend to emphasize it as a primary consideration: "I think two primary forces drive these estimates: demographics and AI displacement. Demographics dominates through 2030 regardless of [the] scenario, which is why my near-term numbers stay in a narrow band." Low-LFPR respondents, however, tend to emphasize this factor's potential to compound AI displacement: "Demographics alone will push participation down steadily as the 65+ share of the population grows from 18% now to over 21% by 2035 and keeps rising. With rapid AI progress, the decline accelerates."
  • Human desire to work: Many stable-LFPR respondents argue that work fulfills psychological and social needs beyond income: "I expect lots of people to still work in 2050, even if their labor is not that important or useful. People just like working, and will invent reasons to pay each other for services." Some low-LFPR respondents argue otherwise: "Most people do not enjoy working, so rapid progress would allow most people to not have to work." One forecaster projects that by 2050 under rapid progress, only "about 1/4 people in the working-age population will chose to be employed…(for meaning, purpose, etc.)."
  • Adoption lag: Stable-LFPR respondents often emphasize that rapid capability gains may not result in rapid adoption, especially when considering short timelines: "Labor markets are sticky and five years is not enough time for even rapid AI gains to fully reshape workforce composition." Low-LFPR respondents generally acknowledge near-term stickiness but argue 2050 is long enough for full disruption: "E.g., all the software engineering jobs we are currently worried are going away might actually go [away] gradually over next few decades."
  • LFPR as a measurement artifact: Some stable-LFPR respondents note that displaced workers still count as participants if they seek work: "If AI displaces workers but they keep looking for jobs, participation stays the same (unemployment goes up instead)." But low-LFPR respondents generally argue that prolonged displacement is likely to produce discouraged workers who exit entirely: "Even in the short term…many white-collar jobs could disappear, leading a significant number of people to stop actively searching for work simply because those jobs no longer exist."
  • Policy responses (UBI, safety nets, jobs programs): Both camps acknowledge public policy as a wild card, but disagree on its likely impact. Stable-LFPR respondents tend to think it more likely that intervention will sustain participation. "Governments have a strong incentive to keep labor participation high," writes one. Another adds, "I could imagine massive government jobs programs, or ways of keeping people 'busy and productive' even if it's a bit menial." Several low-LFPR respondents, however, argue UBI or an "expansion of 'social safety net' policies that discourage labor force participation" could have the opposite effect: "Policies such as UBI or 'tax the robots' would have to be in place for large percentages of the workforce to fully pull out of the workforce and thereby lower workforce participation rates substantively."
  • Wealth effects leading to voluntary exits: Some low-LFPR respondents argue that AI-driven productivity gains may create enough wealth to make work optional for many: "Society will also be richer, so I would expect more early retirements and other forms of reducing how much time the average individual spends in the labor force." Stable-LFPR respondents rarely engage with wealth effects directly, instead emphasizing that without redistribution mechanisms, most people will still need income from work: "Without a deep change in the economic paradigm, it is hard to imagine a world where only a small fraction of the population works..." One notes that "Baumol-like effects may make work more remunerative for difficult to automate jobs in the near-term, hence some possibility for growth."
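The "measurement artifact" point above can be made concrete: the participation rate counts both the employed and those unemployed but actively searching, so displacement only lowers LFPR once workers stop searching. A minimal sketch with hypothetical population counts (millions, for illustration only, not survey data):

```python
def lfpr(employed, unemployed_searching, not_in_labor_force):
    """Labor force participation rate, in percent: the labor force
    (employed + unemployed-and-searching) divided by the civilian
    noninstitutional population."""
    labor_force = employed + unemployed_searching
    population = labor_force + not_in_labor_force
    return 100 * labor_force / population

# Hypothetical baseline (millions), for illustration only.
base = lfpr(employed=160, unemployed_searching=7, not_in_labor_force=100)

# A displaced worker who keeps searching moves from employed to
# unemployed: participation is unchanged (unemployment rises instead)...
still_searching = lfpr(employed=159, unemployed_searching=8, not_in_labor_force=100)

# ...whereas a discouraged worker who stops searching exits the labor
# force entirely, and participation falls.
discouraged = lfpr(employed=159, unemployed_searching=7, not_in_labor_force=101)

assert base == still_searching
assert discouraged < base
```

This is why the stable-LFPR and low-LFPR camps can agree that AI displaces jobs yet disagree about the statistic: the forecasts hinge on whether displaced workers keep searching or exit.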

Rationale examples, stable-LFPR respondents:

I do not expect AI to dramatically shift the LFPR on its own. An increase in productivity from a given technology has both substitution and income effects: higher productivity makes a given person feel richer, which reduces labor supply. Higher productivity also raises the opportunity cost of not working, which raises labor supply. Over the past century, these two effects have largely canceled out, although people do work a bit less now than before.

I am no believer in "the end of labor" coming with AI. This has never happened in any of the previous GPT [general purpose technology] transformations (steam engine, electricity, combustion engine, computers, ICT) or in the recent 30 years of automation. Why? Because people adjust. Some jobs will disappear, but others will emerge. Incomes may adjust, but labor shares will not fall a lot. With a rising flow of "robots", the marginal product of robots goes down so the capital share of NDP [net domestic product] remains roughly the same (it has been virtually constant across the OECD over the past 50 years—variation across countries is much larger than within countries over time).

AI lowers barriers to productive contribution by allowing individuals to operate at higher leverage with fewer resources. This expands opportunities for entrepreneurship, artistic production, and flexible forms of employment that still count as economic participation. By 2050 the uncertainty increases because entire classes of professions may emerge, but the underlying pattern remains the same: AI augments human agency rather than replacing it. The workforce evolves, but participation persists because humans continue to seek purpose, identity, and social contribution through work.

Rationale examples, low-LFPR respondents:

My default is to think that rapid progress will break from past advancements that did not hurt employment over the long run. If AI/AGI can do what is expected of it, all but the most innovative people will be obsolete. If it improves from what it is now, as an amazing research tool and aggregator, but with numerous flaws that need human intervention, the impact will not be devastating, but is still likely to lead to a reduction in employment.

Instead of workers finding alternative employment (likely at lower wages), societies could face large-scale unemployment at unprecedented levels—similar to the Great Depression. In that era, the government intervened to create new jobs and public works programs as a buffer to the horrific effects of massive unemployment. Will governments seek to create similar public works programs, or will they seek massive increases of welfare benefits, such as long-term unemployment compensation or food dividends?

The greater AI capabilities are, and the more they spread into the physical world, the more difficult it will be for human workers to compete. Society will also be richer, so I would expect more early retirements and other forms of reducing how much time the average individual spends in the labor force.

I note that my 2030 estimates are more aggressive than historical baselines might suggest. The conventional view treats LFPR as a lagging indicator that shifts slowly through attrition and demographic drift. However, the emerging pattern of AI-driven mass restructuring (exemplified by Block's 40% workforce reduction in February 2026, explicitly attributed to 'intelligence tools' and rewarded with a 23% stock price surge) suggests a different dynamic. When profitable companies discover that AI allows them to shed half their workforce while increasing output, and when the market rewards rather than punishes this decision, the incentive cascade could produce LFPR drops on a faster timeline than any previous technological transition.

Economic Inequality

Question. What will be the fraction of the national wealth owned by the top 10% wealthiest individuals in the U.S. at the beginning of the following years?

Economic Inequality. The figure above shows the distribution of forecasts by participant group, illustrating the median (50th percentile) and interquartile range (25th–75th percentiles) of each forecast.

Results. Starting from a top-10% wealth share of approximately 71.2% in 2023, all participant groups expect wealth concentration to increase modestly over the coming decades. Unconditionally, the median expert forecasts a top-10% share of 73% by the beginning of 2030 and 75% by 2050, with superforecasters expecting 72% and 73%, respectively. AI capability scenarios matter significantly: under Rapid Progress, the median expert forecasts a top-10% wealth share of 76% by 2030 and 81% by 2050, compared to 72% and 75% under Slow Progress. Experts generally forecast higher wealth concentration than both superforecasters and the public, especially in the longer run and under faster-progress scenarios. Under Rapid Progress by 2050, experts forecast a median share of 81%, meaningfully above the superforecaster median of 76%, suggesting experts believe that rapid AI-driven growth will disproportionately benefit capital holders and further concentrate wealth at the top.

Rationale analysis:

Below, we refer to those who advance arguments for why economic inequality will remain near historical norms as "stable-inequality" respondents and those who advance arguments for why economic inequality will significantly increase as "high-inequality" respondents.

  • Capital vs. labor returns: The most frequently cited dynamic across all respondents is that AI will likely increase the returns to capital and suppress the value of human labor, and that this dynamic will result in more concentrated wealth among asset owners. Most high-inequality respondents treat this as the dominant mechanism: "If capital and labor become easy substitutes, I would expect returns to capital to go up a lot, which naturally increases the amount of wealth owned by the wealthiest." Stable-inequality respondents generally do not dispute the direction but argue the magnitude is overstated. One writes that roughly "80% of all assets in rich countries are either 1) housing or 2) pension funds. That is, most of wealth is owned by ordinary working, middle-class people. Rising asset prices will accordingly benefit the masses."
  • Redistributive policies: The second most common theme is whether governments, responding to political backlash, will intervene to counteract concentration. Many stable-inequality respondents argue that they will: "If wealth inequality skyrockets, we may see changes in capital taxation to redistribute the gains from AI." High-inequality respondents tend to discount this possibility, at least in the near term: "I do not assume that the United States, for example, will introduce a universal basic income or substantially higher taxation to offset these effects." Several respondents note this makes the question "more a political question than an AI one."
  • Base rate vs. discontinuity: Many stable-inequality respondents anchor on the historical base rate, which one observes has been "remarkably stable in the area of 70%, with some ups and downs," whereas high-inequality respondents tend to argue that AI represents a structural break, pointing again to the capital over labor dynamic: "AI is a technology that heavily favors people that have capital. This means that capital concentration will increase radically." Echoing that sentiment, another writes that "the top 10% of US households own the vast majority of equities and business equity" and that an AI-driven stock boom will lead to a "massive concentration of wealth at the top."
  • Social instability: Several respondents across both poles reference an implicit upper bound on wealth concentration imposed by the need for social cohesion. One writes: "I expect there is probably some sort of ceiling to wealth concentration, beyond which social unrest (whether peaceful or violent) takes hold and restores things to some extent." High-inequality respondents sometimes acknowledge this but a few argue AI-enabled control measures could raise that ceiling: "Civil unrest will be harder to realize given AI can be harnessed very effectively to subdue people."
  • The impact of rapid progress: Relatedly, a notable subset of respondents across both poles agree that rapid AI progress introduces extreme uncertainty, but disagree on the direction. High-inequality respondents tend to think that "if we're experiencing more progress, we're likely experiencing more income/power inequality" and that this could lead to social instability. Some stable-inequality respondents, however, argue the opposite—that rapid progress could actually reduce inequality through abundance or disruption: "Slower progress could concentrate wealth more. Faster, disruptive, broader growth could spread gains more widely and lower the top share" and "As AI democratizes knowledge, there will be more social mobility…"
  • Winner-take-all dynamics in the AI industry: High-inequality respondents often cite the concentrated structure of the AI industry as an amplifier: "AI will clearly favour wealth concentration in the hands of the AI owners, which will be a very small group. The smarter the AI, the greater the concentration." But some stable-inequality respondents point to the potential for competitive dynamics and creative destruction to limit permanent concentration. As one writes, "OpenAI, Anthropic, Google and Meta are just one or two Chinese open weight models away from the bankruptcy of their AI businesses." Another points to the potential for an AI "bubble that has been financed by members of the top 10%" to "blow up."
  • The 0.1% vs. the 10%: A less frequent but notable difference lies in how respondents interpret the 10% metric provided in the question's parameters. Stable-inequality respondents sometimes argue the question targets too broad a group to see massive shifts. One forecaster points out, "The issue isn't the 10% at the top, it is the 0.1% at the top," and another suggests that "there may be a few trillionaires and a number of billionaires whose wealth has grown because of AI, but the majority of the top 10% may not reap the same benefits." High-inequality respondents rarely make this distinction, generally treating the top 10% as a monolith that will collectively absorb the "lion's share of automation dividends."

Rationale examples, high-inequality respondents:

AI is a technology that heavily favors people that have capital. This means that capital concentration will increase radically. I believe it will be at least on the same levels as the Industrial Revolution or more. At the height of the Second Industrial Revolution, concentration of wealth in the top ten percent reached up to ninety percent. I think [it] is likely that [this] will occur again.

Inequality is fundamentally driven by the gap between the return on capital and economic growth/wages. I think AI accelerates return by reducing labor costs and boosting corporate profits. In the 'Rapid' scenario, the obsolescence of human labor creates extreme concentration of wealth among asset owners, unless radical political intervention redistributes it.

Accompanied by the protectionist behavior of the US government, the fight against unions, and the AI hype narrative, those that depend on labor income for their wealth will continue to see socioeconomic decline compared to those that own assets. This will become more extreme the more rapidly AI progresses over the next 2 decades.

In the longer term—especially by 2050—I expect wealth concentration to increase significantly. This dynamic would likely be particularly strong in a rapid progress scenario. I do not assume that the United States, for example, will introduce a universal basic income or substantially higher taxation to offset these effects. Instead, it seems more plausible that the very top tier of highly skilled individuals—the absolute elite—will further expand their advantage. With privileged access to advanced computational resources and cutting-edge AI models, they will be able to amplify their productivity and creative output even more. As a result, they may become less dependent on broader labor structures and therefore less inclined to share the economic gains, reinforcing long-term inequality.

Rationale examples, stable-inequality respondents:

As Nobel laureates Acemoglu and Johnson argue in their latest (2023) book, 'Power and Progress', the future effects of technology on inequality remain to be written, and they will be largely determined by societal choices, not by solely technological ones.

I see no fundamental reason why AI should be linked to more wealth inequality. AI generated wealth can be taxed in the same way any other wealth [can]. As most AI wealth today is based on data scraped (stolen?) from content created by other people, and the theories of AI were developed over the centuries and largely funded from the public purse, I don't see a moral reason for one group of people to gain a disproportionate share of the benefit.

Based on Gini coefficient studies across developed nations, this is something that should not change absent some extreme political or economic change prompted by some extreme exogenous event like extraterrestrial discovery, military conflict, or natural or biomedical disaster that dramatically changes the course of history (i.e. COVID-19 wouldn't count). I think AI can and will flourish within capitalism and change much of the underlying activities without changing capitalism and its principles—one of which is wealth disparity—significantly.

Footnotes

  1. We note that the Wave 1 question was framed slightly differently than the current wave. Namely, we asked participants to forecast the "percent of LEAP panelists [that] will choose 'slow progress', 'moderate progress', or 'rapid progress' as best matching the general level of AI progress" rather than "the probability that a given scenario will be the most commonly selected option among the panel of experts."

  2. In some cases, the "aggregate" refers to the mean; in others, the median is used, depending on which is more appropriate for the distribution of responses.

  3. We occasionally elicit participants' quantile forecasts (estimates of specific percentiles of a continuous outcome) to illustrate the range and uncertainty of their predictions.

Cite Our Work

Please use one of the following citation formats to cite this work.

APA Format

Murphy, C., Rosenberg, J., Canedy, J., Jacobs, Z., Flechner, N., Britt, R., Pan, A., Rogers-Smith, C., Mayland, D., Buffington, C., Kučinskas, S., Coston, A., Kerner, H., Pierson, E., Rabbany, R., Salganik, M., Seamans, R., Su, Y., Tramèr, F., Hashimoto, T., Narayanan, A., Tetlock, P. E., & Karger, E. (2025). The Longitudinal Expert AI Panel: Understanding Expert Views on AI Capabilities, Adoption, and Impact (Working Paper No. 5). Forecasting Research Institute. Retrieved May 1, 2026, from https://leap.forecastingresearch.org/reports/wave6

BibTeX

@techreport{leap2025,
    author = {Murphy, Connacher and Rosenberg, Josh and Canedy, Jordan and Jacobs, Zach and Flechner, Nadja and Britt, Rhiannon and Pan, Alexa and Rogers-Smith, Charlie and Mayland, Dan and Buffington, Cathy and Kučinskas, Simas and Coston, Amanda and Kerner, Hannah and Pierson, Emma and Rabbany, Reihaneh and Salganik, Matthew and Seamans, Robert and Su, Yu and Tramèr, Florian and Hashimoto, Tatsunori and Narayanan, Arvind and Tetlock, Philip E. and Karger, Ezra},
    title = {The Longitudinal Expert AI Panel: Understanding Expert Views on AI Capabilities, Adoption, and Impact},
    institution = {Forecasting Research Institute},
    type = {Working paper},
    number = {5},
    url = {https://leap.forecastingresearch.org/reports/wave6},
    urldate = {2026-05-01},
    year = {2025}
  }