Wave 3: Broad Adoption of AI

Wave 3 asked panelists to forecast AI investment, AI use in jobs, the performance gap between closed- and open-weight models, and barriers to adoption.

First released on:
10 November, 2025

The following report summarizes responses from 263 experts, 56 superforecasters, and 998 members of the public, collected between Sep 22 and Oct 13, 2025. The expert respondents comprised 60 computer scientists, 54 industry professionals, 59 economists, and 90 research staff at policy think tanks.

Our wider website contains more information about LEAP, our Panel, and our Methodology, as well as reports from other waves.

Questions

  • AI Investment: What will be the global private investment (in billion USD) in AI in the following years?

  • Generative AI Use Intensity: What percent of work hours in the U.S. at the following dates will be estimated as assisted by generative AI, according to a future iteration of the St. Louis Fed study or a similar study selected by an FRI-appointed expert panel?

  • Open vs Proprietary Polarity: What will be the mean benchmark performance of the best closed-weight AI models and the top open-weight AI models on the following set of benchmarks by the following resolution dates?

  • AI Companions: What proportion of U.S. adults will self-report using AI for companionship at least once daily by the following resolution dates?

  • Barriers to Adoption, Part II: By the end of 2030, what percent of LEAP expert panelists will say that each of the following factors has significantly slowed AI adoption relative to popular expectations around AI adoption progress in 2025 in the general economy?

For full question details and resolution criteria, see below.

Results

In this section, we present each question, and summarize the forecasts made and the reasoning underlying those forecasts. More concretely, we present (1) background material, historical baselines, and resolution criteria; (2) graphs, results summaries, and results tables; (3) rationale analyses and rationale examples. In the first three waves, experts and superforecasters wrote over 600,000 words supporting their beliefs. We analyze these rationales alongside predictions to provide significantly more context on why experts believe what they believe, and the drivers of disagreement, than the forecasts alone.

AI Investment

Question. What will be the global private investment (in billion USD) in AI in the following years?

Results. The median expert predicts 200 billion USD in private investment in AI in 2027,[1] and 260 billion USD in 2030,[2] almost double the ~130 billion today.[3] 50% of experts believe that investment in 2030 will be between 196–400 billion USD, with 25% of experts on either side of this interval. The top decile believes that investment will be greater than 750 billion USD in the median scenario. Experts and superforecasters largely predict similarly, whereas the general public expects substantially less global private investment in AI: 160[4] vs 200 billion USD in 2027, and 183[5] vs 260 billion USD in 2030.

AI Investment. The figure above shows the median 50th percentile (as well as 25th and 75th percentiles when applicable) forecasts by participant group.
AI Investment. The figure above shows pooled probability distributions. We estimate each forecaster’s full probability distribution from their 25th, 50th, and 75th percentile forecasts by fitting the cumulative density function of an appropriate distribution (e.g., a beta or gamma distribution) to the observed quantiles using nonlinear least squares. We then sample from these fitted distributions and plot the aggregated distribution for each forecaster category. The 75th percentile of this final distribution represents the value that forecasters in aggregate believe there’s a 25% chance the true value exceeds.
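The pooling procedure in the caption can be sketched as follows. This is our reconstruction, not FRI's actual code: the gamma family, the optimizer settings, and the example forecast triples are illustrative assumptions.

```python
# Sketch of quantile-based distribution fitting and pooling (our reconstruction;
# the gamma family, optimizer settings, and example forecasts are assumptions).
import numpy as np
from scipy import optimize, stats

def fit_gamma_to_quantiles(q25, q50, q75):
    """Fit a gamma distribution whose CDF passes near the three elicited quantiles."""
    probs = np.array([0.25, 0.50, 0.75])
    values = np.array([q25, q50, q75], dtype=float)

    def residuals(params):
        shape, scale = params
        # Difference between the candidate CDF at the forecasts and the target probabilities.
        return stats.gamma.cdf(values, a=shape, scale=scale) - probs

    # Crude initial guess: shape 2, scale chosen so the mean matches the median.
    fit = optimize.least_squares(residuals, x0=[2.0, q50 / 2.0],
                                 bounds=([1e-6, 1e-6], [np.inf, np.inf]))
    return stats.gamma(a=fit.x[0], scale=fit.x[1])

# Pool across forecasters: sample from each fitted distribution, then aggregate.
rng = np.random.default_rng(0)
forecasts = [(150, 200, 260), (175, 260, 400)]  # hypothetical (q25, q50, q75) triples
samples = np.concatenate([fit_gamma_to_quantiles(*f).rvs(10_000, random_state=rng)
                          for f in forecasts])
# Value the pooled forecast assigns a 25% chance of being exceeded:
pooled_p75 = np.percentile(samples, 75)
```

The 75th percentile of the pooled sample is what the report's captions describe: the value forecasters in aggregate believe has a 25% chance of being exceeded.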

Rationale analysis:

  • Trend extrapolation: Most high-forecast respondents extrapolate from historical trends showing over 30% compound annual growth from 2013-2024 and believe this momentum can continue. One argues, “$1T [trillion] is not very much compared to the total pool of investable assets.” Another notes that “the strong rebound to ~$130 billion in 2024 is critical. It occurred despite higher interest rates, signaling powerful, non-speculative belief in the transformative potential of generative AI.” But many low-forecast respondents worry about the AI bubble bursting, with one forecaster noting “both Deutsche Bank and Bain & Co. have just warned that the current AI boom is not sustainable,” (Edwards 2025) and another likening the current situation to “the dot com bubble in 2000.”
  • Enterprise adoption timelines: High-forecast respondents frequently note that “AI adoption is still in its early stages across many industries, suggesting there is substantial room for further expansion.” One forecaster quotes a 2025 McKinsey report: “Over the next three years, 92 percent of companies plan to increase their AI investments. But while nearly all companies are investing in AI, only 1 percent of leaders call their companies 'mature' on the deployment spectrum.” Some low-forecast respondents, however, argue that “the uptake for commercial purposes is still fairly slow,” and worry that productivity gains may not materialize quickly enough to justify high levels of investment. One notes, “Anthropic CEO's forecast of 90% of coding in the USA done by AI 'within six months' has been a fantastic dud” (Council on Foreign Relations 2025).
  • Economic downturn: Although rarely emphasized by high-forecast respondents, pessimists often stress macroeconomic risks, with one noting “The NBER [National Bureau of Economic Research] lists 14 recessions since (but not including) the Big One in 1929,” (Federal Reserve Bank of St. Louis 2025) and from that calculates that “the chances of an economic contraction through 2027 and 2030 are 33% and 61% respectively, assuming a Poisson [i.e., random] process.”
  • Market evolution: High-forecast respondents tend to expect continued startup ecosystem growth, believing, “AI start ups & companies are staying in the private market for longer as it's much easier to get funding.” One notes that OpenAI has raised billions this year in the private market and that “these companies can also remain more agile to compete against a very fast changing market.” But some low-forecast respondents anticipate consolidation, with one arguing that “some leading companies will raise less private capital as they mature, merge, or turn to other funding sources.”
  • Investment sources: Some high-forecast respondents emphasize international expansion potential: “I suspect the number will be much higher than forecast as China's economy matures and begins to lift economies throughout the Asian region, including India. U.S. AI is just the tip of the iceberg.” Low-forecast respondents tend to express skepticism, with one noting that “investment is highly concentrated in the US,” and that “I don't expect the rest of the world to pick up the slack.”
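The Poisson recession calculation quoted in the “Economic downturn” bullet above can be reproduced as a short sketch. The 1929–2025 observation window and the horizon lengths are our assumptions (the forecaster's exact inputs are not stated), which likely explains why these figures land somewhat below the quoted 33% and 61%.

```python
import math

# Sketch of the forecaster's Poisson reasoning (our reconstruction; the
# observation window and horizon lengths are assumptions, not stated in the text).
recessions = 14                    # NBER recessions since (but not including) 1929
window_years = 2025 - 1929         # 96-year observation window
rate = recessions / window_years   # ~0.146 recessions per year

def p_at_least_one(horizon_years):
    """P(at least one recession within the horizon) under a homogeneous Poisson process."""
    return 1 - math.exp(-rate * horizon_years)

p_2027 = p_at_least_one(2.25)  # late 2025 through end of 2027, roughly 0.28
p_2030 = p_at_least_one(5.25)  # late 2025 through end of 2030, roughly 0.53
```

A shorter window (e.g., counting only post-1945 recessions) raises the implied rate and moves the results closer to the forecaster's quoted probabilities.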

Generative AI Use Intensity

Question. What percent of work hours in the U.S. at the following dates will be estimated as assisted by generative AI, according to a future iteration of the St. Louis Fed study or a similar study selected by an FRI-appointed expert panel?

Results. Experts predict that 4% of 2025 work hours in the U.S. will be assisted by generative AI,[8] 8% in 2027,[9] and 18% in 2030.[10] That’s 9 times the 2% estimated for Sep 2024. The top quartile of experts believe that more than 30% of work hours will be assisted by generative AI in 2030, and the top decile believe that more than 40% will be. Experts and superforecasters predict similarly, whereas the public predicts much less progress in the medium term: 10% by 2030,[11] almost half that of experts.

Generative AI Use Intensity. The figure above shows the median 50th percentile (as well as 25th and 75th percentiles when applicable) forecasts by participant group.
Generative AI Use Intensity. The figure above shows pooled probability distributions. We estimate each forecaster’s full probability distribution from their 25th, 50th, and 75th percentile forecasts by fitting the cumulative density function of an appropriate distribution (e.g., a beta or gamma distribution) to the observed quantiles using nonlinear least squares. We then sample from these fitted distributions and plot the aggregated distribution for each forecaster category. The 75th percentile of this final distribution represents the value that forecasters in aggregate believe there’s a 25% chance the true value exceeds.

Rationale analysis:

  • Pace of enterprise adoption: Many high-forecast respondents emphasize that fast distribution channels—through existing hardware like PCs and phones, and familiar software from Microsoft, Google, and Adobe—will accelerate enterprise adoption: “Tools are more and more integrated into everyday software products...it’s shipping by default in editors, docs, email, calendars, crm [customer relationship management software], helpdesk [internal support software]... you don't ‘go to AI’, your software already has AI.” Many high-forecast respondents also believe that competitive pressure will necessitate quick adoption: “Companies that don't adapt will go out of business.” Low-forecast respondents frequently point to “institutional and bureaucratic barriers,” and argue that “these changes are mediated by human organizations changing how they work. Most organizations are not that fast.”
  • Use cases: High-forecast respondents often highlight established use cases for routine writing, computational tasks, coding, report preparation, bookkeeping, graphic design, tax preparation, and legal briefing. They argue that “the primary long-term driver will be the shift from sporadic, task-specific use to continuous, deeply integrated AI assistance within core software platforms.” Low-forecast respondents tend to acknowledge the potential but stress that there are “many jobs that simply do not need the use of GenAI, [for example] clerks, bartenders, plumbers, etc.,” and that this creates a natural ceiling on adoption rates.
  • Impact on labor market: Forecasters are divided on whether AI will predominantly assist or replace human workers. Most think an expansion of AI assistance is likely, but others argue that in many cases, “AI would eliminate rather than assist with jobs,” and that when AI did assist, it would do so quickly, freeing up the human to do non-AI work—in which case the measured use of AI could flatten or decline, even as the impact of AI use was rising.
  • Capability requirements: High-forecast respondents tend to believe current generative AI capabilities are sufficient for substantial workplace assistance: “I don't think AI tech improvement is needed here…This is just about how quickly tech can get integrated into organizations.” Many pessimists disagree. One notes, “I'm bullish on AI as a whole but less persuaded by the current models.”
  • Generational changes: Several high-forecast respondents highlight that by the end of 2030, “more people who frequently used generative AI in their academic career will have entered the workforce and these younger people are often the ones spending a lot of time on repetitive deliverables.” They see this demographic shift as a key driver, while pessimists tend to emphasize that the vast majority of laborers entered the workforce prior to the advent of generative AI and are still “used to doing their normal functions without AI systems.”
  • Pace of blue-collar adoption: A notable split exists on whether AI will penetrate manual labor roles. Low-forecast respondents frequently emphasize that manual labor is a key bottleneck given that substantial portions of the workforce perform tasks “inherently unsuitable for generative AI assistance.” But several high-forecast respondents push back on that notion, arguing that “AI can be integrated into practically any form of decision-making,” for example, “construction workers can use AI assistants for safety checks, logistics planning, or instructional support. Retail and service workers may rely on AI-powered scheduling, training, or customer-facing chat systems. Thus, penetration will broaden across the economy, not remain confined to office work.”

Open vs Proprietary Polarity

Question. What will be the mean benchmark performance of the best closed-weight AI models and the top open-weight AI models on the following set of benchmarks by the following resolution dates?

Results. Experts predict a ~15% performance gap between open-weight and proprietary models across all time horizons; superforecasters predict similarly. The general public predicts substantially less progress overall (48% vs experts’ 70% benchmark performance in 2030), as well as slightly smaller gaps in performance: ~10% in 2025 and 2027, and 8% by 2030—about half that of experts.[12]

Open vs Proprietary Polarity. The figure above shows the distribution of forecasts by participant group, illustrating the median (50th percentile) and interquartile range (25th–75th percentiles) of each forecast.

Rationale analysis:

  • Capital expenditure advantage: Most high-polarity respondents emphasize that jumps in model capabilities increasingly require massive amounts of capital to pay for the high cost of compute, proprietary data, the best tool and retrieval stacks, and top-tier research teams. As a result, one forecaster concludes, “We should expect closed-weight models with heavy capital expenditure to perform particularly well at leading-edge tasks.” Low-polarity respondents often argue that capital-intensive techniques developed by closed labs will be adopted by open-source models. “We've seen this in the past,” notes one forecaster, adding “everything from telegraphy to computers (ENIAC onwards) to the internet, in which what starts as a more elite and constrained technology migrates widely into open knowledge.” Other low-polarity respondents believe that in lieu of massive capex, “aggressive community efforts” that result in rapid iteration, distributed innovation, and collaborative scaling will help open-source models close the gap, and that “open-weight progress will be [also] accelerated by expanded access to high-quality synthetic data.”
  • Trend extrapolation: High-polarity respondents often point to historical trends that show a persistent gap between closed and open model performance: “Open-model performance has tended to lag closed-model performance by anywhere between about 6 and 22 months,” wrote one. Others estimated similar lags. Low-polarity respondents challenge this pattern, pointing to recent convergence: “According to Stanford's 2025 AI Index Report, the gap between closed and open models narrowed from 8.04% to 1.70% in just one year [on the Chatbot Arena Leaderboard]” (Stanford University Human Centered Artificial Intelligence 2025).
  • Benchmark saturation: Many low-polarity respondents believe convergence could result from the four specific benchmarks becoming saturated prior to the end of 2030. SWE-bench was thought to be particularly susceptible: “I find it likely that the current version of SWE-Bench is fully saturated by 2027 by both closed and open models.” Many also thought the other benchmarks were also likely to saturate: “Eventually, benchmarks saturate (e.g., see what happened to GPQA),” wrote one forecaster, with another arguing “there is very little chance a benchmark with >1% score today is not completely saturated in 2030.” High-polarity respondents typically either do not view convergence due to benchmark saturation as likely, or focus less on the specific benchmarks than on polarity in general.
  • Architectural breakthroughs: Most high-polarity respondents believe proprietary labs have advantages in breakthrough development. One notes that “a major architectural breakthrough (e.g., a successor to the Transformer), kept proprietary, could dramatically widen the gap.” As with the capex gap, low-polarity respondents tend to view rapid iteration, distributed innovation, and collaborative scaling as ways for open-source models to remain competitive. One points to the “proprietary and non-proprietary development of the human genome project” as an apt historical parallel.
  • Benchmark focus: Some high-polarity respondents believe that FrontierMath and ARC-AGI II, lacking clear practical utility, will be more of a focus of closed model developers: “Math to impress the general public, ARC-AGI for the specialists and the VCs.” Several low-polarity respondents, however, come to the opposite conclusion: “FrontierMath and ARC-AGI II don't have obvious economic utility, so they may become more of a focus of academics, etc. using open source models.”

AI Companions

Question. What proportion of U.S. adults will self-report using AI for companionship at least once daily by the following resolution dates?

Results. Experts predict that 10% of U.S. adults will self-report using AI for companionship at least once per day in 2027,[13] up from 6% in July 2025. That number increases to 15% of adults in 2030,[14] and 30% in 2040.[15] By 2040, the bottom quartile of experts believe that less than 16% of people will self-report using AI for companionship daily, whereas the top quartile believe this figure will be more than 40%;[16] the top decile believe that more than 60% will use AI for companionship daily. While experts and superforecasters predict similarly, the general public predicts substantially less adoption by 2040: 20% vs experts’ 30%.[17]

AI Companions. The figure above shows the median 50th percentile (as well as 25th and 75th percentiles when applicable) forecasts by participant group.
AI Companions. The figure above shows pooled probability distributions. We estimate each forecaster’s full probability distribution from their 25th, 50th, and 75th percentile forecasts by fitting the cumulative density function of an appropriate distribution (e.g., a beta or gamma distribution) to the observed quantiles using nonlinear least squares. We then sample from these fitted distributions and plot the aggregated distribution for each forecaster category. The 75th percentile of this final distribution represents the value that forecasters in aggregate believe there’s a 25% chance the true value exceeds.

Rationale analysis:

  • Loneliness: Many high-forecast respondents project a substantial expansion of use, in large part due to increasing loneliness. One notes that “the U.S. Surgeon General declar[ed] loneliness an epidemic in 2023, with about half of U.S. adults experiencing measurable levels of loneliness.” A few low-forecast respondents emphasize lower saturation limits. One writes, “About a quarter of U.S. adults go to therapy. If that's the market size, then I expect AI to eventually saturate [at] that.”[18]
  • Capabilities: Many high-forecast respondents believe AI capabilities are likely to improve dramatically, making companions more “sophisticated, emotionally intelligent, and capable of forming deeper connections with users,” and that “personalized AI companions that learn and adapt to individual users' preferences and needs will become more common.” Low-forecast respondents tend to argue that AI is unlikely to be able to replicate genuine human connection: “Nothing can replace real human interactions, and the vast majority of the population will be reluctant to have emotional interactions with AI.” One forecaster argues that “most people would find such companionship unfulfilling, perhaps even viewing reliance on it as a kind of failure.”
  • Technological integration: High-forecast respondents frequently cite integration with smartphone apps, wearable devices, and social media platforms as a likely driver of widespread use, even among those not searching for companionship. As one forecaster observed, “I have Grok in my car, Alexa in my kitchen, and Meta in my glasses and it's 2025. I do not use them for companionship, but I do miss them when not available.” Another emphasized that “ambient access through devices turns companionship into a series of micro-interactions throughout the day.” Low-forecast respondents often question whether the current technology can provide genuine companionship, with one arguing “it will not be a substitute for human interaction unless or until AIs achieve human levels of autonomy.”
  • Generational adoption patterns: Both sides acknowledge strong generational effects, but interpret implications differently. High-forecast respondents tend to see use among young adults (25% of 18-29 year-olds have tried AI companionship) as indicative of future mainstream adoption. One notes, “Young teens have much higher usage patterns for companionship than adults, hence giving some early indication of usage by adults in future years.” Another thinks the proportion “will increase rapidly as children grow to adulthood accepting these conversations as natural, having known interactive systems like Siri and Alexa for all of their conscious lives.” Some low-forecast respondents acknowledge youth adoption but emphasize resistance from older demographics, anticipating “generational resistance from many older adults” and that the “novelty will wear off.”
  • Regulatory and social acceptance: “Government regulation is unlikely,” is a common view among high-forecast respondents. Relatedly, they anticipate declining social stigma as AI companionship becomes normalized through integration with existing platforms. Several low-forecast respondents emphasize the potential for “persistent social/cultural resistance” that could lead to regulatory backlash. One notes, “Recent cases of suicides, possibly caused by AI companionship…may reduce trust and adoption. I assume that there will be more regulation limiting how people can/should use AI companion tools.”

Barriers to Adoption, Part II

Question. By the end of 2030, what percent of LEAP expert panelists will say that each of the following factors has significantly slowed AI adoption relative to popular expectations around AI adoption progress in 2025 in the general economy?

Results. Experts predict that AI literacy,[19] social-cultural anomie,[20] use cases,[21] and costs[22] are not likely to be judged to have significantly slowed AI adoption by 2030 relative to popular expectations (medians 20%–25%). Data quality,[23] regulations,[24] and cultural resistance[25] are judged slightly more serious (medians 30%–35%), and integration[26] and unreliability[27] are judged the most likely barriers to adoption, at 40%. There’s no consensus that any particular barrier to adoption will turn out to be significant. Experts and superforecasters predict similarly. The general public also predicts similarly, except for integration (20%[28] vs experts’ 40%) and unreliability (25%[29] vs experts’ 40%).

Barriers to Adoption, Part II. The figure above shows the distribution of forecasts by participant group, illustrating the median (50th percentile) and interquartile range (25th–75th percentiles) of each forecast.

Rationale analysis:

  • General: Forecasters widely disagree about which AI adoption barriers will persist or diminish by 2030. Low-forecast respondents tend to believe barriers will fade as capabilities improve, with one forecaster pointing to “the rapid and continuing surge of AI adoption after ChatGPT [as] a sign that improving capabilities can wash away a lot of these barriers.” High-forecast respondents largely emphasize structural challenges, arguing that many of the issues “have no known perfect solutions” and that the pace of model deployment will likely “outstrip governance, integration capacity, and human skill adaptation.” One concludes, “The next 5 years are likely to bring a reality check to the current hyped expectations.” A core divide is whether solutions exist: some view barriers as engineering problems while others see fundamental limitations requiring “entirely new architectures.”
  • Lack of reliability: Most low-forecast respondents believe reliability issues are rapidly improving through technical advances. They note that “hallucination has already dropped a lot with GPT 5 and it will keep going” and expect that “productized retrieval, tool use, and verifiers [will] cut hallucinations enough that only a minority will still call them serious.” Some also believe that “by 2030, the AI community, including users, will have a better understanding of AI limitations, and AI applications will be designed to provide a known level of accuracy that is appropriate for that application.” High-forecast respondents frequently emphasize persistent fundamental limitations: “Even with retrieval-augmented generation and tool use, hallucinations will remain a widely recognized constraint especially in high-stakes settings like medicine and law.” One notes that “research shows these issues have a statistical lower bound making them an inherent limitation rather than an occasional glitch.” Some high-forecast respondents also point to a creative-accuracy tradeoff: “For AI to be creative, it needs to hallucinate.”
  • Cultural resistance: Low-forecast respondents tend to think economic incentives will overcome resistance as “soft social factors do very little if there is a good economic case for use” and expect resistance to fade with demonstrated utility: “If the product is good/addictive enough, then cultural resistance will be muted.” Some high-forecast respondents, however, focus on the potential for labor unrest to spark resistance: “I think [cultural resistance] will be very large as job losses start appearing,” notes one, and another foresees “a societal push to reward human work and avoid AI produced content… like we have already seen in the world of publishing.” Other forecasters suspect that the “breakdown of trust and norms caused by rapid AI shifts will fuel confusion and resistance.”
  • Restrictive regulations: Many low-forecast respondents believe competitive pressures will limit restrictive regulations. They note “regulation doesn't seem to be a priority now for the world, quite the opposite” and that “the competition is fierce…It seems both the U.S. and China are set to facilitate their companies to freely compete.” One argues, “Judging by how tech illiterate elected politicians are, they will be too late to put meaningful regulations in place that would hinder its adoption.” But many high-forecast respondents do expect significant regulatory barriers, especially internationally. One predicts “regulation might not be a huge problem in the U.S. but looking back in a few years, I believe most panelists would agree that the Europeans will have missed another technological change because of regulations.” Another argues that “heterogeneous, evolving rules across jurisdictions and sectoral compliance burdens will create real deployment frictions.”
  • Cost issues: Low-forecast respondents tend to suspect “the costs of deploying AI are going to continue to fall, either from the investments in energy supply, more efficient training and inference, or the reorientation towards smaller models.” One points to the potential for small modular [nuclear] reactors to change the calculus. High-forecast respondents often stress “the current rate of capex is unsustainable” and emphasize that “spiraling infrastructure and energy demands make large-scale deployment economically inefficient outside of the biggest firms.”
  • Data quality issues: Some low-forecast respondents think “data quality issues can be overcome with clever synthetic data generation” and note there is “ample…investment and [that] startups are working on improving data quality.” Several high-forecast respondents argue that “access to clean, unbiased, legally usable, and domain-specific datasets remains the single greatest obstacle to scalable adoption” and that this is especially the case in key fields like healthcare, finance and defense. They also worry about data degradation: “Without proper data management AI models will start cannibalising their own slop.”
  • Integration challenges: Most low-forecast respondents believe competitive pressures will drive successful integration: “If future profits/relevance are at stake, corporations will make the investments to keep up--no one wants to be the next Kodak.” Many also expect “tooling, APIs, and education will improve enough that most organizations can technically adopt AI once legal and data issues are resolved” and that eventually AI may be able to assist in the integration process so that it occurs “almost seamlessly.” High-forecast respondents commonly emphasize the complexity of organizational change. One observes that “changing existing business processes is generally hard (ERP projects have a notoriously high failure rate, for example) and I struggle to think of an organization that has successfully moved to predominantly using AI.” Another observes that “large companies have enormous inertia, and integrating AI at a corporate level requires deep workflow and process redesign.”
  • Not enough use cases: Many low-forecast respondents expect rapid expansion of viable applications and note that “the amount of use cases is already large and growing.” High-forecast respondents tend to point to ROI concerns and argue the issue is “a shortage of economically viable and trusted applications that significantly outperform existing methods.” Many also note constraints from robotics limitations: “With progress in robotics being slower than for intellectual work…I could see this [barrier] being somewhat significant.”
  • Lack of AI literacy: Low-forecast respondents typically expect to see rapid skill development materialize, driven by necessity and user-friendly, natural-language interfaces. They note “AI illiteracy [is] already fading (look at those adoption patterns)” and that “already it is easily accessed through search engines” and that “organisations are working hard to upskill staff.” Some high-forecast respondents, however, stress that “workforce training takes time” and highlight the likelihood of persistent skill gaps. One argues that “skills shortages are going to become worse due to i) demographic change, ii) restrictive immigration policy, iii) cuts in research and science funding.”
  • Social-cultural anomie: Most low-forecast respondents believe economic benefits will ultimately override concerns. They argue that “anomie and cultural resistance are usually not long-lived during technological shifts.” One notes, “I cannot think of any technological development that has been hindered by it,” and another, “TV and cars and many automations made humans lazy but that didn't stop the industry.” Some high-forecast respondents highlight the potential for growing environmental and social concerns to create resistance. One predicts: “Anomie will be a huge issue as people realize just how much electricity is needed for AI and the environmental cost of that.”

Footnotes

  1. Raw data: IQR on the 50th percentile was (169.0–268.3); median 25th and 75th percentile forecasts were 150.0 and 260.0 respectively.

  2. Raw data: IQR on the 50th percentile was (196.3–400.0); median 25th and 75th percentile forecasts were 175.0 and 400.0 respectively.

  3. Link provided to participants: https://ourworldindata.org/grapher/corporate-investment-in-artificial-intelligence-by-type?country=~Private+investment#sources-and-processing

  4. Raw data: IQR on the 50th percentile was (138.0–210.0); median 25th and 75th percentile forecasts were 139.3 and 182.7 respectively.

  5. Raw data: IQR on the 50th percentile was (150.0–300.0); median 25th and 75th percentile forecasts were 155.0 and 220.0 respectively.

  6. In some cases, the "aggregate" refers to the mean; in others, the median is used, depending on which is more appropriate for the distribution of responses.

  7. We occasionally elicit participants' quantile forecasts (estimates of specific percentiles of a continuous outcome) to illustrate the range and uncertainty of their predictions.

  8. Raw data: IQR on the 50th percentile was (3.0%–5.1%); median 25th and 75th percentile forecasts were 2.5% and 6.0% respectively.

  9. Raw data: IQR on the 50th percentile was (5.0%–15.0%); median 25th and 75th percentile forecasts were 5.0% and 14.0% respectively.

  10. Raw data: IQR on the 50th percentile was (9.0%–30.0%); median 25th and 75th percentile forecasts were 9.0% and 28.4% respectively.

  11. Raw data: IQR on the 50th percentile was (5.0%–23.0%); median 25th and 75th percentile forecasts were 6.1% and 14.0% respectively.

  12. Public medians: 31% vs 20% by 2025, 40% vs 31% by 2027, 48% vs 40% by 2030.

  13. Raw data: IQR on the 50th percentile was (8.0%–15.0%); median 25th and 75th percentile forecasts were 7.0% and 15.0% respectively.

  14. Raw data: IQR on the 50th percentile was (10.9%–25.0%); median 25th and 75th percentile forecasts were 10.0% and 25.0% respectively.

  15. Raw data: IQR on the 50th percentile was (16.0%–40.0%); median 25th and 75th percentile forecasts were 15.0% and 45.0% respectively.

  16. Raw data: IQR on the 50th percentile was (16.0%–40.0%); median 25th and 75th percentile forecasts were 15.0% and 45.0% respectively.

  17. Raw data: IQR on the 50th percentile was (11.0%–32.3%); median 25th and 75th percentile forecasts were 15.0% and 25.0% respectively.

  18. This claim may refer to the ~23% of U.S. adults who, according to a 2024 KFF (formerly Kaiser Family Foundation) study, “say they received mental health counseling and/or prescription medication for mental health concerns in the last year.” See Panchal and Lo (2024).

  19. Raw data: IQR on the 50th percentile was (10.0%–50.0%).

  20. Raw data: IQR on the 50th percentile was (10.0%–30.0%).

  21. Raw data: IQR on the 50th percentile was (10.0%–40.0%).

  22. Raw data: IQR on the 50th percentile was (10.0%–50.0%).

  23. Raw data: IQR on the 50th percentile was (15.0%–60.0%).

  24. Raw data: IQR on the 50th percentile was (20.0%–60.0%).

  25. Raw data: IQR on the 50th percentile was (15.0%–45.0%).

  26. Raw data: IQR on the 50th percentile was (15.0%–60.0%).

  27. Raw data: IQR on the 50th percentile was (20.0%–60.0%).

  28. Raw data: IQR on the 50th percentile was (10.0%–50.0%).

  29. Raw data: IQR on the 50th percentile was (12.0%–55.0%).

Cite Our Work

Please use one of the following citation formats to cite this work.

APA Format

Murphy, C., Rosenberg, J., Canedy, J., Jacobs, Z., Flechner, N., Britt, R., Pan, A., Rogers-Smith, C., Mayland, D., Buffington, C., Kučinskas, S., Coston, A., Kerner, H., Pierson, E., Rabbany, R., Salganik, M., Seamans, R., Su, Y., Tramèr, F., Hashimoto, T., Narayanan, A., Tetlock, P. E., & Karger, E. (2025). The Longitudinal Expert AI Panel: Understanding Expert Views on AI Capabilities, Adoption, and Impact (Working paper No. 5). Forecasting Research Institute. Retrieved December 27, 2025, from https://leap.forecastingresearch.org/reports/wave3

BibTeX

@techreport{leap2025,
    author = {Murphy, Connacher and Rosenberg, Josh and Canedy, Jordan and Jacobs, Zach and Flechner, Nadja and Britt, Rhiannon and Pan, Alexa and Rogers-Smith, Charlie and Mayland, Dan and Buffington, Cathy and Kučinskas, Simas and Coston, Amanda and Kerner, Hannah and Pierson, Emma and Rabbany, Reihaneh and Salganik, Matthew and Seamans, Robert and Su, Yu and Tramèr, Florian and Hashimoto, Tatsunori and Narayanan, Arvind and Tetlock, Philip E. and Karger, Ezra},
    title = {The Longitudinal Expert AI Panel: Understanding Expert Views on AI Capabilities, Adoption, and Impact},
    institution = {Forecasting Research Institute},
    type = {Working paper},
    number = {5},
    url = {https://leap.forecastingresearch.org/reports/wave3},
    urldate = {2025-12-27},
    year = {2025}
}