The AI Moment: Federal Reserve Analysis of Artificial Intelligence, Productivity, and Economic Policy

📌 Key Takeaways

  • Transformations Take Time: Like electricity’s 100-year journey from discovery to productivity gains, AI’s macro impact may take decades to materialize
  • Adoption ≠ Transformation: Most firms use AI within existing processes rather than fundamentally redesigning operations—ideas matter more than tools
  • Productivity Evidence Limited: Macro studies show minimal AI productivity impact despite widespread adoption and micro-level efficiency gains
  • Fed Policy Framework Evolving: The 1990s tech boom offers lessons for patient monetary policy during technological transitions
  • Business Intelligence Crucial: Disaggregated data and direct business outreach provide better insights than aggregate statistics during transformation periods

The Historical Arc—Why Transformations Take Time

In February 2026, Federal Reserve Bank of San Francisco President Mary Daly delivered a masterful analysis of artificial intelligence’s economic potential by looking backward—not to the recent dot-com boom or even the computer revolution, but to the century-long transformation enabled by electricity. Her message to business leaders and policymakers is both sobering and encouraging: genuine technological transformation takes far longer than the hype cycle suggests, but when it arrives, the impact can be profound and sustained.

The electricity parallel is instructive precisely because it spans the complete lifecycle of what economists call a “general-purpose technology” (GPT). Research by economic historian Paul David traces this arc from Michael Faraday and Joseph Henry’s discoveries in electromagnetic induction in the early 1830s through the productivity revolution that finally emerged in the 1920s–1940s. That’s roughly 100 years from foundational research to measurable economic transformation.

The timeline matters for understanding AI because it challenges the Silicon Valley narrative of instant disruption. Yes, ChatGPT’s November 2022 launch felt like a watershed moment—suddenly, millions of people had access to sophisticated AI capabilities. But the underlying technologies date back to the 1930s–1950s work of Alan Turing, the 1980s development of neural networks, and decades of machine learning research. By Daly’s framework, we’re not at the beginning of AI’s economic impact but potentially in the middle phase of a much longer transformation.

For business leaders, this historical perspective offers both caution and opportunity. The caution: don’t expect AI investments to generate transformative returns immediately. Like digital transformation more broadly, AI adoption requires patience, experimentation, and willingness to invest before clear returns emerge. The opportunity: firms that position themselves strategically during the adoption phase will be better prepared when the transformation phase arrives.

AI’s Parallel Timeline—From Turing to ChatGPT

Daly’s analysis places current AI developments within a 70+ year continuum that began with foundational theoretical work in the mid-20th century. Alan Turing’s 1950 paper “Computing Machinery and Intelligence” posed the question that still drives AI development: “Can machines think?” The subsequent decades saw the development of expert systems in the 1960s–70s, neural networks in the 1980s, and machine learning breakthroughs in the 1990s–2000s.

This extended timeline helps explain why ChatGPT’s arrival felt both sudden and inevitable. The underlying transformer architecture was published by Google researchers in 2017, building on decades of natural language processing research. The breakthrough wasn’t a single invention but the convergence of sufficient computing power, training data, and algorithmic refinements—much like how electricity required not just understanding of electric current but also developments in power generation, transmission, and motor design.

From the Fed’s perspective, this historical framing has important implications for economic analysis. Current productivity research focuses on AI’s immediate impacts, but if the electricity parallel holds, the most significant effects may not emerge until firms fundamentally reorganize their operations around AI capabilities—a process that could take years or decades.

The Adoption Surge—What Businesses Are Actually Doing with AI

The Federal Reserve’s business outreach through the Twelfth District reveals widespread AI experimentation across industries and company sizes. Agricultural companies are using AI to develop new crop varieties, IT and finance teams are scaling routine tasks, healthcare organizations are automating administrative processes, and marketing departments are leveraging AI for consumer research and content creation.

The adoption data is impressive. According to McKinsey’s 2025 survey on AI adoption, businesses across size categories are investing substantially in AI capabilities. The San Francisco Fed’s own EmergingTech Economic Research Network (EERN), launched in 2024, documents how enthusiasm has converted into real capital deployment and operational changes.

However, Daly’s analysis draws a crucial distinction between adoption and transformation. Most current AI applications involve automating existing processes—using AI to speed up document review, enhance customer service responses, or optimize supply chain logistics. While valuable, these applications represent what economists call “factor-augmenting” improvements: they make existing processes more efficient without fundamentally changing how value is created.

The Productivity Puzzle—Why Macro Data Hasn’t Moved

Despite widespread AI adoption and compelling case studies of efficiency gains, aggregate productivity statistics tell a different story. Most macro-level studies, including research by Dani Rodrik, Daron Acemoglu, and Philippe Aghion, find limited evidence of AI-driven productivity growth at the national level. The Fed’s own trend productivity models, updated as recently as January 2026, do not show acceleration that can be attributed to artificial intelligence.

This disconnect between micro-level success stories and macro-level data is not unprecedented. During the early phases of the computer revolution, similar gaps existed between firm-level efficiency gains and national productivity statistics. The phenomenon has several potential explanations, each with different implications for business strategy and economic policy.

First, timing lags may be longer than anticipated. If AI is truly analogous to electricity, the transformation phase—where fundamental business reorganization occurs—may still be years away. Current productivity research may be capturing only the early adoption effects, missing the larger structural changes that drive sustained growth. Innovation research shows that breakthrough technologies often require complementary innovations and organizational changes that take time to develop and implement.

Second, measurement challenges may obscure AI’s actual impact. Productivity calculations rely on output measures that may not capture quality improvements, customer experience enhancements, or new services enabled by AI. If AI primarily improves service quality rather than quantity, traditional metrics might understate its economic contribution.
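
The measurement point can be made concrete with a toy calculation. This is an illustrative sketch with hypothetical numbers (not data from the speech or any study): if AI improves quality in ways the output deflator misses, measured output per hour can stay flat even as the quality-adjusted figure rises.

```python
# Illustrative only: hypothetical numbers showing how quality gains
# can be invisible in measured labor productivity.

def labor_productivity(real_output: float, hours: float) -> float:
    """Standard measured productivity: output per hour worked."""
    return real_output / hours

# Before AI adoption: 1,000 units of measured output, 100 hours worked.
before = labor_productivity(1000, 100)  # 10.0 units per hour

# After AI adoption: same measured output and hours, but suppose each
# unit is 20% "better" (faster service, fewer errors) in ways the
# output deflator does not capture.
after_measured = labor_productivity(1000, 100)                # still 10.0
after_quality_adjusted = labor_productivity(1000 * 1.2, 100)  # 12.0

# Measured statistics show zero productivity growth...
print(after_measured / before - 1)          # 0.0
# ...while quality-adjusted productivity rose by roughly 20%.
print(after_quality_adjusted / before - 1)
```

If AI’s gains land mostly in the quality channel, this wedge between measured and actual productivity could persist for years.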

Third, the rapid pace of AI technology development may create a perpetual adoption phase. Firms may be continuously acquiring and learning new AI capabilities rather than reaching the steady-state deployment needed for transformation. This “moving target” effect could delay the productivity payoff indefinitely if the technology evolves faster than organizations can adapt.

The Financial Sector Case Study—Incremental vs. Transformative

Daly uses the financial services industry as a detailed case study of the adoption-transformation distinction. Banks and other financial institutions have been aggressive AI adopters, deploying the technology across multiple business functions: loan application processing, document review, compliance checking, fraud detection, and customer service automation.

These applications demonstrate clear efficiency gains. AI can process loan documents faster than human underwriters, identify compliance issues more consistently, and handle routine customer inquiries around the clock. Financial technology research shows measurable improvements in processing times, accuracy rates, and operational costs.

But Daly argues that these improvements, while valuable, represent incremental enhancement rather than fundamental transformation. The lending process itself remains largely unchanged: applications are submitted, reviewed, approved or denied, and funded through traditional mechanisms. AI makes each step faster and more accurate, but doesn’t reimagine what lending could become with truly AI-native approaches.

The electricity analogy is apt here. Early industrial adoption of electric power involved simply replacing steam engines with electric motors while leaving factory layouts unchanged. The transformative impact came later, when manufacturers realized electric power enabled entirely new production architectures: distributed motors, continuous-flow processes, and flexible manufacturing systems that were impossible with centralized steam power.

Applied to financial services, a truly transformative AI approach might involve real-time risk assessment integrated with dynamic pricing, automated contract generation and negotiation, or AI-mediated peer-to-peer lending that bypasses traditional intermediaries entirely. These possibilities require not just better technology but different business models, regulatory frameworks, and customer relationships.

What Made Electricity Transformative—And What That Means for AI

The most profound insight in Daly’s analysis concerns the role of ideas versus technology in driving transformation. The scientific understanding of electricity was well-established decades before its economic impact materialized. What changed in the 1920s wasn’t the technology itself but how businesses thought about using it.

The breakthrough came with the development of unit-drive electric motors—small, efficient motors that could power individual machines rather than entire factory floors. This seemingly technical advancement enabled a conceptual revolution: manufacturers could design production processes around efficiency and workflow rather than the mechanical constraints of centralized power transmission. Economic historian Paul David’s seminal research shows that productivity gains accelerated only after this organizational transformation occurred.

For AI, the parallel suggests that technology capabilities are necessary but not sufficient for transformation. Current AI tools are impressive, but most applications involve plugging AI into existing business processes rather than reimagining those processes from first principles. The transformative moment will come when organizations develop new concepts for how work should be organized around AI capabilities.

Daly emphasizes that this conceptual shift requires creativity, experimentation, and willingness to challenge established practices. It also requires what she calls “patient capital”—investors and managers who understand that the highest returns may come from fundamental reorganization rather than incremental efficiency gains.

The Greenspan Precedent—Lessons from the 1990s Tech Boom

Daly’s analysis draws extensively on the Federal Reserve’s experience during the 1990s computer revolution, particularly Alan Greenspan’s prescient recognition that technology was driving a productivity acceleration that wasn’t yet visible in aggregate data. This historical parallel offers both tactical lessons for monetary policy and strategic insights for business leaders navigating technological transitions.

In 1995-1996, standard economic models suggested the Fed should raise interest rates to prevent the economy from overheating. Unemployment was falling, capacity utilization was rising, and traditional indicators pointed toward potential inflation. However, Greenspan challenged the conventional analysis by focusing on disaggregated data and business intelligence that suggested something fundamental was changing in the economy’s productive capacity.

The key insight was that official productivity statistics were lagging real economic changes. Businesses were reporting efficiency gains from computer technology, but these improvements weren’t yet reflected in national accounts. Greenspan’s speeches from the period describe extensive Fed outreach to business leaders to understand how technology was affecting operations, costs, and competitive dynamics.

This business intelligence proved correct. Subsequent data revisions showed that productivity acceleration had indeed begun before 1995, validating Greenspan’s decision to maintain accommodative policy despite traditional models suggesting tightening was needed. The result was the “roaring ’90s”—a period of sustained growth with low inflation that might have been truncated by premature policy tightening.

For today’s AI moment, Daly draws three specific lessons from the Greenspan experience: First, aggregate data may not capture the early phases of technological transformation. Second, business outreach and disaggregated analysis can provide leading indicators of structural change. Third, patient monetary policy that gives room for productivity-driven growth to emerge can enable sustained expansion without inflationary pressure.

Monetary Policy in an AI Era—The Fed’s Current Approach

The Federal Reserve’s current approach to AI reflects the lessons learned from the 1990s while acknowledging the uncertainty inherent in technological transitions. Daly describes a framework based on evidence-gathering, analytical humility, and policy flexibility rather than predetermined responses to AI developments.

The centerpiece of the Fed’s AI research efforts is the EmergingTech Economic Research Network (EERN), launched in 2024 by the San Francisco Fed and the Federal Reserve System Innovation Office. EERN supports research on how generative AI and other emerging technologies are shaping economic outcomes, with particular focus on productivity, labor markets, and financial stability implications.

The research approach emphasizes multiple data sources and analytical perspectives. Traditional aggregate productivity measures are supplemented with firm-level surveys, industry case studies, and direct business outreach. The San Francisco Fed has conducted specialized roundtables on AI applications in healthcare, venture capital, and product development, gathering qualitative intelligence that complements quantitative analysis.

Importantly, the Fed is not building AI-driven productivity gains into baseline economic forecasts. Daly emphasizes that extraordinary claims require extraordinary evidence, and the macro-level productivity impact remains speculative despite compelling micro-level case studies. This conservative analytical stance provides flexibility to adjust forecasts and policy as evidence accumulates without committing to specific AI-driven growth scenarios.

The monetary policy implications are nuanced. If AI does drive sustained productivity growth, the economy’s non-inflationary speed limit would rise, potentially allowing lower interest rates and faster growth without triggering inflation. However, premature accommodation based on speculative productivity gains could lead to economic imbalances if the AI transformation proves slower or smaller than anticipated.
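
The “speed limit” logic follows a standard growth-accounting approximation; as a rough sketch (the notation is mine, not the Fed’s):

```latex
% Potential (non-inflationary) output growth decomposes, to a first
% approximation, into trend productivity growth plus trend growth in
% labor input:
g_{\text{potential}} \;\approx\; g_{\text{productivity}} + g_{\text{labor}}
% If AI durably raises g_productivity, g_potential rises, so faster
% demand growth can be accommodated without inflationary pressure.
```

The policy risk Daly flags is acting on a higher assumed $g_{\text{productivity}}$ before the data confirm it.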

Public Sentiment and the Human Dimension

While Daly’s analysis focuses primarily on economic and policy dimensions, she acknowledges the crucial role of public sentiment in determining AI’s ultimate economic impact. Technological transformations require not just business adoption but broader social acceptance and workforce adaptation.

Current public opinion research reveals mixed sentiment about AI’s implications. Pew Research Center surveys show Americans express both excitement about AI’s capabilities and concern about its impact on employment, privacy, and social equity. Reuters/Ipsos polling suggests particular worry about permanent job displacement, even among workers whose jobs may be enhanced rather than replaced by AI.

Daly draws historical parallels to public reactions to previous transformative technologies. Electricity faced significant resistance when first introduced—people feared electric lighting would cause blindness, electric streetcars were seen as dangerous, and labor unions opposed electric motors in manufacturing. Automobiles faced similar skepticism, with early “red flag laws” requiring cars to be preceded by someone on foot carrying a warning flag.

These historical parallels suggest that public resistance to transformative technologies is normal and often diminishes as familiarity grows. However, the scale and speed of AI development may create unique challenges. Unlike electricity or automobiles, which required significant infrastructure investment and gradual deployment, AI capabilities can be distributed rapidly through software, potentially accelerating both adoption and disruption.

The workforce implications are particularly complex. While historical evidence suggests technological revolutions ultimately create more jobs than they destroy, the transition periods can be difficult for affected workers. Successful AI adoption will likely require substantial investment in workforce retraining and education—similar to the educational expansion that accompanied industrialization and electrification.

What Comes Next—Scenarios for AI’s Economic Impact

Daly’s analysis concludes with a framework for thinking about AI’s potential economic futures while acknowledging the fundamental uncertainty about which scenario will emerge. Rather than making specific predictions, she outlines the range of possibilities and the factors that might determine the actual outcome.

The baseline scenario treats AI as a useful tool that generates modest efficiency gains without fundamental economic transformation. In this scenario, AI augments human productivity in specific applications but doesn’t restructure how goods and services are produced. Productivity growth would improve modestly, similar to the impact of previous information technologies, but without the transformative effects seen during electrification.

The transformation scenario envisions AI as a true general-purpose technology that enables new forms of economic organization. This would involve not just better automation but fundamentally different approaches to production, distribution, and value creation. Economic research on GPTs suggests such transformations can drive decades of sustained productivity growth once the reorganization phase begins.

The key factors distinguishing these scenarios include the pace of complementary innovation (new business models and organizational forms), the scale of investment in AI-enabled restructuring, the development of supporting infrastructure and institutions, and the resolution of regulatory and social acceptance issues.

Daly emphasizes that the ultimate outcome will depend heavily on choices made by businesses, policymakers, and society more broadly. The technology itself is enabling, but not deterministic. Success in capturing AI’s transformative potential will require patience, investment, experimentation, and willingness to challenge existing approaches to work and economic organization.

For business leaders, this framework suggests focusing on capabilities and organizational flexibility rather than betting on specific AI technologies or applications. The firms that thrived during electrification were those that invested early in understanding the technology’s possibilities while maintaining adaptability as new applications and business models emerged.

The Federal Reserve’s role in this process is to provide the macroeconomic stability and policy framework that enables productive investment and experimentation while avoiding the boom-bust cycles that can derail technological transitions. Daly’s analysis suggests this requires maintaining the analytical humility to recognize what is unknown while preserving the flexibility to respond as evidence of transformation accumulates.

Frequently Asked Questions

Is AI actually boosting productivity right now?

Not yet at the macro level. Most studies of aggregate productivity growth find limited evidence of a significant AI effect. However, micro-level case studies show cost savings in specific applications like call centers, software development, and financial management. The disconnect between micro and macro evidence may reflect timing lags, measurement issues, or that current AI use is incremental rather than transformative.

How does the Federal Reserve think about AI when setting interest rates?

The Fed views AI primarily through its potential impact on productivity growth, which affects how fast the economy can grow without generating inflation. If AI significantly boosts productivity, the economy’s non-inflationary speed limit rises, potentially allowing lower interest rates. Currently, evidence doesn’t support building AI-driven productivity gains into baseline forecasts, but the Fed actively monitors through disaggregated data and business outreach.

Why does the Fed compare AI to electricity rather than the internet?

The electricity comparison spans the full lifecycle of a general-purpose technology—from scientific discovery through commercial application to economic transformation. It illustrates that even the most impactful technologies take decades to produce sustained productivity gains because transformation requires not just technology but fundamental reorganization of production, business processes, and workforce skills.

What’s the difference between AI adoption and AI transformation?

Adoption means using AI tools within existing business processes—automating loan processing steps, speeding document review, or assisting customer service. Transformation means fundamentally redesigning how a business operates with AI at its foundation—analogous to how electricity enabled entirely new factory layouts rather than simply powering existing steam-era machinery. Most firms are currently in the adoption phase.

What lessons from the 1990s technology boom apply to today’s AI moment?

Three key lessons: (1) Official productivity data may lag real economic changes—the 1990s acceleration was only visible through data revisions. (2) Business intelligence and disaggregated data are more revealing than aggregate statistics. (3) Patient monetary policy that gives room for productivity-driven growth to emerge can enable sustained expansion without inflationary pressure.
