Chapter 1: The AI Paradox and the Sandpile Solution

The AI landscape today feels like financial markets in 2007 – everyone knows something doesn’t add up, but no one can quite articulate what. We’re living in a world where multiple contradictory realities coexist, and somehow they’re all true at once.

 

In 2009, economist John Mauldin wrote about a concept from physics called “self-organized criticality.” He was explaining why financial markets behave in ways that seem both random and inevitable – why small events sometimes cascade into market crashes while other significant events produce no reaction.
 
His framework: Imagine a sandpile. You drop grains of sand onto the pile, one at a time. Most grains settle without incident. Occasionally, a single grain triggers a small cascade. But at some unpredictable moment, one grain triggers a massive avalanche that reshapes the entire pile.
 
The critical insight: You cannot predict which grain triggers the avalanche. You cannot predict when. You can only observe the system approaching criticality and know that when it tips, the transformation will be sudden.
 
AI is a sandpile.
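The dynamics Mauldin describes come from the Bak–Tang–Wiesenfeld sandpile model. A minimal simulation (a sketch, not a claim about AI itself) makes the unpredictability concrete: most grains settle quietly, while a few identical grains trigger cascades orders of magnitude larger.

```python
import random

# Minimal Bak–Tang–Wiesenfeld sandpile: drop grains on an N x N grid;
# any cell holding 4+ grains topples, sending one grain to each neighbor
# (grains falling off the edge are lost). Illustrative sketch only.
N = 20
grid = [[0] * N for _ in range(N)]

def drop_grain(grid, rng):
    """Drop one grain at a random cell; return the avalanche size in topplings."""
    n = len(grid)
    grid[rng.randrange(n)][rng.randrange(n)] += 1
    topplings = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(n):
            for j in range(n):
                if grid[i][j] >= 4:
                    grid[i][j] -= 4
                    topplings += 1
                    unstable = True
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < n and 0 <= nj < n:
                            grid[ni][nj] += 1
    return topplings

rng = random.Random(42)
sizes = [drop_grain(grid, rng) for _ in range(20000)]
quiet = sum(1 for s in sizes if s == 0)   # grains that settle without incident
big = max(sizes)                          # the largest cascade
print(f"{quiet / len(sizes):.0%} of grains caused no avalanche; largest: {big} topplings")
```

Every grain is dropped by the same rule, yet the outcomes range from nothing to grid-spanning cascades – the distribution of avalanche sizes, not any property of an individual grain, is what characterizes the system.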
 
 

Defining the AI Sandpile

 
A grain of sand is a true capability – not a 20-50% process improvement, but a transformation that delivers 10-100X return measured against implementation and usage costs¹, or does something previously impossible, or operates at a scale that fundamentally changes the economics of a business model.
 
The pile is the accumulation of all capabilities emerging across the market – from thousands of organizations, vendors, and POCs happening simultaneously.
 
The avalanche is the cascade of transformation when enough capabilities reach deployment that the economic landscape shifts overnight.
 
If AI follows sandpile dynamics, we should observe specific patterns in how organizations invest, succeed, struggle, and position themselves. Let’s test six predictions against the data.
 
 

Six Predictions and What the Data Shows

 

Prediction #1: Investment Should Accelerate Despite Uncertain Returns

In a sandpile approaching criticality, rational actors add grains not because individual grains produce returns, but because missing the avalanche is catastrophic. We should see investment accelerating even when ROI is unclear.
 
The data: 88% of organizations are increasing AI budgets (PwC 2025). 67% plan to increase GenAI investment specifically (Forrester 2024). Average GenAI spend reached $1.9M (Gartner 2025).
 
But here’s the paradox: 80%+ see no EBIT impact from these investments (McKinsey 2025). Fewer than 30% of CEOs are satisfied with ROI (Gartner 2025). And 64% of organizations invest before understanding the value they expect to capture (IBM 2025).
 
Why would rational executives do this? IBM found the answer: 64% say fear of falling behind drives their investment decisions.
 
Validated. Investment behavior matches sandpile dynamics, not traditional ROI logic.
 
 

Prediction #2: Success Rates Should Stay Flat Despite Adoption Surging

In a sandpile, adding more grains doesn’t produce proportional results. Most grains build the pile higher without triggering an avalanche. If AI follows sandpile dynamics, we should see adoption increasing without corresponding improvement in success rates.
 
The data: In 2021, roughly 50% of organizations had adopted AI, with 12% qualifying as “Achievers” (Accenture 2021). By 2024-2025, adoption reached 72-88% across studies. But success rates? McKinsey 2025 found only 6% are “high performers.” Deloitte identified 17% as “Underachievers” – organizations with high deployment (5.5 out of 10 AI types) but low outcomes (1.4 out of 17 benefits achieved).
 
More input is not producing more output. The relationship is non-linear.
 
Validated. Success rates are flat or declining despite adoption doubling.

 

Prediction #3: No Consensus Should Exist on What Works

In a sandpile, context determines outcome. The same grain stabilizes one pile and triggers an avalanche in another. If AI follows these dynamics, experts should disagree on what drives success because they’re observing different pile configurations.
 
The data: The consultancies directly contradict each other:
  • BCG recommends fewer use cases with deeper investment (BCG 2025)
  • Bain finds winners deploy MORE use cases – 4.5 vs 3.3 (Bain 2024)
  • McKinsey identifies workflow redesign as the #1 differentiator (McKinsey 2025)
  • Accenture says success comes from “combination, not sophistication” (Accenture 2024)
Meanwhile, 64% of organizations invest before understanding value (IBM 2025). Only 36% have a vision with roadmap (Bain 2024). Fewer than 20% track KPIs for their GenAI initiatives (McKinsey 2025).
 
Everyone is describing what successful piles look like after the avalanche – correlation, not causation. No one can prescribe how to become successful because the same actions produce different results in different contexts.
 
Validated. Expert disagreement is a feature of the system, not a failure of analysis.
 
 

Prediction #4: Technical Barriers Should Persist Despite Intensive Effort

In a sandpile, friction prevents premature cascade. We should see barriers persisting not because organizations aren’t trying, but because friction is functional – it prevents uncontrolled transformation before readiness.
 
The data: After two years of intensive focus:
  • 71% still can’t trust autonomous agents (Capgemini 2025)
  • 60-63% cite hallucination/accuracy concerns (Capgemini 2025, Gartner 2025)
  • Only 18-39% have governance frameworks (McKinsey 2025, IBM 2025)
Barriers evolved but didn’t resolve. In 2024, validation strategy was the top concern at 50%. By 2025, it shifted to data privacy at 67% (Capgemini 2025). Organizations address one barrier and another emerges.
 
But here’s the counterintuitive finding: High achievers report 2X more fear than low achievers (Deloitte 2025). They’re also 3X more likely to trust AI over their own intuition, and show low desire to reduce headcount combined with high training investment.
 
In sandpile terms: organizations closer to criticality (steeper piles) experience more instability. Fear isn’t a sign of failure – it’s an appropriate response to being near transformation. High achievers invest in stabilization because they recognize the instability.
 
Validated. Persistent barriers and heightened fear among leaders indicate a system approaching criticality.
 
 

Prediction #5: Layoffs Should Precede Demonstrated Impact

If organizations are positioning for anticipated avalanche rather than responding to demonstrated impact, workforce decisions should show speculation rather than evidence-based planning.
 
The data: McKinsey 2025 found a three-way split: 32% expect workforce decreases, 43% expect no change, 13% expect increases. No consensus means this is speculation, not evidence-based projection.
 
Meanwhile, 61% push AI adoption faster than their people are comfortable with (IBM 2025). But 64% acknowledge success depends on people adoption (IBM 2025). And high performers actually increase headcount while increasing output (McKinsey 2025).
 
Organizations are removing the resources they need for success – positioning for an avalanche whose shape they can’t predict.
 
Validated. Workforce decisions reflect anticipated disruption, not demonstrated impact.
 
 

Prediction #6: Monetization Should Look Like Speculation, Not Planning

In self-organized criticality, you can predict avalanches will happen but not where or when. If AI follows sandpile dynamics, organizations should treat monetization as speculation rather than plannable.
 
The data: 64% invest before understanding value (IBM 2025). Only 36% have a roadmap (Bain 2024). Fewer than 20% track KPIs (McKinsey 2025).
Yet 50% expect their business models to be “unrecognizable” within two years (PwC 2025). 75% believe AI will be bigger than the internet (PwC 2025).
 
Salesforce and others are giving away “flex credits” – essentially free AI usage – hoping customers stumble upon use cases that justify future spending. They’re building supply before understanding demand, trying to lock customers into platforms before alternatives mature.
 
This is betting logic, not investment logic. Organizations are wagering on transformation they can’t specify, on timelines they can’t predict, for returns they can’t quantify.
 
Validated. Monetization behavior reflects emergent value expectations, not predictable returns.
 
 

The Resolution

Traditional business logic can’t explain these paradoxes. Sandpile dynamics can.
 
Traditional Logic → Sandpile Logic
  • Investment should follow demonstrated ROI → Investment positions for anticipated avalanche
  • More adoption should improve success rates → Most grains build the pile without triggering transformation
  • Experts should converge on best practices → Context determines outcome; the same action produces different results
  • Technical barriers should yield to sustained effort → Friction prevents premature cascade (a feature, not a bug)
  • Layoffs should follow demonstrated impact → Workforce decisions anticipate disruption instead of responding to it
  • Monetization should be plannable, not speculative → Monetization follows betting behavior, not investment behavior
 
The AI landscape isn’t irrational. It’s following a different logic – the logic of self-organized criticality.
 
The pile is still building. The avalanche is coming. But it won’t arrive because of a single breakthrough or dominant vendor. It will emerge from the accumulated weight of thousands of capabilities reaching deployment simultaneously.
 
The question isn’t whether the avalanche will happen. The question is: How do capabilities form, and what do you do while the pile is still building?
 
 

Notes
¹ On measuring ROI: The consultancies don’t agree on how to measure AI returns. McKinsey tracks EBIT impact. IDC measures returns per dollar invested but doesn’t specify the denominator. IBM measures against total capital investment – a methodology almost guaranteed to show poor returns.
 
For this series, we define capability ROI as: annual value delivered ÷ (implementation costs + annual usage costs). This focuses on per-capability economics rather than portfolio-wide investment. A capability delivering $500K in annual value at $25K in usage costs represents a 20X return – a grain of sand worth adding to the pile.
 
Others may measure differently – time compression, productivity gains, revenue impact. There’s no universal standard. What matters is having a standard you can apply consistently.
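A consistent standard can be as simple as a single formula applied the same way to every capability. A sketch of the definition above, using the chapter’s hypothetical figures:

```python
def capability_roi(annual_value: float, implementation_cost: float, annual_usage_cost: float) -> float:
    """Capability ROI = annual value delivered / (implementation costs + annual usage costs)."""
    return annual_value / (implementation_cost + annual_usage_cost)

# The chapter's example ($500K annual value, $25K annual usage) reaches 20X
# when implementation costs are negligible or already amortized:
print(capability_roi(500_000, 0, 25_000))         # 20.0 -> a grain worth adding

# With a $75K implementation cost counted in, the same capability returns 5X:
print(capability_roi(500_000, 75_000, 25_000))    # 5.0
```

The point is not the formula’s sophistication but that the denominator is declared up front, so two capabilities can be compared on the same basis.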
 

References
  • Accenture, “The Art of AI Maturity” (2021)
  • Accenture, “Reinventing Enterprise Operations with GenAI” (2024)
  • Bain & Company, “Prioritizing Generative AI Investments” (2024)
  • BCG, “From Potential to Profit with GenAI” (2025)
  • Capgemini, “Accelerating AI: The State of AI in the Enterprise” (2025)
  • Deloitte, “State of Generative AI in the Enterprise Q4 2024” (2025)
  • Forrester, “The State of AI” (2024)
  • Gartner, “AI in the Enterprise Survey” (2025)
  • IBM, “Global AI Adoption Index” (2025)
  • John Mauldin, “Ubiquity, Complexity, and Sandpiles,” Frontline Thoughts (2009)
  • McKinsey, “The State of AI: How Organizations Are Rewiring to Capture Value” (2025)
  • PwC, “AI Business Predictions” (2025)