The Greater Fool Theory of AI Adoption

Talking about AI adoption reminds me of a feeling I had when I was serving in the Army. It was the first day for new recruits and I was one of them. We arrived early and were processed quickly—issued uniforms, then assigned tasks to help process more new recruits through the day. I found myself in the boots and rucksack section, handing these out to new recruits just a couple of hours after I had joined. I still remember them asking how things worked around here, not realizing I had only just arrived myself.

That feeling—of being mistaken for someone who knows, when you’ve barely figured out which door you came through—comes back every time I hear a practice talk about their “AI capability.”

You know the moment. Procurement asks if your practice uses AI and you watch the business development team scramble to say yes before anyone’s worked out what that means. Someone who ran a pilot last month is suddenly the expert. The subscription was approved last quarter and now it’s a credential.

Nobody believes this is transformation. But nobody wants to be the practice that admits it publicly.

This isn’t about technology. It’s about position. And once you see it, AI adoption in architecture starts looking less like digital maturity and more like a very polite Ponzi scheme.

The Mechanics

The greater fool theory is straightforward. You buy an overvalued asset—not because you believe in its intrinsic worth, but because you believe someone more naive will buy it from you at an even higher price.

You’re not being irrational. You’re being strategically irrational. You know it’s overvalued. You just believe you won’t be the last one holding it when reality catches up.

The scheme works because everyone is making the same calculation: I’m not the fool—the fool is the one after me.

Architecture is running the same logic with AI, except nobody’s admitting it.

What’s Actually Being Bought

Practices are adopting AI tools at a pace that doesn’t match the economic fundamentals. Fee compression continues. Liability hasn’t been renegotiated. Insurance doesn’t cover algorithmically assisted errors any more generously than manual ones.

And yet: subscriptions get approved, pilots get commissioned, training gets allocated.

What’s being purchased isn’t productivity. Faster output without better pricing just compresses margins further. It’s not competitive advantage—when every practice buys the same tools, differentiation flattens. It’s not reduced liability—accelerated iteration without governance just increases the surface area for undetected error.

What’s being purchased is position. The ability to signal “we’re ahead of this” before clients start asking why you’re not.

The asset isn’t the tool. It’s the perception of not being left behind.

Who’s Passing What to Whom

Suppose your path to winning work now runs through a procurement question about digital capability. You could explain that AI isn’t appropriate for your project typology, that the risk profile doesn’t support it, that the fee structure hasn’t adjusted to accommodate validation overhead.

Or you could just tick the box.

Ticking the box is easier. Not because you believe AI will transform the project—because you believe the client values the signal more than they’ll notice the absence of substance. You’re not lying. You do have AI capability. You subscribed. You piloted. You can demonstrate it in the pitch.

Whether it actually delivered value is a question for later. And by later, you mean: after the contract is signed.

You’re not the fool. The fool is whoever believes this signal represents transformation rather than elimination avoidance.

But here’s where the scheme gets interesting. Because you’re also not the one who’ll be asked to validate it.

Where It Cashes Out

The AI-accelerated massing study looked impressive in the presentation. The parametric facade options tested well in early coordination. The optimized floor plate got client approval.

Three months later, someone is staring at a coordination model full of geometry that looked technically convincing but wasn’t validated against the thousand small realities that determine whether a building actually goes together. Optimized layouts that don’t account for services coordination. Parametric logic that works in isolation but breaks when other disciplines load in. Design decisions that were made by algorithm and approved by people who don’t remember what constraints were input.

That someone is not the partner who won the commission by signaling AI capability. It’s not the project architect who accelerated the design phase. It’s not the client who asked for innovation.

It’s the BIM manager trying to work out which walls are provisional and which failures are about to become visible.

BIM managers saw this coming because they’ve already lived it. When BIM was the new cargo, the same pattern formed. Practices bought licenses, set up workflows, hired coordinators—all before the fees adjusted. Coordination became “included.” Clash detection became “expected.” The model became the deliverable, the evidence, and the liability, all at once.

And the people managing that system absorbed responsibility faster than they accumulated authority or compensation.

AI is following the same script, except faster. Architects adopt it for option generation, massing studies, facade optimization. It accelerates early-stage design. That looks like value. But the coordination layer—the people who translate design intent into coordinated, buildable information—inherit the fragility. They’re the ones who discover that accelerated doesn’t mean validated.

The greater fool isn’t the client who commissioned the work. It’s not even the architect who signed off on the fee. The greater fool is the BIM manager three months later, trying to coordinate decisions that nobody remembers making, trying to validate outputs that nobody budgeted time to check.

That’s where the scheme cashes out. Not at the pitch. Not at the contract. At the coordination review when someone realizes the accelerated design process didn’t leave time for anyone to work out whether the thing can actually be built.

And by then, everyone upstream has moved on.

The Admission Nobody Makes

Here’s what you won’t hear at a conference: “We adopted AI because we were afraid not to. We haven’t worked out how to price it. We’re not sure it’s making us more competitive. We’re just hoping that when clients start demanding it, we’ll already be in the room.”

That’s the pub table version. That’s what people admit when the PowerPoint is off.

And it’s not cynicism. It’s pattern recognition. BIM ran the same script. Cloud coordination ran the same script. The promise was strategic advantage. The delivery was table stakes. And the people left carrying unpriced responsibility were the ones who couldn’t afford to refuse it.

AI didn’t invent this dynamic. It just made it faster.

The scheme works because everyone has a reason to believe they’re not the fool—they’re just early.

But there’s no difference between early and foolish until someone asks who’s actually holding the risk.