The industry has spent the last two years acting as if Large Language Models are the “largest” and most complex thing ever to arrive in architecture. Meanwhile, practices have been running on a Large Model for decades. It just doesn’t use language. It uses geometry.
The Revit file sitting on your server — the one that crashes when you look at it wrong, the one three people are too nervous to touch and two more don’t fully understand — that is a Large Model in every sense that matters. Not trained on text scraped from the internet. But large in the way that actually counts inside a practice: dense, interconnected, full of decisions that compound in ways no single person can track. It doesn’t describe a building. It embodies one. It is a machine for holding relationships that would dissolve the moment you tried to put them into words.
The confusion — and it is a confusion worth taking seriously — is that LLMs and BIM models are both called “models,” and both involve processing enormous amounts of information. But they do opposite things. One flattens everything into language and regenerates something plausible from the description. The other holds spatial relationships that language was never equipped to carry in the first place.

What Prompting Actually Does to a Design
There is a specific moment in every design process where something important happens, and it almost never appears in any project narrative. It is the moment you try to draw the thing you just described.
You have the concept. A compact courtyard section, a stair that wraps a lightwell, a structural grid that doubles as a facade rhythm. It sounds coherent. It is coherent, in the way that sentences are coherent. Then you start modelling it, and the model immediately starts asking questions the concept never had to answer. How thick is the slab? Where does the duct go? What happens at the corner where the facade meets the return wall? How does the stair land on the floor below without eating into the accessible route?
None of these questions are hostile to the idea. They are the idea, resolved into the material world. And the process of resolving them — the friction, the iteration, the small adjustments that accumulate into something that actually holds together — is not a production task. It is a design task. Drawing is not how you record a decision. It is how you discover what the decision actually requires.
A prompt bypasses that discovery entirely. You describe what you can articulate, and the system generates something that matches the description. Which means it also inherits everything you didn’t articulate — filled in by statistical likelihood, by the training set’s sense of what usually comes next, by the accumulated average of buildings that have already been built. The prompt describes the concept. The gaps are filled with defaults.
Language Is a Lossy Format for Space
Suppose you try to describe a staircase in words. You can do it. You can specify the rise, the going, the floor-to-floor height, the structural approach. All of that is transmissible through language without much loss. But the moment you start drawing it, the geometry starts talking back. Set the floor-to-floor height and the rise, and the number of treads is determined. Set the tread width and the inner radius, and the going at the newel is determined. Move the landing, and the handrail must follow. None of those dependencies were in the verbal description. You didn’t have to specify them because drawing enforced them for you. They emerged from the geometry the way consequences emerge from decisions — not listed, but inevitable.
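The stair dependency chain can be sketched numerically. This is a toy illustration, not any building code: the 190 mm riser cap and the 2R + G ≈ 625 mm comfort rule are generic assumptions, and the function name is invented for the example.

```python
import math

def stair_dependencies(floor_to_floor_mm: float, max_riser_mm: float = 190.0):
    """Show how fixing floor-to-floor height determines the rest of a straight stair.

    Assumed rules of thumb (illustrative only, not a specific regulation):
      - riser height capped at max_riser_mm
      - going chosen so that 2*riser + going = 625 mm (a common comfort formula)
    """
    # The number of risers is forced the moment floor-to-floor is set.
    n_risers = math.ceil(floor_to_floor_mm / max_riser_mm)
    riser = floor_to_floor_mm / n_risers       # actual riser, divided evenly
    going = 625.0 - 2 * riser                  # the comfort rule fixes the going
    n_treads = n_risers - 1                    # one fewer tread than risers
    run = n_treads * going                     # total plan length falls out last
    return {"risers": n_risers, "riser_mm": round(riser, 1),
            "going_mm": round(going, 1), "run_mm": round(run, 1)}

# Set floor-to-floor at 3300 mm and everything downstream is determined:
print(stair_dependencies(3300))
```

Nothing after the first line of the function is an independent choice; each value is a consequence of the one before it, which is the point the paragraph above makes in prose.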
This is the distinction the current conversation about AI and architecture keeps sliding past. Language has to make everything explicit. Geometry can be inferential. A model can hold a thousand dependency chains simultaneously — ceiling height constraining duct routing, duct routing constraining room widths, room widths constraining whether three meeting rooms fit along that wall or four — without any of that being written down anywhere, because it doesn’t need to be written. It needs to be drawn. The moment you shift to a language interface, all of those invisible constraints have to either be stated explicitly or be left to chance. And no one states them all, because no one knows them all in advance. You find them by drawing.
The Accumulated Intelligence Nobody Wrote Down
Every Revit model contains geometry nobody designed. Not explicitly, anyway. The ceiling void that tapers toward the core because MEP needed more clearance at the perimeter. The structural bay that ended up 200mm narrower than the others because the grid shifted when the site boundary was confirmed and nobody had the programme time to regularise it. The facade panel that is fractionally different from the one next to it because it spans a movement joint and needs different fixings. None of this appears in the design intent document. None of it was a deliberate compositional move. It accumulated — the sediment of fifty people touching the model over eighteen months, each change having consequences that compounded quietly in the background.
That accumulated intelligence is not noise. It is the evidence that the design survived coordination. It is what makes the model useful to the contractor and the subcontractors and the specialist fabricators who come after. And it is precisely what gets lost when you regenerate rather than revise.
LLMs don’t revise. They regenerate. Ask one to update a design and it produces a new version that looks like it evolved from the old one, but didn’t. The embedded decisions from the previous version — the reasons things were sized that way, placed there, detailed like that — are gone. The prompt describes a state. The model produces a new state. But states don’t have memory. States don’t know why they are the way they are.
The Part That Doesn’t Survive Translation
The concept survives. The diagram survives. The things you can name clearly enough to put in a brief — spatial sequence, orientation, structural rhythm, programme adjacencies — all of that translates without too much loss. An LLM can produce a plausible diagram faster than most architects can sketch one, and in some cases it will produce a better one, simply because it has processed more of them.
What doesn’t survive is the load-bearing middle. The corner condition that only becomes a problem once you try to detail it. The stair that is entirely reasonable until the head height, the landing geometry, the handrail transition, and the fire strategy all arrive at the same 900mm and start competing. The service coordination that rewrites the ceiling design without anyone noticing until the fit-out contractor opens the model. These are not edge cases. They are the substance of delivery. The concept gets you perhaps 20% of the way to a building. The remaining 80% is negotiation — with gravity, with programme, with the work of every other discipline, with the gap between what was intended and what can actually be built.
And here is the uncomfortable part. A language interface doesn’t just fail to capture that 80%. It actively trains you to stop thinking about it. When you prompt, you specify what you can articulate. If the system cannot use a constraint, you stop articulating it. Over time, you stop noticing it. The interface doesn’t just filter what gets generated — it reshapes what the designer thinks they need to consider.
What the Model Remembers
There is a well-worn line in BIM circles about the model remembering how you treated it on Friday night. It is usually said about worksets, or sync warnings, or the week before issue when someone decided to rationalise the level structure. But it applies more broadly than that. The model remembers because the geometry is connected. Move one thing and the dependencies propagate. Sometimes that propagation is visible immediately. Sometimes it shows up two weeks later, in a clash you weren’t expecting, in a dimension that no longer works, in a detail that was fine until something upstream changed.
That memory is not a bug. It is the mechanism by which the model keeps the design honest. It is the thing that prevents you from making two incompatible decisions simultaneously without noticing. It is, in a fairly literal sense, the intelligence of the model — not stored in any parameter field or data schema, but embedded in the spatial relationships between objects that move together.
A language model has no equivalent of this. It has associations, not dependencies. It knows that ducts and beams and ceilings tend to appear together in building descriptions. It does not know what happens to the ceiling grid when the beam moves. That knowledge only exists in the large model — the one that already exists, the one the practice has been building for the last decade without calling it that.
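The beam-and-ceiling case can be made concrete with a toy dependency chain. The names, numbers, and zone depths below are all illustrative, not taken from any real model or BIM schema; the point is only that the ceiling height is derived, never stored.

```python
# A toy dependency chain: slab -> beam soffit -> duct zone -> ceiling grid.
# All values are illustrative assumptions, not from any real project.

def ceiling_height(slab_level_mm: float, beam_depth_mm: float,
                   duct_depth_mm: float, grid_zone_mm: float = 50.0) -> float:
    """The ceiling height is not written down anywhere; it is derived
    from upstream geometry, so upstream changes propagate automatically."""
    beam_soffit = slab_level_mm - beam_depth_mm   # beam hangs below the slab
    duct_soffit = beam_soffit - duct_depth_mm     # duct runs under the beam
    return duct_soffit - grid_zone_mm             # grid hangs under the duct

# Move one upstream decision and the consequence propagates:
before = ceiling_height(slab_level_mm=3600, beam_depth_mm=450, duct_depth_mm=300)
after = ceiling_height(slab_level_mm=3600, beam_depth_mm=600, duct_depth_mm=300)
# Deepening the beam by 150 mm drops the ceiling by exactly 150 mm,
# without anyone restating the rule.
print(before, after)
```

An association-based system knows that beams, ducts, and ceilings co-occur; a dependency like this one is what tells you what happens to the last term when the first one moves.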
The diagram will survive. The rendering will look convincing. The concept will translate, perhaps even improve, in the hands of a system trained on every building ever documented.
But buildings are not diagrams. They are the product of ten thousand small negotiations between intent and constraint, most of which were never written down because they didn’t need to be — the drawing was doing that work instead.
If architecture becomes a promptable medium, the profession will get very good at stating intentions. It will get progressively worse at the part that comes after: the part where the intentions meet reality and have to be resolved into something that actually stands up.
Architecture has always had a Large Model. It just ran on geometry, not tokens. And geometry, it turns out, knows things that language doesn’t.
