AI DECONSTRUCTED No. 001 / 2026

AI Deconstructed

Reading AI companies as systems — capital, compute, and the logic of moats.

Earnings calls and breathless coverage tend to bury the actual question. Why does a frontier lab raise billions before it ever turns a profit, then flip to extraordinary margins the moment its API hits scale? Why can a small applied-AI team beat a much larger incumbent inside a narrow vertical? Why do open-weights communities and closed-API providers both keep winning, on different terrain? None of this is mystique — it is structure. This is an independent field guide that takes apart the AI industry along four axes: where revenue comes from, what the cost shape looks like, who keeps using the product, and what makes the moat hold. By the end, the goal is that you can sketch any AI company's machinery on a napkin, in your own words.

PREFACE · 00

Preface

The goal of this issue is to replace "watch the famous AI companies because they are famous" with "watch them because their machinery is now legible to you." We are not interested in founder mythologies or shipping-cadence anecdotes. We are interested in four loops — where the revenue arrives, what the running cost is made of, who keeps coming back, and what protects the business from being copied — and how those loops compose into a working AI company.

There is no required reading order. Chapters one through five build the analytical frame, and chapter six applies the same four axes to eight archetypal AI businesses. Theory-first readers can move straight through; pattern-first readers can open chapter six first. Both paths end facing the same direction.

What this issue tries to hand you is not trivia about specific labs or products — those age within months. It is a habit: the habit of putting the same four questions to any AI company that crosses your feed. Once that habit takes hold, the news cycle starts to read differently. Funding rounds, product launches, and benchmark wars resolve into shapes you have already seen before.


CHAPTER · 01

Anatomy of an AI growth curve


AI companies do not grow on a smooth diagonal. Their expansion is a sequence of distinct regimes — research demo, narrow product fit, distribution flywheel, defensible scale — and each regime rewards different bets. Teams that confuse the regime they are in for the one they want to be in tend to stall halfway up the curve.

  1. 01

    Finding capability–market fit

    Before raw model quality matters, an AI product has to land on a real piece of work that someone would notice losing. The signal is not benchmark improvement but retention inside a specific workflow — users returning because the system actually closes a loop they care about. Spending on ads or sales before this regime is reached is pouring water into a cracked beaker. Measure activation, repeat use, and how loudly early users describe the product to peers. Revenue is a secondary instrument here.

  2. 02

    Building repeatable acquisition

    Once capability–market fit is real, expansion finally pays off — but expansion here does not mean raw revenue growth. It means holding cost per acquired user roughly constant while volume grows. The work of this regime is finding a reliable acquisition channel (developer onboarding, product-led free tier, partner integrations, content surface) and designing the team and tooling that can run it without breaking.

  3. 03

    Building self-reinforcing loops

    AI businesses become hard to dislodge when usage itself improves the product — more queries surface more failure modes that feed evals; more enterprise deployments yield more domain-specific traces that sharpen fine-tunes; more developers building on an API expand the integration surface that newcomers must replicate. These loops do not appear by accident. They are designed in: telemetry, feedback collection, model–data flywheels, and developer-platform structure are all the same problem viewed from different angles.

  4. 04

    Turning advantage into a moat

    Once the business is large enough to matter, the question becomes what is still here in five years. Switching cost, capacity advantage, proprietary data layers, distribution embeddedness, and trust in regulated domains — the deeper these stack, the harder the company is to clone. Compute alone is not a moat: it can be rented. Models alone are not a moat: open weights catch up. Moats in AI are almost always composite, and the moment just before peak growth is the cheapest time to deepen them.

CHAPTER · 02

Reading the model economy


A business model is a choice about who pays, in what unit, at what frequency. In AI, that choice also encodes a position on compute exposure, gross margin trajectory, and how exposed the company is to the next price drop in inference. Read the shape of the revenue before you read the size.

  1. 01

    The four pricing primitives

    Most AI revenue resolves to four shapes: per-token / per-call usage pricing (variable, tracks compute), per-seat or per-workspace subscription (predictable, decoupled from inference cost), outcome- or volume-based contracts (charge per resolved ticket, per generated asset, per closed lead), and hardware-led models where a device or chip carries the value, with software and services trailing. The same workflow can be sold in any of these four shapes — and the resulting income statement looks completely different in each case.

  2. 02

    Unit economics in an inference-priced world

    In AI, unit economics carry an extra term most software businesses do not: the cost of serving a customer is not flat. Every active user generates inference cost that moves with model size, prompt length, and retrieval depth. Healthy unit economics in this category require either holding inference cost down (smaller models, caching, distillation), pricing it through (usage-aligned billing), or extracting enough downstream value (workflow lock-in) that gross margin survives anyway. Companies that charge a flat subscription but pay for inference per token live or die on this math.

  3. 03

    Two-sided structure in AI platforms

    Model marketplaces, agent platforms, and AI developer clouds are not selling a product directly — they are arranging traffic between two sides. Model providers and application builders. Tool authors and agent runtimes. Annotators and labs. Once each side reliably pulls the other in, the platform itself becomes the standard, and the cost for a newcomer is no longer building a comparable product but rebuilding the surrounding ecosystem.

  4. 04

    Fixed-cost leverage in compute-heavy businesses

    The reason a mature AI lab can swing from cash-burning to high-margin so quickly is the same reason it bled cash on the way in: pre-training, research staff, and reserved capacity are mostly fixed. Below break-even those fixed costs dominate; above it, every additional dollar of API revenue drops disproportionately into operating profit. This asymmetric curve explains why so many AI businesses look terrifying for a few years and then suddenly look obvious.
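The two cost claims in this chapter, margin that moves with inference and profit that swings around fixed cost, reduce to a few lines of arithmetic. A minimal sketch in Python; every number below is hypothetical and chosen only to make the shapes visible:

```python
# Illustrative only: all figures are hypothetical round numbers.

def gross_margin(price_per_user, inference_cost_per_user):
    """Gross margin per user when the cost of serving is not flat."""
    return (price_per_user - inference_cost_per_user) / price_per_user

def operating_profit(api_revenue, fixed_cost, variable_share=0.3):
    """Fixed-cost leverage: training, research staff, and reserved
    capacity do not grow with revenue, so profit swings hard
    around break-even."""
    return api_revenue * (1 - variable_share) - fixed_cost

# A $20/month subscription with $6 of inference behind it:
print(gross_margin(20, 6))    # 0.7 -> a healthy 70% gross margin
# The same subscription if heavy users triple their token burn:
print(gross_margin(20, 18))   # 0.1 -> margin collapses to 10%

# $100M of fixed cost, 30% variable serving cost: doubling revenue
# past break-even flips a deep loss into solid operating profit.
for revenue in (100e6, 150e6, 200e6):
    print(revenue, operating_profit(revenue, 100e6))
```

The asymmetry in the second loop is the whole of section 04: below break-even the fixed cost dominates; above it, most of each incremental revenue dollar drops through to operating profit.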

CHAPTER · 03

The operator's calculus


AI operators come from a narrower distribution than software founders generally — research backgrounds, infrastructure backgrounds, applied-ML at scale, or domain experts who only became technical because their problem demanded it. The way they decide is recognisably different from the operating playbook of an enterprise-software CEO, and reading that difference makes the next move of a company much easier to anticipate.

  1. 01

    Why they started

    AI founders cluster into three motivations: a research conviction that a particular capability is now within reach, a product conviction that a specific painful workflow can finally be automated, or an infrastructure conviction that someone has to build the missing layer (tooling, evals, agents, governance) before everything else stabilises. The motivation persists into every later decision — what they over-invest in, which trade-offs they refuse, and whose criticism they take seriously.

  2. 02

    Deciding under model uncertainty

    AI operators have to bet before the substrate they depend on has settled. Tomorrow's frontier model could obsolete this quarter's product. Strong operators handle this by being explicit about which assumption is load-bearing — "this only works if context windows keep growing," "this only works if open weights stay six months behind frontier" — and by running the cheapest experiment that would falsify that assumption first. They do not run a balanced analysis; they sequence kills.

  3. 03

    Designing risk, not avoiding it

    What looks from outside like a bold bet — pre-training your own model, going closed-source, committing to a single vertical — is usually the visible end of a series of smaller, contained experiments. AI operators treat risk as something to allocate: where can we lose, and how do we keep that loss reversible? The skill is not boldness; it is structuring exposure so that one bad call does not end the run.

  4. 04

    What they do with failure

    In AI culture, failed training runs, abandoned model lines, and shipped-then-pulled features are not stains — they are tuition already paid. What matters is not the count of failures but the learning extracted from each one: which decision improved because of what was learned, and how quickly that learning propagated to the rest of the team. An operator's relationship with failure quietly sets the entire organisation's tolerance for ambition.


CHAPTER · 04

The first hundred days of an AI company


The earliest stage of an AI company is not a coding sprint and not a fundraising sprint — it is a sprint to find out whether the problem you are pointed at actually exists in the shape you imagined. The quality of work done in these weeks compounds into every later decision.

  1. 01

    Falsify before you train

    The fastest way to learn whether anyone needs what you are about to build is to ask them before you build it. Sit with the intended users, ask how they handle the workflow today, ask what they would pay to make it disappear, ask where their current tools fail. The goal of this stage is not "find the right answer" but "narrow the surviving hypotheses." Training a model before this work is done is the most expensive way to discover you were solving the wrong problem.

  2. 02

    Co-founder composition

    The combination of co-founders sets the cultural and technical durability of an AI company more than any single hire afterward. The strongest pairings cover three angles — research / model work, product and applied engineering, and the customer or domain — and have shared experience under pressure, not just shared interests. The capacity to disagree sharply and recover by next week is rarely visible in casual interviews; it is worth the months it takes to confirm.

  3. 03

    Demos before training

    Minimum-viable in AI does not mean a smaller model — it means the cheapest thing that creates real usage signal. A wrapper around an existing API, a curated prompt, a thin agent, a Notion form behind a "magic" backend that a human runs — any of these can produce more learning per week than a from-scratch fine-tune. If shipping the first version feels embarrassing, you have probably already missed the optimal moment to ship it.

  4. 04

    Living close to the first users

    Early users in AI are not revenue — they are co-developers of the product's direction. Who you choose to deploy with first determines which failure modes you see, which evaluations you build, and which traces fund your eventual fine-tunes. The intimate conversations with the first thirty users produce the insight that lets you reach the next three thousand. Outsourcing this contact to a sales team too early is the most common avoidable mistake of this stage.

METHOD · 05

The four-axis read


Understanding an AI company is not the moment you have collected the most information about it — it is the moment you can describe its mechanism and its reason to persist in your own words. The four questions below apply to any AI business, regardless of stage, sector, or open-vs-closed positioning.

  1. Q1

    Revenue design — where does money enter?

    Who is paying, for what unit, at what frequency? Is this per-token, per-seat, per-outcome, or hardware-led? If there is more than one stream, which is the engine and which is the rider? You should be able to describe the revenue surface in one sentence before you read anything else about the company.

  2. Q2

    Cost shape — what carries the operation?

    Is this a research-heavy business burning fixed cost on training, an inference-heavy business whose cost scales with usage, or a human-in-the-loop business where labelling and review dominate? Where do the GPUs go? Where do the headcount dollars go? This axis tells you where growth will first break and what level of margin is even structurally possible.

  3. Q3

    Demand — who keeps coming back, and why?

    New users are easy to acquire when a category is hot. Returning users are the only honest signal. Who is the cohort that uses this every week six months in? What problem actually got solved for them, and how disappointed would they be if the product disappeared? Acquisition and retention are entirely different problems, and the latter is what determines long-run durability.

  4. Q4

    Moat — why can't the next team catch up?

    When several products do the same thing and one keeps winning, something is impeding replication. In AI, that "something" is usually a stack: head-start in capacity, proprietary or contracted data, deep workflow integration, brand trust in a regulated domain, or a developer ecosystem with switching cost. Single-layer moats erode fast. Composite ones — three of these at once — are what hold for a decade.
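The four questions work best when answered in one sentence each. A minimal sketch of that discipline as a data structure, filled in with an entirely hypothetical company (every name and detail below is invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class FourAxisRead:
    """One sentence per axis; if an axis needs a paragraph,
    the read is not finished yet."""
    company: str
    revenue_design: str   # Q1: who pays, in what unit, at what frequency
    cost_shape: str       # Q2: what carries the operation
    demand: str           # Q3: who keeps coming back, and why
    moat: str             # Q4: why the next team cannot catch up

# Hypothetical example, not a real company:
read = FourAxisRead(
    company="Acme Agents (hypothetical)",
    revenue_design="per-resolved-ticket contracts with mid-market support teams",
    cost_shape="inference-heavy, plus QA reviewers on the hardest tickets",
    demand="support leads cutting first-response time; weekly active at month six",
    moat="escalation playbooks plus resolved-ticket traces feeding evals",
)

for axis in ("revenue_design", "cost_shape", "demand", "moat"):
    print(f"{axis}: {getattr(read, axis)}")
```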

CHAPTER · 06

Eight archetypal AI businesses

Eight recurring shapes in today's AI industry, each sketched against the same four axes from chapter five. These are not portraits of named companies; they are abstractions of the structures multiple companies share. Once these shapes are familiar, the daily news cycle stops being a flat list of launches and starts resolving into a small number of recognisable patterns.

Editorial note — every entry in this chapter is the editors' structural observation, drawn from publicly available information. None of it is a recommendation to invest in, hire, or transact with any company.

Model marketplace / hosted-inference hub

STR · 02
Revenue design
A two-sided platform that hosts many models from many providers and bills application builders for usage, taking a margin on inference. The platform does not produce the models; it produces the surface where developers find, compare, and switch between them.
Cost shape
Variable inference cost passes through to customers; the platform's own spend goes into capacity management, autoscaling, quality monitoring, and abuse / safety review. Aggregate utilisation across many tenants is what unlocks gross margin — empty GPUs are the enemy.
Demand
Application developers who want optionality across model families and do not want to operate inference themselves. As more models arrive on the platform, the surface becomes the natural place to ship; as more developers ship there, more providers list their models. The classic two-sided pull.
Moat
Listed model diversity, aggregated demand, and the developer tooling layer (routing, evals, observability, billing). Once a meaningful share of new AI applications start on the platform by default, the cost of replicating it is not technical — it is rebuilding both sides of the marketplace at once.
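The "empty GPUs are the enemy" point is a one-line calculation. A toy sketch with hypothetical rates:

```python
# Hypothetical rates: $2/hour to hold a GPU, $4/hour billable when busy.

def hourly_margin(gpu_cost_per_hour, revenue_per_busy_hour, utilisation):
    """A GPU costs money every hour; it earns only in the busy ones."""
    return revenue_per_busy_hour * utilisation - gpu_cost_per_hour

# Break-even sits at 50% utilisation with these numbers; aggregating
# demand across many tenants is what pushes a hub past that line.
for u in (0.3, 0.5, 0.8):
    print(u, hourly_margin(2.0, 4.0, u))
```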

Enterprise AI deployment

STR · 03
Revenue design
Large multi-year contracts with regulated or process-heavy organisations — banks, insurers, hospitals, governments — typically structured as a platform fee plus per-seat or per-workload usage. Land in one department with a contained pilot, expand sideways across the enterprise as results compound.
Cost shape
Heavy investment in solutions engineering, compliance (SOC 2, HIPAA, regional data residency), enterprise security review, and customer success. Revenue per logo is large, but reaching that revenue requires building the kind of organisation that can survive an enterprise procurement cycle.
Demand
Executive sponsors looking to compress cost, reduce regulatory risk, or unblock a strategic backlog. Sales cycles are long and committee-driven; renewals, once secured, tend to extend for years. The contract is to the organisation; the daily users are the operators inside it.
Moat
Depth of integration with systems of record, regulatory certifications that newcomers must re-earn from scratch, and the trust relationships built over multi-year deployments. Replacing the system means rewriting business processes — the switching cost is procedural, not technical.

AI developer platform / API

STR · 04
Revenue design
Direct API access to foundation models, agents, or specialised capabilities (speech, vision, embeddings, code), billed per token / per call. The platform reaches end users indirectly — through every application its developers build on top.
Cost shape
Spending concentrates on capacity, model training and updates, developer documentation, and the safety / abuse layer. Once docs and SDKs are strong, support cost per developer drops sharply — community and references take over for routine questions.
Demand
Engineering teams who would rather rent intelligence than train it. The decisive variables are time to first successful integration, predictability of latency and cost, and how cleanly the API composes with existing stacks. Developers churn quickly if any of these degrade.
Moat
Layered: model capability and frontier proximity at the bottom; documentation, SDKs, example libraries, and ecosystem in the middle; pricing and reliability on top. Each layer alone is replicable; together they create migration cost — the more code already written against this API, the higher the bar to move off it.

Freemium AI consumer product

STR · 05
Revenue design
A capable free tier — chat, image, voice, generation — funded by a smaller paying tier with higher limits, faster models, or pro-grade capabilities. The free surface produces the distribution; the paid surface produces the revenue. The conversion rate from free to paid decides whether the whole equation closes.
Cost shape
Inference cost runs ahead of monetisation, sometimes by a large factor. The viability of this archetype turns on where the free / paid line is drawn, how aggressively inference cost is reduced through smaller models and caching, and how quickly the conversion ramp matures.
Demand
Individual users and small teams who pull the product into their workflow voluntarily, then drag it into the organisations they belong to. Bottom-up adoption replaces enterprise sales for the early years and creates a referral surface that paid acquisition cannot match.
Moat
The scale of the free surface itself, the behaviour data it generates, and the brand familiarity of being the consumer-default for a capability. Followers must spend years and large inference subsidies to even appear at the same starting line.

AI-native hardware + cloud

STR · 06
Revenue design
A purpose-built hardware product — chip, board, edge device, wearable — sold (sometimes at thin margin) into customers who then buy compute, software, or services around it. Hardware is the entry point; the recurring layer is where margin lives.
Cost shape
Hardware brings serious fixed cost back into the picture: design, fab capacity, supply chain, inventory, logistics. Software margin is high in isolation, but the combined business looks healthy only if the hardware side is engineered for cost over its full lifecycle.
Demand
Customers whose workload demands silicon-level optimisation — large-scale inference operators, robotics platforms, edge-AI deployments, AI-first consumer devices. Once the hardware is bought, the customer is structurally tied to whatever software ecosystem ships with it.
Moat
The tight coupling between silicon, drivers, model runtimes, and developer tooling. A competitor can match the chip or match the software, but matching the integration — the way the whole stack performs together — is what takes years to reproduce.

Open-weights community / model commons

STR · 07
Revenue design
A community-anchored model ecosystem — open weights, public fine-tunes, public evals — monetised through hosted inference, enterprise support, premium fine-tuning, or sponsored research access. The community itself does much of the work that closed labs do behind walls.
Cost shape
Foundation training is still expensive, but downstream cost of distribution, education, and validation is partly absorbed by the community. Sustaining the commons requires real investment in moderation, governance, and quality control — the moment the community feels neglected, it forks.
Demand
Developers, researchers, and organisations that need to inspect, modify, or self-host their models. Many are not paying users directly; their contribution is the gravity they create — published fine-tunes, evaluation harnesses, and architectural improvements that compound across the ecosystem.
Moat
The body of contributions — fine-tunes, datasets, tutorials, derivative tools — that cannot be relocated wholesale to a competitor. Forking the code is trivial; forking the community that wrote the code is functionally impossible. The graph itself is the moat.

AI-native replacement of incumbent service

STR · 08
Revenue design
A service category previously delivered by armies of human operators — outsourced support, basic legal work, accounting close, content moderation, low-end translation — rebuilt on an AI substrate with a thin human layer on top. Pricing undercuts the incumbent operating model by a large multiple while preserving or improving SLAs.
Cost shape
A blended cost structure: inference, the residual human-in-the-loop layer, quality assurance, and regulatory compliance specific to the replaced industry. Pure software margins are not achievable here, but the spread against the displaced labour model is wide enough to fund growth.
Demand
Buyers who already pay for the legacy service and are squeezed on price, latency, or quality. The pitch is rarely "AI-powered" — it is "same outcome, faster, cheaper, fewer surprises." The fact that the workforce underneath is increasingly model-driven is an implementation detail.
Moat
The operational know-how of running an AI-mediated service safely in a specific industry — escalation paths, regulatory posture, quality gates, brand reliability. Replicating the technology is straightforward; replicating the operational maturity required to be trusted with the work is what takes years.

FURTHER READING · 07

Where to read further


This issue is meant to stand on its own, but for readers who want to sharpen their view of the AI industry further, the categories below are more useful than any specific title. Read across them together and the news cycle starts to feel much less random.

  1. 01

    Primary documents from the labs themselves

    Model cards, system cards, technical reports, evaluation suites, and safety filings released by labs are the most direct source on what is and is not claimed. Read them slowly. The interesting information is usually in the limitations sections and in what the report does not measure, not in the headline benchmark.

  2. 02

    Founder and operator long-form

    Memoirs, essays, and long interviews from AI founders, infrastructure operators, and senior researchers carry decision logic that no external analysis recovers. Focus on the sections where they discuss what they got wrong and how they corrected. Success narratives compress; failure narratives expand.

  3. 03

    Classic competitive-strategy literature

    The canon on competitive advantage, industry structure, and business design predates AI by decades but provides the conceptual scaffolding for the four axes in this issue. The examples are old; the questions still work. Read it for the way of asking, not for the cases.

  4. 04

    History of compute, networks, and platforms

    Long-arc accounts of semiconductors, the internet, cloud infrastructure, and prior platform shifts make today's AI landscape less mysterious. Most of the structural moves currently being made in AI have prior analogues. Knowing those analogues makes the next move easier to anticipate and harder to be surprised by.

EPILOGUE

Keep the habit

The ability to read an AI company structurally is not innate — it is the by-product of repetition. Every time a company crosses your feed, run the same four questions: where does revenue enter, what does the cost shape look like, who keeps coming back, what keeps the moat in place. The first dozen passes feel mechanical. After enough repetitions, the read happens in the background.

No archetype or framework in this issue is the final word. Real AI companies blend modes and shift weight between them as conditions change. The point is not to land on a single correct analysis but to keep asking the same questions of every new launch, every funding round, every quietly excellent product that appeared last week. Try it tomorrow on whichever AI story shows up first. You should find one extra layer in the picture that was not visible yesterday.
