AI Trends · 2026-05-09 · 10 min read

Naval Just Described KnowMine's Target User — From His Own Workflow

In two early-2026 episodes, Naval Ravikant outlined a 3-tier market collapse, the death of pure-software investability, and what solo builders should anchor on. The detail nobody quoted: he runs four AI models in parallel, with no shared memory between them. That's not a power-user habit. That's a missing product layer — the one KnowMine has been building toward.

Tags: Naval Ravikant, AI knowledge base, multi-model workflow, shared memory layer, MCP, solo builder

In the last week, two Naval Ravikant podcast episodes lit up English AI Twitter and the Chinese AI scene at the same time.

The English side keeps quoting one line: "pure software is uninvestable." The Chinese repackaging has already mutated it into "the godfather of investing says programmers will explode from 0.1% to 3% of the population" and "Apple is finished." Each translation lossier than the last.

Both sides missed the same thing.

Across nearly three hours, the most-quoted material is Naval's market-structure prediction, his vibe-coding take, and his Apple call. Almost nobody stopped to listen carefully to the part where he describes his own daily workflow with AI.

That's the part worth stopping on. Because if you actually listen, Naval is — without quite naming it — drawing a user persona for a product that doesn't exist yet.

Pinning down the sources, before any more telephone

I don't trust the second-hand repackaging of either language's AI media. So every Naval quote in this post is traced back to the English source.

The two episodes:

  • Episode 1: "A Motorcycle for the Mind", published 2026-02-22. Naval and Nivi. The topic: what AI means for builders, coders, and everyone else; what vibe coding actually is; whether traditional software engineering is dead. Notes at podcastnotes.org.
  • Episode 2: "A Return to Code", published 2026-05-02. The topic: the personal app store, the uninvestability of pure software, the end of Apple's dominance, coding agents replacing customer support. Notes at podcastnotes.org.

If you want to see how badly second-hand sources mangle this stuff, here are the typical drift patterns:

| Translated/repackaged version | What Naval actually said |
| --- | --- |
| "Apple will be finished" | Margins compressed, market cap re-rated |
| "Programmers will jump to 3%" | "0.1% to maybe 1%–3%" — the upper bound got turned into the prediction |
| "AI is an eager-to-please retriever" | Not in either episode's notes |
| "Pure software is uninvestable" | Verbatim — this one is accurate |

Everything below cites podcastnotes or transcript directly.

Three things Naval actually said

Claim 1: The market is collapsing into three tiers

Naval frames AI's effect on software the way the internet hit content:

"The market structure that follows will mirror what happened with the internet: one or two giant aggregator platforms, a few massive winner-take-all apps at the top, and a huge long tail of niche apps filling every corner. The middle gets blown apart."

In product language:

  • Top: one or two giant aggregator platforms plus a handful of winner-take-all apps
  • Bottom: a massive long tail of niche apps built by vibe coders and solo builders
  • Middle: 5–20 person SaaS teams get squeezed flat — category leaders crush from above, vibe coding eats from below

This is exactly what happened to video and podcasting. The "good but not the best" middle layer of content companies got erased. Naval's claim is that software is on the same trajectory, just compressed.

Claim 2: Pure software is uninvestable

This is the line everyone quotes:

"Pure software is uninvestable, full stop."

His two reasons:

  1. Anyone can hack it together today — vibe coding lowered the bar from "knows how to write software" to "can describe what they want"
  2. Coding agents will absorb scalable architecture within a year — the engineering work you're doing now is next year's default agent capability

Note: he didn't say "pure software can't make money." Uninvestable is a VC frame — you can't raise an LP fund to bet on code-only moats. But solo builders don't need LPs. A lot of takes online conflate those two things.

So where should VCs look? Naval points at hardware, network effects, and AI models themselves. Pure code is a commodity now.

Claim 3: Where the moat moves for solo builders

If pure code isn't a moat anymore, who wins? Naval's answer is blunt:

"Become the best in the world at what you do. Keep redefining what you do until this is true."

This sounds like a fortune cookie until you try to apply it to a product decision:

  • Not "find a niche" — find the niche where you can plausibly be #1 in the world
  • The definition of "what you do" is allowed to keep narrowing until the claim becomes true
  • The moat moves to domain expertise, network effects, taste, and judgment — everything AI hasn't outpaced humans on yet

He has a near-throwaway line elsewhere in the same episode: when everyone has the same AI tools, the alpha cancels out, and what's left is human vision, taste, and judgment.

These three claims would be enough for one essay. But they're not what made me stop.

The part nobody quoted

In both episodes, Naval describes how he personally uses AI.

According to the podcastnotes summary: he runs four AI models in parallel — Claude, ChatGPT, Gemini, Grok — sends the same question to all four, lets them run simultaneously, then compares answers before deciding where to drill deeper. For politically sensitive questions, he cross-checks against the underlying data and discounts answers where training pressure is obvious.

This sounds normal. Naval is one of the sharpest angel investors alive. Multi-model comparison is what people like him do.

But stop and look at what's actually happening:

  1. The same question gets pasted into four different models
  2. Four answers get compared by hand, taking the best of each
  3. When he wants to drill deeper, he switches model and re-pastes context
  4. Nothing remembers "I asked this thirty minutes ago, got these four answers, and I lean toward ChatGPT's framing"
  5. New session, blank slate — start over

In this workflow, every agent runtime is rebuilding memory from scratch. When he switches to Claude, Claude doesn't know what Gemini just said. When he opens Grok, Grok doesn't know this is round three on the same question. Every switch is a context-rebuild tax.

Naval's judgment is good enough that he can fuse four answers in his head. But what he's describing isn't "the workflow of a power user." It's a product-shaped hole:

When you need four AIs to triangulate a trustworthy answer, and there's no shared memory between them — that's not a feature gap. That's the shape of a market.

He never names it. But he just demonstrated it on a podcast that's been heard a few million times.

This is exactly KnowMine's target user

I'm not retrofitting Naval onto our pitch. Let me lay out the bet KnowMine has been making for the last year, then map it onto what he described.

The bet:

The future is not "one AI for everything." It's "one user wired into a dozen specialized agents." Each agent is great at its own thing. None of them shares memory with the others. The shared-memory layer becomes infrastructure that doesn't belong to any single agent vendor.

Now overlay that against Naval's actual day:

| What Naval is doing | What KnowMine is solving |
| --- | --- |
| 4 AI models running the same question | Multiple agents on the same shared context |
| No shared memory between them | MCP protocol layer + pgvector persistence |
| Re-pasting context on every model switch | "One knowledge base, every AI" |
| Cross-checking sensitive answers against raw data | Auditable, traceable knowledge sources |
| Trusting taste and judgment to decide | User owns the write-back and edit rights |
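To make "shared memory between agents" less abstract, here is a toy in-memory stand-in, with plain cosine similarity in place of a real pgvector index. The `SharedMemory` class and its methods are hypothetical illustrations, not KnowMine's actual schema. The point is only the access pattern: every agent writes to and recalls from the same store, so a Claude session can see what a Gemini session concluded a moment ago.

```python
import math

class SharedMemory:
    """Toy stand-in for a pgvector-backed store shared across agents."""

    def __init__(self) -> None:
        self.records: list[tuple[list[float], dict]] = []

    def write(self, embedding: list[float], payload: dict) -> None:
        """Any agent persists a record: an answer, a preferred framing, a note."""
        self.records.append((embedding, payload))

    def recall(self, query_embedding: list[float], k: int = 3) -> list[dict]:
        """Any agent retrieves the k most similar records, regardless
        of which agent originally wrote them."""
        def cosine(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb) if na and nb else 0.0

        ranked = sorted(self.records,
                        key=lambda r: cosine(query_embedding, r[0]),
                        reverse=True)
        return [payload for _, payload in ranked[:k]]
```

In the real thing the embeddings come from an embedding model and the store is Postgres with pgvector, but the contract is the same: the memory outlives any one session and doesn't belong to any one vendor.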

This isn't a retrofit. It's the positioning we shipped a month ago: one knowledge base, every AI. Naval is, unintentionally, validating it with his own workflow.

Here's the part that closes the loop: Naval himself argues that vision, taste, and judgment are the scarce resources. But vision, taste, and judgment only compound if they get persisted somewhere durable, accessible across agents, and auditable. Otherwise every new session is a fresh prompt of your own framing — and taste gets consumed instead of accumulated.

Compounding judgment is the thing KnowMine is actually betting on.

A solo builder's tool stack in the long-tail era

Push Naval's three-tier prediction one layer deeper.

If the middle dies and the long tail belongs to vibe coders and solo builders, what does the solo builder's tool stack look like?

I run two products solo — KnowMine and KnowSales — on top of ten years of B2B trade domain expertise. The stack I've settled into looks like this:

  • The replaceable layer: models, IDEs, agent shells (Claude Code, Cursor, ChatGPT — pick whichever is best this quarter)
  • The non-replaceable layer: your knowledge, your SOPs, your customer corpus, your judgment frames

The replaceable layer's "moat" gets reset every three months. The non-replaceable layer, if it lives outside any single agent (user-owned, portable, MCP-accessible), compounds across vendors for the next ten years.

When Naval says code is no longer a moat, this is the same observation: the moat has to be something AI hasn't outrun and that doesn't disappear when you switch models. Domain expertise qualifies. A cross-agent interop standard like MCP qualifies. A persistent semantic layer like pgvector qualifies. Those aren't things a vibe coder reproduces in one evening — they require either a decade of vertical context, the nerve to bet on a standard early, or a long-term wager at the data layer.

The "files + local Markdown + Obsidian" stack is a real answer too. But it folds the moment your workflow looks like Naval's — because your agent isn't on your laptop anymore. It's on a VPS, in your phone, in a Slack bot, in a sub-agent spawned by another agent.

You're already in the position Naval described

I've been building KnowMine for over a year, and the positioning has moved each time the market did.

In 2024 we said "AI-native second brain." That term is owned by Tiago Forte, so we re-anchored. When Karpathy's LLM Wiki gist went viral, we wrote about its ceiling and the protocol layer above it. Each time, we were chasing an outside signal.

These two Naval episodes are different. He's not telling people what tool they should have. He's describing what he himself already uses, and what's already missing from it. The product gap isn't a future concept — it's a present-tense fact.

So here's the question to leave you with:

  • Are you switching between multiple AIs by hand?
  • Do you start every new session by re-pasting your background context?
  • Have you noticed ChatGPT remembers things Claude doesn't, and Gemini operates in yet another universe?

If you're three for three, you're already in the position Naval was describing. The only difference is that Naval is Naval, and he can fuse those four answers in his head. The rest of us need a layer of infrastructure to do it for us.

What that layer looks like, who owns it, what protocol it speaks, who defines its standards — those questions get answered in the next twelve months.

Our bet is user-owned + MCP-native + persistent semantic layer. If you're already shuttling between two or more AIs, give it a try and see what the workflow looks like once shared memory becomes infrastructure instead of copy-paste.

