Compounding in the age of generative software
Seven first principles for backing durable AI companies—Buffett style
Warren Buffett announced his retirement this weekend, closing one of the most remarkable investing careers in history. He famously avoided most technology bets—but his clarity, discipline, and long-term mindset have quietly reshaped how I think about AI startups today.
This isn’t a tribute. I wrote this essay before the news—but it’s informed by months of rereading his writing and asking a simple question:
What would a truly long-term investor—think Buffett, minus the hero worship—look for in today’s AI companies?
How would they decide which ones will compound for a decade or more?
That question has guided my thinking since the 2022–2024 reset, when early-stage valuations fell ~40%. What follows is a plain-English framework I now use to evaluate every early AI investment.
Why bother with first principles?
Venture capital has always prized speed—but generative AI has cranked the tempo to 11: model launches every month, benchmark wins every week, and rounds that close before the term sheet ink is dry.
Amid all this, it’s easy to forget the real goal is still the one any patient investor shares:
Own—or help build—businesses that grow cash flows faster than they burn capital, year after year.
That premise led me back to seven core ideas. They borrow from Buffett’s playbook—but stay rooted in the realities of 2025: GPU shortages, open-weight models, and customers who can switch providers with two API calls.
Rather than dictating rules (“Only invest in X”), I’ll share how each principle shows up in real AI companies, why it matters, and where founders or investors may get it wrong.
1. Circle of competence
A decade ago, you could bluff with buzzwords. Today, technical volatility means even smart backers lose the plot.
If I can’t sketch the user pain, the data flows, and a rough unit economics model on a napkin, I’ll struggle to support the company when the first curveball hits.
Founder tip: Teach your workflow like you’re explaining chess to someone who’s only watched football. Precision builds conviction faster than vision slides.
2. Durable moat
“Proprietary model” used to impress. Now anyone can fine-tune an open model over the weekend.
The moats that matter deepen with use: unique data exhaust, embedded workflows, and partnerships a new entrant can’t replicate.
Red flag: If an open-source model plus $10M can recreate 80% of the product within a year, the edge may be thinner than it looks.
3. Predictable economics and compounding cash
Great AI startups bend the cost curve with compression, smart caching, or edge inference—until gross margins start to resemble classic software. Not at launch, but over time.
Think 65–75% margins, not 30%. That’s what unlocks capital efficiency and long-term compounding.
Without that destination, your valuation rides on GPU pricing and the kindness of future investors—both fickle.
Founder tip: Treat compute the way a retailer treats inventory. Every millisecond shaved or cost trimmed translates to enterprise value.
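The margin arc described above can be sketched with toy numbers. This is a back-of-the-napkin illustration, not a benchmark: every price, cost, and cache-hit rate below is a hypothetical assumption.

```python
# Illustrative unit-economics sketch: how caching and cheaper inference
# move gross margin from retail-like toward software-like levels.
# All numbers are hypothetical assumptions, not real benchmarks.

def gross_margin(price_per_req: float,
                 inference_cost_per_req: float,
                 cache_hit_rate: float = 0.0,
                 cached_cost_per_req: float = 0.0) -> float:
    """Blended gross margin per request, given a cache hit rate."""
    blended_cost = (cache_hit_rate * cached_cost_per_req
                    + (1 - cache_hit_rate) * inference_cost_per_req)
    return (price_per_req - blended_cost) / price_per_req

# At launch: no caching, expensive inference -> retail-like margins.
launch = gross_margin(price_per_req=0.010, inference_cost_per_req=0.007)

# Later: 60% cache hits plus compressed models cut the blended cost.
mature = gross_margin(price_per_req=0.010,
                      inference_cost_per_req=0.006,
                      cache_hit_rate=0.6,
                      cached_cost_per_req=0.001)

print(f"launch margin: {launch:.0%}")   # 30%
print(f"mature margin: {mature:.0%}")   # 70%
```

The point of the sketch is the shape, not the specific figures: the margin destination is reached by bending the cost curve over time, not by raising prices.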
4. Founder allocation IQ
Buffett talks about backing managers you “trust and admire.” In AI, that shows up in how founders spend marginal dollars and GPU hours.
The best ones ship narrow v1s, pump usage data back into the loop, and delay headcount until metrics demand scale.
Investor move: In your internal memo, name the single smartest capital allocation decision this team has made. If you can’t, that’s a yellow flag.
5. Margin of safety
Early rounds are priced on potential, not cash flow—but the entry price still sets the bar for the next raise.
I look for rounds sized just large enough to prove the moat—not to live like a post-Series C company at seed.
If milestones slip 25%, you should still be able to reprice up—not sideways.
Founders: A lean seed at fair terms often outperforms the flashier one that boxes you in later.
6. Built-in optionality
Optionality doesn’t mean “we might pivot.” It’s the hidden second act that builds on your data or users—doubling LTV or unlocking a new flywheel.
Analytics layered on workflow automation.
A marketplace emerging from transaction data.
Distribution that compounds with usage.
Investor lens: Discount roadmaps chasing shiny sectors. Back adjacencies that deepen the moat you already have.
7. Time-horizon discipline
Even VCs don’t literally hold forever—but underwriting to a 10-year view forces better questions:
What if privacy rules shift?
What if on-device inference crushes costs?
What if a better model drops next quarter?
If staying competitive means winning the model race every single year, the edge probably won’t last.
Founders: Design for a world where today’s model is tomorrow’s commodity. Data portability, modular swaps, and a rock-solid privacy posture won’t show up in a pitch deck—but they’ll matter when the platform shifts.
Putting it into action
I used to track investments on a grid. It worked—but it made conversations feel like compliance.
Now I write narrative memos. Same discipline. More humanity.
🧠 Moat-clone test: Ask a capable engineer how they’d copy the product in 6 months. If their answer sounds too easy, dig deeper.
🧪 Customer stickiness stories: Ask users what would make them rip it out tomorrow. Switching-cost anecdotes > NPS.
🔍 Anti-portfolio review: Revisit your misses. If they cluster around one blind spot, evolve your principle—not your hindsight.
Why LPs should care
Hand-wavy “pattern recognition” drove big paper returns in the bull market—then melted down when the tide went out.
A first-principles lens gives you a durable way to judge discipline, capital efficiency, and true return potential.
Why founders should care
Yes, ambition still matters. But the best investors today reward:
Smart capital allocation
Moat-deepening product design
Clear unit economics
Bake these into your deck and you’ll accelerate fundraising—and run better board meetings when markets get choppy.
A final Buffett-ism
“Build a business so good that an idiot could run it—because someday one will.”
In AI, model weights will commoditize.
The enduring companies? They’ll keep compounding because of workflow lock-in, proprietary data loops, and sharp founder judgment.
That’s how we avoid repeating the last 40% markdown.
If you see gaps or counterexamples, send them my way.
First principles only get stronger when tested in the open.