Sequoia AI Ascent 2026: Andrej Karpathy
The future of AI
Hey everyone, my name is Guillermo Flor, and I’m an entrepreneur and investor writing about AI, startups, and where the next wave of value will be created.
Last year, Sequoia AI Ascent felt like the clearest map of the AI market.
The big ideas were obvious but important:
AI was not just another software cycle.
The application layer could capture massive value.
Agents were becoming the next platform shift.
And the best founders had to move from building tools to building systems that do real work.
One year later, Sequoia opened AI Ascent 2026 with Andrej Karpathy.
That choice matters.
Karpathy is not just another AI commentator. He was a founding member of OpenAI, led AI at Tesla (running the Autopilot vision team), coined the term Software 2.0, and last year gave the world the phrase “vibe coding.”
But this talk was not just about vibe coding.
It was about something much bigger:
Software is changing again.
Karpathy’s core point is that we are moving into Software 3.0.
Software 1.0 was code written by humans.
Software 2.0 was neural networks trained with data.
Software 3.0 is software programmed through prompts, context, agents, tools, memory, and verification.
That sounds abstract, but the implications are very practical.
This is the full breakdown of Karpathy’s opening conversation at Sequoia AI Ascent 2026.
We’ll cover:
What Software 3.0 actually means
Why Karpathy says he has never felt more behind as a programmer
Why some AI apps are temporary wrappers around model limitations
Why verifiability explains where AI will automate first
The difference between vibe coding and agentic engineering
Why hiring has to change
Why the best founders should build for agent-native workflows
And why understanding, taste, and judgment become more valuable as intelligence gets cheaper
👋 Just a quick note before you continue!
Building great AI systems and agents is quickly becoming one of the most important advantages a company can have. So we’ve been building non-stop, and this will become a much bigger part of AIMF and PMF going forward.
From now on, as a PMF + AI subscriber, you’ll get access not just to the best resources on fundraising and growth (160+ Playbooks, 65+ Success Stories, 30+ Pitch Deck Collections, 25+ Investor Resources), but also:
🤖 3+ New AI agent Templates Every Week
📚 A Growing Library of 20+ AI Agents
We’re already 70,000+ founders and operators inside.
1. Karpathy felt behind because the agentic shift became real
The talk opened with a surprising line:
Andrej Karpathy, one of the people who helped build modern AI, said he has never felt more behind as a programmer.
Not because he forgot how to code.
Because the tools crossed a threshold.
His point: around late 2025, coding agents stopped being “helpful autocomplete” and started producing large chunks of useful code that often worked without correction.
This matters because it changed the emotional experience of programming:
Before, the model helped with snippets. Now it can push entire workflows forward.
Before, you corrected the model constantly. Now you increasingly supervise it.
Before, AI felt like a tool. Now it feels closer to a junior team.
2. Software 3.0 means prompts are becoming programs
Karpathy’s framework:
Software 1.0: humans write explicit code
Software 2.0: humans train neural networks with data
Software 3.0: humans program models through context
The important part is not “prompt engineering.”
The important part is that the context window becomes the new programming surface.
You are no longer only writing deterministic instructions for a computer. You are giving context to an intelligent interpreter that can read, reason, call tools, inspect environments, debug errors, and adapt.
This changes what building means.
In Software 1.0, you write a shell script.
In Software 3.0, you give the agent a piece of text and say:
“Install this. Inspect my environment. Fix what breaks. Make it work.”
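To make that concrete, here is a minimal sketch of such an agent loop in Python. Everything here is illustrative: call_model stands in for whatever LLM API you use, and a real agent would sandbox its shell access rather than run commands directly.

```python
import subprocess

def call_model(context: str) -> dict:
    """Hypothetical stand-in for any LLM API.
    Returns {"run": "<shell command>"} or {"done": "<summary>"}."""
    raise NotImplementedError

def run_agent(instructions: str, max_steps: int = 20) -> str:
    # The context window is the program: the instructions plus
    # everything the agent has observed so far.
    context = instructions
    for _ in range(max_steps):
        action = call_model(context)
        if "done" in action:
            return action["done"]
        # Act on the environment, then feed the result back into
        # the context: read, act, observe, adapt.
        result = subprocess.run(action["run"], shell=True,
                                capture_output=True, text=True)
        context += f"\n$ {action['run']}\n{result.stdout}{result.stderr}"
    return "ran out of steps"

run_agent("Install this. Inspect my environment. Fix what breaks. Make it work.")
```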
3. The OpenClaw installer shows why agent-native software will look different
Karpathy’s example: installing OpenClaw.
In the old world, you would write a complex bash script that tries to handle every environment, platform, dependency, and edge case.
In the new world, the installation is just a block of instructions for an agent.
The agent reads the instructions, looks at your computer, understands what is missing, runs commands, sees errors, and adapts.
That is much more powerful than a rigid script.
The bigger point:
Most software today is still built for humans clicking buttons and reading docs.
But agents need something else (a sketch follows this list):
clear instructions
structured context
machine-readable documentation
direct permissions
APIs and workflows designed for agent execution
fewer human-only menus and settings pages
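As a rough illustration, here is what an agent-first install doc might look like. This is not OpenClaw’s actual documentation, just a sketch of the pattern:

```
# INSTALL (written for an agent, not a human)

Goal: get the tool running locally, then verify it.

1. Check for git, node >= 20, and docker. Install whatever is missing.
2. Clone the repository and run the setup script.
3. If a step fails, read the error, fix the cause, and retry.
4. Done means the health check passes. Do not report success before that.

Permissions: you may install packages and edit config files
inside this project directory. Nothing outside it.
```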
Founder opportunity:
The next wave of infrastructure will be agent-native. Not user-native.
4. MenuGen shows the biggest trap: building apps that should not exist
This is the best story in the talk.
Karpathy built MenuGen: an app that lets you take a picture of a restaurant menu and generate images of the dishes.
The traditional app flow was:
upload photo
OCR the menu
extract the dishes
generate dish images
rebuild the menu
display everything in a UI
Then he saw the Software 3.0 version:
Take a photo of the menu. Give it to Gemini. Ask it to overlay images of the food directly onto the menu.
No app.
No OCR pipeline.
No UI.
No complex workflow.
Just model input and model output.
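In code, that collapse looks roughly like this. A hedged sketch using the google-generativeai Python SDK; the model name and prompt are illustrative, and whether the output is an actual overlaid image (rather than text) depends on the model’s image-output capabilities:

```python
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash")  # illustrative model name

# One multimodal call replaces the OCR step, the extraction step,
# the image pipeline, and most of the UI.
menu_photo = Image.open("menu.jpg")
response = model.generate_content([
    "Read this restaurant menu, identify each dish, and overlay an "
    "image of the dish next to its name on the menu.",
    menu_photo,
])
print(response.text)
```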
Key insight:
A lot of AI apps are temporary wrappers around model limitations.
As models get better, entire product categories collapse into a prompt.
This is the most important founder lesson in the talk:
Don’t just ask what AI can help you build faster. Ask what AI makes unnecessary.
5. The next obvious companies will come from things that were previously impossible
Karpathy says the exciting part is not just speeding up existing workflows.
It is creating things that could not exist before.
Example: an LLM-generated knowledge base.
Before AI, no ordinary piece of software could take a messy pile of documents, understand them, restructure them, rewrite them, and turn them into a useful wiki.
Now that becomes possible.
This is the category to watch:
information transformation.
Not just search.
Not just chat.
Not just summarization.
But systems that can take unstructured information and recompile it into new forms (see the sketch after this list):
company wikis
investor memos
research maps
product specs
operating manuals
market maps
internal training systems
strategic briefs
personal knowledge bases
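A minimal sketch of that kind of system, assuming the OpenAI Python SDK (any LLM SDK would do; the model name is illustrative, and a real system would chunk documents to fit context limits):

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Gather the messy pile: every markdown file under docs/.
docs = "\n\n---\n\n".join(p.read_text() for p in Path("docs").rglob("*.md"))

# Recompile unstructured notes into a structured wiki page.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You turn messy internal notes into a clean, structured wiki."},
        {"role": "user",
         "content": f"Rewrite these notes as a wiki page with sections, "
                    f"definitions, and open questions:\n\n{docs}"},
    ],
)
Path("wiki.md").write_text(response.choices[0].message.content)
```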
6. Verifiability explains where AI will automate first
Karpathy’s framework on verifiability is one of the most important parts of the talk.
Traditional software automates what you can specify.
AI automates what you can verify.
That is why AI is so strong in domains like:
code
math
tests
security
data workflows
certain research tasks
structured business processes
These domains are easier because the model can be trained with clear feedback.
The model can try something, get a reward, and improve.
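In code, the smallest version of that loop looks like this. A sketch: generate_candidate stands in for an LLM call, and the task name is made up. The point is that the reward signal comes from running tests, not from human opinion:

```python
import subprocess
from pathlib import Path

def generate_candidate(task: str, feedback: str) -> str:
    """Hypothetical LLM call that returns Python source for the task."""
    raise NotImplementedError

def verify(source: str) -> tuple[bool, str]:
    # Verification is cheap and objective here: write the file, run the tests.
    Path("candidate.py").write_text(source)
    result = subprocess.run(["pytest", "tests/"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout

feedback = ""
for _ in range(5):
    candidate = generate_candidate("implement parse_invoice()", feedback)
    ok, feedback = verify(candidate)
    if ok:
        break  # a verified success: exactly the signal a model can learn from
```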
But this also explains why AI feels so weird.
It can refactor a huge codebase and still fail at a simple common-sense question.
That is what Karpathy calls jagged intelligence.
Key insight:
AI capability is not evenly distributed. It spikes in places where labs have data, rewards, and verification loops.
7. The founder playbook: find valuable verifiable environments
The big founder question is:
Where can you create a valuable verification loop that the foundation labs are not focused on yet?
The labs are already going hard after coding, math, reasoning, and general agents.
But there are many high-value business domains where outputs can be verified and models can improve:
finance operations
tax workflows
compliance checks
insurance claims
procurement
contract review
medical admin
customer support QA
cybersecurity
internal reporting
sales operations
logistics
accounting
enterprise data cleaning
The opportunity is not “build another wrapper.”
The opportunity is:
Find a valuable workflow
Break it into tasks
Define what good output means
Create verification loops (see the sketch after this list)
Collect edge cases
Fine-tune or orchestrate models around that domain
Build the system that gets better every time it runs
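To make steps 3 and 4 concrete, here is a hedged sketch of a domain verifier for invoice extraction. The extract_invoice call, field names, and log_edge_case helper are all hypothetical; the point is that “good output” is written down as rules a program can check, so every run produces a signal:

```python
def extract_invoice(pdf_path: str) -> dict:
    """Hypothetical LLM extraction call returning structured fields."""
    raise NotImplementedError

def log_edge_case(source: str, errors: list) -> None:
    """Hypothetical helper: failed checks become training data."""
    print(source, errors)

def verify_invoice(parsed: dict) -> list:
    # "Good output" defined as checkable rules.
    errors = []
    line_total = sum(item["amount"] for item in parsed["line_items"])
    if abs(line_total + parsed["tax"] - parsed["total"]) > 0.01:
        errors.append("line items + tax do not sum to total")
    if parsed["currency"] not in {"USD", "EUR", "GBP"}:
        errors.append(f"unexpected currency: {parsed['currency']}")
    return errors

parsed = extract_invoice("invoice_017.pdf")
errors = verify_invoice(parsed)
if errors:
    # Edge cases get collected, reviewed, and fed back into prompts,
    # fine-tuning data, or an RL reward.
    log_edge_case("invoice_017.pdf", errors)
```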
Key insight:
The next strong AI companies may look less like SaaS products and more like domain-specific reinforcement learning environments.
8. Vibe coding raised the floor. Agentic engineering raises the ceiling
Karpathy makes a clear distinction.
Vibe coding is when anyone can build software by describing what they want.
It raises the floor.
A non-technical person can now create apps, websites, tools, automations, prototypes, and internal systems.
But agentic engineering is different.
Agentic engineering is about using agents while preserving professional quality.
That means:
no security holes
no messy architecture
no broken abstractions
no fragile systems
no random code nobody understands
no “it works somehow” production software
The human is still responsible for quality.
Key insight:
The best engineers will not be the ones who write every line of code. They will be the ones who can direct agents without letting quality collapse.
9. Hiring also has to change
Karpathy makes a very sharp point:
Most companies are still hiring engineers with old-world tests.
Small coding puzzles made sense when the job was writing code line by line.
But if the job is now agentic engineering, the test should change.
A better test might be:
Build a large project with agents. Make it secure. Make it work. Then let other agents try to break it.
That tests the real skill:
problem decomposition
taste
architecture
security thinking
agent management
quality control
debugging
product judgment
Founder implication:
The best AI-native engineers may not look best in traditional coding interviews.
They will look best when asked to ship large, real systems with agentic tools.
10. Human taste, judgment, and understanding become more valuable
Karpathy’s strongest point near the end:
You can outsource thinking.
You cannot outsource understanding.
Agents can write code, generate drafts, call tools, and execute tasks.
But humans still need to decide:
what is worth building
what good looks like
what trade-offs matter
what the system should optimize for
what should be simple
what should be secure
what should not exist at all
This is where taste matters.
Karpathy gives the example of an agent making a strange product decision: matching Stripe payments to Google accounts through email addresses instead of persistent user IDs.
The code may work.
But the design is wrong.
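You can see the difference in a few lines. A hedged sketch with pymongo-style calls; the schema and helper names are illustrative, not from the talk:

```python
# The agent's design: join Stripe customers to users by email address.
# It "works" until someone changes their email on either side, and then
# payments silently detach from accounts.
def find_user_fragile(db, stripe_customer):
    return db.users.find_one({"email": stripe_customer["email"]})

# The sound design: store a persistent ID at link time and join on it.
def link_accounts(db, user_id, stripe_customer_id):
    db.users.update_one({"_id": user_id},
                        {"$set": {"stripe_customer_id": stripe_customer_id}})

def find_user_stable(db, stripe_customer):
    return db.users.find_one({"stripe_customer_id": stripe_customer["id"]})
```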
11. The agent-native world is coming
Karpathy’s picture of the future is simple:
Everything gets rewritten for agents.
Today, most software assumes a human user:
read this doc
click this button
go to settings
configure DNS
copy this API key
paste this value
deploy here
That is painful for agents.
The agent-native version is different:
agents get clear instructions
tools expose machine-readable interfaces (example below)
permissions are explicit
workflows are decomposed into sensors and actuators
docs are written for agents first
agents talk to other agents
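What does a machine-readable interface with explicit permissions look like? A sketch in the OpenAI function-calling style; the tool, fields, and permission keys are illustrative:

```python
# A capability exposed the agent-native way: a schema, not a settings page.
configure_dns = {
    "type": "function",
    "function": {
        "name": "configure_dns",
        "description": "Create or update a DNS record for a domain.",
        "parameters": {
            "type": "object",
            "properties": {
                "domain": {"type": "string"},
                "record_type": {"type": "string", "enum": ["A", "CNAME", "TXT"]},
                "value": {"type": "string"},
            },
            "required": ["domain", "record_type", "value"],
        },
    },
}

# Permissions are explicit and scoped, not implied by a human login session.
permissions = {
    "allowed_tools": ["configure_dns"],
    "requires_human_approval": ["delete_dns_record"],
}
```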
This is where the world is going:
“I’ll have my agent talk to your agent.”
12. The final lesson: intelligence gets cheap, but understanding stays scarce
Karpathy closed with a point about education.
When AI gets better, the temptation is to learn less.
Karpathy argues the opposite.
Understanding becomes the bottleneck.
You still need enough depth to direct the system. You need to know what to ask, what to inspect, what to reject, and what matters.
AI can help you think.
But it cannot replace the need to understand.
Hope this was valuable!
Cheers,
Guillermo