Issue #1 — Week of March 15–21, 2026
This might have been the densest week in AI and robotics I can remember. NVIDIA’s GTC conference dropped a two-hour keynote that rewrote the hardware roadmap. The All-In podcast went live from Austin with Jensen Huang and Michael Dell back to back. Andrej Karpathy sat down with Sarah Guo on No Priors and basically said he hasn’t written a line of code since December. Brett Adcock gave a full Figure AI headquarters tour on Peter Diamandis’s Moonshots podcast. Travis Kalanick came out of seven years of stealth. Every conversation I had this week — with friends, with suppliers, even with my own team — touched AI in some way.
There’s a real split happening right now. People who are excited and people who are uneasy. Both are right. I’m not writing this to add to the hype. I’m writing it because I think we need more calm, grounded voices talking about what’s actually happening — from people who are using these tools in real businesses, not just commenting from the sidelines. Here’s what I consumed this week, what stood out, and what I think it means.
The December Flip
Karpathy’s No Priors episode was the one that stopped me mid-run this week. He told Sarah Guo that sometime in December, something flipped. He went from writing 80% of his code by hand and delegating 20% to agents — to the complete opposite. He said he hasn’t typed a line of code since. Not because he doesn’t want to. Because the agents are genuinely better and faster at it now.
He released an open-source project called autoresearch — a 630-line Python script that autonomously runs machine learning experiments. He let it loose overnight on a model he’d hand-tuned over two decades, and it found optimizations he’d missed. Over two days, it ran 700 experiments and discovered 20 improvements. Shopify’s CEO tried it and reported a 19% performance gain from 37 overnight experiments.
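For readers wondering what a tool like that even looks like mechanically, the core loop is simple enough to sketch: propose a tweak, run an experiment, keep whatever wins, repeat overnight. To be clear, this is not Karpathy's script. The objective function and mutation rules below are toy stand-ins I made up; the real tool launches actual training runs and scores real metrics.

```python
import random

def evaluate(config):
    """Stand-in for a training run: score a hyperparameter config.
    In the real tool, this would launch an experiment and return a metric."""
    lr, width = config["lr"], config["width"]
    # Toy objective with an optimum near lr=0.01, width=256
    return -((lr - 0.01) ** 2) * 1e4 - ((width - 256) / 256) ** 2

def mutate(config, rng):
    """Propose a nearby config, the way an agent might tweak one knob."""
    new = dict(config)
    if rng.random() < 0.5:
        new["lr"] = max(1e-5, new["lr"] * rng.choice([0.5, 0.8, 1.25, 2.0]))
    else:
        new["width"] = max(16, int(new["width"] * rng.choice([0.5, 1.0, 2.0])))
    return new

def autoresearch_loop(base_config, n_experiments=700, seed=0):
    """Overnight-style search: keep any config that beats the current best."""
    rng = random.Random(seed)
    best, best_score = base_config, evaluate(base_config)
    improvements = []
    for i in range(n_experiments):
        candidate = mutate(best, rng)
        score = evaluate(candidate)
        if score > best_score:
            improvements.append((i, candidate, score))
            best, best_score = candidate, score
    return best, best_score, improvements

best, score, found = autoresearch_loop({"lr": 0.1, "width": 64})
print(f"{len(found)} improvements found; best config: {best}")
```

The point of the sketch is the shape, not the search strategy: the human sets the objective and walks away, and the machine spends the tokens.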
What hit me hardest was his framing. He didn’t call it “exciting” or “revolutionary.” He called it “AI psychosis” — this compulsive feeling that every minute you’re not running agents, you’re wasting tokens. You’re falling behind. He said his new productivity metric isn’t compute or flops — it’s token throughput. How many tokens are you commanding? That reframe is everything.
This resonated hard because I’m living a version of it. I run a rug importing company — Well Woven — not a software company. And this week, I rebuilt our direct-to-consumer website into a headless architecture using Claude. Two days. Limited hours — not even full work days. That’s a project that would have been a $50K agency engagement six months ago. I’m not an engineer by training. I grew up watching my dad run a computer networking business, and I’ve always been the tech person in a room full of rug people. But what I did this week with an AI coding agent would have been impossible a year ago.
Meanwhile, Garry Tan — the CEO of Y Combinator — released GStack, an open-source framework that turns Claude Code into a virtual engineering team. 15 specialist roles. CEO reviewer, staff engineer, QA lead, release manager — all prompts. He claims he wrote 600,000 lines of production code in 60 days while running YC. TechCrunch covered the polarized reaction — some people called it transformative, others called it “a bunch of prompts.” But 26,000 GitHub stars don’t lie. The tools crossed a threshold and the people paying attention know it.
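The "roles as prompts" idea is worth seeing concretely, because it demystifies the "bunch of prompts" criticism. The snippet below is my own toy illustration, not GStack's actual prompts or wiring: each role is just a system prompt, and a review pipeline is just an ordered list of passes over the same task.

```python
# Toy illustration of the "specialist roles as prompts" idea behind GStack.
# These role definitions are my own stand-ins, not Garry Tan's real prompts;
# the actual framework wires roles like these into Claude Code sessions.

ROLES = {
    "staff_engineer": "You design the implementation. Favor small, reviewable diffs.",
    "qa_lead": "You write tests first and hunt for edge cases in every change.",
    "ceo_reviewer": "You judge whether the change serves users, not engineers.",
    "release_manager": "You gate merges: changelog, version bump, rollback plan.",
}

def build_pipeline(task, role_order):
    """Chain roles into a message list an agent runner could execute in order."""
    messages = []
    for role in role_order:
        messages.append({
            "role_name": role,
            "system": ROLES[role],
            "user": f"Task: {task}\nRespond in your role, then hand off.",
        })
    return messages

pipeline = build_pipeline(
    "Add rate limiting to the checkout API",
    ["staff_engineer", "qa_lead", "ceo_reviewer", "release_manager"],
)
print(f"{len(pipeline)} role passes queued")
```

Whether you call that a framework or "a bunch of prompts" is mostly semantics. The value is in the discipline: the same change gets four different kinds of scrutiny before it ships.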
Jensen’s Vision: From Cloud to Desktop to Factory Floor
GTC 2026 ran March 16–19 in San Jose with 30,000 people in attendance. Jensen Huang delivered a two-hour-plus keynote — and according to multiple sources, he had no scripted text on his teleprompter, only slides. When a transition stumbled, he joked: “This is what happens when you don’t practice.” Whether that’s showmanship or just the pace at which these CEOs have to operate, it’s telling. The man runs a $3 trillion company and wings a two-hour technical keynote.
The headline announcement was Vera Rubin — NVIDIA’s next-generation AI platform delivering 3.6 exaflops of compute. But what caught my attention more was what he’s doing at the edge. Two new products:
- DGX Station GB300 — a desktop supercomputer with 748 GB of memory that runs models up to one trillion parameters. Locally. The first unit went to Karpathy’s house.
- DGX Spark — a $3,999 personal AI computer that runs models of 120B+ parameters. Attendees could buy them on-site.
Jensen’s thesis is clear: AI has to move to the edge. It can’t all live in the cloud. Cars, robots, telecom base stations, your desk — AI needs to run locally, and NVIDIA is building the hardware to make that happen. He projected at least $1 trillion in high-confidence demand through 2027.
But the moment of the week came from David Friedberg on the All-In podcast’s live GTC episode. Friedberg runs Ohalo, a genomics and agriculture company. He described taking genomic data on a Friday, running Karpathy’s autoresearch tool on it, and getting results that would have been a celebrated PhD thesis — seven years of work — replicated in 30 minutes on a desktop computer. He said it would have been published in the journal Science. His team’s faces went blank watching it.
Jensen’s response: “We are literally near the ChatGPT moment of digital biology.”
And then Jensen dropped his own benchmark for the AI era: if your $500,000 engineer isn’t consuming at least $250,000 worth of AI tokens, something is wrong. That’s the new productivity equation.
On the doomer narrative, Jensen was blunt in his Stratechery interview: he was surprised by how deeply the fear-based messaging had penetrated Washington. He argued that America’s real risk isn’t the technology — it’s fear and anger preventing adoption. And he’s right. The radiologist example he gave: more AI in radiology created more demand for radiologists, not less. The technology expands what’s possible.
Dell: The Quiet Giant
The All-In podcast’s SXSW live show from Austin also featured Michael Dell, and his story is one that doesn’t get enough attention. Here’s a guy who started in a UT Austin dorm room in the 1980s and has been present for every single wave of the technology revolution — PCs, networking, internet, cloud, mobile, and now AI. Dell Technologies is now generating roughly $140 billion in annual revenue, with their AI infrastructure business scaling from about $2 billion toward a $50 billion target and a $43 billion AI server backlog.
Dell’s shares have roughly doubled from their 2024 lows, with the AI infrastructure business as the engine. Impressive for a company most people still associate with the beige box under their desk in 2005.
Dell said something that stuck with me: the barrier to AI adoption isn’t technology — it’s culture and leadership. That’s exactly what I see in my own industry. The tools exist. The cost is coming down. The bottleneck is people willing to learn, experiment, and change how they work.
Then Brad Gerstner — the Altimeter Capital founder, the guy with the red glasses who’s a regular on All-In — came on stage with Dell to talk about the Invest America Act. It creates $1,000 tax-deferred investment accounts for every child born between 2025 and 2028. Michael and Susan Dell pledged $6.25 billion to add $250 to accounts for 25 million children in lower-income zip codes. Gerstner framed it as “a 401k from birth.” American ingenuity meeting American generosity. I appreciate that — technology should lift people up, not just create wealth at the top.
The Agents Are Here
Three competing visions of the “AI coworker” shipped this month, and the differences tell you a lot about where this is going.
Perplexity Computer is cloud-first. It orchestrates 19+ AI models — routing to Claude for reasoning, Gemini for research, Grok for speed. CEO Aravind Srinivas said it best: “A traditional operating system takes instructions; an AI operating system takes objectives.” Their Personal Computer product runs 24/7 on your Mac mini and bridges local files with cloud AI. The enterprise version reportedly completed 3.25 years of work in four weeks during internal testing.
Claude Dispatch (from Anthropic) is local-first. It launched March 17 as part of Claude Cowork — you scan a QR code, send instructions from your phone, and come back to finished work on your desktop. Everything stays on your machine in a sandboxed environment. It works with 38+ connectors — Notion, Gmail, Slack, GitHub, Figma. Early reports say about 50% success rate on complex tasks, which is honest. It’s fast on simple stuff, unreliable on multi-step workflows. But the direction is clear.
GStack takes a different approach entirely — it’s not a product, it’s a methodology. Garry Tan’s open-source prompt system that assigns cognitive “gears” to Claude Code so it thinks like different team members depending on the task.
I’ve been running my own version of this for months. I built an agent system called OpenClaw that runs on a Mac Studio in my office. It manages Amazon advertising, audits FedEx billing, monitors inventory, and handles operational communications through WhatsApp and Slack. The concept is the same across all of these: connect the brain (AI models) to the hands and legs (APIs, tools, your desktop, your browser). Let the system take objectives, not instructions. When I built OpenClaw in December, I was early. Now the major companies are shipping their own versions of the same idea.
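Stripped of the real integrations, the pattern underneath all of these systems is a short loop. The skeleton below is a mocked sketch, not OpenClaw's actual code: the planner and tools here are stand-ins, where production would use an LLM call for planning and real APIs for execution.

```python
# Skeleton of an "objectives, not instructions" agent loop. The tool registry
# and planner are mocked stand-ins -- in a production system the planner is a
# model call and the tools hit real services (ad platforms, billing, Slack).

def audit_shipping(ctx):
    ctx["overcharges"] = [12.40, 3.15]   # pretend result of a billing audit
    return "found 2 billing discrepancies"

def notify_team(ctx):
    msg = f"Audit done: {len(ctx.get('overcharges', []))} issues"
    ctx["sent"] = msg                    # pretend Slack/WhatsApp send
    return msg

TOOLS = {"audit_shipping": audit_shipping, "notify_team": notify_team}

def plan(objective):
    """Stand-in planner: map an objective to tool steps.
    In a real system, a model turns the objective into this plan."""
    if "billing" in objective:
        return ["audit_shipping", "notify_team"]
    return []

def run(objective):
    """The whole pattern: the brain makes a plan, the hands execute it."""
    ctx, log = {}, []
    for step in plan(objective):
        log.append((step, TOOLS[step](ctx)))
    return ctx, log

ctx, log = run("check this week's billing for errors")
print(log)
```

Notice what the human supplies: one sentence of intent. Everything after that is the system deciding which hands to use, which is exactly the "objectives, not instructions" shift Srinivas described.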
The bottleneck isn’t the AI anymore. It’s the human trying to keep up.
AI Fatigue Is Real
Which brings me to the other side of this. A Harvard Business Review study published this month surveyed 1,488 U.S. workers and introduced a term I think we’ll be hearing a lot: “AI brain fry.” Workers managing multiple AI agents reported 14% more mental effort, 12% more fatigue, and 19% greater information overload. CNN Business picked it up. One engineering manager described it: “I had a dozen browser tabs open in my head, all fighting for attention.”
Karpathy calls it “AI psychosis.” I’ve felt it. You’re running three agent sessions, code is generating faster than you can review it, and you start to wonder if you’re the bottleneck or the quality control — and whether those are the same thing.
Here’s the nuance though: the study found that when AI is used to reduce routine work, it actually lowers burnout. The problem is specifically the overhead of supervising multiple autonomous agents. The system runs faster than the human loop.
I think about this like email in 2005. Remember when email was the new thing that was going to ruin everything? The people who mastered it pulled ahead. The people who didn’t fell behind. And everyone in the middle was overwhelmed. We’re in the same place now, just at a much faster clock speed. The answer isn’t to stop using the tools — it’s to build better guardrails, better workflows, and have honest conversations about what these systems actually do and what they can’t.
Robots Are Learning to Walk
Now let’s talk about the physical side — because this week, the AI and robotics stories became inseparable.
Jensen dedicated a huge chunk of GTC to what he calls “Physical AI.” NVIDIA announced that the four largest industrial robot manufacturers — ABB, FANUC, YASKAWA, and KUKA — are all integrating NVIDIA’s simulation and training stack. 110 robots were on the GTC show floor. Jensen’s thesis: every industrial company will become a robotics company. NVIDIA’s play is to be the platform they all build on.
But the episode that really got me this week was Brett Adcock on Peter Diamandis’s Moonshots podcast. Adcock is the founder of Figure AI, and he gave a full HQ tour while explaining their Helix 02 AI system. The headline: they deleted 109,504 lines of C++ code — all the hand-engineered locomotion and control — and replaced the entire stack with neural networks. End to end. The robot’s eyes, hands, feet, and legs all run on inference simultaneously. No more traditional code telling the robot how to walk. The network figures it out.
The demo was wild. 61 separate manipulation actions in a continuous kitchen task — loading and unloading a dishwasher with no resets and no human intervention. Then a March 9 demo showed the same system tidying a living room it had never been in. The fleet learning moat is what matters here: once one robot learns a task, every robot in the network knows it.
Adcock’s not naive about it, though. He said: “Until I feel safe enough to have it there with free rein around all my kids, it’s not ready for everyone.” That’s the kind of honesty this space needs.
Figure isn’t alone. Sunday Robotics showed up on Moonshots with a completely different philosophy: their Memo robot is deliberately not humanoid. Wheels, not legs. A friendly form factor with colored caps instead of the warrior aesthetic most humanoids are going for. Their argument: trustworthy matters more than human-shaped. And they backed it up — the first robot to fold socks, one that handles wine glasses (transparent, reflective, fragile objects most robots can’t touch), 33 unique interactions clearing a single table. They raised $165M at a $1.15B valuation.
And then there was Travis Kalanick. The Uber founder came out of seven years of stealth at the All-In Summit in Austin to reveal ATOMS — a company working at the intersection of physical infrastructure and AI. His framework is brilliant: Manufacturing = CPU. Real estate = Storage. Logistics = Network. Three verticals: cloud kitchens (what he calls a “food computer”), mining automation (acquiring Pronto), and a wheel-base platform for specialized non-humanoid robots. 30 countries. Thousands of employees who couldn’t even put the company name on their LinkedIn for seven years. His thesis: the ChatGPT moment for physical AI is imminent.
Meanwhile in China, Unitree Robotics filed for a $610 million IPO on the Shanghai STAR Market. Revenue up 335% year-over-year. They shipped 5,500 humanoid robots in 2025. Their CEO predicted humanoid robots will run faster than Usain Bolt by mid-2026. The US-China robotics race is real and accelerating.
What I’m Watching
A few threads I’m keeping an eye on going forward.
Chelsea Finn at Physical Intelligence is building what might be the most important project in robotics right now — a foundation model for robots. Think of it like the difference between a calculator (does one thing well) and a smartphone (does everything). Her team trained a 3-billion-parameter model across 100+ unique rooms and achieved 80% success on tasks in homes it had never seen. The key insight: a general-purpose robot that can do many things actually outperforms a specialist robot built for one task. Same lesson as LLMs — general beats narrow.
The governance question keeps me up at night. We lived through the social media era and watched algorithms reshape how people think, vote, and interact — without adequate guardrails. If you experienced what that did to public discourse, to elections, to teenagers’ mental health, you know we need to do better this time. Big company bureaucracy and government bureaucracy are both too slow to manage this on their own. Maybe it’s the models themselves that can help synthesize logic and separate signal from noise. Maybe it’s industry leaders getting together proactively rather than waiting for regulation. I don’t have the answer. But I know politicizing this — from either side — is the worst possible approach. This is about leadership, literacy, and staying ahead of the technology rather than being consumed by it.
An NBC News poll found 57% of Americans say AI risks outweigh the benefits. Only 26% hold positive views. But inside the companies building this stuff, conviction has never been higher. That gap — between public anxiety and industry certainty — is the defining tension of this moment. The optimism and the caution need to coexist.
See You Next Sunday
We’ve lived this our whole lives. I watched the generation before me get disrupted by the people who embraced computers. Even something as simple as using email instead of face-to-face meetings was a real differentiator when I was first getting into business. Each wave creates a new gap between the people who adapt and the people who wait.
This week convinced me that the gap between people watching AI and people using AI is becoming permanent. The tools are here. The infrastructure is being built. The robots are literally walking. The question isn’t whether this is real — it’s whether you’re going to engage with it or wait until it’s too late.
I’m writing this from the perspective of someone running a physical goods business who’s choosing to engage. Not a Silicon Valley insider. Not an AI researcher. A rug company CEO with a Mac Studio, a curious mind, and a conviction that we’re running through a pivotal moment that’s as significant as the PC revolution my dad’s generation lived through.
I’ll be back next Sunday.
— Adem
Sources & What I Watched This Week
Podcasts & Videos:
- Andrej Karpathy on No Priors — “Code Agents, AutoResearch, and the Loopy Era of AI”
- Jensen Huang LIVE on All-In Podcast — GTC 2026
- Brett Adcock on Moonshots #229 — “Humanoid Robots, Autonomous Manufacturing, $50T Market”
- Sunday Robotics — Memo Home Robot Demo (Moonshots)
- Travis Kalanick — ATOMS Out of Stealth (All-In Summit, Austin)
- Chelsea Finn — Developing General-Purpose Robots (Physical Intelligence)
Key Articles & Announcements:
- NVIDIA GTC 2026 Official Announcement
- VentureBeat — Karpathy’s AutoResearch Open Source Release
- TechCrunch — Why Garry Tan’s GStack Polarized the Tech World
- Figure AI — Introducing Helix 02: Full-Body Autonomy
- Perplexity — Introducing Perplexity Computer
- Anthropic Launches Claude Dispatch
- Harvard Business Review — “AI Brain Fry” Study
- Fortune — Dell’s $6.25B Invest America Pledge
- CNBC — Unitree Plans $610M Shanghai IPO
- NVIDIA — Physical AI Goes Real-World
- Stratechery — Interview with Jensen Huang on Accelerated Computing
- NBC News — Majority of Voters Say AI Risks Outweigh Benefits
- GitHub — GStack by Garry Tan
This is a weekly roundup of AI and robotics news as seen through the lens of a founder building at the intersection of physical commerce and technology. Subscribe to get it every Sunday.