NASA Is Going Back to the Moon — For Real This Time
This was a massive week for space. Three announcements landed within days of each other, and together they represent the most significant shift in American space strategy since the Space Shuttle program ended in 2011.
First, Artemis 2 is preparing for an April 1 launch. Artemis is NASA’s program to return humans to the Moon — the successor to the Apollo missions that landed the first astronauts on the lunar surface in 1969. Artemis 2 will be the first crewed flight of the program: four astronauts will fly around the Moon aboard NASA’s Orion spacecraft. If this launch goes as planned, it will be the first time humans have left low-Earth orbit since Apollo 17 in December 1972 — over 53 years ago. (Live updates via Space.com)
Second, NASA is pivoting away from the Gateway space station and committing to permanent Moon base construction. Gateway was a planned orbital outpost that would circle the Moon and serve as a waypoint for astronauts traveling to the surface — think of it like a rest stop in lunar orbit. NASA has now shifted its strategy toward building infrastructure directly on the Moon instead. This is a fundamental change. Gateway was a compromise. Moon bases mean we’re not just visiting — we’re setting up to stay.
Third, and maybe the most underreported: NASA unveiled Space Reactor-1 “Freedom,” a nuclear-powered spacecraft mission to Mars planned for 2028. Nuclear propulsion dramatically reduces travel time compared to traditional chemical rockets, and this announcement signals that Mars is no longer a distant aspiration — it’s on a two-year timeline. (Full NASA policy announcement)
I think NASA deserves the top spot this week because these three stories together tell a single narrative: the United States is treating space as a destination, not a demonstration. Moon bases, nuclear propulsion, crewed deep-space missions — this is the kind of stuff that sounded like a pitch deck five years ago. Now it’s on a launch schedule.
After GTC: What Stuck
GTC — short for GPU Technology Conference — is NVIDIA’s annual flagship event. NVIDIA is the company that designs and manufactures the specialized chips (called GPUs) that power virtually all artificial intelligence systems today. Their CEO, Jensen Huang, has become one of the most closely watched figures in technology. I covered his keynote in detail last week, so I won’t rehash the product announcements. But a few things from his post-keynote interviews — particularly his conversation with Lex Fridman (Podcast #494) — kept rattling around in my head all week.
The first is his framing of 100 AI agents per engineer. Not a hypothetical. His thesis is that every serious software engineer should be managing a fleet of AI agents — automated software programs that can write code, run tests, and solve problems semi-independently — that work faster than the engineer can review their output. The bottleneck has moved from what the technology can do to how fast a human can keep up with it.
The second is his productivity metric, which I keep coming back to: “If your $500K engineer isn’t burning $250K in tokens, something is wrong.” Tokens are the units that AI systems use to process text — every time you interact with an AI tool, you’re spending tokens, and they cost money. Jensen’s point is that the salary is the floor. The AI spending is the multiplier. The value is in the combination.
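Jensen's arithmetic is easy to sanity-check. Here's a back-of-envelope sketch; the blended per-token price is an illustrative assumption of mine, not a figure from Jensen or any vendor.

```python
# Back-of-envelope: what does $250K/year of token spend actually buy?
# PRICE_PER_M_TOKENS is an assumed blended rate ($ per 1M tokens,
# input + output combined) -- illustrative only, not a vendor quote.
PRICE_PER_M_TOKENS = 10.0
ANNUAL_SPEND = 250_000.0
WORK_DAYS = 250

tokens_per_year = ANNUAL_SPEND / PRICE_PER_M_TOKENS * 1_000_000
tokens_per_day = tokens_per_year / WORK_DAYS

print(f"{tokens_per_year:,.0f} tokens/year")
print(f"{tokens_per_day:,.0f} tokens per workday")
```

At that assumed rate, $250K works out to roughly 100 million tokens every working day. That's not a person chatting with a bot; that's a fleet of agents running continuously, which is exactly the picture Jensen is painting.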
What stayed with me is how naturally Jensen talks about this. He doesn’t frame it as futuristic. He frames it as obvious — the way a factory owner in 1920 would’ve talked about electrification. The question isn’t whether to do it. The question is why you haven’t yet.
The AI Reality Check: Thoma Bravo, McKinsey, and the Automation Question
This is the section where I try to hold two truths at the same time.
Truth one: most companies are failing at AI.
The data is brutal. McKinsey — one of the world’s largest management consulting firms, known for publishing influential research on business and technology trends — found in their latest report that 88% of companies are failing at AI transformation. The MIT NANDA Initiative (a research program studying how organizations adopt AI) pegged GenAI pilot failure even higher, at roughly 95%. S&P Global reported that 42% of companies had abandoned most AI initiatives by mid-2025, up from 17% the year before.
McKinsey’s single biggest finding? Workflow redesign — not the technology itself — is the number one driver of whether AI actually moves the needle on earnings. Companies that fundamentally redesigned how their teams work around AI were 2.8x more likely to report meaningful financial impact. The AI isn’t the bottleneck. The organization around it is.
Truth two: Thoma Bravo thinks the market has it completely wrong.
Thoma Bravo is the largest software-focused private equity firm in the world — $183 billion in assets under management, over 565 software transactions across 40 years. When they share their view on the software industry, the investment world listens. At their annual LP (limited partner) meeting in March, Managing Partner Holden Spaht shared slides that pushed back directly on the market’s blanket AI-disruption thesis — the widespread fear that AI is about to destroy the software industry.
Their argument: public software companies grew their top line at roughly 17% last year. Gross margins run around 74%. And 80–95% of next year’s revenue is already under contract through subscriptions and long-term agreements. Those are not the numbers of a sector in distress. Spaht argued that the revenue slowdown in software between 2022 and 2025 wasn’t AI’s fault — it was rising interest rates and COVID-era overselling catching up.
At the same time, co-founder Orlando Bravo called AI and venture capital “absolutely in a bubble” and said “you just have to wait for it to pop.” So even the most bullish software investor in the world is drawing a line between software as a category (fundamentally strong) and AI as an investment theme (overheated and due for a correction).
So where does that leave us?
Here’s my take. A Harvard Business School study analyzed nearly all U.S. job postings from 2019 to 2025. Automation-prone roles — structured, repetitive cognitive tasks like data entry, basic analysis, and routine customer service — saw postings decline 17% per quarter per firm after companies adopted generative AI tools. But augmentation-friendly roles — analytical, creative, and collaborative work that requires human judgment alongside AI — saw postings increase 22%. A companion survey of 2,357 people across 940 occupations found 94% prefer AI as a collaborative tool rather than a replacement.
Erik Brynjolfsson, a Stanford economist who studies how technology affects productivity, estimated 2025 productivity growth at 2.7% — double the previous decade’s average — but attributed the gains to augmentation, not replacement. His research shows AI automates codified textbook knowledge but struggles with tacit, experiential knowledge — the kind of judgment that comes from doing a job for years.
Steve Wozniak — the co-founder of Apple — captured something real when he told CNN this week: “I don’t use AI much at all. I want something from a human being.”
And 77% of CEOs told KPMG (one of the Big Four accounting and consulting firms) that GenAI was overhyped in the past year — but its true disruptive potential over 5–10 years is under-hyped.
The pattern I keep seeing is what some analysts are calling “AI drafts, humans approve.” You can order DoorDash by voice now. But you still want to see the map. You still want to watch where your driver is. The interface — the dashboard, the visual confirmation, the human checkpoint — isn’t going away. It’s becoming the strategic layer. Autonomous AI agents still complete less than 2.5% of real-world tasks. The full-automation fantasy is just that. The real story is better tools in the hands of people who know how to use them.
Robotics: A Marathon Is Coming
Quick shoutout: ProRL (Professional Robot League) is launching America’s first robot sports league in Boston this April. Founded by David Grilk, with board member Tom Grilk (former CEO of the Boston Athletic Association, which runs the Boston Marathon), the league will debut with humanoid and quadruped robot competitions. (Forbes coverage)
As Harvard-MIT robotics researcher Alexander Wissner-Gross put it: “One of the densest robotics talent corridors in America, home to Boston Dynamics, MIT, Harvard, and hundreds of startups, has never had a public-facing showcase for its own technology. We build the most advanced robots on Earth and then hide them at trade shows.” Meanwhile, Beijing’s second humanoid robot half-marathon is also set for April 19, with teams targeting finish times under one hour — within striking distance of human records. The robotics sports era is real.
Karpathy’s AutoResearch: From Open-Source Tool to Operating Philosophy
Andrej Karpathy is one of the most respected AI researchers in the world. He was a founding member of OpenAI, led Tesla’s Autopilot AI team, and is known for making complex AI concepts accessible. Earlier this month, he open-sourced a tool called autoresearch — a system that lets AI agents autonomously run hundreds of machine learning experiments overnight on a single computer, forming hypotheses, writing code, running tests, analyzing results, and looping without human intervention. (VentureBeat deep dive)
Last week I covered the initial release and some of the jaw-dropping results. David Friedberg, a biotech entrepreneur and co-host of the All-In podcast (a popular technology and investing show), used it to replicate what would have been a seven-year PhD thesis in 30 minutes. Karpathy himself said he hasn’t typed a line of code since December.
But this week, the story for me shifted from what the tool does to how the pattern applies beyond research labs.
I spent time this week applying the autoresearch loop to my own e-commerce business. Here’s what that looks like in practice: I took my headless Shopify storefront — a modern web architecture where the visual front-end of the website is separated from the back-end commerce engine, giving you full control over design and performance — and started building an autonomous experiment loop for product page optimization. The system forms a hypothesis (for example, “moving the price higher on the mobile screen increases add-to-cart rates”), makes a single change, scores it against a rubric using automated testing tools and an AI visual judge, and either keeps or reverts the change. Then it loops again.
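The loop is simple enough to sketch in a few lines of Python. Everything below is a toy stand-in: the layout knobs and scoring function are hypothetical, and in my real setup the score comes from automated testing tools plus an AI visual judge rather than a plain function.

```python
import random

def experiment_loop(state, score, propose, iterations=100, seed=0):
    """Hypothesize -> change one variable -> score -> keep or revert, looped.

    `score` and `propose` are pluggable; in a real pipeline `score` would
    call automated tests and an AI visual judge.
    """
    rng = random.Random(seed)
    best = dict(state)
    best_score = score(best)
    for _ in range(iterations):
        candidate = propose(best, rng)   # change exactly one variable
        s = score(candidate)
        if s > best_score:               # keep the change...
            best, best_score = candidate, s
        # ...otherwise revert (leave `best` untouched) and loop again
    return best, best_score

# Toy demo: maximize a made-up "add-to-cart" score over two layout knobs.
LAYOUT = {"price_position": "bottom", "cta_color": "gray"}

def toy_score(layout):
    # Pretend the ideal layout puts the price up top with a green CTA.
    return (layout["price_position"] == "top") + (layout["cta_color"] == "green")

def toy_propose(layout, rng):
    # Hypothesis = tweak exactly one knob to a random alternative.
    candidate = dict(layout)
    key = rng.choice(list(candidate))
    options = {"price_position": ["top", "middle", "bottom"],
               "cta_color": ["green", "gray", "blue"]}
    candidate[key] = rng.choice(options[key])
    return candidate

best, best_score = experiment_loop(LAYOUT, toy_score, toy_propose)
print(best, best_score)
```

The whole trick is that nothing in `experiment_loop` knows it's optimizing a web page. Swap in a different `propose` and `score` and the same loop tunes a model, a checkout flow, or an email subject line.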
I’m not a machine learning researcher. I run a rug company. But the pattern — hypothesize, change one variable, score objectively, loop — is universal. It works for tuning AI models on GPU clusters. It also works for product page layouts on an online store. The abstraction is the same.
This is what I think people are missing about Karpathy’s contribution. It’s not just a tool. It’s a way of thinking about improvement: make the feedback loop tight enough and fast enough that you can run more experiments in one night than a human team runs in a quarter. Whether you’re training a language model or optimizing a checkout flow, the principle is identical.
Cursor: Why I Keep Coming Back
Cursor is an AI-powered code editor — think of it as a version of the software developers use to write code, but with an AI co-pilot built directly into it. It competes with tools like Claude Code (Anthropic’s command-line coding tool) and GitHub Copilot (Microsoft’s AI code assistant).
I’ve been using both Cursor and Claude Code for the past few weeks, and I want to share a perspective that might resonate with anyone who’s technically curious but not a developer by training.
What I love about Cursor compared to Claude Code is the transparency. I can actually read what’s going on. I can click into the AI’s reasoning. I can see the different stages of its work — what it’s considering, why it made a choice, where it’s heading next. For someone who’s non-technical but has a deep curiosity about how these tools think, that visibility is incredibly valuable.
Claude Code is powerful. It’s fast, it’s agentic (meaning it can take actions independently), and it gets things done. But Cursor gives me something Claude Code doesn’t: the ability to learn while building. I can double-click into any stage, understand the rationale, and come away knowing more than I did before. For a founder who’s building their own infrastructure — not hiring a team to do it for them — that educational layer matters as much as the output.
A Film About the Future (That Has a Real Chance to Get Funded)
I’ll close with something personal. This week I started developing a short film concept for the Future Vision XPRIZE — a $3.5 million competition run by the XPRIZE Foundation (the organization famous for offering large cash prizes to incentivize breakthroughs in space, health, and technology). Backed by Google, ARK Invest, and Range Media Partners, this competition is looking for optimistic science fiction storytelling about humanity earning a better future. The deadline is August 15, 2026, and the deliverables include a 3-minute trailer, a 12-page treatment, and a 2-page synopsis. The grand prize winner receives $2.5 million in production funding plus $100,000 cash. (Variety | TechCrunch | Fortune)
The concept is set in the near future — around 2040 — in a world where AI agents handle routine work and orbital space transit has become normalized for certain professionals. The story follows a small ensemble of people in intimate daily moments: a morning workout with an AI agent dashboard on a smart display, a backyard capsule that launches to a low-orbit transit hub, a “Grand Central Terminal in space” where commuters travel to the Moon, Mars, and orbital workstations.
The tone I’m going for is Pursuit of Happyness meets Her — emotionally specific, relatable, grounded in real human experience, but set against technology that feels inevitable rather than fantastical. The kind of future you’d actually want to live in.
I’m sharing this because I think the best way to shape the future is to tell stories about it. And because a rug company CEO writing a science fiction screenplay feels like exactly the kind of thing that should be possible in 2026.
More on this as it develops. If you have thoughts, I’m all ears.
Sources & Further Reading
NASA & Space:
- NASA — Artemis 2 Crewed Mission Coverage
- Space.com — Artemis 2 Live Launch Updates
- NASASpaceFlight — NASA Moon Base Plans, Gateway Pivot
- NASASpaceFlight — Space Reactor-1 Freedom Mission to Mars
- NASA — National Space Policy Initiatives
NVIDIA & GTC:
- Lex Fridman Podcast #494 — Jensen Huang: NVIDIA & the AI Revolution (YouTube + Transcript)
- eWeek — GTC 2026 Keynote Recap
- CNBC — Jensen Huang Sees $1 Trillion in Orders
AI Transformation & Thoma Bravo:
- Augment — Thoma Bravo LP Meeting Slides Analysis
- Semafor — Orlando Bravo on the AI Bubble
- CNBC — Orlando Bravo: Some Software Names Deserve a Valuation Cut
Robotics:
- eWeek — America’s First Robot Sports League Debuts in Boston
- Forbes — PRORL: Robot Athletes Perform Modern Marathons
- Yicai Global — Beijing’s Second Robot Half-Marathon
XPRIZE Future Vision:
- Future Vision XPRIZE — Official Site
- Variety — XPrize Launches Sci-Fi Film Competition
- TechCrunch — Peter Diamandis Launches Contest to Manifest a New Star Trek
- Fortune — Peter Diamandis Offers $3.5M for Films That Portray AI as the Hero