This week the Artemis II crew spoke publicly for the first time since coming home from the Moon. If you haven't watched it yet, I'd really encourage you to. Four people who just traveled 695,000 miles and went farther from Earth than any human before them, standing together, hugging, high-fiving through the entire debrief. You could feel it. These people were changed.
What struck me most was this: they were inspired going up. But they were more inspired coming back down. Christina Koch said she looked out the window and saw Earth as this tiny thing surrounded by blackness and said “Planet Earth, you are a crew.” Jeremy Hansen told the crowd “When you look up here, you’re not looking at us. We are a mirror reflecting you.” Reid Wiseman, who was moved to tears talking to his daughters from 200,000 miles away, said being human is a special thing.
These are people who used the most advanced technology on the planet. And the thing that moved them most was each other.
I keep thinking about that as I write this newsletter about AI, about valuations, about code review tools and marketing playbooks. The technology matters. But the human part matters more. The people closest to the frontier seem to understand that better than anyone.
This was also the week the All-In Podcast dropped an episode about how investors are valuing AI companies right now. The numbers were so wild I kept rewinding. Anthropic going from zero to a $30 billion revenue run rate in about two years. The equivalent of Databricks plus Palantir combined, added in a single month. A major enterprise running $100 million in AI consumption against $5 billion in operating expenses and saying they're near peak employment.
I caught the episode Saturday night in between the new season of Jon Hamm's Your Friends & Neighbors, which is excellent by the way if you haven't seen it. It's the number one show on Apple TV right now. Made some popcorn, started relaxing, and somewhere between episodes I fell into a rabbit hole. Sunday morning I kept going. The Dario Amodei interview on the Dwarkesh Podcast. Sundar Pichai with John Collison and Elad Gil. By the time I sat down to write this, I realized everything I watched this week was pointing at the same thing: we're in a moment that's moving way faster than most people realize, and it's worth paying attention to.
Here's the roundup.
TL;DR
- How are AI companies being valued right now? The All-In Podcast broke down Anthropic’s revenue ramp and what “the TAM of intelligence” means. I try to make this accessible whether you’re an investor or you’ve never looked at a balance sheet. Also recommended: the Dario Amodei x Dwarkesh Patel episode for the bigger picture.
- What’s actually constraining AI? Google’s CEO Sundar Pichai on security risks, physical infrastructure bottlenecks, and why the AI race is now about building things fast enough.
- What I’ve been building with this week. A quick look at the AI coding tools I’m using, Cursor and Claude, and why the code review layer matters as much as the code generation layer.
- For my e-commerce friends: Marketing Operators Ep. 106 on team structure and organic distribution. If you run a brand, this one's worth your time.
- Bonus: Robots are racing this weekend. In Boston and Beijing. Same weekend as the 130th Boston Marathon.
How AI Companies Are Being Valued
The All-In episode was E225 with Chamath, Sacks, Jason, and guest Brad Gerstner from Altimeter Capital. I want to walk through what they discussed because I think it matters. Not just for people in tech or finance, but for anyone trying to understand what's happening in the economy right now. I'll explain the jargon as I go.
Anthropic’s Revenue Trajectory
Let me just lay out the numbers:
- Early 2023: Revenue turned on
- End of 2024: $1 billion annualized run rate
- Mid 2025: $4 billion
- End of 2025: $9 billion
- April 2026: ~$30 billion run rate
That's $1B to $30B in about two years. Brad pointed out that in March 2026 alone, Anthropic added roughly $10 to $11 billion to its annualized run rate. That's the equivalent of Databricks and Palantir combined, added in a single month. He projects they could exit the year somewhere between $80 and $100 billion.
They have 2,500 employees. Google crossed that revenue level with 120,000 people.
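"Run rate" is jargon worth pausing on: it takes a recent period's revenue and annualizes it. Here's a minimal sketch of the arithmetic; the monthly figure is hypothetical, chosen to land near the $30B number above:

```python
def annualized_run_rate(monthly_revenue: float) -> float:
    """Annualize the latest month's revenue to get a 'run rate'."""
    return monthly_revenue * 12

# A hypothetical company booking $2.5B in its most recent month
# would be quoted at a ~$30B annualized run rate.
print(annualized_run_rate(2.5e9))  # 30000000000.0
```

So "a $30 billion run rate" is a claim about the latest month extrapolated forward, not $30 billion already collected.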
Full disclosure: Anthropic makes Claude, which is the AI I use to write code, run parts of my business, and yes, help me draft this newsletter. I'm a customer, not a neutral observer. But the numbers speak for themselves regardless of what product you use.
A Quick Explainer: How Do Investors Value These Companies?
Chamath laid out something I found really useful. A hierarchy of metrics that investors use depending on how mature a company is:
Free Cash Flow → EBITDA → Margins → Net Revenue → Gross Revenue → Bookings
Think of it as a ladder. At the top is free cash flow, a fully mature business generating real cash. At the bottom is bookings, basically a promise of future revenue. As a company matures, it climbs the ladder.
Where are the AI frontier companies right now? Somewhere between gross revenue and net revenue. That means we’re still far from discussing whether these companies are profitable. The market is valuing them on trajectory. How fast the line is going up.
Why Gross Revenue vs. Net Revenue Matters
This sounds like accounting jargon but it actually matters if you’re trying to understand any headline about these companies.
Gross revenue is the total amount billed to customers before any deductions. Net revenue is what the company actually keeps after paying partners their cut.
Anthropic reports gross revenue. OpenAI reports net. The difference: when you buy Claude through Amazon Web Services or Google Cloud, those platforms take a commission, typically 5 to 10%. Anthropic's headline number includes that cut. OpenAI's doesn't.
Brad's view was that the 5 to 10% difference is noise compared to the growth story. Chamath's point was more cautious: you can't do clean comparisons between two companies that report differently. Both are right. But if you see a headline comparing Anthropic and OpenAI revenue side by side, just know the numbers aren't apples to apples yet.
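To make the gross-versus-net gap concrete, here's a small sketch. The revenue figure and commission rates are illustrative, not either company's actual numbers:

```python
def net_revenue(gross_usd: int, platform_cut_pct: int) -> int:
    """Revenue kept after a cloud marketplace takes its commission."""
    return gross_usd * (100 - platform_cut_pct) // 100

gross = 30_000_000_000  # hypothetical headline "gross revenue"
print(net_revenue(gross, 5))   # 28500000000 with a 5% cut
print(net_revenue(gross, 10))  # 27000000000 with a 10% cut
```

Same underlying sales, but the headline number differs by billions depending on which line a company chooses to report.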
The TAM of Intelligence
This was the part of the conversation that stuck with me most, and it's the part I think matters most for people outside of tech and finance.
TAM stands for Total Addressable Market. How big is the market these companies could eventually serve? For most tech companies, the TAM is basically IT budgets. You’re selling software to replace other software.
AI is fundamentally different. Brad put it bluntly: the TAM for intelligence is radically different than anything we’ve seen before.
The market for AI isn't IT budgets. It's intelligence itself. Labor augmentation. Labor replacement. Every task that currently requires a person to think, analyze, write, code, decide, or create.
Here's the data point that landed hardest for me. Brad described a major enterprise running a $100 million annual AI consumption budget against $5 billion in operating expenses. This company believes it's approaching peak employment, meaning they don't expect to hire significantly more people, while their intelligence consumption keeps growing.
I keep coming back to what Jensen Huang said at GTC, which we covered in the last newsletter: every $500K engineer should be consuming $250K in AI tokens, and we should expect 100 AI agents per human worker. That's not a prediction anymore. That's how large enterprises are already planning.
Coding Is the First Domino
Sacks argued on All-In that Anthropic already has over 50% market share in coding tokens. More code is being written with Claude than with any other AI model. There's a debate about whether that early lead compounds into a permanent advantage, but the underlying point is clear: the majority of developers building products today are using some kind of AI coding assistant. This is already happening.
Dario Amodei, Anthropic’s CEO, laid out the progression in his recent conversation with Dwarkesh Patel. He described it as a spectrum:
90% of code written by AI → 100% of code → 90% of end-to-end software engineering tasks → 100% of SWE tasks → 90% less demand for software engineers
We’re somewhere in the first stage right now. Dario says we’re proceeding through them “super fast” but each stage is “worlds apart” from the next. Writing code is not the same as engineering a system. That distinction matters.
What's the real productivity impact today? Dario puts it at roughly a 15 to 20% total-factor speedup, up from about 5% just six months ago, and accelerating. Inside Anthropic, where he says there's "zero time for bullshit," the gains are unambiguous. But he's the first to acknowledge there's a gap between what the tools can do and what the broader economy has absorbed. Legal, compliance, procurement, change management. All of that creates lag between capability and adoption.
What This Actually Means
I want to be careful here because I think this topic deserves honesty without panic.
Yes, the shift is massive. Software is getting radically cheaper to build. But here's what I think gets lost in the scary headlines: this same technology is giving small businesses access to capability they never had before.
I run a 12-person rug company. We use AI to manage advertising, audit shipping invoices, analyze supplier pricing in multiple currencies, build a headless e-commerce site, and run operational agents that handle tasks I used to do manually at midnight. Five years ago, that kind of infrastructure was only available to companies with 50-person engineering teams.
That's not a dystopia. That's access. That's a rug company in Easton, Pennsylvania, competing with capabilities that used to require being a tech company.
We Are Near the End of the Exponential
This is the bigger picture. The part I find both exhilarating and humbling.
In his April 2026 interview with Dwarkesh Patel, Dario said something that stopped me. He said it's absolutely wild that people, both inside the bubble and outside, are talking about the same tired political issues "when we are near the end of the exponential."
He puts 90% probability on reaching what he calls "a country of geniuses in a data center" by 2035: AI systems matching or exceeding human expert performance across most cognitive tasks. His personal hunch is much sooner. One to three years. He's not talking about better autocomplete. He's talking about systems that could compress a century of biomedical progress into a decade, help cure diseases, and fundamentally change what's possible.
Will it happen that fast? I honestly don't know. But the signals are hard to ignore. The revenue trajectory tells you that enterprises are betting real money on this, not kicking tires. And the pace of improvement in what these tools can do, even just in the six months I've been building with them seriously, has been startling.
What I find most compelling about Dario's framing is that he rejects both extremes. He doesn't think it's going to be an overnight singularity. He also doesn't think it's overhyped. His prediction is that the AI industry will probably look like cloud computing: three to four differentiated players with healthy margins, each good at different things. And the economic impact will be "much faster than any previous technology, but not infinitely fast." He thinks we'll see trillions in revenue before 2030.
For me, the right posture is curiosity, not fear. I'd rather be paying attention and learning alongside this than be caught off guard. That's honestly why I write this newsletter. To think out loud and invite you along.
Google’s CEO on What’s Actually Constraining AI
I also caught the Sundar Pichai conversation with John Collison and Elad Gil this week. If the All-In episode tells you how investors are valuing AI, this one tells you what's constraining it. Different angle, equally important.
A few things jumped out.
Google Invented the Foundation
Easy to forget: the Transformer architecture, the technical breakthrough that powers ChatGPT, Claude, Gemini, and basically every AI product you've used, was invented at Google and published in 2017. There's a narrative that Google invented this thing and then let everyone else run with it. Sundar pushes back. He points out that Google deployed Transformers in Search immediately via BERT, then MUM, and even had LaMDA, essentially a proto-ChatGPT, running internally before OpenAI launched theirs. They just had a higher bar for what they considered acceptable product quality.
It's a reminder that the company with the research lead doesn't always get the product lead. Three people prototyping in a garage will always create surprises. That's consumer internet. But it's also worth appreciating that everything we're discussing in this newsletter is built on a foundation Google's researchers created.
Speed as a Proxy for Good Engineering
One line from Sundar that I keep thinking about: “I’ve always internalized speed. It almost always reflects the technical underpinnings of the product having been done well.”
He revealed that Google Search sub-teams have latency budgets measured in milliseconds. If you ship something that saves 3ms, you keep 1.5ms for your budget and pass 1.5ms to the user. Despite adding massive AI functionality, Search latency has actually improved 30% over the last five years.
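That budget-split rule is simple enough to write down. Here's a sketch of the mechanic as described; the 50/50 split comes from the interview, but the function name and shape are mine:

```python
def split_latency_win(savings_ms: float, team_share: float = 0.5):
    """Split a latency improvement: part is banked back into the
    team's budget, the rest is passed through as faster responses
    for users."""
    banked = savings_ms * team_share
    passed_to_user = savings_ms - banked
    return banked, passed_to_user

# Ship a change that saves 3 ms: keep 1.5 ms, give users 1.5 ms.
print(split_latency_win(3.0))  # (1.5, 1.5)
```

The elegant part of the design is the incentive: teams that find speedups get budget headroom to spend on new features, so performance work funds itself.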
Have most of us felt that improvement? Honestly, probably not. Many of us have been doing our searching inside AI tools instead. But the principle resonates. Speed isn't just a feature. It's a signal of engineering quality. That's something I try to keep in mind when building our own site.
Security: The Constraint Nobody’s Talking About
This was the most striking moment in the conversation. Sundar warned that AI models are “definitely really going to break pretty much all software out there.” He said the black market price of zero-day exploits is dropping because AI is increasing the supply of discovered vulnerabilities faster than they can be patched.
Think about what that means. Every piece of software running your business, your bank, your hospital. AI can now find weaknesses faster than humans can fix them. Anthropic recently demonstrated this with their Mythos model, which can autonomously discover software vulnerabilities. Useful for defense, but the offensive capability exists too.
This is the catch-up we're going to have to do. Not on one particular application. On entire systems of software. The security infrastructure built for the pre-AI era isn't designed for a world where vulnerability discovery is automated.
The Long Bets That Paid Off
What I find interesting about Google’s position is the pattern of taking long, difficult bets that look questionable at the time and then becoming essential infrastructure. Search. Gmail. Android. YouTube. Chrome. Google Maps. Cloud. TPUs. Waymo.
Sundar’s current moonshots follow the same pattern. Data centers in space, started as a tiny team with a small budget. Gemini Robotics partnering with Boston Dynamics. Isomorphic Labs doing AI drug discovery. Wing drone delivery targeting 40 million Americans. He says they start small even for big ideas, which is how you avoid betting the company while still staying at the frontier.
Google's CapEx for 2026 is $175 to $185 billion. But Sundar says they literally couldn't spend $400 billion even if they wanted to. The constraints aren't financial. They're physical: memory chip manufacturing, power grid permitting, construction speed, and skilled labor. He made an interesting admission: "You're in awe of the pace in China. We need to learn to build things much faster."
That's a revealing constraint. The AI race isn't just about algorithms anymore. It's about building physical infrastructure fast enough to run the algorithms. And right now, nobody has enough.
What I’ve Been Building With This Week
I spent part of this week digging into how to check AI-generated code for errors. If you're using AI to write code, and most developers are now, you need something on the other side verifying the output.
I've been using CodeRabbit for automated pull request reviews on GitHub. It catches the basics like syntax errors and security flags, but misses the bigger stuff: intent mismatches, performance implications, whether the code actually does what you asked for.
Cursor is my main development environment. I like it because I can see every change the AI proposes, read the diffs, and approve or reject each one. It feels like pair programming where I stay in control. I'm also learning from it. Seeing how it structures code teaches me patterns I wouldn't have picked up otherwise. The file structure makes sense to me, and when I want to dig deeper into why something was done a certain way, I can.
Claude Code is for the bigger tasks. Refactors, architectural changes, anything that requires executing multiple steps. It's more autonomous. I describe what needs to happen, and it works through the steps. Between the two I have a generation tool and a review tool, and the combination works.
The insight that keeps coming up across developer communities: the quality of AI-generated code has less to do with which tool you use and more to do with how well you describe what you want. Planning and clear instructions beat tool selection every time.
For My E-Commerce Friends: Team Structure and Organic Distribution
Switching gears. I listened to Marketing Operators Podcast Episode 106 this week, hosted by Cody Plofker (CMO at Jones Road Beauty), Connor Rolain (VP Growth at Ridge), and Sean Frank (Ridge founder). If you run a brand, this one's worth 45 minutes of your time.
Their thesis: if you're building a brand in 2026, your first hire should be a head of creator, not a head of growth.
The old playbook (find product-market fit, turn on Meta ads, hire a media buyer to scale) built a lot of $10 to $100M brands. But it also created deep dependency on paid channels and rising acquisition costs. The new playbook: build organic distribution first through founder content, creator seeding, community, and affiliate partnerships, then layer paid on top.
Cody traced the role evolution: Head of Growth → Creative Strategist → Head of Creator. His take is that the algorithms on TikTok, Instagram, and YouTube are all heading toward rewarding authentic content. The person who can build community, manage creator relationships, and produce content that works in both organic and paid channels is more valuable than a media buyer.
Connor brought a reality check from launching Gut Culture, a new brand from the Ridge team. Even brands getting hundreds of organic TikTok posts per week are still seeing 70 to 80% of their TikTok Shop revenue come from ad dollars promoting that content. Organic is the foundation, but paid is still the accelerant at scale.
The structural insight I found most useful was what Cody called the Kizik model: own content production internally, outsource media buying. Most DTC brands do the opposite. Kizik flipped it and it worked.
Thinking about where my company is heading, this resonates. We've historically leaned on paid channels across our marketplace business. But this year and into next, we've been making bets on the UGC and creator side, and I think we're heading in the right direction. This podcast validated a lot of what we're already starting to build toward. If you're in e-commerce and thinking about the same shift, it's worth the listen.
Robots Are Racing This Weekend. In Boston and Beijing.
This is one of those weeks where you have to stop and appreciate the timing.
The 130th Boston Marathon is Monday, April 20. Over 30,000 runners from 137 countries, the world's most iconic footrace. The day before, on Sunday, April 19, two robot races are happening simultaneously on opposite sides of the planet.
In Beijing, over 100 teams will compete in the 2026 Humanoid Robot Half-Marathon, running the full 21-kilometer course through the city's E-Town development zone. This is the second year of the event, and participation has surged nearly fivefold. Unitree Robotics just announced their H1 hit a sprint speed of about 10 meters per second. For context, Usain Bolt's average speed over his 9.58-second 100m world record was 10.44 m/s. This year's race features a "human-robot co-run" format where human runners and robots start simultaneously and share the same course. Around 40% of teams are now running fully autonomous.
In Boston, the ProRL Combine is launching in the Seaport District. America's first professional robotics sports event. Humanoid and quadruped robots from leading manufacturers, universities, and research labs will compete in speed races, obstacle courses, and precision challenges on a spectator-lined course. The league is backed by the former CEO of the Boston Athletic Association, the same organization that runs the Boston Marathon. Their stated mission: build public acceptance for robotics through sports and entertainment, the same way NASCAR did for automotive engineering.
Same city. Same weekend. Humans running 26.2 miles on Monday, robots racing on Sunday. If you want a snapshot of where we are in April 2026, that's it.
I've been following robotics closely in this newsletter, and events like these are how you track real-world capability. Lab demos are one thing. Running 21 kilometers on city streets, navigating terrain, managing battery life, maintaining balance: that's a completely different benchmark. I'll cover the results next week.
The Thread That Connects Everything
A lot of stories this week, but one thread runs through all of them.
AI companies are being valued on revenue trajectory because the market they're addressing, intelligence itself, is unlike anything we've ever seen. Google invented the foundational architecture and is spending $175 billion a year to scale it, but even they can't build fast enough. The constraints are now physical, not intellectual. That intelligence is reshaping how software gets built, turning code review from a manual chore into an automated layer. It's reshaping how brands think about team structure. And robots are racing in Boston and Beijing the same weekend as the Boston Marathon.
The common thread: compounding systems beat rented access. Whether that's Anthropic's coding flywheel, Google's decades of infrastructure bets finally converging, or the organic-to-paid marketing flywheel. The businesses and builders winning right now are the ones building things that get better with use, not just bigger with spend.
And maybe that's the real takeaway this week. Four astronauts traveled farther from Earth than anyone in history, using the most advanced technology ever built. And the thing they couldn't stop talking about was each other. The hugs, the tears, the bond. Christina Koch looked at Earth from 250,000 miles away and her conclusion was that we are a crew.
All this technology we're building, all these tools, all this intelligence. It's extraordinary. But the point of it was never the technology. The point is what it lets us do together.
Next week I'll cover the robot half-marathon results and have more episodes to share. Stay tuned.
This is Issue #4 of “The Week in Technology,” a weekly newsletter at ademogunc.com.
Sources and Recommended Watching:
- Artemis II Crew Post-Mission Remarks (Apr 2026)
- All-In Podcast E225 (Apr 2026)
- Dwarkesh Podcast x Dario Amodei (Apr 2026)
- Sundar Pichai x John Collison x Elad Gil (Apr 2026)
- Marketing Operators Ep. 106 (Apr 2026)
About me: I'm Adem Ogunc. I run Well Woven, a rug company based in Easton, PA, and FurniPulse, a home furnishings trade intelligence platform. I've been in the rug industry for over 20 years and building with AI tools for the last couple of years, using them to run advertising, manage operations, and build our e-commerce infrastructure. I write this newsletter because I'm genuinely fascinated by what's happening in technology right now and I wanted a place to think out loud about it. If you're curious about any of this, whether you're in tech, e-commerce, home furnishings, or just trying to figure out what's going on, I'm glad you're here.
