Tag: llm

  • Week in Tech #6: Sixty Billion Dollars for a Code Editor


    Wednesday, April 29, 2026.

    I run a rug company. I am not a tech reporter. I just read a lot.

A note on the timing of this one. I usually publish on Sunday or Monday. This is going up on a Wednesday because I spent the past five days at High Point Market, the largest furniture trade show in North America and my actual day job. Rugs come first. The newsletter is something I write on flights and weekends. The trade show won this round.

I'm writing this on the way home, with a half-cold coffee and a notebook full of supplier conversations that will keep me busy for the next three weeks. The week did not slow down for me. So let's get into it.

    Three numbers from the past ten days.

    SpaceX is paying up to sixty billion dollars for a code editor. If they decide not to buy it, they have to pay ten billion dollars anyway. The floor is ten billion. The ceiling is sixty billion.

    Amazon committed up to another twenty five billion dollars to Anthropic, on top of the eight billion they had already put in.

    Tesla raised its 2026 capex guidance to over twenty five billion dollars, three times what they spent last year. Most of the increase is going to AI infrastructure.

    I read these numbers and at first I tried to file them in three different folders. A code editor deal. A model lab investment. A car company spending on robots. By the time I landed I realized they are the same story.

    Compute is the new currency. Whoever owns the chips, the data centers, and the electricity to run them sets the terms for everyone else.

    Let me walk through it.

    What SpaceX actually bought

    Start with the headline because it deserves to be the headline.

    If you have not heard of Cursor, do not feel bad. Most people outside the developer world have not.

    Cursor is a piece of software that helps engineers write code. It is a fork of Microsoft Visual Studio Code, the most popular code editor on Earth, with AI built into every part of the experience. You type, the AI completes your line. You select code, the AI rewrites it. You describe what you want in plain English, the AI builds it. The company behind it is called Anysphere, founded in 2022 by four MIT engineers in their twenties.

Here's what makes the price tag rational. Cursor's revenue went from one hundred million in January 2025, to five hundred million in June, to one billion in November, to two billion in February of this year. Three years from launch to two billion in annual recurring revenue. Sixty seven percent of the Fortune 500 are paying customers. It is the fastest-growing software company in history.

Why does SpaceX want it? Two reasons.

    First, SpaceX is preparing to IPO this summer at a reported one point seven five trillion dollar valuation. Bolting two billion dollars in real software revenue from real Fortune 500 customers onto that story turns SpaceX from a rocket company into a credible AI company. Wall Street pays a much higher multiple for AI revenue than for rocket revenue.

    Second, xAI, the AI lab that merged into SpaceX in February, has been embarrassed by Grok in the developer community. Cursor is the gateway. Whoever owns the editor that developers stare at all day controls which model gets called for the highest margin task in software. That is the prize.

For Cursor, the deal solves a real problem. Cursor has been running at near zero margins on parts of its product because it pays retail rates to call OpenAI and Anthropic models on every keystroke. Those companies are also Cursor's competitors. There is a famous cautionary tale here. When OpenAI tried to buy a Cursor competitor called Windsurf in 2025, Anthropic immediately cut off Windsurf's access to Claude. Within weeks the company collapsed and was sold for ten cents on the dollar. Any AI coding company that depends on a frontier lab's API is one phone call away from extinction.

SpaceX brings something different. SpaceX brings Colossus, the supercomputer xAI built in Memphis, equivalent in scale to roughly one million Nvidia GPUs. Cursor with its own dedicated compute supply is a different company than Cursor as a tenant on someone else's API.

One piece of color, because tech history is full of this stuff. Sam Bankman-Fried's bankrupt FTX held a five percent stake in Cursor that they bought for two hundred thousand dollars in 2022. The bankruptcy estate sold it in 2023 for the same two hundred thousand dollars. That stake would be worth roughly three billion dollars today. The most expensive yard sale in financial history.

    The pattern

    Now look at the other two big deals through the same lens.

Amazon's twenty five billion to Anthropic is not really a software investment. The structure is that Anthropic gets the cash, and in return Anthropic commits to spend over one hundred billion dollars at AWS over the next ten years, running their training and inference on Amazon's custom AI chips. Amazon is not investing in AI. Amazon is locking in a long term customer for the chips and the data centers Amazon already built. Anthropic also disclosed run rate revenue greater than thirty billion, up from nine billion at the end of 2025. More than tripled in four months.

Tesla's capex bump is even more direct. Roughly fifteen billion more this year than last, mostly on AI infrastructure. Optimus production lines, Cybercab production lines, more Dojo training compute. The market punished the stock for it. Investors want returns on capital, and Tesla just told them the returns are coming later, after a much bigger bill.

    A few other deals from the same week worth knowing about.

Bezos's new physical AI lab, Project Prometheus, closed a ten billion dollar round at a thirty eight billion dollar valuation. They are building robots and physical AI systems. Total funding now over sixteen billion.

    Vast Data, the storage company that powers most of the AI training in the West, raised one billion dollars at a thirty billion dollar valuation. More than three times its last mark. Customers include CoreWeave, xAI, Mistral, and Cursor.

    Three deals. Three different industries. One underlying bet. The bet is that demand for AI is about to explode beyond anything we have seen, and the thing in short supply will not be models or talent or capital. It will be compute. Chips, electricity, the buildings to put them in. Every one of these companies is rushing to lock up that supply now, while they still can.

    The other big story this week

    While I was at High Point Market, the lawsuit Elon Musk filed against OpenAI two years ago started its jury trial in a federal courtroom in Oakland.

The short version. Musk co-founded OpenAI in 2015 as a nonprofit, resigned from the board in 2018 after losing a power struggle, and sued in 2024 alleging the company was illegally converted from a nonprofit to a for-profit while the founders kept equity. The trial started Monday, April 27. It is expected to run about four weeks.

    Three things matter for non lawyers.

One, on Friday Musk dropped his fraud claims. Of the original twenty six causes of action, only two are going to trial: breach of charitable trust, and unjust enrichment. He concentrated his fire on the strongest theory.

Two, there is a 2017 personal journal entry from Greg Brockman, one of OpenAI's co-founders, that the judge has already cited in court. The entry reads, in part, that converting the nonprofit to a for-profit without Musk would be morally bankrupt. That is a problem for OpenAI.

Three, if Musk wins anything, even on the narrow charitable trust theory, OpenAI's planned IPO this fall at a reported one trillion dollar valuation could be in serious trouble. Microsoft's roughly twenty seven percent stake, worth about one hundred thirty five billion dollars, would be in question. Anthropic, which is also a public benefit corporation, is watching closely.

    If you want the deep version, court filings are public on PACER. I am not going to pretend to be a legal analyst.

    And the rest

    A few other things I found interesting.

OpenAI shipped GPT 5.5 last Wednesday. Six weeks after the last update. ChatGPT now has nine hundred million weekly active users. The cadence between major frontier model releases is now roughly six weeks. That is the fastest we've ever seen.

China dropped three open-weight AI models in four days. DeepSeek V4. Moonshot's Kimi K2.6. Alibaba's Qwen 3.6. The smaller DeepSeek model has API pricing of fourteen cents per million input tokens. That is essentially free. China is racing to make frontier intelligence a commodity.
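For scale, here is the arithmetic on that pricing as a quick back-of-envelope in Python. Only the fourteen-cent rate comes from the announcement; the workload numbers are made up purely for illustration:

```python
# Back-of-envelope cost at $0.14 per million input tokens.
# The workload figures below are hypothetical, for illustration only.
PRICE_PER_MILLION_INPUT = 0.14  # dollars, the reported DeepSeek rate

tokens_per_task = 50_000   # context an agent reads per task (assumed)
tasks_per_month = 2_000    # tasks per month (assumed)

monthly_tokens = tokens_per_task * tasks_per_month
monthly_cost = (monthly_tokens / 1_000_000) * PRICE_PER_MILLION_INPUT

print(f"{monthly_tokens:,} input tokens -> ${monthly_cost:.2f} per month")
```

A hundred million tokens of agent context a month for roughly the price of lunch. That is what "essentially free" means here.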

Google held Cloud Next in Las Vegas. Two hundred sixty announcements. Sundar Pichai disclosed that seventy five percent of new code at Google is now written by AI. The Google CEO saying out loud that three quarters of Google's code is AI generated is a moment worth pausing on.

    Vercel, the company that hosts a lot of the web, got breached through a third party AI tool called Context.ai. Attackers stole OAuth tokens, pivoted into Vercel employee accounts, and made off with customer environment variables. Lesson for everyone who runs a business. Every AI productivity tool you connect to your work accounts is now a credential. Audit those connections this week.

    Why this matters for me, and probably for you

    I keep thinking about how this trickles down to a small business.

    I sell rugs. I sell them on more than a dozen channels. I built our AI infrastructure myself over the last two years, mostly because I had to. The cost of running our AI agents right now is real but tolerable.

    If the bet these companies are making is correct, the price of running that same agent in two years is going to be a tenth of what it is today. Maybe less. The supply of compute is going up faster than anyone can use it at the model layer, and DeepSeek just released a frontier class model with API pricing close to free. That changes my business. I can do things in 2027 that I cannot afford to do in 2026. The same is going to be true for whatever business you run.

    The other thing I keep thinking about is more philosophical. The companies that win this decade are not going to win because they have the best model. The model is becoming a commodity. They are going to win because they figured out, early, which interface in their business an AI agent should reach for. The interface a worker stares at all day, every day. For developers, that interface is the code editor. That is why Cursor is worth sixty billion.

For everyone else, the answer is different and not obvious. For a rug company, I have some guesses. The buyer dashboard, maybe. Or the supplier portal. Or the internal pricing tool. I don't know yet. I am going to spend a lot of this year trying to figure it out.

    What I keep coming back to is that if you run a small or mid sized business and you are still wondering whether AI matters, the three biggest companies in tech just told you the answer in three numbers. Sixty billion. Twenty five billion. Twenty five billion.

    The companies that wait for this to be obvious are going to be twelve to eighteen months behind. The ones moving now will be fine.

    See you next week. On Sunday this time.

    This is Issue #6 of “The Week in Technology,” a weekly newsletter at ademogunc.com.

About me: I'm Adem Ogunc. I run Well Woven, a rug company based in Easton, PA, and FurniPulse, a home furnishings trade intelligence platform. I've been in the rug industry for over 20 years and building with AI tools for the last couple. I write this newsletter on flights and weekends because I'm genuinely fascinated by what's happening in technology right now.


  • Weekly #5: The Week the Interface Started to Disappear

    April 19, 2026. A weekly technology roundup written by a founder who builds with these tools, not just reads about them.


This is my fifth Sunday writing one of these. Every week I sit down with a coffee, go through what I read, built, and played with over the past seven days, and try to make sense of it for anyone who wants to follow along. You don't need to be technical. You don't need to be in the industry. You just have to be curious.

This was one of those weeks where if you only looked at the headlines, you would miss the real story. The real story is that a bunch of things happened at once that all point in the same direction. The interface is starting to disappear. Salesforce is going chat first. Claude Design shipped. Robotaxis expanded in a way that was half real and half theater. Netflix posted its best quarter ever and the stock still fell. And somewhere inside all of this, if you squint, there's one architectural shift that ties it together.

    Let me walk through it.

    I tried to draw a yarn ball with Claude Design

Anthropic shipped Claude Design this week, and I spent four or five hours in it building front ends and wireframes. To be fair, Opus has been able to produce design mockups for a while now. React components, JavaScript, HTML. It's not a net new capability for me.

What's different is the framework. It's built for the web, which sounds obvious until you are actually in there and you realize how much faster the iteration feels when the tool is designed for the output format. Structural UI work. Dashboards. Layouts. Components. All of that flows.

I tried to draw our original Well Woven logo. It's this yarn ball that unrolls into the wordmark. Icons? Nailed it. Descriptive illustrations? Not quite there yet. It gave me back something that looked like either a 2003 dot-com logo or a 2012 Etsy shop. Neither is the vibe.

So the imagination-heavy drawing side is not there yet. But for anything structural, it's legitimately useful. If you have been meaning to try it, this is the week.

    The Claude Code app experience might be the real unlock

    I also started using the new Claude Code experience on the app this week. This might actually be my real productivity unlock of the last seven days.

    It has an auto mode where it keeps working recursively without asking permission every two seconds. It integrates with CodeRabbit so it can critique and correct its own work in loops. Running Claude Code inside the app feels meaningfully different than running it in a terminal outside. Longer working sessions. Less hand holding. More actual forward motion.

I haven't tried Codex yet. I keep hearing it's better than Opus on long coding tasks. That's the honest thing about this space right now. You can't use everything. The cost of staying current isn't just subscription fees, it's cognitive overhead. For now I'm in the Anthropic lane, getting results, shipping.

    Salesforce went chat first and nobody is talking about it enough

    This is the story of the week for me.

    Salesforce announced this week that every one of their tools will be accessible through a single chat interface. Think about what that actually means.

Salesforce. The company that trained a whole generation of business people to click through 47 tabs just to update a single opportunity. They're now conceding that the primary way you interact with their product is going to be conversation, not clicks.

That's not a feature announcement. That's a category bet. And it's the clearest signal yet that the enterprise software world is going headless.

    What headless actually means

    Headless gets thrown around like jargon so let me ground it.

    Traditionally software has two parts stapled together. The engineering underneath, which is the database, the business logic, the integrations. And the interface on top, which is the screens, the buttons, the dashboards you stare at all day.

    Headless decouples them. You keep the engineering. You strip away the fixed interface. Anything, including an LLM, can query the engineering through whatever interface makes sense in the moment.

    Picture a car. For decades you bought a car and the body and engine came as one package. You liked the PT Cruiser shape? Great, you also got the PT Cruiser ride. You wanted German engineering? You took the German body with it. Headless is saying keep the engineering platform you want, and snap whatever body on top suits the job. BMW engine. Jeep body. Whatever gets you where you are going.

Or think about it the way I've been thinking about it. The Mac versus PC thing. We've all had this argument. You prefer Finder, I prefer Explorer. Your folder icons look one way, mine look another. But when you actually sit down to work, you are doing the same tasks. You are writing a document. You are building a spreadsheet. You are sending an email. The operating system preferences are real, but they are texture on top of identical outcomes.

What happens when the LLM becomes the universal interface? The OS starts to recede. It doesn't disappear. You still need something to run the thing on. But it stops being where the design argument happens. The design argument moves to the agent layer. To the workflow. To what happens in the background while you are focused on the actual work.
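The decoupling is easier to see in code than in prose. Here is a toy sketch of it; nothing below is a real vendor API, and the data, SKUs, and function are invented just to show the shape:

```python
import json

# The "engineering" layer: data plus business logic, no interface attached.
# INVENTORY and the SKUs are invented placeholders.
INVENTORY = {"RUG-8x10-NAVY": 42, "RUG-5x7-IVORY": 7}

def stock_level(sku: str) -> dict:
    """Business logic that any head can call."""
    return {"sku": sku, "on_hand": INVENTORY.get(sku, 0)}

# Head 1: a dashboard widget renders it as a human-readable string.
record = stock_level("RUG-8x10-NAVY")
print(f"{record['sku']}: {record['on_hand']} units on hand")

# Head 2: an LLM agent or script consumes the same logic as JSON.
print(json.dumps(stock_level("RUG-5x7-IVORY")))
```

Same engine, two heads. You can swap either head without touching the logic underneath, and that swap is the whole headless idea.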

    Why this matters if you actually run a business

    If you run a business, you live in spreadsheets. I live in spreadsheets.

Every Monday I'm pulling Amazon advertising data, Shopify sales, Wayfair orders, inventory positions across 10 sales channels, ranking data, FedEx billing disputes, returns. All of it into Excel.

Excel isn't where the data lives. Excel is where I force the data to converge because I need one interface to see all of it at once.

In a headless world I don't need Excel as that convergence layer. I ask a question in plain English. The agents, or services, or whatever we are calling the worker bees this week, go query the systems. The answer comes back. No more tab switching. No more VLOOKUPs. No more reformatting a Wayfair export so it matches my Amazon SKUs. No more Monday morning spreadsheet choreography.
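As a sketch of what replaces the VLOOKUP step, that Monday reconciliation is a few lines of Python. The channel exports and column names below are invented stand-ins; real feeds would come from each platform's API or export:

```python
import csv
import io

# Stand-ins for two channel exports with mismatched column names.
amazon_csv = "sku,units\nRUG-8x10-NAVY,12\nRUG-5x7-IVORY,3\n"
wayfair_csv = "supplier_part,qty\nRUG-8x10-NAVY,5\nRUG-5x7-IVORY,9\n"

amazon = {row["sku"]: int(row["units"])
          for row in csv.DictReader(io.StringIO(amazon_csv))}
wayfair = {row["supplier_part"]: int(row["qty"])
           for row in csv.DictReader(io.StringIO(wayfair_csv))}

# One merged view per SKU: the convergence layer, without Excel.
merged = {sku: amazon.get(sku, 0) + wayfair.get(sku, 0)
          for sku in amazon.keys() | wayfair.keys()}
print(merged)
```

The reshaping-so-columns-match step becomes two dictionary comprehensions, and the merge becomes one more.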

That's the promise. And Salesforce going chat first is the most expensive, most enterprise-level, most “this is actually happening” version of that promise I have seen so far.

If Salesforce is going headless, every enterprise SaaS vendor is going to have to answer the same question within 18 months: What's our chat-native story? What happens to our UI moat when the UI isn't the moat anymore?

    The operators paying attention to this right now will have an absurd advantage in 2027. The ones still buying software based on which dashboard they like best are going to wake up lapped.

    Robotaxis had a big week. Watch the units shipped.

    Tesla launched unsupervised driverless service in Dallas and Houston on April 18. Waymo opened Miami and Orlando to 150,000 waitlisted users on April 15 and started public road testing in London on April 14.

Big headlines. But here's the operator detail that matters to me.

Tesla launched Dallas with one active vehicle. Houston with one active vehicle. That's not a product launch. That's a press release with a permit attached.

As someone who measures scale in units shipped, not announcements, I notice when the gap between what gets announced and what actually gets deployed is that wide. Waymo, to their credit, onboarded 150,000 waitlisted users into Miami and Orlando. That's a real launch. Tesla is doing theater.

    Both companies are in the same news cycle. They are not doing the same thing.

    Netflix posted a blowout quarter and the stock dropped 10%

    Revenue up 16% to $12.25 billion. EPS of $1.23 against $0.79 consensus. Paid members over 325 million. And the market punished them because Q2 operating margins are projected to dip 1.5 points, and because Reed Hastings is leaving the board in June.

Here's the operator takeaway. The market is done rewarding growth at any margin, even from the poster child of subscription businesses.

If Netflix can't get a pass on a 1.5 point margin dip, nobody can. For DTC founders, this is the last reminder you need that margin discipline isn't a 2023 concern. It's the only concern.

    OpenAI and Anthropic are bifurcating into specialized models

    OpenAI dropped GPT-5.4-Cyber for security researchers on April 14, and GPT-Rosalind for life sciences on April 16. Anthropic had already announced Claude Mythos under “Project Glasswing” earlier in the month.

    The pattern. The frontier labs are splitting into general purpose consumer models and vertical specialized models.

If you are a generalist consumer using ChatGPT or Claude for everyday tasks, this does not change much for you. If you are an operator trying to get a real workflow done, the interesting stuff is moving to the verticals. Watch this space. The next big productivity unlock for e-commerce probably isn't a smarter general purpose model. It's a retail-specialized one.

    The take I will get yelled at for

    Everyone in my feed right now is obsessing over agents. Agents, agents, agents. Build an agent for this. Build an agent for that. Agentic this. Agentic that.

    I think the agent conversation is a distraction from the actual architectural shift, which is the decoupling underneath.

    Agents are a consequence of headless, not a cause. Build the headless foundation first. Clean APIs. Queryable data. Documented integrations. And then the agents follow naturally.

Try to build agents on top of a stack that's still locked behind proprietary dashboards and you are going to spend the next two years wrapping baling wire around the same old SaaS tools. That isn't an agent strategy. That's cosplay.

    The unlock is the decoupling. The agents are downstream.

If you are a small or mid sized operator, the most valuable thing you can do in Q2 isn't building an agent. It's auditing where your data actually lives and how queryable it is from outside the system.

Can you query your inventory from a script, or only through the UI? Can your CRM data be pulled in natural language, or do you have to export CSVs and reshape them every time? What systems are you paying for right now where the data is effectively trapped behind the dashboard?
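One way to make the audit concrete is to write it down as data. The systems and answers below are hypothetical placeholders; the point is the three-way classification:

```python
# Hypothetical audit of where data lives and how reachable it is.
systems = [
    {"name": "inventory", "has_api": True,  "export_only": False},
    {"name": "crm",       "has_api": False, "export_only": True},
    {"name": "pricing",   "has_api": False, "export_only": False},
]

def readiness(system: dict) -> str:
    """Classify a system by how queryable its data is from outside."""
    if system["has_api"]:
        return "agent-ready"    # a script can pull it today
    if system["export_only"]:
        return "semi-trapped"   # CSV exports plus manual reshaping
    return "trapped"            # UI-only, no programmatic way out

for system in systems:
    print(f"{system['name']}: {readiness(system)}")
```

Anything that comes back "trapped" is a system you cannot build on when headless becomes the default.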

That's the audit. That's the work. That's what sets you up to ride headless when it becomes the default instead of the exception.

Community is where the learning is. And where it isn't.

    Final thought.

I keep seeing people post screenshots of everything they are building, sharing prompts, comparing tool outputs. That's the good version of the internet right now. The bad version is the guys yelling the loudest on X about what is coming. You don't learn anything from the guys yelling. You learn from the people actually shipping.

If you haven't tried Claude yet, drop a spreadsheet in there. Drop a deck. Ask it something you would normally Google. Be persistent with it. The technology is a hundred times more capable than most people realize, because most people bounce off after one lukewarm response and never come back.

Stop consuming content about this stuff. Build something with it. That's where the knowledge actually lives. Always has.

    See you next week.


    This is Issue #5 of “The Week in Technology,” a weekly newsletter at ademogunc.com.


About me: I'm Adem Ogunc. I run Well Woven, a rug company based in Easton, PA, and FurniPulse, a home furnishings trade intelligence platform. I've been in the rug industry for over 20 years and building with AI tools for the last couple of years, using them to run advertising, manage operations, and build our e-commerce infrastructure. I write this newsletter because I'm genuinely fascinated by what's happening in technology right now and I wanted a place to think out loud about it. If you're curious about any of this, whether you're in tech, e-commerce, home furnishings, or just trying to figure out what's going on, I'm glad you're here.



  • The Week in Technology #4: April 12, 2026


This week the Artemis II crew spoke publicly for the first time since coming home from the Moon. If you haven't watched it yet, I'd really encourage you to. Four people who just traveled 695,000 miles, farther from Earth than any human has ever been, standing together, hugging each other, high-fiving through the entire debrief. You could feel it. These people were changed.

    What struck me most was this: they were inspired going up. But they were more inspired coming back down. Christina Koch said she looked out the window and saw Earth as this tiny thing surrounded by blackness and said “Planet Earth, you are a crew.” Jeremy Hansen told the crowd “When you look up here, you’re not looking at us. We are a mirror reflecting you.” Reid Wiseman, who was moved to tears talking to his daughters from 200,000 miles away, said being human is a special thing.

    These are people who used the most advanced technology on the planet. And the thing that moved them most was each other.

    I keep thinking about that as I write this newsletter about AI, about valuations, about code review tools and marketing playbooks. The technology matters. But the human part matters more. The people closest to the frontier seem to understand that better than anyone.

This was also the week the All-In Podcast dropped an episode about how investors are valuing AI companies right now. The numbers were so wild I kept rewinding. Anthropic going from zero to a $30 billion revenue run rate in about three years. Databricks plus Palantir combined, added in a single month. A major enterprise running $100 million in AI consumption against $5 billion in operating expenses and saying they're near peak employment.

I caught the episode Saturday night in between the new season of Jon Hamm’s Your Friends & Neighbors, which is excellent by the way if you haven't seen it. It's the number one show on Apple TV right now. Made some popcorn, started relaxing, and somewhere between episodes I fell into a rabbit hole. Sunday morning I kept going. The Dario Amodei interview on the Dwarkesh Podcast. Sundar Pichai with John Collison and Elad Gil. By the time I sat down to write this I realized everything I watched this week was pointing at the same thing: we’re in a moment that's moving way faster than most people realize, and it's worth paying attention to.

Here's the roundup.


    TL;DR

    • How are AI companies being valued right now? The All-In Podcast broke down Anthropic’s revenue ramp and what “the TAM of intelligence” means. I try to make this accessible whether you’re an investor or you’ve never looked at a balance sheet. Also recommended: the Dario Amodei x Dwarkesh Patel episode for the bigger picture.
    • What’s actually constraining AI? Google’s CEO Sundar Pichai on security risks, physical infrastructure bottlenecks, and why the AI race is now about building things fast enough.
    • What I’ve been building with this week. A quick look at the AI coding tools I’m using, Cursor and Claude, and why the code review layer matters as much as the code generation layer.
• For my e-commerce friends: Marketing Operators Ep. 106 on team structure and organic distribution. If you run a brand, this one's worth your time.
    • Bonus: Robots are racing this weekend. In Boston and Beijing. Same weekend as the 130th Boston Marathon.

    How AI Companies Are Being Valued

The All-In episode was E225 with Chamath, Sacks, Jason, and guest Brad Gerstner from Altimeter Capital. I want to walk through what they discussed because I think it matters. Not just for people in tech or finance, but for anyone trying to understand what's happening in the economy right now. I'll explain the jargon as I go.

    Anthropic’s Revenue Trajectory

    Let me just lay out the numbers:

    • Early 2023: Revenue turned on
    • End of 2024: $1 billion annualized run rate
    • Mid 2025: $4 billion
    • End of 2025: $9 billion
    • April 2026: ~$30 billion run rate

That's $1B to $30B in about sixteen months. Brad pointed out that in March 2026 alone, Anthropic added roughly $10 to $11 billion in run rate revenue. That's the equivalent of Databricks and Palantir combined, added in a single month. He projects they could exit the year somewhere between $80 and $100 billion.

    They have 2,500 employees. Google crossed that revenue level with 120,000 people.

Full disclosure: Anthropic makes Claude, which is the AI I use to write code, run parts of my business, and yes, help me draft this newsletter. I'm a customer, not a neutral observer. But the numbers speak for themselves regardless of what product you use.

    A Quick Explainer: How Do Investors Value These Companies?

    Chamath laid out something I found really useful. A hierarchy of metrics that investors use depending on how mature a company is:

    Free Cash Flow → EBITDA → Margins → Net Revenue → Gross Revenue → Bookings

    Think of it as a ladder. At the top is free cash flow, a fully mature business generating real cash. At the bottom is bookings, basically a promise of future revenue. As a company matures, it climbs the ladder.

    Where are the AI frontier companies right now? Somewhere between gross revenue and net revenue. That means we’re still far from discussing whether these companies are profitable. The market is valuing them on trajectory. How fast the line is going up.

    Why Gross Revenue vs. Net Revenue Matters

    This sounds like accounting jargon but it actually matters if you’re trying to understand any headline about these companies.

    Gross revenue is the total amount billed to customers before any deductions. Net revenue is what the company actually keeps after paying partners their cut.

Anthropic reports gross revenue. OpenAI reports net. The difference: when you buy Claude through Amazon Web Services or Google Cloud, those platforms take a commission, typically 5 to 10%. Anthropic's headline number includes that cut. OpenAI's doesn't.

Brad's view was that the 5 to 10% difference is noise compared to the growth story. Chamath's point was more cautious. You can't do clean comparisons between the two companies when they report differently. Both are right. But if you see a headline comparing Anthropic and OpenAI revenue side by side, just know the numbers aren't apples to apples yet.
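The arithmetic behind that caveat is simple and worth seeing once. The revenue figure and the 8% commission below are made-up numbers sitting inside the 5 to 10% range mentioned above:

```python
# Toy numbers: how a marketplace commission separates gross from net.
gross_revenue = 100.0   # billed to customers through a cloud marketplace
commission = 0.08       # platform's cut (illustrative, within 5-10%)

net_revenue = gross_revenue * (1 - commission)
print(f"gross ${gross_revenue:.0f} -> net ${net_revenue:.2f}")
```

A company reporting gross looks about 8% bigger than the identical company reporting net, which is exactly why side-by-side headline comparisons mislead.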

    The TAM of Intelligence

This was the part of the conversation that stuck with me most, and it's the part I think matters most for people outside of tech and finance.

    TAM stands for Total Addressable Market. How big is the market these companies could eventually serve? For most tech companies, the TAM is basically IT budgets. You’re selling software to replace other software.

    AI is fundamentally different. Brad put it bluntly: the TAM for intelligence is radically different than anything we’ve seen before.

The market for AI isn't IT budgets. It's intelligence itself. Labor augmentation. Labor replacement. Every task that currently requires a person to think, analyze, write, code, decide, or create.

Here's the data point that landed hardest for me. Brad described a major enterprise running a $100 million annual AI consumption budget against $5 billion in operating expenses. This company believes it's approaching peak employment, meaning they don't expect to hire significantly more people, while their intelligence consumption keeps growing.

I keep coming back to what Jensen Huang said at GTC, which we covered in the last newsletter: every $500K engineer should be consuming $250K in AI tokens, and we should expect 100 AI agents per human worker. That's not a prediction anymore. That's how large enterprises are already planning.

    Coding Is the First Domino

    Sacks argued on All-In that Anthropic already has over 50% market share in coding tokens. More code is being written with Claude than with any other AI model. There’s a debate about whether that early lead compounds into a permanent advantage, but the underlying point is clear: the majority of developers building products today are using some kind of AI coding assistant. This is already happening.

    Dario Amodei, Anthropic’s CEO, laid out the progression in his recent conversation with Dwarkesh Patel. He described it as a spectrum:

    90% of code written by AI → 100% of code → 90% of end-to-end software engineering tasks → 100% of SWE tasks → 90% less demand for software engineers

    We’re somewhere in the first stage right now. Dario says we’re proceeding through them “super fast” but each stage is “worlds apart” from the next. Writing code is not the same as engineering a system. That distinction matters.

    What’s the real productivity impact today? Dario puts it at roughly a 15 to 20% total-factor speedup, up from about 5% just six months ago, and accelerating. Inside Anthropic, where he says there’s “zero time for bullshit,” the gains are unambiguous. But he’s the first to acknowledge there’s a gap between what the tools can do and what the broader economy has absorbed. Legal, compliance, procurement, change management. All of that creates lag between capability and adoption.

    What This Actually Means

    I want to be careful here because I think this topic deserves honesty without panic.

    Yes, the shift is massive. Software is getting radically cheaper to build. But here’s what I think gets lost in the scary headlines: this same technology is giving small businesses access to capability they never had before.

    I run a 12-person rug company. We use AI to manage advertising, audit shipping invoices, analyze supplier pricing in multiple currencies, build a headless e-commerce site, and run operational agents that handle tasks I used to do manually at midnight. Five years ago, that kind of infrastructure was only available to companies with 50-person engineering teams.

    That’s not a dystopia. That’s access. That’s a rug company in Easton, Pennsylvania, competing with capabilities that used to require being a tech company.

    We Are Near the End of the Exponential

    This is the bigger picture. The part I find both exhilarating and humbling.

    In his April 2026 interview with Dwarkesh Patel, Dario said something that stopped me. He said it’s absolutely wild that people, both inside the bubble and outside, are talking about the same tired political issues “when we are near the end of the exponential.”

    He puts 90% probability on reaching what he calls “a country of geniuses in a data center” by 2035. AI systems matching or exceeding human expert performance across most cognitive tasks. His personal hunch is much sooner: one to three years. He’s not talking about better autocomplete. He’s talking about systems that could compress a century of biomedical progress into a decade, help cure diseases, and fundamentally change what’s possible.

    Will it happen that fast? I honestly don’t know. But the signals are hard to ignore. The revenue trajectory tells you that enterprises are betting real money on this, not kicking tires. And the pace of improvement in what these tools can do, even just in the six months I’ve been building with them seriously, has been startling.

    What I find most compelling about Dario’s framing is that he rejects both extremes. He doesn’t think it’s going to be an overnight singularity. He also doesn’t think it’s overhyped. His prediction is that the AI industry will probably look like cloud computing: three to four differentiated players with healthy margins, each good at different things. And the economic impact will be “much faster than any previous technology, but not infinitely fast.” He thinks we’ll see trillions in revenue before 2030.

    For me, the right posture is curiosity, not fear. I’d rather be paying attention and learning alongside this than be caught off guard. That’s honestly why I write this newsletter: to think out loud and invite you along.


    Google’s CEO on What’s Actually Constraining AI

    I also caught the Sundar Pichai conversation with John Collison and Elad Gil this week. If the All-In episode tells you how investors are valuing AI, this one tells you what’s constraining it. Different angle, equally important.

    A few things jumped out.

    Google Invented the Foundation

    Easy to forget: the Transformer architecture, the technical breakthrough that powers ChatGPT, Claude, Gemini, and basically every AI product you’ve used, was invented at Google. Published in 2017. There’s a narrative that Google invented this thing and then let everyone else run with it. Sundar pushes back. He points out that Google deployed Transformers in Search immediately via BERT, then MUM, and they even had LaMDA, essentially a proto-ChatGPT, internally before OpenAI launched theirs. They just had a higher bar for what they considered acceptable product quality.

    It’s a reminder that the company with the research lead doesn’t always get the product lead. Three people prototyping in a garage will always create surprises. That’s consumer internet. But it’s also worth appreciating that everything we’re discussing in this newsletter is built on a foundation Google’s researchers created.

    Speed as a Proxy for Good Engineering

    One line from Sundar that I keep thinking about: “I’ve always internalized speed. It almost always reflects the technical underpinnings of the product having been done well.”

    He revealed that Google Search sub-teams have latency budgets measured in milliseconds. If you ship something that saves 3ms, you earn 1.5ms for your budget and pass 1.5ms to the user. Despite adding massive AI functionality, Search latency has actually improved 30% over the last five years.
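    As described, each shipped saving is split evenly between the user and the team’s future budget. A toy sketch of that rule (the 50/50 split is from Sundar’s example; the function itself is my illustration):

    ```python
    def split_latency_saving(saving_ms: float, user_share: float = 0.5):
        """Split a shipped latency saving between users and the team's budget."""
        to_user = saving_ms * user_share
        to_budget = saving_ms - to_user
        return to_user, to_budget

    # Ship a change that saves 3 ms: 1.5 ms goes to users, 1.5 ms back to the budget.
    print(split_latency_saving(3.0))  # (1.5, 1.5)
    ```

    The design choice worth noticing: the split makes speed improvements self-funding, since every optimization buys headroom for the next feature.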

    Have most of us felt that improvement? Honestly, probably not. Many of us have been doing our searching inside AI tools instead. But the principle resonates. Speed isn’t just a feature. It’s a signal of engineering quality. That’s something I try to keep in mind when building our own site.

    Security: The Constraint Nobody’s Talking About

    This was the most striking moment in the conversation. Sundar warned that AI models are “definitely really going to break pretty much all software out there.” He said the black market price of zero-day exploits is dropping because AI is increasing the supply of discovered vulnerabilities faster than they can be patched.

    Think about what that means. Every piece of software running your business, your bank, your hospital. AI can now find weaknesses faster than humans can fix them. Anthropic recently demonstrated this with their Mythos model, which can autonomously discover software vulnerabilities. Useful for defense, but the offensive capability exists too.

    This is the catch-up we’re going to have to do. Not on one particular application. On entire systems of software. The security infrastructure built for the pre-AI era isn’t designed for a world where vulnerability discovery is automated.

    The Long Bets That Paid Off

    What I find interesting about Google’s position is the pattern of taking long, difficult bets that look questionable at the time and then becoming essential infrastructure. Search. Gmail. Android. YouTube. Chrome. Google Maps. Cloud. TPUs. Waymo.

    Sundar’s current moonshots follow the same pattern. Data centers in space, started as a tiny team with a small budget. Gemini Robotics partnering with Boston Dynamics. Isomorphic Labs doing AI drug discovery. Wing drone delivery targeting 40 million Americans. He says they start small even for big ideas, which is how you avoid betting the company while still staying at the frontier.

    Google’s CapEx for 2026 is $175 to $185 billion. But Sundar says they literally couldn’t spend $400 billion even if they wanted to. The constraints aren’t financial. They’re physical: memory chip manufacturing, power grid permitting, construction speed, and skilled labor. He made an interesting admission: “You’re in awe of the pace in China. We need to learn to build things much faster.”

    That’s a revealing constraint. The AI race isn’t just about algorithms anymore. It’s about building physical infrastructure fast enough to run the algorithms. And right now, nobody has enough.


    What I’ve Been Building With This Week

    I spent part of this week digging into how to check AI-generated code for errors. If you’re using AI to write code, and most developers are now, you need something on the other side verifying the output.

    I’ve been using CodeRabbit for automated pull request reviews on GitHub. It catches the basics like syntax errors and security flags, but misses the bigger stuff: intent mismatches, performance implications, whether the code actually does what you asked for.

    Cursor is my main development environment. I like it because I can see every change the AI proposes, read the diffs, and approve or reject each one. It feels like pair programming where I stay in control. I’m also learning from it. Seeing how it structures code teaches me patterns I wouldn’t have picked up otherwise. The file structure makes sense to me, and when I want to dig deeper into why something was done a certain way, I can.

    Claude Code is for the bigger tasks. Refactors, architectural changes, anything that requires executing multiple steps. It’s more autonomous. I describe what needs to happen, and it works through the steps. Between the two I have a generation tool and a review tool, and the combination works.

    The insight that keeps coming up across developer communities: the quality of AI-generated code has less to do with which tool you use and more to do with how well you describe what you want. Planning and clear instructions beat tool selection every time.


    For My E-Commerce Friends: Team Structure and Organic Distribution

    Switching gears. I listened to Episode 106 of the Marketing Operators podcast this week, hosted by Cody Plofker (CMO at Jones Road Beauty), Connor Rolain (VP Growth at Ridge), and Sean Frank (Ridge founder). If you run a brand, this one’s worth 45 minutes of your time.

    Their thesis: if you’re building a brand in 2026, your first hire should be a head of creator, not a head of growth.

    The old playbook (find product-market fit, turn on Meta ads, hire a media buyer to scale) built a lot of $10 to $100M brands. But it also created deep dependency on paid channels and rising acquisition costs. The new playbook: build organic distribution first through founder content, creator seeding, community, and affiliate partnerships, then layer paid on top.

    Cody traced the role evolution: Head of Growth → Creative Strategist → Head of Creator. His take is that the algorithms on TikTok, Instagram, and YouTube are all heading toward rewarding authentic content. The person who can build community, manage creator relationships, and produce content that works in both organic and paid channels is more valuable than a media buyer.

    Connor brought a reality check from launching Gut Culture, a new brand from the Ridge team. Even brands getting hundreds of organic TikTok posts per week are still seeing 70 to 80% of their TikTok Shop revenue come from ad dollars promoting that content. Organic is the foundation, but paid is still the accelerant at scale.

    The structural insight I found most useful was what Cody called the Kizik model: own content production internally, outsource media buying. Most DTC brands do the opposite. Kizik flipped it and it worked.

    Thinking about where my company is heading, this resonates. We’ve historically leaned on paid channels across our marketplace business. But this year and into next, we’ve been making some bets on the UGC and creator side, and I think we’re heading in the right direction. This podcast validated a lot of what we’re already starting to build toward. If you’re in e-commerce and thinking about the same shift, it’s worth the listen.


    Robots Are Racing This Weekend. In Boston and Beijing.

    This is one of those weeks where you have to stop and appreciate the timing.

    The 130th Boston Marathon is Monday, April 20. Over 30,000 runners from 137 countries, the world’s most iconic footrace. The day before, on April 19, two robot races are happening simultaneously on opposite sides of the planet.

    In Beijing, over 100 teams will compete in the 2026 Humanoid Robot Half-Marathon, running the full 21-kilometer course through the city’s E-Town development zone. This is the second year of the event and participation has surged nearly fivefold. Unitree Robotics just announced their H1 hit a sprint speed of about 10 meters per second. For context, Usain Bolt averaged 10.44 m/s over his 100-meter world record. This year’s race features a “human-robot co-run” format where human runners and robots start simultaneously and share the same course. Around 40% of teams are now running fully autonomous.

    In Boston, the ProRL Combine is launching in the Seaport District: America’s first professional robotics sports event. Humanoid and quadruped robots from leading manufacturers, universities, and research labs will compete in speed races, obstacle courses, and precision challenges on a spectator-lined course. The league is backed by the former CEO of the Boston Athletic Association, the same organization that runs the Boston Marathon. Their stated mission: build public acceptance for robotics through sports and entertainment, the same way NASCAR did for automotive engineering.

    Same city. Same weekend. Humans running 26.2 miles on Monday, robots racing on Sunday. If you want a snapshot of where we are in April 2026, that’s it.

    I’ve been following robotics closely in this newsletter, and events like these are how you track real-world capability. Lab demos are one thing. Running 21 kilometers on city streets, navigating terrain, managing battery life, maintaining balance: that’s a completely different benchmark. I’ll cover the results next week.


    The Thread That Connects Everything

    A lot of stories this week, but one thread runs through all of them.

    AI companies are being valued on revenue trajectory because the market they’re addressing, intelligence itself, is unlike anything we’ve ever seen. Google invented the foundational architecture and is spending $175 billion a year to scale it, but even they can’t build fast enough. The constraints are now physical, not intellectual. That intelligence is reshaping how software gets built, turning code review from a manual chore into an automated layer. It’s reshaping how brands think about team structure. And robots are racing in Boston and Beijing the same weekend as the Boston Marathon.

    The common thread: compounding systems beat rented access. Whether that’s Anthropic’s coding flywheel, Google’s decades of infrastructure bets finally converging, or the organic-to-paid marketing flywheel. The businesses and builders winning right now are the ones building things that get better with use, not just bigger with spend.

    And maybe that’s the real takeaway this week. Four astronauts traveled farther from Earth than anyone in history, using the most advanced technology ever built. And the thing they couldn’t stop talking about was each other. The hugs, the tears, the bond. Christina Koch looked at Earth from 250,000 miles away and her conclusion was that we are a crew.

    All this technology we’re building, all these tools, all this intelligence. It’s extraordinary. But the point of it was never the technology. The point is what it lets us do together.

    Next week I’ll cover the robot half-marathon results and have more episodes to share. Stay tuned.


    This is Issue #6 of “The Week in Technology,” a weekly newsletter at ademogunc.com.

    Sources and Recommended Watching:


    About me: I’m Adem Ogunc. I run Well Woven, a rug company based in Easton, PA, and FurniPulse, a home furnishings trade intelligence platform. I’ve been in the rug industry for over 20 years and building with AI tools for the last couple of years, using them to run advertising, manage operations, and build our e-commerce infrastructure. I write this newsletter because I’m genuinely fascinated by what’s happening in technology right now and I wanted a place to think out loud about it. If you’re curious about any of this, whether you’re in tech, e-commerce, home furnishings, or just trying to figure out what’s going on, I’m glad you’re here.

  • The Week in Technology — March 23–30, 2026

    NASA Is Going Back to the Moon — For Real This Time

    This was a massive week for space. Three announcements landed within days of each other, and together they represent the most significant shift in American space strategy since the Space Shuttle program ended in 2011.

    First, Artemis 2 is preparing for an April 1 launch. Artemis is NASA’s program to return humans to the Moon — the successor to the Apollo missions that landed the first astronauts on the lunar surface in 1969. Artemis 2 will be the first crewed flight of the program: four astronauts will fly around the Moon aboard NASA’s Orion spacecraft. If this launch goes as planned, it will be the first time humans have left low-Earth orbit since Apollo 17 in December 1972 — over 53 years ago. (Live updates via Space.com)

    Second, NASA is pivoting away from the Gateway space station and committing to permanent Moon base construction. Gateway was a planned orbital outpost that would circle the Moon and serve as a waypoint for astronauts traveling to the surface — think of it like a rest stop in lunar orbit. NASA has now shifted its strategy toward building infrastructure directly on the Moon instead. This is a fundamental change. Gateway was a compromise. Moon bases mean we’re not just visiting — we’re setting up to stay.

    Third, and maybe the most underreported: NASA unveiled Space Reactor-1 “Freedom,” a nuclear-powered spacecraft mission to Mars planned for 2028. Nuclear propulsion dramatically reduces travel time compared to traditional chemical rockets, and this announcement signals that Mars is no longer a distant aspiration — it’s on a two-year timeline. (Full NASA policy announcement)

    I think NASA deserves the top spot this week because these three stories together tell a single narrative: the United States is treating space as a destination, not a demonstration. Moon bases, nuclear propulsion, crewed deep-space missions — this is the kind of stuff that sounded like a pitch deck five years ago. Now it’s on a launch schedule.


    After GTC: What Stuck

    GTC — short for GPU Technology Conference — is NVIDIA’s annual flagship event. NVIDIA is the company that designs and manufactures the specialized chips (called GPUs) that power virtually all artificial intelligence systems today. Their CEO, Jensen Huang, has become one of the most closely watched figures in technology. I covered his keynote in detail last week, so I won’t rehash the product announcements. But a few things from his post-keynote interviews — particularly his conversation with Lex Fridman (Podcast #494) — kept rattling around in my head all week.

    The first is his framing of 100 AI agents per engineer. Not a hypothetical. His thesis is that every serious software engineer should be managing a fleet of AI agents — automated software programs that can write code, run tests, and solve problems semi-independently — that work faster than the engineer can review their output. The bottleneck has moved from what the technology can do to how fast a human can keep up with it.

    The second is his productivity metric, which I keep coming back to: “If your $500K engineer isn’t burning $250K in tokens, something is wrong.” Tokens are the units that AI systems use to process text — every time you interact with an AI tool, you’re spending tokens, and they cost money. Jensen’s point is that the salary is the floor. The AI spending is the multiplier. The value is in the combination.
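    Jensen’s benchmark reduces to a single ratio. A trivial sketch (the function name and framing are my own, not NVIDIA’s):

    ```python
    def token_spend_ratio(salary: float, token_spend: float) -> float:
        """AI token spend as a fraction of salary. Jensen's benchmark: 0.5."""
        return token_spend / salary

    # The $500K engineer who "should" be burning $250K in tokens:
    print(token_spend_ratio(salary=500_000, token_spend=250_000))  # 0.5
    ```

    The number itself is less interesting than the framing: salary as the floor, token spend as the multiplier on top of it.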

    What stayed with me is how naturally Jensen talks about this. He doesn’t frame it as futuristic. He frames it as obvious — the way a factory owner in 1920 would’ve talked about electrification. The question isn’t whether to do it. The question is why you haven’t yet.


    The AI Reality Check: Thoma Bravo, McKinsey, and the Automation Question

    This is the section where I try to hold two truths at the same time.

    Truth one: most companies are failing at AI.

    The data is brutal. McKinsey — one of the world’s largest management consulting firms, known for publishing influential research on business and technology trends — found in their latest report that 88% of companies are failing at AI transformation. The MIT NANDA Initiative (a research program at MIT studying how organizations adopt AI) pegged GenAI pilot failure even higher — at roughly 95%. S&P Global reported that 42% of companies had abandoned most AI initiatives by mid-2025, up from 17% the year before.

    McKinsey’s single biggest finding? Workflow redesign — not the technology itself — is the number one driver of whether AI actually moves the needle on earnings. Companies that fundamentally redesigned how their teams work around AI were 2.8x more likely to report meaningful financial impact. The AI isn’t the bottleneck. The organization around it is.

    Truth two: Thoma Bravo thinks the market has it completely wrong.

    Thoma Bravo is the largest software-focused private equity firm in the world — $183 billion in assets under management, over 565 software transactions across 40 years. When they share their view on the software industry, the investment world listens. At their annual LP (limited partner) meeting in March, Managing Partner Holden Spaht shared slides that pushed back directly on the market’s blanket AI-disruption thesis — the widespread fear that AI is about to destroy the software industry.

    Their argument: public software companies grew their top line at roughly 17% last year. Gross margins run around 74%. And 80–95% of next year’s revenue is already under contract through subscriptions and long-term agreements. Those are not the numbers of a sector in distress. Spaht argued that the revenue slowdown in software between 2022 and 2025 wasn’t AI’s fault — it was rising interest rates and COVID-era overselling catching up.

    At the same time, co-founder Orlando Bravo called AI and venture capital “absolutely in a bubble” and said “you just have to wait for it to pop.” So even the most bullish software investor in the world is drawing a line between software as a category (fundamentally strong) and AI as an investment theme (overheated and due for a correction).

    So where does that leave us?

    Here’s my take. A Harvard Business School study analyzed nearly all U.S. job postings from 2019 to 2025. Automation-prone roles — structured, repetitive cognitive tasks like data entry, basic analysis, and routine customer service — saw postings decline 17% per quarter per firm after companies adopted generative AI tools. But augmentation-friendly roles — analytical, creative, and collaborative work that requires human judgment alongside AI — saw postings increase 22%. A companion survey of 2,357 people across 940 occupations found 94% prefer AI as a collaborative tool rather than a replacement.

    Erik Brynjolfsson, a Stanford economist who studies how technology affects productivity, estimated 2025 productivity growth at 2.7% — double the previous decade’s average — but attributed the gains to augmentation, not replacement. His research shows AI automates codified textbook knowledge but struggles with tacit, experiential knowledge — the kind of judgment that comes from doing a job for years.

    Steve Wozniak — the co-founder of Apple — captured something real when he told CNN this week: “I don’t use AI much at all. I want something from a human being.”

    And 77% of CEOs told KPMG (one of the Big Four accounting and consulting firms) that GenAI was overhyped in the past year — but its true disruptive potential over 5–10 years is under-hyped.

    The pattern I keep seeing is what some analysts are calling “AI drafts, humans approve.” You can order DoorDash by voice now. But you still want to see the map. You still want to watch where your driver is. The interface — the dashboard, the visual confirmation, the human checkpoint — isn’t going away. It’s becoming the strategic layer. Autonomous AI agents still complete less than 2.5% of real-world tasks. The full-automation fantasy is just that. The real story is better tools in the hands of people who know how to use them.


    Robotics: A Marathon Is Coming

    Quick shoutout: ProRL (Professional Robot League) is launching America’s first robot sports league in Boston this April. Founded by David Grilk, with board member Tom Grilk (former CEO of the Boston Athletic Association, which runs the Boston Marathon), the league will debut with humanoid and quadruped robot competitions. (Forbes coverage)

    As Harvard-MIT robotics researcher Alexander Wissner-Gross put it: “One of the densest robotics talent corridors in America, home to Boston Dynamics, MIT, Harvard, and hundreds of startups, has never had a public-facing showcase for its own technology. We build the most advanced robots on Earth and then hide them at trade shows.” Meanwhile, Beijing’s second humanoid robot half-marathon is also set for April 19, with teams targeting finish times under one hour — within striking distance of human records. The robotics sports era is real.


    Karpathy’s AutoResearch: From Open-Source Tool to Operating Philosophy

    Andrej Karpathy is one of the most respected AI researchers in the world. He was a founding member of OpenAI, led Tesla’s Autopilot AI team, and is known for making complex AI concepts accessible. Earlier this month, he open-sourced a tool called autoresearch — a system that lets AI agents autonomously run hundreds of machine learning experiments overnight on a single computer, forming hypotheses, writing code, running tests, analyzing results, and looping without human intervention. (VentureBeat deep dive)

    Last week I covered the initial release and some of the jaw-dropping results. David Friedberg, a biotech entrepreneur and co-host of the All-In podcast (a popular technology and investing show), used it to replicate what would have been a seven-year PhD thesis in 30 minutes. Karpathy himself said he hasn’t typed a line of code since December.

    But this week, the story for me shifted from what the tool does to how the pattern applies beyond research labs.

    I spent time this week applying the autoresearch loop to my own e-commerce business. Here’s what that looks like in practice: I took my headless Shopify storefront — a modern web architecture where the visual front-end of the website is separated from the back-end commerce engine, giving you full control over design and performance — and started building an autonomous experiment loop for product page optimization. The system forms a hypothesis (for example, “moving the price higher on the mobile screen increases add-to-cart rates”), makes a single change, scores it against a rubric using automated testing tools and an AI visual judge, and either keeps or reverts the change. Then it loops again.

    I’m not a machine learning researcher. I run a rug company. But the pattern — hypothesize, change one variable, score objectively, loop — is universal. It works for tuning AI models on GPU clusters. It also works for product page layouts on an online store. The abstraction is the same.
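    The pattern above can be sketched as a small control loop. This is an illustrative toy, not Karpathy’s actual code — the propose/score/revert hooks stand in for whatever your stack provides (an AI variant generator, an automated scoring rubric, a storefront theme API):

    ```python
    import random

    def experiment_loop(baseline_score, propose, apply_change, score, revert_change, n_trials=100):
        """Hypothesize -> change one variable -> score objectively -> keep or revert -> loop."""
        best = baseline_score
        kept = []
        for _ in range(n_trials):
            change = propose()            # one hypothesis, one variable
            apply_change(change)
            new_score = score()           # objective rubric, not vibes
            if new_score > best:
                best = new_score          # improvement: keep the change
                kept.append(change)
            else:
                revert_change(change)     # no improvement: roll back, try again
        return best, kept

    # Toy demonstration: nudge a single layout parameter toward its best value.
    state = {"price_position": 5}         # pretend position 2 is optimal

    def propose():
        return random.choice([-1, 1])     # hypothesis: move the price block up or down

    def apply_change(delta):
        state["price_position"] += delta

    def score():
        return -abs(state["price_position"] - 2)  # closer to 2 scores higher

    def revert_change(delta):
        state["price_position"] -= delta

    random.seed(0)
    best, kept = experiment_loop(score(), propose, apply_change, score, revert_change, n_trials=50)
    print(state["price_position"], best)  # converges to position 2, score 0
    ```

    The loop only ever keeps strict improvements, which is what makes it safe to run unattended overnight: the worst case is that nothing changes.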

    This is what I think people are missing about Karpathy’s contribution. It’s not just a tool. It’s a way of thinking about improvement: make the feedback loop tight enough and fast enough that you can run more experiments in one night than a human team runs in a quarter. Whether you’re training a language model or optimizing a checkout flow, the principle is identical.


    Cursor: Why I Keep Coming Back

    Cursor is an AI-powered code editor — think of it as a version of the software developers use to write code, but with an AI co-pilot built directly into it. It competes with tools like Claude Code (Anthropic’s command-line coding tool) and GitHub Copilot (Microsoft’s AI code assistant).

    I’ve been using both Cursor and Claude Code for the past few weeks, and I want to share a perspective that might resonate with anyone who’s technical-curious but not a developer by training.

    What I love about Cursor compared to Claude Code is the transparency. I can actually read what’s going on. I can click into the AI’s reasoning. I can see the different stages of its work — what it’s considering, why it made a choice, where it’s heading next. For someone who’s non-technical but has a deep curiosity about how these tools think, that visibility is incredibly valuable.

    Claude Code is powerful. It’s fast, it’s agentic (meaning it can take actions independently), and it gets things done. But Cursor gives me something Claude Code doesn’t: the ability to learn while building. I can double-click into any stage, understand the rationale, and come away knowing more than I did before. For a founder who’s building their own infrastructure — not hiring a team to do it for them — that educational layer matters as much as the output.


    A Film About the Future (That Has a Real Chance to Get Funded)

    I’ll close with something personal. This week I started developing a short film concept for the Future Vision XPRIZE — a $3.5 million competition run by the XPRIZE Foundation (the organization famous for offering large cash prizes to incentivize breakthroughs in space, health, and technology). Backed by Google, ARK Invest, and Range Media Partners, this competition is looking for optimistic science fiction storytelling about humanity earning a better future. The deadline is August 15, 2026, and the deliverables include a 3-minute trailer, a 12-page treatment, and a 2-page synopsis. The grand prize winner receives $2.5 million in production funding plus $100,000 cash. (Variety | TechCrunch | Fortune)

    The concept is set in the near future — around 2040 — in a world where AI agents handle routine work and orbital space transit has become normalized for certain professionals. The story follows a small ensemble of people in intimate daily moments: a morning workout with an AI agent dashboard on a smart display, a backyard capsule that launches to a low-orbit transit hub, a “Grand Central Terminal in space” where commuters travel to the Moon, Mars, and orbital workstations.

    The tone I’m going for is Pursuit of Happyness meets Her — emotionally specific, relatable, grounded in real human experience, but set against technology that feels inevitable rather than fantastical. The kind of future you’d actually want to live in.

    I’m sharing this because I think the best way to shape the future is to tell stories about it. And because a rug company CEO writing a science fiction screenplay feels like exactly the kind of thing that should be possible in 2026.

    More on this as it develops. If you have thoughts, I’m all ears.


    Sources & Further Reading

    NASA & Space:

    NVIDIA & GTC:

    AI Transformation & Thoma Bravo:

    Karpathy & AutoResearch:

    Robotics:

    XPRIZE Future Vision:

    Tools: