Author: Adem Ogunc

  • The Week in Technology #4: April 12, 2026


    This week the Artemis II crew spoke publicly for the first time since coming home from the Moon. If you haven't watched it yet, I'd really encourage you to. Four people who just traveled 695,000 miles, farther from Earth than any human has ever been, standing together, hugging each other, high-fiving through the entire debrief. You could feel it. These people were changed.

    What struck me most was this: they were inspired going up. But they were more inspired coming back down. Christina Koch said she looked out the window and saw Earth as this tiny thing surrounded by blackness and said “Planet Earth, you are a crew.” Jeremy Hansen told the crowd “When you look up here, you’re not looking at us. We are a mirror reflecting you.” Reid Wiseman, who was moved to tears talking to his daughters from 200,000 miles away, said being human is a special thing.

    These are people who used the most advanced technology on the planet. And the thing that moved them most was each other.

    I keep thinking about that as I write this newsletter about AI, about valuations, about code review tools and marketing playbooks. The technology matters. But the human part matters more. The people closest to the frontier seem to understand that better than anyone.

    This was also the week the All-In Podcast dropped an episode about how investors are valuing AI companies right now. The numbers were so wild I kept rewinding. Anthropic going from zero to a $30 billion revenue run rate in about two years. Revenue equal to Databricks and Palantir combined, added in a single month. A major enterprise running $100 million in AI consumption against $5 billion in operating expenses and saying they're near peak employment.

    I caught the episode Saturday night in between episodes of the new season of Jon Hamm's Your Friends & Neighbors, which is excellent by the way if you haven't seen it. It's the number one show on Apple TV right now. Made some popcorn, started relaxing, and somewhere along the way I fell into a rabbit hole. Sunday morning I kept going. The Dario Amodei interview on the Dwarkesh Podcast. Sundar Pichai with John Collison and Elad Gil. By the time I sat down to write this I realized everything I watched this week was pointing at the same thing: we're in a moment that's moving way faster than most people realize, and it's worth paying attention to.

    Here's the roundup.


    TL;DR

    • How are AI companies being valued right now? The All-In Podcast broke down Anthropic’s revenue ramp and what “the TAM of intelligence” means. I try to make this accessible whether you’re an investor or you’ve never looked at a balance sheet. Also recommended: the Dario Amodei x Dwarkesh Patel episode for the bigger picture.
    • What’s actually constraining AI? Google’s CEO Sundar Pichai on security risks, physical infrastructure bottlenecks, and why the AI race is now about building things fast enough.
    • What I’ve been building with this week. A quick look at the AI coding tools I’m using, Cursor and Claude, and why the code review layer matters as much as the code generation layer.
    • For my e-commerce friends: Marketing Operators Ep. 106 on team structure and organic distribution. If you run a brand, this one's worth your time.
    • Bonus: Robots are racing this weekend. In Boston and Beijing. Same weekend as the 130th Boston Marathon.

    How AI Companies Are Being Valued

    The All-In episode was E225 with Chamath, Sacks, Jason, and guest Brad Gerstner from Altimeter Capital. I want to walk through what they discussed because I think it matters. Not just for people in tech or finance, but for anyone trying to understand what's happening in the economy right now. I'll explain the jargon as I go.

    Anthropic’s Revenue Trajectory

    Let me just lay out the numbers:

    • Early 2023: Revenue turned on
    • End of 2024: $1 billion annualized run rate
    • Mid 2025: $4 billion
    • End of 2025: $9 billion
    • April 2026: ~$30 billion run rate

    That's $1B to $30B in about two years. Brad pointed out that in March 2026 alone, Anthropic added roughly $10 to $11 billion in run-rate revenue. That's the equivalent of Databricks and Palantir combined, added in a single month. He projects they could exit the year somewhere between $80 and $100 billion.
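
    Just to sanity-check how steep that curve is, here's a quick back-of-the-envelope calculation. The 16-month window is my own approximation of "end of 2024 to April 2026," so treat the output as illustrative:

```python
def monthly_growth_factor(start: float, end: float, months: int) -> float:
    """Implied constant month-over-month growth multiple."""
    return (end / start) ** (1 / months)

# End of 2024 (~$1B run rate) to April 2026 (~$30B): call it 16 months.
g = monthly_growth_factor(1e9, 30e9, 16)
print(round((g - 1) * 100, 1))  # roughly 23.7 (% compounded month over month)
```

    Compounding at roughly a quarter every month is the kind of number you normally only see in a company's first year, not at the $1B-plus scale.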

    They have 2,500 employees. Google crossed that revenue level with 120,000 people.

    Full disclosure: Anthropic makes Claude, which is the AI I use to write code, run parts of my business, and yes, help me draft this newsletter. I'm a customer, not a neutral observer. But the numbers speak for themselves regardless of what product you use.

    A Quick Explainer: How Do Investors Value These Companies?

    Chamath laid out something I found really useful. A hierarchy of metrics that investors use depending on how mature a company is:

    Free Cash Flow → EBITDA → Margins → Net Revenue → Gross Revenue → Bookings

    Think of it as a ladder. At the top is free cash flow, a fully mature business generating real cash. At the bottom is bookings, basically a promise of future revenue. As a company matures, it climbs the ladder.

    Where are the AI frontier companies right now? Somewhere between gross revenue and net revenue. That means we’re still far from discussing whether these companies are profitable. The market is valuing them on trajectory. How fast the line is going up.

    Why Gross Revenue vs. Net Revenue Matters

    This sounds like accounting jargon but it actually matters if you’re trying to understand any headline about these companies.

    Gross revenue is the total amount billed to customers before any deductions. Net revenue is what the company actually keeps after paying partners their cut.

    Anthropic reports gross revenue. OpenAI reports net. The difference: when you buy Claude through Amazon Web Services or Google Cloud, those platforms take a commission, typically 5 to 10%. Anthropic's headline number includes that cut. OpenAI's doesn't.

    Brad's view was that the 5 to 10% difference is noise compared to the growth story. Chamath's point was more cautious: you can't do clean comparisons between two companies that report differently. Both are right. But if you see a headline comparing Anthropic and OpenAI revenue side by side, just know the numbers aren't apples to apples yet.
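
    To make the gross-versus-net gap concrete, here's a tiny sketch. The 5 to 10% commission range is the figure quoted on the episode; the restated numbers are illustrative, not reported figures:

```python
def net_from_gross(gross: float, commission_rate: float) -> float:
    """Net revenue after the cloud marketplace takes its cut."""
    return gross * (1 - commission_rate)

# A $30B gross run rate, restated on a net basis at the quoted extremes:
gross = 30e9
print(round(net_from_gross(gross, 0.05) / 1e9, 1))  # 28.5 (billion) at 5%
print(round(net_from_gross(gross, 0.10) / 1e9, 1))  # 27.0 (billion) at 10%
```

    A $1.5 to $3 billion difference sounds big in isolation, but against a line that added $10 billion in a single month, you can see why Brad called it noise.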

    The TAM of Intelligence

    This was the part of the conversation that stuck with me most, and it's the part I think matters most for people outside of tech and finance.

    TAM stands for Total Addressable Market. How big is the market these companies could eventually serve? For most tech companies, the TAM is basically IT budgets. You’re selling software to replace other software.

    AI is fundamentally different. Brad put it bluntly: the TAM for intelligence is radically different than anything we’ve seen before.

    The market for AI isn't IT budgets. It's intelligence itself. Labor augmentation. Labor replacement. Every task that currently requires a person to think, analyze, write, code, decide, or create.

    Here's the data point that landed hardest for me. Brad described a major enterprise running a $100 million annual AI consumption budget against $5 billion in operating expenses. This company believes it's approaching peak employment, meaning they don't expect to hire significantly more people, while their intelligence consumption keeps growing.

    I keep coming back to what Jensen Huang said at GTC, which we covered in the last newsletter: every $500K engineer should be consuming $250K in AI tokens, and we should expect 100 AI agents per human worker. That's not a prediction anymore. That's how large enterprises are already planning.
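
    The two ratios buried in those paragraphs are worth computing explicitly. The figures are the ones quoted above; the side-by-side framing is mine:

```python
# AI consumption as a share of operating expenses at the enterprise Brad described:
ai_budget, opex = 100e6, 5e9
print(round(ai_budget / opex * 100, 1))  # 2.0 (% of opex today)

# Jensen Huang's target ratio of token spend to engineer salary:
salary, tokens = 500_000, 250_000
print(round(tokens / salary * 100, 1))  # 50.0 (% of salary in tokens)
```

    If the enterprise view is right, the gap between 2% of opex today and Huang-style targets is the growth runway investors are pricing in.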

    Coding Is the First Domino

    Sacks argued on All-In that Anthropic already has over 50% market share in coding tokens. More code is being written with Claude than with any other AI model. There's a debate about whether that early lead compounds into a permanent advantage, but the underlying point is clear: the majority of developers building products today are using some kind of AI coding assistant. This is already happening.

    Dario Amodei, Anthropic’s CEO, laid out the progression in his recent conversation with Dwarkesh Patel. He described it as a spectrum:

    90% of code written by AI → 100% of code → 90% of end-to-end software engineering tasks → 100% of SWE tasks → 90% less demand for software engineers

    We’re somewhere in the first stage right now. Dario says we’re proceeding through them “super fast” but each stage is “worlds apart” from the next. Writing code is not the same as engineering a system. That distinction matters.

    What's the real productivity impact today? Dario puts it at roughly a 15 to 20% total factor speedup, up from about 5% just six months ago, and accelerating. Inside Anthropic, where he says there's “zero time for bullshit,” the gains are unambiguous. But he's the first to acknowledge there's a gap between what the tools can do and what the broader economy has absorbed. Legal, compliance, procurement, change management. All of that creates lag between capability and adoption.

    What This Actually Means

    I want to be careful here because I think this topic deserves honesty without panic.

    Yes, the shift is massive. Software is getting radically cheaper to build. But here's what I think gets lost in the scary headlines: this same technology is giving small businesses access to capability they never had before.

    I run a 12-person rug company. We use AI to manage advertising, audit shipping invoices, analyze supplier pricing in multiple currencies, build a headless e-commerce site, and run operational agents that handle tasks I used to do manually at midnight. Five years ago, that kind of infrastructure was only available to companies with 50-person engineering teams.

    That's not a dystopia. That's access. That's a rug company in Easton, Pennsylvania, competing with capabilities that used to require being a tech company.

    We Are Near the End of the Exponential

    This is the bigger picture. The part I find both exhilarating and humbling.

    In his April 2026 interview with Dwarkesh Patel, Dario said something that stopped me. He said it's absolutely wild that people, both inside the bubble and outside, are talking about the same tired political issues “when we are near the end of the exponential.”

    He puts 90% probability on reaching what he calls “a country of geniuses in a data center” by 2035. AI systems matching or exceeding human expert performance across most cognitive tasks. His personal hunch is much sooner. One to three years. He's not talking about better autocomplete. He's talking about systems that could compress a century of biomedical progress into a decade, help cure diseases, and fundamentally change what's possible.

    Will it happen that fast? I honestly don't know. But the signals are hard to ignore. The revenue trajectory tells you that enterprises are betting real money on this, not kicking tires. And the pace of improvement in what these tools can do, even just in the six months I've been building with them seriously, has been startling.

    What I find most compelling about Dario's framing is that he rejects both extremes. He doesn't think it's going to be an overnight singularity. He also doesn't think it's overhyped. His prediction is that the AI industry will probably look like cloud computing: three to four differentiated players with healthy margins, each good at different things. And the economic impact will be “much faster than any previous technology, but not infinitely fast.” He thinks we'll see trillions in revenue before 2030.

    For me, the right posture is curiosity, not fear. I'd rather be paying attention and learning alongside this than be caught off guard. That's honestly why I write this newsletter. To think out loud and invite you along.


    Google’s CEO on What’s Actually Constraining AI

    I also caught the Sundar Pichai conversation with John Collison and Elad Gil this week. If the All-In episode tells you how investors are valuing AI, this one tells you what's constraining it. Different angle, equally important.

    A few things jumped out.

    Google Invented the Foundation

    Easy to forget: the Transformer architecture, the technical breakthrough that powers ChatGPT, Claude, Gemini, and basically every AI product you've used, was invented at Google and published in 2017. There's a narrative that Google invented this thing and then let everyone else run with it. Sundar pushes back. He points out that Google deployed Transformers in Search immediately via BERT, then MUM, and that they even had LaMDA, essentially a proto-ChatGPT, internally before OpenAI launched theirs. They just had a higher bar for what they considered acceptable product quality.

    It's a reminder that the company with the research lead doesn't always get the product lead. Three people prototyping in a garage will always create surprises. That's consumer internet. But it's also worth appreciating that everything we're discussing in this newsletter is built on a foundation Google's researchers created.

    Speed as a Proxy for Good Engineering

    One line from Sundar that I keep thinking about: “I’ve always internalized speed. It almost always reflects the technical underpinnings of the product having been done well.”

    He revealed that Google Search sub-teams have latency budgets measured in milliseconds. If you ship something that saves 3ms, your team banks 1.5ms of budget and the other 1.5ms goes back to the user. Despite adding massive AI functionality, Search latency has actually improved 30% over the last five years.
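
    Here's a toy model of how I understand that budgeting rule. The 50/50 split comes from Sundar's example; everything else (the ledger class, the rule that a feature only ships if it fits the banked budget) is my own illustrative guess, not Google's actual system:

```python
class LatencyBudget:
    """Toy per-team latency ledger (illustrative, not Google's real mechanism)."""

    def __init__(self) -> None:
        self.budget_ms = 0.0            # headroom the team has banked
        self.returned_to_user_ms = 0.0  # savings passed straight to users

    def record_saving(self, saved_ms: float) -> None:
        # Per the rule quoted above: half banked, half given back to the user.
        self.budget_ms += saved_ms / 2
        self.returned_to_user_ms += saved_ms / 2

    def try_ship(self, cost_ms: float) -> bool:
        """A latency-costing feature ships only if it fits the banked budget."""
        if cost_ms <= self.budget_ms:
            self.budget_ms -= cost_ms
            return True
        return False

ledger = LatencyBudget()
ledger.record_saving(3.0)      # the 3ms optimization from the example
print(ledger.budget_ms)        # 1.5
print(ledger.try_ship(2.0))    # False: a 2ms feature doesn't fit yet
```

    The elegant part of a rule like this is the incentive: the only way to earn the right to add latency is to first remove more of it.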

    Have most of us felt that improvement? Honestly, probably not. Many of us have been doing our searching inside AI tools instead. But the principle resonates. Speed isn't just a feature. It's a signal of engineering quality. That's something I try to keep in mind when building our own site.

    Security: The Constraint Nobody’s Talking About

    This was the most striking moment in the conversation. Sundar warned that AI models are “definitely really going to break pretty much all software out there.” He said the black market price of zero-day exploits is dropping because AI is increasing the supply of discovered vulnerabilities faster than they can be patched.

    Think about what that means. Every piece of software running your business, your bank, your hospital. AI can now find weaknesses faster than humans can fix them. Anthropic recently demonstrated this with their Mythos model, which can autonomously discover software vulnerabilities. Useful for defense, but the offensive capability exists too.

    This is the catch-up we're going to have to do. Not on one particular application. On entire systems of software. The security infrastructure built for the pre-AI era isn't designed for a world where vulnerability discovery is automated.

    The Long Bets That Paid Off

    What I find interesting about Google’s position is the pattern of taking long, difficult bets that look questionable at the time and then becoming essential infrastructure. Search. Gmail. Android. YouTube. Chrome. Google Maps. Cloud. TPUs. Waymo.

    Sundar’s current moonshots follow the same pattern. Data centers in space, started as a tiny team with a small budget. Gemini Robotics partnering with Boston Dynamics. Isomorphic Labs doing AI drug discovery. Wing drone delivery targeting 40 million Americans. He says they start small even for big ideas, which is how you avoid betting the company while still staying at the frontier.

    Google's CapEx for 2026 is $175 to $185 billion. But Sundar says they literally couldn't spend $400 billion even if they wanted to. The constraints aren't financial. They're physical: memory chip manufacturing, power grid permitting, construction speed, and skilled labor. He made an interesting admission: “You're in awe of the pace in China. We need to learn to build things much faster.”

    That's a revealing constraint. The AI race isn't just about algorithms anymore. It's about building physical infrastructure fast enough to run the algorithms. And right now, nobody has enough.


    What I’ve Been Building With This Week

    I spent part of this week digging into how to check AI-generated code for errors. If you're using AI to write code, and most developers are now, you need something on the other side verifying the output.

    I've been using CodeRabbit for automated pull request reviews on GitHub. It catches the basics like syntax errors and security flags, but misses the bigger stuff: intent mismatches, performance implications, whether the code actually does what you asked for.

    Cursor is my main development environment. I like it because I can see every change the AI proposes, read the diffs, and approve or reject each one. It feels like pair programming where I stay in control. I'm also learning from it. Seeing how it structures code teaches me patterns I wouldn't have picked up otherwise. The file structure makes sense to me, and when I want to dig deeper into why something was done a certain way, I can.

    Claude Code is for the bigger tasks. Refactors, architectural changes, anything that requires executing multiple steps. It's more autonomous. I describe what needs to happen, and it works through the steps. Between the tools I have a generation layer and a review layer, and the combination works.

    The insight that keeps coming up across developer communities: the quality of AI-generated code has less to do with which tool you use and more to do with how well you describe what you want. Planning and clear instructions beat tool selection every time.


    For My E-Commerce Friends: Team Structure and Organic Distribution

    Switching gears. I listened to Marketing Operators Podcast Episode 106 this week, hosted by Cody Plofker (CMO at Jones Road Beauty), Connor Rolain (VP Growth at Ridge), and Sean Frank (Ridge founder). If you run a brand, this one's worth 45 minutes of your time.

    Their thesis: if you're building a brand in 2026, your first hire should be a head of creator, not a head of growth.

    The old playbook (find product-market fit, turn on Meta ads, hire a media buyer to scale) built a lot of $10 to $100M brands. But it also created deep dependency on paid channels and rising acquisition costs. The new playbook: build organic distribution first through founder content, creator seeding, community, and affiliate partnerships, then layer paid on top.

    Cody traced the role evolution: Head of Growth → Creative Strategist → Head of Creator. His take is that the algorithms on TikTok, Instagram, and YouTube are all heading toward rewarding authentic content. The person who can build community, manage creator relationships, and produce content that works in both organic and paid channels is more valuable than a media buyer.

    Connor brought a reality check from launching Gut Culture, a new brand from the Ridge team. Even brands getting hundreds of organic TikTok posts per week are still seeing 70 to 80% of their TikTok Shop revenue come from ad dollars promoting that content. Organic is the foundation, but paid is still the accelerant at scale.

    The structural insight I found most useful was what Cody called the Kizik model: own content production internally, outsource media buying. Most DTC brands do the opposite. Kizik flipped it and it worked.

    Thinking about where my company is heading, this resonates. We've historically leaned on paid channels across our marketplace business. But this year and into next, we've been making some bets on the UGC and creator side, and I think we're heading in the right direction. This podcast validated a lot of what we're already starting to build toward. If you're in e-commerce and thinking about the same shift, it's worth the listen.


    Robots Are Racing This Weekend. In Boston and Beijing.

    This is one of those weeks where you have to stop and appreciate the timing.

    The 130th Boston Marathon is Monday, April 20. Over 30,000 runners from 137 countries, the world's most iconic footrace. The day before, on Sunday, April 19, two robot races are happening simultaneously on opposite sides of the planet.

    In Beijing, over 100 teams will compete in the 2026 Humanoid Robot Half-Marathon, running the full 21-kilometer course through the city's E-Town development zone. This is the second year of the event and participation has surged nearly fivefold. Unitree Robotics just announced their H1 hit a sprint speed of about 10 meters per second. For context, Usain Bolt's average speed over his world-record 100m was 10.44 m/s (his peak was faster still). This year's race features a “human-robot co-run” format where human runners and robots start simultaneously and share the same course. Around 40% of teams are now running fully autonomous.

    In Boston, the ProRL Combine is launching in the Seaport District. America's first professional robotics sports event. Humanoid and quadruped robots from leading manufacturers, universities, and research labs will compete in speed races, obstacle courses, and precision challenges on a spectator-lined course. The league is backed by the former CEO of the Boston Athletic Association, the same organization that runs the Boston Marathon. Their stated mission: build public acceptance for robotics through sports and entertainment, the same way NASCAR did for automotive engineering.

    Same city. Same weekend. Humans running 26.2 miles on Monday, robots racing on Sunday. If you want a snapshot of where we are in April 2026, that's it.

    I've been following robotics closely in this newsletter, and events like these are how you track real-world capability. Lab demos are one thing. Running 21 kilometers on city streets, navigating terrain, managing battery life, maintaining balance. That's a completely different benchmark. I'll cover the results next week.


    The Thread That Connects Everything

    A lot of stories this week, but one thread runs through all of them.

    AI companies are being valued on revenue trajectory because the market they're addressing, intelligence itself, is unlike anything we've ever seen. Google invented the foundational architecture and is spending $175 billion a year to scale it, but even they can't build fast enough. The constraints are now physical, not intellectual. That intelligence is reshaping how software gets built, turning code review from a manual chore into an automated layer. It's reshaping how brands think about team structure. And robots are racing in Boston and Beijing the same weekend as the Boston Marathon.

    The common thread: compounding systems beat rented access. Whether that's Anthropic's coding flywheel, Google's decades of infrastructure bets finally converging, or the organic-to-paid marketing flywheel. The businesses and builders winning right now are the ones building things that get better with use, not just bigger with spend.

    And maybe that's the real takeaway this week. Four astronauts traveled farther from Earth than anyone in history, using the most advanced technology ever built. And the thing they couldn't stop talking about was each other. The hugs, the tears, the bond. Christina Koch looked at Earth from 250,000 miles away and her conclusion was that we are a crew.

    All this technology we're building, all these tools, all this intelligence. It's extraordinary. But the point of it was never the technology. The point is what it lets us do together.

    Next week I'll cover the robot half-marathon results and have more episodes to share. Stay tuned.


    This is Issue #4 of “The Week in Technology,” a weekly newsletter at ademogunc.com.

    Sources and Recommended Watching:


    About me: I'm Adem Ogunc. I run Well Woven, a rug company based in Easton, PA, and FurniPulse, a home furnishings trade intelligence platform. I've been in the rug industry for over 20 years and building with AI tools for the last couple of years, using them to run advertising, manage operations, and build our e-commerce infrastructure. I write this newsletter because I'm genuinely fascinated by what's happening in technology right now and I wanted a place to think out loud about it. If you're curious about any of this, whether you're in tech, e-commerce, home furnishings, or just trying to figure out what's going on, I'm glad you're here.

  • Week in Tech #3: From the Moon to Your Shopping Cart, What Actually Happened This Week


    April 5, 2026

    One of my favorite things to do on Sundays is go through everything I read, researched, and worked on during the week and try to make sense of it. I started doing this publicly a few weeks ago. You don’t need to be technical to follow along. You don’t need to be in the industry. You just need to be curious about where things are heading.

    This was one of those weeks where the headlines feel like they’re from different centuries. On one end, astronauts left Earth for the Moon for the first time since the Nixon administration. On the other end, the Department of Transportation said they’re not interested in AI chatbots. Both things happened in the same seven days.

    That contrast is kind of the theme. Let me walk through it.

    We sent humans to the Moon again

    Artemis II launched from Kennedy Space Center on Tuesday, April 1st. Four astronauts: Commander Reid Wiseman, Pilot Victor Glover, Mission Specialists Christina Koch and Jeremy Hansen from the Canadian Space Agency. They’re on a roughly 10 day trip around the Moon aboard the Orion spacecraft.

    This is the first time human beings have left Earth orbit in 53 years. The last time this happened was Apollo 17. December 1972. Most people reading this were not alive.

    I want to talk about the decision making behind this because I think that’s the real story, not the engineering specs. NASA had been building toward this orbital station called Gateway for years. They looked at it, decided it wasn’t the right path, and scrapped it. Pivoted to permanent Moon bases instead. They also announced a nuclear powered Mars mission for 2028. And they did all of this while the White House proposed cutting NASA’s budget by 24% and its workforce by 32%.

    And they still launched on the first attempt.

    By April 2nd, Orion completed its translunar injection burn and left Earth orbit for good. Jeremy Hansen said from space: “Humanity has once again shown what we are capable of.” By the end of this week, they were approaching the Moon and preparing for a flyby that will break Apollo 13’s all time human distance record by over 4,000 miles. They’re planning to recreate the famous “Earthrise” photograph from Apollo 8. Victor Glover sent an Easter message from space.

    I keep coming back to this because the lesson isn’t about rockets. It’s about focus. An organization under enormous pressure decided to do fewer things and do them right. They killed a project that wasn’t working, redirected toward something more ambitious, and shipped. First principles thinking. That’s what building a nation looks like. That’s what American ingenuity looks like. Science, technology, innovation, and the courage to make hard calls. I don’t care what side of any debate you’re on. That kind of leadership is inspiring. Politics agnostic. Full stop.

    It’s not just about the cool technical achievements. What hasn’t been done in decades is now happening because of focused leadership, speed in decision making, and a willingness to prioritize. That’s what I think we need to see more of everywhere. Not just at NASA.

    Also worth mentioning: SpaceX filed for what could be the largest IPO in history this same week, targeting a $1.75 trillion valuation. And ULA launched a record 29 Amazon satellites on a single Atlas V. Space is becoming a real industry, not just a government program.

    Gemma 4 and why Google deserves more credit than they get

    A little background for anyone who doesn’t track this world closely.

    Google published a research paper in 2017 called “Attention Is All You Need” that introduced the transformer architecture. That’s the core technology underneath every AI model that exists today. ChatGPT was built on it. Claude was built on it. Every one of them. Google also invented the TPU, the custom chip that made training these models at scale even possible. And back in 2014, when Sergey Brin and Larry Page were still running the company hands on, they acquired DeepMind, which became one of the most important AI research labs in the world.

    So when people talk about the AI revolution, Google literally built the foundation. Everyone else is building on top of what they created.

    And what they keep doing, which I think doesn’t get enough attention, is making it available to everyone.

    On April 2nd, they released Gemma 4. Four model sizes. Fully open source under the Apache 2 license, meaning anyone can use it for any purpose: commercial, personal, modify it, build products on it. No restrictions. The smallest version runs on a phone. The biggest competes with frontier models from any lab and handles text, images, video, and audio all in one model.

    The one that caught my eye is the 26 billion parameter version that only activates about 4 billion parameters at a time. It’s like a big engine that cruises on four cylinders when it can. It scores nearly as high as Google’s full 31 billion parameter dense model but uses way less compute. That’s meaningful for anyone thinking about running AI locally or at scale without massive costs.
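
    The arithmetic behind that "four cylinders" analogy, using the parameter counts from the release. Note that "compute scales with active parameters" is a rough rule of thumb for sparse models, not an exact cost model:

```python
def active_fraction(total_params_b: float, active_params_b: float) -> float:
    """Share of parameters a sparse (mixture-of-experts-style) model uses per token."""
    return active_params_b / total_params_b

# Gemma 4 figures quoted above: 26B total parameters, ~4B active at a time.
frac = active_fraction(26, 4)
print(round(frac * 100, 1))  # ~15.4 (% of the model working on any given token)
```

    Only about a sixth of the engine firing per token, while scoring close to the 31B dense model, is exactly why this architecture matters for running AI locally.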

    400 million+ downloads since the Gemma family launched. Over 100,000 community variants people have built on top of it. They also released specialized versions for medical, translation, security, and embeddings. And they built it into Android’s on device AI system so developers can run Gemma 4 natively on phones.

    I’m downloading it right now to test on my Mac Studio. The fact that I can run a model this capable on a machine sitting on my desk, for free, is kind of amazing.

    The company that invented this technology is supporting the community and the ecosystem by keeping it open. In a world where everyone’s trying to build walled gardens, I think that deserves to be called out.

    AI is about to change how we buy things online, and most sellers aren’t ready

    This is the section I care about most personally because I sell products across Amazon, Shopify, Wayfair, TikTok Shop, and wholesale every single day.

    Google and Shopify co-developed the Universal Commerce Protocol. In simple terms: it’s a shared set of rules that lets AI assistants actually shop for you. When you ask ChatGPT or Google Gemini to find you a product, this protocol is what allows the AI to browse a store, understand what’s available, handle checkout, and manage returns, all without the store needing a custom connection to every AI platform.

    This matters because shopping is moving into conversations. Not websites, not apps. Conversations. And Shopify’s data shows it’s happening now: traffic from AI tools up 7x since January, purchases from AI searches up 11x. They launched Agentic Storefronts to let millions of merchants sell directly inside AI chats.

    Over 20 major companies backing it: Walmart, Target, Stripe, Visa, Mastercard, Home Depot, Wayfair, Etsy. The open source spec is live at ucp.dev. Shopify’s engineering team published a deep dive on how the architecture works.

    The competitive angle: OpenAI tried building shopping directly inside ChatGPT and it didn’t work well. Walmart’s internal data showed conversion rates were 3x lower when people bought directly in ChatGPT compared to being sent to the retailer’s own site. So OpenAI is pivoting to an app model. Google and Shopify are building the universal rail. As an operator, that’s the one I’m watching.

    The question I’m asking myself: is my catalog ready for agents? If an AI can’t find my products, describe them correctly, and process a purchase, I’m invisible to whatever shopping looks like in two years.
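
    Here's my rough "is this listing agent-readable?" checklist turned into code. The required fields are my own guesses at what an AI shopper needs, not any standard:

```python
# Agent-readiness check for a product listing. REQUIRED is my assumption
# about what an AI shopping agent needs, not a published standard.

REQUIRED = ("title", "description", "price", "currency", "availability", "image_url")

def agent_ready(product):
    """Return the problems an AI shopping agent would likely trip on."""
    issues = [f"missing: {field}" for field in REQUIRED if not product.get(field)]
    description = product.get("description", "")
    if description and len(description) < 50:
        issues.append("description too thin for an agent to describe accurately")
    return issues

listing = {
    "title": "Tribal 5x7 Area Rug",
    "price": 129.99,
    "currency": "USD",
    "availability": "in_stock",
    "image_url": "https://example.com/rug.jpg",
}
problems = agent_ready(listing)  # this listing has no description at all
```

    Running something like this across a full catalog is a weekend project, and it tells you exactly where an agent would fail to sell your product.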

    On Tobi Lütke’s AI memo: it just turned one year old. That’s the one where the Shopify CEO said AI usage is now a fundamental expectation and teams need to show AI can’t do the job before asking for headcount. A year later, the results are in the product. This week: new API with B2B wholesale, tariff support, translated metafields. They shipped.

    On tariffs: April 2nd was the one year anniversary of “Liberation Day” and the situation got worse, not better. 100% tariff on patented pharmaceuticals. Strengthened duties on metals, 200% on Russian aluminum. Effective tariff rate hit 11%, highest since 1943. Amazon adding 3.5% surcharges on FBA. Framework Laptop paused U.S. sales on some models because they’d sell at a loss.

    I live this. Every week I’m negotiating with Turkish factories on raw material increases caused by the same forces. The compliance paperwork alone hits small companies disproportionately.

    Robots are not slowing down

    Multiple developments worth knowing about.

    Figure AI demonstrated Figure 03. Third generation, designed for real production. They’re assembling one robot roughly every 90 minutes. That’s a production line, not a prototype shop. I’m gonna watch the full demo walkthrough this week. Excited about this one.

    Boston Dynamics and Hyundai announced plans to deploy “tens of thousands” of robots across manufacturing. Here’s the context most coverage misses: Hyundai owns Boston Dynamics. They acquired them. So the parent company just became the biggest customer of its own subsidiary. That’s not a test run. That’s a corporate level strategic bet. DHL also signed on for another 1,000 robots.

    Generalist AI released GEN-1, an embodied AI model claiming 99% task success where previous models hit 64%. The novel part: it trains on 500,000+ hours of human wearable data, not robot data, and then needs only about an hour of robot-specific tuning.

    Unitree filed for their IPO. Revenue grew from 123 million yuan in 2022 to 1.7 billion yuan in 2025. Their humanoid starts at $29,900.

    ProRL Combine is confirmed for April 19th in Boston. America’s first robot sports competition. Same day as Beijing’s humanoid robot half marathon. I’m gonna try to go to Boston for it. If anyone else is heading there, hit me up: adem@wellwoven.com.

    The AI money race and things that made me go hmm

    OpenAI closed a $122 billion round at an $852 billion valuation. Largest private funding round in history. Amazon put in $50B, SoftBank $30B, Nvidia $30B. Over 900 million weekly active users. IPO expected later this year.

    I’m still using Claude as my daily driver. It’s been fantastic. But you can’t ignore what $122 billion means for where this industry is going.

    Microsoft launched their own AI models this week, completely separate from OpenAI: speech to text, voice, image generation. Their speech model has the lowest error rate of anything available. Mustafa Suleyman said “we’re now a top three lab.” Microsoft put $13 billion into OpenAI and is now building competing models in house. That’s corporate strategy at its finest.

    Anthropic and the Pentagon are in a legal fight after Anthropic refused to let Claude be used for mass surveillance or lethal autonomous weapons. A federal judge called the blacklisting “Orwellian.” Ninth Circuit appeal deadline is April 30th. This one is going to matter a lot.

    Anthropic also blocked third party tools from running on subscription credits. OpenClaw on your Claude subscription? Done. Pay as you go only. I think that’s whack but I get the business logic.

    OpenAI acquired TBPN, the daily tech show on YouTube. If you haven’t seen it, it’s actually a pretty fresh take on tech and business. Been growing on me. First media acquisition by an AI company. Money talks, people.

    GPT-4o officially retired.

    On the tools I’m using: Cursor 3.0 shipped this week and I just upgraded. The new worktree command and best of n command (run multiple agents on the same task, pick the winner) are cool and I’m going to test them. But I’ll be honest, I’ve been moving toward Claude Code more and more lately. I used to love Cursor for the interface, the file directory on the left side, seeing the agents work on the side. It felt more descriptive, especially getting started. But Claude Code in plan mode lets you see everything it’s about to do before it does it. That’s genuinely useful.
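
    For what it's worth, the best-of-n idea is simple enough to sketch. The "agents" and the judge below are toy stand-ins, not how Cursor actually implements it:

```python
# Best-of-n in miniature: give n agents the same task, score every draft,
# keep the winner. These agents are mocked stand-ins, not Cursor internals.

def best_of_n(task, agents, judge):
    """Run every agent on the task, return the draft the judge scores highest."""
    drafts = [agent(task) for agent in agents]
    return max(drafts, key=judge)

agents = [
    lambda t: t.upper(),  # agent 1: shouts
    lambda t: t + "!",    # agent 2: adds emphasis
    lambda t: t,          # agent 3: does nothing
]
# Toy judge: longest draft wins. A real judge would be tests plus human review.
winner = best_of_n("fix the bug", agents, judge=len)
```

    The interesting design question is the judge. With a good scoring function, running n agents buys you reliability; with a bad one, it just buys you n times the tokens.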

    And here’s the thing that surprised me this weekend: developing from my phone. I know mobile coding has existed in different forms for different tools. This is my first time really playing with it. And it’s amazing. Probably going to be addictive. I’ll report back on how it works in practice.

    Healthcare AI is starting to deliver real results

    This section makes me genuinely happy.

    Eli Lilly signed a $2.75 billion deal with Insilico Medicine for drug candidates discovered by AI. Their first compound went from identifying the target to starting human trials in under 30 months. That normally takes four to six years. As of early 2026, over 173 AI-discovered programs are in clinical development, with Phase I success rates running 80 to 90% compared to the historical average of 52%.

    Butterfly Network got FDA clearance for an AI-powered ultrasound tool that estimates gestational age in under two minutes without needing a trained sonographer. Nearly half of rural U.S. counties don’t have hospital obstetric services. This tool brings that capability to the places that need it most. It’s already deployed in Malawi and Uganda.

    Mount Sinai deployed AI across all seven of its hospitals, integrated directly into their electronic health records. Clinicians ask questions in natural language and get evidence backed answers with citations.

    The VA deployed agentic AI across 150 medical centers. Scheduling went from “a handful” of appointments booked per day to 25. VR therapy, which is showing a 46.7% reduction in pain, is expanding to 45 more centers.

    Here’s what I hope comes from all of this. I hope costs come down. I hope quality goes up. I hope diagnosis gets faster and more accurate and more accessible to people regardless of where they live or what they can afford. The business of healthcare has been this big complicated opaque expensive machine for too long. If AI can help democratize the knowledge and wisdom of medicine, if it can bring that information and that capability down to all of us, that’s the whole point. That’s what technology is supposed to do. I don’t know the perfect way to say it but I think most people feel the same.

    On the war

    I want to say this briefly and carefully because this is a technology newsletter, not a political one.

    The conflict continues to affect everyone, and as someone who imports physical goods, it’s in my world every day. Raw material costs are spiking. Conversations with suppliers are harder. There’s a lot of uncertainty about where things are going and what costs look like over the coming months.

    I don’t want to get political about it. I just want to say I’m hoping for peace. I think most of us are. It’s a terrible situation. My heart goes out to everyone affected by it. We just hope this resolves and the world starts to heal.

    The contrast: rapid acceleration and stubborn stagnation

    I want to end here because this has been on my mind all week.

    Rockets to the Moon. $122 billion funding rounds. Robots assembled every 90 minutes. Open source AI you can run on your laptop. Healthcare diagnosis in two minutes.

    And then:

    Google found that 80% of public servants believe AI could help them do their jobs. Only 18% think their governments actually use it well. That’s a 62 point gap between believing it works and actually deploying it.

    The DOT explicitly rejected AI chatbots this week. They’re consolidating 60 apps into 7 platforms. Expected timeline: 2028 or 2029.

    3D printed homes were supposed to cut housing costs in half. They sell for $375,000 to $600,000. Real savings: 5 to 15%.

    The UK’s government data portal has 25% broken links and surfaces outdated datasets. You can’t build AI on top of broken infrastructure.

    But there are bright spots. Maryland saved $400K using AI across 40,000 employees. Dearborn, Michigan resolves 70% of city calls with an AI chatbot in over 100 languages. Connecticut cut forensic investigation times from months to hours.

    The pattern is isolated proof points, not systemic change. The innovation exists. The institutions, the procurement rules, the regulations, they’re not ready.

    It’s an interesting time to be alive. Some things are moving at a speed that’s hard to comprehend. Other things feel like they haven’t moved in years. That gap between what’s possible and what’s actually deployed is the defining story of this moment.

    And honestly, closing that gap doesn’t require more technology. It requires patience. Education. And a willingness to sit down and actually understand what’s happening.

    Be patient. Do the work. Pay attention. That’s what I’m trying to do. That’s all any of us can do.


    I’m Adem Ogunc. I’m the founder of Well Woven, a rug importing and e-commerce company based in Easton, PA, and FurniPulse, a furniture industry intelligence platform. One of my favorite things to do on Sundays is sit down and go through what happened in technology. I do it because I think understanding technology is the most important skill of the next decade, and it shouldn’t require a CS degree or insider language to get into it. Issue #4. Thanks for reading. See you next week.

  • The Week in Technology — March 23–30, 2026

    NASA Is Going Back to the Moon — For Real This Time

    This was a massive week for space. Three announcements landed within days of each other, and together they represent the most significant shift in American space strategy since the Space Shuttle program ended in 2011.

    First, Artemis 2 is preparing for an April 1 launch. Artemis is NASA’s program to return humans to the Moon — the successor to the Apollo missions that landed the first astronauts on the lunar surface in 1969. Artemis 2 will be the first crewed flight of the program: four astronauts will fly around the Moon aboard NASA’s Orion spacecraft. If this launch goes as planned, it will be the first time humans have left low-Earth orbit since Apollo 17 in December 1972 — over 53 years ago. (Live updates via Space.com)

    Second, NASA is pivoting away from the Gateway space station and committing to permanent Moon base construction. Gateway was a planned orbital outpost that would circle the Moon and serve as a waypoint for astronauts traveling to the surface — think of it like a rest stop in lunar orbit. NASA has now shifted its strategy toward building infrastructure directly on the Moon instead. This is a fundamental change. Gateway was a compromise. Moon bases mean we’re not just visiting — we’re setting up to stay.

    Third, and maybe the most underreported: NASA unveiled Space Reactor-1 “Freedom,” a nuclear-powered spacecraft mission to Mars planned for 2028. Nuclear propulsion dramatically reduces travel time compared to traditional chemical rockets, and this announcement signals that Mars is no longer a distant aspiration — it’s on a two-year timeline. (Full NASA policy announcement)

    I think NASA deserves the top spot this week because these three stories together tell a single narrative: the United States is treating space as a destination, not a demonstration. Moon bases, nuclear propulsion, crewed deep-space missions — this is the kind of stuff that sounded like a pitch deck five years ago. Now it’s on a launch schedule.


    After GTC: What Stuck

    GTC — short for GPU Technology Conference — is NVIDIA’s annual flagship event. NVIDIA is the company that designs and manufactures the specialized chips (called GPUs) that power virtually all artificial intelligence systems today. Their CEO, Jensen Huang, has become one of the most closely watched figures in technology. I covered his keynote in detail last week, so I won’t rehash the product announcements. But a few things from his post-keynote interviews — particularly his conversation with Lex Fridman (Podcast #494) — kept rattling around in my head all week.

    The first is his framing of 100 AI agents per engineer. Not a hypothetical. His thesis is that every serious software engineer should be managing a fleet of AI agents — automated software programs that can write code, run tests, and solve problems semi-independently — that work faster than the engineer can review their output. The bottleneck has moved from what the technology can do to how fast a human can keep up with it.
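
    To make the fleet idea concrete, here's the structural pattern in plain Python, with a mocked agent standing in for the real thing:

```python
# The fleet pattern: fan tasks out to agents, collect results as they land.
# mock_agent is a stand-in; a real agent would write code and run tests.
from concurrent.futures import ThreadPoolExecutor, as_completed

def mock_agent(task):
    return f"done: {task}"

tasks = [f"refactor module {i}" for i in range(5)]
with ThreadPoolExecutor(max_workers=5) as pool:
    futures = [pool.submit(mock_agent, t) for t in tasks]
    results = [f.result() for f in as_completed(futures)]
# The human bottleneck Jensen describes is reviewing this results list
# faster than the fleet refills it.
```

    Scale the worker count to 100 and nothing about the structure changes. What changes is that the review queue never empties, which is exactly his point.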

    The second is his productivity metric, which I keep coming back to: “If your $500K engineer isn’t burning $250K in tokens, something is wrong.” Tokens are the units that AI systems use to process text — every time you interact with an AI tool, you’re spending tokens, and they cost money. Jensen’s point is that the salary is the floor. The AI spending is the multiplier. The value is in the combination.
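
    Run the arithmetic on that ratio and the number gets striking. The $5-per-million-token blended price below is my assumption, not a quoted rate; swap in your own:

```python
# Back-of-envelope on Jensen's $250K-in-tokens benchmark.
# price_per_million is an assumed blended rate, not a quoted price.

def implied_daily_tokens(annual_spend, price_per_million=5.0, workdays=250):
    """Tokens per workday a given annual budget buys at an assumed price."""
    total_tokens = (annual_spend / price_per_million) * 1_000_000
    return total_tokens / workdays

per_day = implied_daily_tokens(250_000)
# At these assumptions: 200 million tokens, every single workday.
```

    Nobody types 200 million tokens of prompts a day. A budget that size only makes sense if agents are running continuously, which is the whole argument.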

    What stayed with me is how naturally Jensen talks about this. He doesn’t frame it as futuristic. He frames it as obvious — the way a factory owner in 1920 would’ve talked about electrification. The question isn’t whether to do it. The question is why you haven’t yet.


    The AI Reality Check: Thoma Bravo, McKinsey, and the Automation Question

    This is the section where I try to hold two truths at the same time.

    Truth one: most companies are failing at AI.

    The data is brutal. McKinsey — one of the world’s largest management consulting firms, known for publishing influential research on business and technology trends — found in their latest report that 88% of companies are failing at AI transformation. The MIT NANDA Initiative (a research program at MIT studying how organizations adopt AI) pegged GenAI pilot failure even higher — at roughly 95%. S&P Global reported that 42% of companies had abandoned most AI initiatives by mid-2025, up from 17% the year before.

    McKinsey’s single biggest finding? Workflow redesign — not the technology itself — is the number one driver of whether AI actually moves the needle on earnings. Companies that fundamentally redesigned how their teams work around AI were 2.8x more likely to report meaningful financial impact. The AI isn’t the bottleneck. The organization around it is.

    Truth two: Thoma Bravo thinks the market has it completely wrong.

    Thoma Bravo is the largest software-focused private equity firm in the world — $183 billion in assets under management, over 565 software transactions across 40 years. When they share their view on the software industry, the investment world listens. At their annual LP (limited partner) meeting in March, Managing Partner Holden Spaht shared slides that pushed back directly on the market’s blanket AI-disruption thesis — the widespread fear that AI is about to destroy the software industry.

    Their argument: public software companies grew their top line at roughly 17% last year. Gross margins run around 74%. And 80–95% of next year’s revenue is already under contract through subscriptions and long-term agreements. Those are not the numbers of a sector in distress. Spaht argued that the revenue slowdown in software between 2022 and 2025 wasn’t AI’s fault — it was rising interest rates and COVID-era overselling catching up.

    At the same time, co-founder Orlando Bravo called AI and venture capital “absolutely in a bubble” and said “you just have to wait for it to pop.” So even the most bullish software investor in the world is drawing a line between software as a category (fundamentally strong) and AI as an investment theme (overheated and due for a correction).

    So where does that leave us?

    Here’s my take. A Harvard Business School study analyzed nearly all U.S. job postings from 2019 to 2025. Automation-prone roles — structured, repetitive cognitive tasks like data entry, basic analysis, and routine customer service — saw postings decline 17% per quarter per firm after companies adopted generative AI tools. But augmentation-friendly roles — analytical, creative, and collaborative work that requires human judgment alongside AI — saw postings increase 22%. A companion survey of 2,357 people across 940 occupations found 94% prefer AI as a collaborative tool rather than a replacement.

    Erik Brynjolfsson, a Stanford economist who studies how technology affects productivity, estimated 2025 productivity growth at 2.7% — double the previous decade’s average — but attributed the gains to augmentation, not replacement. His research shows AI automates codified textbook knowledge but struggles with tacit, experiential knowledge — the kind of judgment that comes from doing a job for years.

    Steve Wozniak — the co-founder of Apple — captured something real when he told CNN this week: “I don’t use AI much at all. I want something from a human being.”

    And 77% of CEOs told KPMG (one of the Big Four accounting and consulting firms) that GenAI was overhyped in the past year — but its true disruptive potential over 5–10 years is under-hyped.

    The pattern I keep seeing is what some analysts are calling “AI drafts, humans approve.” You can order DoorDash by voice now. But you still want to see the map. You still want to watch where your driver is. The interface — the dashboard, the visual confirmation, the human checkpoint — isn’t going away. It’s becoming the strategic layer. Autonomous AI agents still complete less than 2.5% of real-world tasks. The full-automation fantasy is just that. The real story is better tools in the hands of people who know how to use them.


    Robotics: A Marathon Is Coming

    Quick shoutout: ProRL (Professional Robot League) is launching America’s first robot sports league in Boston this April. Founded by David Grilk, with board member Tom Grilk (former CEO of the Boston Athletic Association, which runs the Boston Marathon), the league will debut with humanoid and quadruped robot competitions. (Forbes coverage)

    As Harvard-MIT robotics researcher Alexander Wissner-Gross put it: “One of the densest robotics talent corridors in America, home to Boston Dynamics, MIT, Harvard, and hundreds of startups, has never had a public-facing showcase for its own technology. We build the most advanced robots on Earth and then hide them at trade shows.” Meanwhile, Beijing’s second humanoid robot half-marathon is also set for April 19, with teams targeting finish times under one hour — within striking distance of human records. The robotics sports era is real.


    Karpathy’s AutoResearch: From Open-Source Tool to Operating Philosophy

    Andrej Karpathy is one of the most respected AI researchers in the world. He was a founding member of OpenAI, led Tesla’s Autopilot AI team, and is known for making complex AI concepts accessible. Earlier this month, he open-sourced a tool called autoresearch — a system that lets AI agents autonomously run hundreds of machine learning experiments overnight on a single computer, forming hypotheses, writing code, running tests, analyzing results, and looping without human intervention. (VentureBeat deep dive)

    Last week I covered the initial release and some of the jaw-dropping results. David Friedberg, a biotech entrepreneur and co-host of the All-In podcast (a popular technology and investing show), used it to replicate what would have been a seven-year PhD thesis in 30 minutes. Karpathy himself said he hasn’t typed a line of code since December.

    But this week, the story for me shifted from what the tool does to how the pattern applies beyond research labs.

    I spent time this week applying the autoresearch loop to my own e-commerce business. Here’s what that looks like in practice: I took my headless Shopify storefront — a modern web architecture where the visual front-end of the website is separated from the back-end commerce engine, giving you full control over design and performance — and started building an autonomous experiment loop for product page optimization. The system forms a hypothesis (for example, “moving the price higher on the mobile screen increases add-to-cart rates”), makes a single change, scores it against a rubric using automated testing tools and an AI visual judge, and either keeps or reverts the change. Then it loops again.

    I’m not a machine learning researcher. I run a rug company. But the pattern — hypothesize, change one variable, score objectively, loop — is universal. It works for tuning AI models on GPU clusters. It also works for product page layouts on an online store. The abstraction is the same.
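
    Here's that loop in miniature. The scoring function below is a toy stand-in; my real version scores with automated tests plus an AI visual judge:

```python
# Hypothesize, change one variable, score, keep or revert, loop.
# fake_score is a toy stand-in for real conversion measurement.
import random

def experiment_loop(config, candidates, score, rounds=100):
    """Greedy one-variable-at-a-time search over layout options."""
    best = score(config)
    for _ in range(rounds):
        key = random.choice(list(candidates))         # pick one variable
        old_value = config[key]
        config[key] = random.choice(candidates[key])  # try a single change
        new = score(config)
        if new > best:
            best = new               # keep the winner
        else:
            config[key] = old_value  # revert, then loop again
    return config, best

cfg = {"price_pos": "bottom", "cta_color": "gray"}
opts = {"price_pos": ["top", "bottom"], "cta_color": ["gray", "orange"]}

def fake_score(c):  # pretend price-on-top plus an orange CTA converts best
    return (c["price_pos"] == "top") + (c["cta_color"] == "orange")

final_cfg, final_score = experiment_loop(cfg, opts, fake_score)
```

    Twenty lines, and it never gets tired, never gets attached to a layout it liked, and never forgets to revert a loser. That's the part a human team can't match overnight.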

    This is what I think people are missing about Karpathy’s contribution. It’s not just a tool. It’s a way of thinking about improvement: make the feedback loop tight enough and fast enough that you can run more experiments in one night than a human team runs in a quarter. Whether you’re training a language model or optimizing a checkout flow, the principle is identical.


    Cursor: Why I Keep Coming Back

    Cursor is an AI-powered code editor — think of it as a version of the software developers use to write code, but with an AI co-pilot built directly into it. It competes with tools like Claude Code (Anthropic’s command-line coding tool) and GitHub Copilot (Microsoft’s AI code assistant).

    I’ve been using both Cursor and Claude Code for the past few weeks, and I want to share a perspective that might resonate with anyone who’s technical-curious but not a developer by training.

    What I love about Cursor compared to Claude Code is the transparency. I can actually read what’s going on. I can click into the AI’s reasoning. I can see the different stages of its work — what it’s considering, why it made a choice, where it’s heading next. For someone who’s non-technical but has a deep curiosity about how these tools think, that visibility is incredibly valuable.

    Claude Code is powerful. It’s fast, it’s agentic (meaning it can take actions independently), and it gets things done. But Cursor gives me something Claude Code doesn’t: the ability to learn while building. I can double-click into any stage, understand the rationale, and come away knowing more than I did before. For a founder who’s building their own infrastructure — not hiring a team to do it for them — that educational layer matters as much as the output.


    A Film About the Future (That Has a Real Chance to Get Funded)

    I’ll close with something personal. This week I started developing a short film concept for the Future Vision XPRIZE — a $3.5 million competition run by the XPRIZE Foundation (the organization famous for offering large cash prizes to incentivize breakthroughs in space, health, and technology). Backed by Google, ARK Invest, and Range Media Partners, this competition is looking for optimistic science fiction storytelling about humanity earning a better future. The deadline is August 15, 2026, and the deliverables include a 3-minute trailer, a 12-page treatment, and a 2-page synopsis. The grand prize winner receives $2.5 million in production funding plus $100,000 cash. (Variety | TechCrunch | Fortune)

    The concept is set in the near future — around 2040 — in a world where AI agents handle routine work and orbital space transit has become normalized for certain professionals. The story follows a small ensemble of people in intimate daily moments: a morning workout with an AI agent dashboard on a smart display, a backyard capsule that launches to a low-orbit transit hub, a “Grand Central Terminal in space” where commuters travel to the Moon, Mars, and orbital workstations.

    The tone I’m going for is Pursuit of Happyness meets Her — emotionally specific, relatable, grounded in real human experience, but set against technology that feels inevitable rather than fantastical. The kind of future you’d actually want to live in.

    I’m sharing this because I think the best way to shape the future is to tell stories about it. And because a rug company CEO writing a science fiction screenplay feels like exactly the kind of thing that should be possible in 2026.

    More on this as it develops. If you have thoughts, I’m all ears.



  • The Week It All Clicked: AI & Robotics, March 15–21, 2026


    Issue #1 — Week of March 15–21, 2026

    This might have been the densest week in AI and robotics I can remember. NVIDIA’s GTC conference dropped a two-hour keynote that rewrote the hardware roadmap. The All-In podcast went live from Austin with Jensen Huang and Michael Dell back to back. Andrej Karpathy sat down with Sarah Guo on No Priors and basically said he hasn’t written a line of code since December. Brett Adcock gave a full Figure AI headquarters tour on Peter Diamandis’s Moonshots podcast. Travis Kalanick came out of seven years of stealth. Every conversation I had this week — with friends, with suppliers, even with my own team — touched AI in some way.

    There’s a real split happening right now. People who are excited and people who are uneasy. Both are right. I’m not writing this to add to the hype. I’m writing it because I think we need more calm, grounded voices talking about what’s actually happening — from people who are using these tools in real businesses, not just commenting from the sidelines. Here’s what I consumed this week, what stood out, and what I think it means.

    The December Flip

    Karpathy’s No Priors episode was the one that stopped me mid-run this week. He told Sarah Guo that sometime in December, something flipped. He went from writing 80% of his code by hand and delegating 20% to agents — to the complete opposite. He said he hasn’t typed a line of code since. Not because he doesn’t want to. Because the agents are genuinely better and faster at it now.

    He released an open-source project called autoresearch — a 630-line Python script that autonomously runs machine learning experiments. He let it loose overnight on a model he’d hand-tuned over two decades, and it found optimizations he’d missed. Over two days, it ran 700 experiments and discovered 20 improvements. Shopify’s CEO tried it and reported a 19% performance gain from 37 overnight experiments.

    What hit me hardest was his framing. He didn’t call it “exciting” or “revolutionary.” He called it “AI psychosis” — this compulsive feeling that every minute you’re not running agents, you’re wasting tokens. You’re falling behind. He said his new productivity metric isn’t compute or flops — it’s token throughput. How many tokens are you commanding? That reframe is everything.

    This resonated hard because I’m living a version of it. I run a rug importing company — Well Woven — not a software company. And this week, I rebuilt our direct-to-consumer website into a headless architecture using Claude. Two days. Limited hours — not even full work days. That’s a project that would have been a $50K agency engagement six months ago. I’m not an engineer by training. I grew up watching my dad run a computer networking business, and I’ve always been the tech person in a room full of rug people. But what I did this week with an AI coding agent would have been impossible a year ago.

    Meanwhile, Garry Tan — the CEO of Y Combinator — released GStack, an open-source framework that turns Claude Code into a virtual engineering team. 15 specialist roles. CEO reviewer, staff engineer, QA lead, release manager — all prompts. He claims he wrote 600,000 lines of production code in 60 days while running YC. TechCrunch covered the polarized reaction — some people called it transformative, others called it “a bunch of prompts.” But 26,000 GitHub stars don’t lie. The tools crossed a threshold and the people paying attention know it.
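
    The "all prompts" part is worth sitting with, because the mechanism is almost embarrassingly simple. The role prompts below are illustrative, not Garry's actual ones:

```python
# Roles-as-prompts, sketched. These prompts are made up for illustration;
# GStack's real role definitions are in its open-source repo.
ROLES = {
    "staff_engineer": "You write the implementation. Favor small, tested diffs.",
    "qa_lead": "You only hunt for bugs, edge cases, and missing tests.",
    "release_manager": "You decide whether the diff is safe to ship.",
}

def as_role(role, task):
    """Wrap a task in a role's system prompt before sending it to the model."""
    return f"{ROLES[role]}\n\nTask: {task}"

prompt = as_role("qa_lead", "Review the checkout refactor")
```

    Same model, different standing instructions, and you get meaningfully different behavior out of each "team member." That's the whole trick, and it's why "a bunch of prompts" is both a fair description and an underestimate.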

    Jensen’s Vision: From Cloud to Desktop to Factory Floor

    GTC 2026 ran March 16–19 in San Jose with 30,000 people in attendance. Jensen Huang delivered a two-hour-plus keynote — and according to multiple sources, he had no scripted text on his teleprompter, only slides. When a transition stumbled, he joked: “This is what happens when you don’t practice.” Whether that’s showmanship or just the pace at which these CEOs have to operate, it’s telling. The man runs a $3 trillion company and wings a two-hour technical keynote.

    The headline announcement was Vera Rubin — NVIDIA’s next-generation AI platform delivering 3.6 exaflops of compute. But what caught my attention more was what he’s doing at the edge. Two new products:

    • DGX Station GB300 — a desktop supercomputer with 748 GB of memory that runs models up to one trillion parameters. Locally. The first unit went to Karpathy’s house.
    • DGX Spark — a $3,999 personal AI computer running models up to 120B+ parameters. Attendees could buy them on-site.

    Jensen’s thesis is clear: AI has to move to the edge. It can’t all live in the cloud. Cars, robots, telecom base stations, your desk — AI needs to run locally, and NVIDIA is building the hardware to make that happen. He projected at least $1 trillion in high-confidence demand through 2027.

    But the moment of the week came from David Friedberg on the All-In podcast’s live GTC episode. Friedberg runs Ohalo, a genomics and agriculture company. He described taking genomic data on a Friday, running Karpathy’s autoresearch tool on it, and getting results that would have been a celebrated PhD thesis — seven years of work — replicated in 30 minutes on a desktop computer. He said it would have been published in the journal Science. His team’s faces went blank watching it.

    Jensen’s response: “We are literally near the ChatGPT moment of digital biology.”

    And then Jensen dropped his own benchmark for the AI era: if your $500,000 engineer isn’t consuming at least $250,000 worth of AI tokens, something is wrong. That’s the new productivity equation.

    On the doomer narrative, Jensen was blunt in his Stratechery interview: he was surprised by how deeply the fear-based messaging had penetrated Washington. He argued that America’s real risk isn’t the technology — it’s fear and anger preventing adoption. And he’s right. The radiologist example he gave: more AI in radiology created more demand for radiologists, not less. The technology expands what’s possible.

    Dell: The Quiet Giant

    The All-In podcast’s SXSW live show from Austin also featured Michael Dell, and his story is one that doesn’t get enough attention. Here’s a guy who started in a UT Austin dorm room in the 1980s and has been present for every single wave of the technology revolution — PCs, networking, internet, cloud, mobile, and now AI. Dell Technologies is now generating roughly $140 billion in annual revenue, with their AI infrastructure business scaling from about $2 billion toward a $50 billion target and a $43 billion AI server backlog.

    The “up 100%” number Dell mentioned most likely refers to their AI infrastructure business and stock performance — Dell’s shares have roughly doubled from their 2024 lows. Impressive for a company most people still associate with the beige box under their desk in 2005.

    Dell said something that stuck with me: the barrier to AI adoption isn’t technology — it’s culture and leadership. That’s exactly what I see in my own industry. The tools exist. The cost is coming down. The bottleneck is people willing to learn, experiment, and change how they work.

    Then Brad Gerstner — the Altimeter Capital founder, the guy with the red glasses who’s a regular on All-In — came on stage with Dell to talk about the Invest America Act. It creates $1,000 tax-deferred investment accounts for every child born between 2025 and 2028. Michael and Susan Dell pledged $6.25 billion to add $250 to accounts for 25 million children in lower-income zip codes. Gerstner framed it as “a 401k from birth.” American ingenuity meeting American generosity. I appreciate that — technology should lift people up, not just create wealth at the top.

    The Agents Are Here

    Three competing visions of the “AI coworker” shipped this month, and the differences tell you a lot about where this is going.

    Perplexity Computer is cloud-first. It orchestrates 19+ AI models — routing to Claude for reasoning, Gemini for research, Grok for speed. CEO Aravind Srinivas said it best: “A traditional operating system takes instructions; an AI operating system takes objectives.” Their Personal Computer product runs 24/7 on your Mac mini and bridges local files with cloud AI. The enterprise version reportedly completed 3.25 years of work in four weeks during internal testing.
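    The routing idea is simple enough to sketch. Here is a toy version in Python — the task-to-model table is my own illustrative assumption, not Perplexity’s actual logic:

```python
# Toy sketch of "objectives, not instructions": classify what a task
# needs, then dispatch to whichever model family suits it. The routing
# table below is an illustrative assumption, not Perplexity's
# actual implementation.

ROUTES = {
    "reasoning": "claude",   # deep multi-step thinking
    "research": "gemini",    # long-context synthesis
    "speed": "grok",         # quick, low-latency replies
}

def route(task_kind: str) -> str:
    # Unknown task kinds fall back to a general-purpose default.
    return ROUTES.get(task_kind, "claude")

print(route("research"))  # gemini
print(route("haiku"))     # claude (fallback)
```

    The point isn’t the lookup table; it’s that the orchestration layer, not any single model, becomes the product.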

    Claude Dispatch (from Anthropic) is local-first. It launched March 17 as part of Claude Cowork — you scan a QR code, send instructions from your phone, and come back to finished work on your desktop. Everything stays on your machine in a sandboxed environment. It works with 38+ connectors — Notion, Gmail, Slack, GitHub, Figma. Early reports say about 50% success rate on complex tasks, which is honest. It’s fast on simple stuff, unreliable on multi-step workflows. But the direction is clear.

    GStack takes a different approach entirely — it’s not a product, it’s a methodology: Garry Tan’s open-source prompt system, which assigns cognitive “gears” to Claude Code so it thinks like different team members depending on the task.

    I’ve been running my own version of this for months. I built an agent system called OpenClaw that runs on a Mac Studio in my office. It manages Amazon advertising, audits FedEx billing, monitors inventory, and handles operational communications through WhatsApp and Slack. The concept is the same across all of these: connect the brain (AI models) to the hands and legs (APIs, tools, your desktop, your browser). Let the system take objectives, not instructions. December’s OpenClaw was early — now the major companies are shipping their versions of it.

    The bottleneck isn’t the AI anymore. It’s the human trying to keep up.

    AI Fatigue Is Real

    Which brings me to the other side of this. A Harvard Business Review study published this month surveyed 1,488 U.S. workers and introduced a term I think we’ll be hearing a lot: “AI brain fry.” Workers managing multiple AI agents reported 14% more mental effort, 12% more fatigue, and 19% greater information overload. CNN Business picked it up. One engineering manager described it: “I had a dozen browser tabs open in my head, all fighting for attention.”

    Karpathy calls it “AI psychosis.” I’ve felt it. You’re running three agent sessions, code is generating faster than you can review it, and you start to wonder if you’re the bottleneck or the quality control — and whether those are the same thing.

    Here’s the nuance though: the study found that when AI is used to reduce routine work, it actually lowers burnout. The problem is specifically the overhead of supervising multiple autonomous agents. The system runs faster than the human loop.

    I think about this like email in 2005. Remember when email was the new thing that was going to ruin everything? The people who mastered it pulled ahead. The people who didn’t fell behind. And everyone in the middle was overwhelmed. We’re in the same place now, just at a much faster clock speed. The answer isn’t to stop using the tools — it’s to build better guardrails, better workflows, and have honest conversations about what these systems actually do and what they can’t.

    Robots Are Learning to Walk

    Now let’s talk about the physical side — because this week, the AI and robotics stories became inseparable.

    Jensen dedicated a huge chunk of GTC to what he calls “Physical AI.” NVIDIA announced that the four largest industrial robot manufacturers — ABB, FANUC, YASKAWA, and KUKA — are all integrating NVIDIA’s simulation and training stack. 110 robots were on the GTC show floor. Jensen’s thesis: every industrial company will become a robotics company. NVIDIA’s play is to be the platform they all build on.

    But the episode that really got me this week was Brett Adcock on Peter Diamandis’s Moonshots podcast. Adcock is the founder of Figure AI, and he gave a full HQ tour while explaining their Helix 02 AI system. The headline: they deleted 109,504 lines of C++ code — all the hand-engineered locomotion and control — and replaced the entire stack with neural networks. End to end. The robot’s eyes, hands, feet, and legs all run on inference simultaneously. No more traditional code telling the robot how to walk. The network figures it out.

    The demo was wild. 61 separate manipulation actions in a continuous kitchen task — loading and unloading a dishwasher with no resets and no human intervention. Then a March 9 demo showed the same system tidying a living room it had never been in. The fleet learning moat is what matters here: once one robot learns a task, every robot in the network knows it.

    Adcock’s not naive about it though. He said: “Until I feel safe enough to have it there with free rein around all my kids, it’s not ready for everyone.” That’s the kind of honesty this space needs.

    Figure isn’t alone. Sunday Robotics showed up on Moonshots with a completely different philosophy — their Memo robot is deliberately not humanoid. Wheels, not legs. Friendly form factor with colored caps instead of the warrior aesthetic most humanoids are going for. Their argument: trustworthy is more important than human-shaped. And they backed it up — first robot to fold socks, handles wine glasses (transparent, reflective, fragile objects that most robots can’t touch), 33 unique interactions clearing a single table. They raised $165M at a $1.15B valuation.

    And then there was Travis Kalanick. The Uber founder came out of seven years of stealth at the All-In Summit in Austin to reveal ATOMS — a company working at the intersection of physical infrastructure and AI. His framework is brilliant: Manufacturing = CPU. Real estate = Storage. Logistics = Network. Three verticals: cloud kitchens (what he calls a “food computer”), mining automation (acquiring Pronto), and a wheel-base platform for specialized non-humanoid robots. 30 countries. Thousands of employees who couldn’t even put the company name on their LinkedIn for seven years. His thesis: the ChatGPT moment for physical AI is imminent.

    Meanwhile in China, Unitree Robotics filed for a $610 million IPO on the Shanghai STAR Market. Revenue up 335% year-over-year. They shipped 5,500 humanoid robots in 2025. Their CEO predicted humanoid robots will run faster than Usain Bolt by mid-2026. The US-China robotics race is real and accelerating.

    What I’m Watching

    A few threads I’m keeping an eye on going forward.

    Chelsea Finn at Physical Intelligence is building what might be the most important project in robotics right now — a foundation model for robots. Think of it like the difference between a calculator (does one thing well) and a smartphone (does everything). Her team trained a 3-billion-parameter model across 100+ unique rooms and achieved 80% success on tasks in homes it had never seen. The key insight: a general-purpose robot that can do many things actually outperforms a specialist robot built for one task. Same lesson as LLMs — general beats narrow.

    The governance question keeps me up at night. We lived through the social media era and watched algorithms reshape how people think, vote, and interact — without adequate guardrails. If you experienced what that did to public discourse, to elections, to teenagers’ mental health, you know we need to do better this time. Big company bureaucracy and government bureaucracy are both too slow to manage this on their own. Maybe it’s the models themselves that can help synthesize logic and separate signal from noise. Maybe it’s industry leaders getting together proactively rather than waiting for regulation. I don’t have the answer. But I know politicizing this — from either side — is the worst possible approach. This is about leadership, literacy, and staying ahead of the technology rather than being consumed by it.

    An NBC News poll found 57% of Americans say AI risks outweigh the benefits. Only 26% hold positive views. But inside the companies building this stuff, conviction has never been higher. That gap — between public anxiety and industry certainty — is the defining tension of this moment. The optimism and the caution need to coexist.

    See You Next Sunday

    We’ve lived this our whole lives. I watched the generation before me get disrupted by the people who embraced computers. Even something as simple as using email instead of face-to-face meetings was a real differentiator when I was first getting into business. Each wave creates a new gap between the people who adapt and the people who wait.

    This week convinced me that the gap between people watching AI and people using AI is becoming permanent. The tools are here. The infrastructure is being built. The robots are literally walking. The question isn’t whether this is real — it’s whether you’re going to engage with it or wait until it’s too late.

    I’m writing this from the perspective of someone running a physical goods business who’s choosing to engage. Not a Silicon Valley insider. Not an AI researcher. A rug company CEO with a Mac Studio, a curious mind, and a conviction that this moment is as pivotal as the PC revolution my dad’s generation lived through.

    I’ll be back next Sunday.

    — Adem


    Sources & What I Watched This Week

    Podcasts & Videos:

    Key Articles & Announcements:

    This is a weekly roundup of AI and robotics news as seen through the lens of a founder building at the intersection of physical commerce and technology. Subscribe to get it every Sunday.

  • I Run a Rug Company. Here’s Why I Can’t Stop Thinking About Robots.

    The state of robotics in 2026, through the eyes of someone who moves physical goods for a living


    I’ve spent the last decade importing, warehousing, and shipping rugs. Turkish supply chains. A warehouse in Easton, PA. Pallets moving across Amazon, Wayfair, and our own DTC site. My world is physical — heavy, dusty, measured in square feet and container loads.

    So when Travis Kalanick dropped a 1,700-word manifesto on March 13 at atoms.co/vision announcing his robotics company Atoms — the culmination of eight years in stealth, thousands of employees, and over a billion dollars raised — I didn’t read it like a VC hunting for deal flow. I read it like someone who moves physical goods for a living and felt something click.

    What follows is my attempt to think out loud about what’s actually happening in robotics right now — the real numbers, the physics constraints that matter, the global geography of who’s building what, and why I believe the biggest opportunities will belong not to the people inventing robots, but to the people who understand the messy reality of deploying them. I’m not a robotics expert. I’m a rug guy who pays attention. But I think the distance between those two things is collapsing, and that’s exactly the point.


    Where I’m coming from — and why it matters

    My dad ran a computer networking business in the ’90s. He connected law offices, dry cleaners, and small businesses with networks — selling hardware, running cable, building gaming PCs on the side. I watched that transformation happen in real time. When I was in elementary school, less than 10% of kids had a computer at home. By the time I graduated high school, the iPod had just come out and was still a novelty. I saw firsthand what happens when technology meets small business operations: everything changes, slowly at first, then all at once.

    That pattern has repeated itself across every industry I’ve watched since — e-commerce, mobile, cloud computing, AI. Each time, the winners weren’t always the people who invented the technology. They were often the people who understood a specific domain deeply enough to know where the technology should go. Jeff Bezos didn’t invent the internet. He understood books and logistics.

    Today I’m the founder and CEO of Well Woven. We design and sell area rugs — thousands of SKUs, sourced primarily from Turkey, warehoused in Pennsylvania, sold across every major marketplace. I’ve also built side projects like FurniPulse, a furniture industry news aggregator, because I can’t stop tinkering. I’ve spent years optimizing operations, managing supplier relationships across continents, navigating warehouse workflows, and obsessing over the gap between how things should work and how they actually work.

    I say all this not as a résumé dump, but as context. I operate in atoms, not bits. And that orientation is precisely why the robotics moment feels personal.


    What Travis Kalanick actually said — and the idea I can’t shake

    The most interesting idea in the Atoms manifesto isn’t the robots. It’s a framework Kalanick calls “valuable unknown truths” — the idea that competitive advantage comes from knowing things others don’t know, which lets you do things others can’t do. These unknown truths compound: organizations that get good at discovering them develop advantages that widen over time.

    His framing is deliberately provocative. CPUs manipulate bits; manufacturing manipulates atoms. Storage stores bits; real estate stores atoms. Networks move bits; transport moves atoms. The digital world has been optimized relentlessly for three decades. The physical world remains, in his words, “largely untouched territory.”

    Kalanick’s stance on humanoid robots is the most contrarian part: he’s explicitly anti-humanoid. He wrote about watching a humanoid half-marathon and thinking they’d all be better off with wheels. His pancake factory analogy crystallizes it — if you need 1,000 pancakes per hour, a humanoid flipping them individually is absurd. You build a specialized machine with a heated iron cooking 100 at once. He calls them “gainfully employed robots” — machines designed for specific jobs, not designed to look like us.

    Atoms’ product is a “wheelbase for robots” — a standardized chassis with power, compute, and sensors that can be outfitted for specific industrial tasks. They’re acquiring Pronto, Anthony Levandowski’s autonomous mining vehicle startup, and targeting food production, mining, and transport. Over a billion dollars raised. Thousands of employees. Eight years of quiet building.

    But here’s the idea I can’t shake: the unknown truth in robotics isn’t whether the technology works. It’s knowing how to deploy it in a specific environment, with specific constraints, specific products, specific labor realities. That gap between “the technology exists” and “we know how to make it work here” is enormous. And it’s where the real value will be created.


    The physics are humbling — and that’s actually what makes this interesting

    One reason I find robotics fascinating rather than intimidating is that the constraints are brutally honest. Software can paper over a lot of inefficiency. Hardware can’t. Physics doesn’t care about your pitch deck.

    Here are the numbers that stopped me cold.

    Power is the binding constraint for mobile robots. Current humanoid robots achieve only 2–4 hours of operation per charge. The battery can only be about one-eighth of the robot’s total weight — any heavier and the machine can’t balance or move efficiently. Compare that to an electric vehicle, where the battery is roughly one-third of the car’s total mass. That single ratio — 12% versus 33% — reshapes everything about how you design and deploy a mobile robot. It means shorter operating windows, more charging infrastructure, and fundamentally different operational workflows than the “drop it in and forget it” fantasy.
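    The arithmetic behind that ratio is worth making concrete. A minimal sketch, with illustrative numbers — the robot mass, power draw, and cell energy density below are my assumptions, not any vendor’s spec:

```python
# Back-of-the-envelope: how battery mass fraction and energy density
# bound a mobile robot's operating window. All numbers are illustrative
# assumptions, not specs for any particular robot.

def runtime_hours(robot_mass_kg, battery_fraction,
                  energy_density_wh_per_kg, avg_power_draw_w):
    """Estimate hours of operation per charge."""
    battery_mass = robot_mass_kg * battery_fraction
    capacity_wh = battery_mass * energy_density_wh_per_kg
    return capacity_wh / avg_power_draw_w

# A ~60 kg humanoid limited to ~1/8 of its mass in battery,
# with mid-range lithium-ion (~270 Wh/kg) and a ~700 W average draw:
print(round(runtime_hours(60, 1/8, 270, 700), 1))  # 2.9 hours

# The same cells at an EV-like ~1/3 mass fraction nearly triple that:
print(round(runtime_hours(60, 1/3, 270, 700), 1))  # 7.7 hours
```

    Under these assumptions the 2–4 hour figure falls straight out of the mass budget — no amount of software cleverness changes it, only better cells or a bigger allowable fraction.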

    Battery energy density is improving at about 5–8% per year. Current lithium-ion technology delivers 200–350 Wh/kg. Projections put solid-state batteries at 600–800 Wh/kg by 2030, but those won’t reach robotics production scale until 2028 at the earliest. This isn’t a software problem that an overnight breakthrough will solve. This is a materials science problem that improves on geological timescales compared to Moore’s Law.

    Perception degrades in exactly the conditions where you need it most. LiDAR — the laser-based spatial sensing system most robots use to navigate — suffers a 56% reduction in point cloud density in heavy rain. More than half your spatial data, gone because it’s raining. At 40+ mm/h rainfall, LiDAR systems can’t reliably detect standard traffic signs regardless of their reflective material. Standard depth cameras fail on transparent and reflective objects — glass, metal, water. Factories provide controlled lighting and known objects, achieving >99.9% accuracy. Real-world environments? Variable lighting across 8 orders of magnitude, unknown objects, clutter, moving people. The gap between demo conditions and deployment conditions is where most robotics companies die.

    Manipulation reveals Moravec’s Paradox in full force. The human hand has 27 degrees of freedom, 30+ muscles, and tactile sensors at 1mm intervals in fingertips. Individual robotic tactile sensors can now exceed human resolution at the component level — vision-based tactile fingertips resolve down to 30–100 micrometers. But at the system level — where perception, actuation, control, and durability must work together seamlessly — robots remain far from human versatility. Grasp success rates hit 94–97% for known objects in structured settings. In cluttered environments with novel objects? 83–87%. For transparent objects? 84%. Production environments need >99%. That double-digit gap between demo and deployment is the gap between a YouTube video and a working business.

    But here’s the statistic that really rewired my thinking. Industrial robot arms advertise a mean time between failures (MTBF) of 40,000 to 100,000 hours. Sounds incredible. But the actual robot cell MTBF — the entire working system — is only about 87 minutes. Why? Because 80% of failures come from peripheral equipment: grippers, conveyors, sensors, fixtures. The arm is fine. Everything around it breaks.

    That’s not a robotics problem. That’s an operations problem. And operations is something people like me actually understand.
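    That 80/20 split is a textbook series-reliability effect: when any single component failure stops the cell, failure rates add. A sketch with made-up peripheral MTBFs — only the arm’s figure echoes the advertised spec above:

```python
# For independent components in series, failure rates add:
#   1 / MTBF_cell = sum(1 / MTBF_i)
# so a handful of flaky peripherals swamps a 50,000-hour arm.
# All peripheral MTBFs below are illustrative assumptions.

def cell_mtbf_hours(component_mtbfs_hours):
    return 1 / sum(1 / m for m in component_mtbfs_hours)

cell = {
    "robot arm": 50_000,           # the headline spec
    "gripper": 6,                  # peripherals dominate the math...
    "conveyor": 8,
    "part-presence sensor": 12,
    "fixture/tooling": 10,
}
print(round(cell_mtbf_hours(cell.values()), 2))  # 2.11 hours
```

    Even with these generous made-up numbers, the arm’s contribution to the total failure rate barely registers; the cell is only as reliable as the gripper and the conveyor around it.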


    The $75–130 billion opportunity nobody talks about

    While the tech press obsesses over humanoid robots — Figure AI at a $39.5 billion valuation, Tesla’s Optimus, the Beijing humanoid half-marathon — the real money in robotics is flowing through a market that sounds deeply unsexy: systems integration.

    The global robotics systems integration market is worth $75–130 billion and growing at about 10% annually. It’s the business of taking off-the-shelf robot hardware (arms, mobile bases, sensors, grippers) and assembling, programming, deploying, and maintaining them for specific applications. And it’s extraordinarily fragmented — the top three integrators control just 3% of the market.

    The reason this matters: 80% of robots are currently sold to automotive companies and large enterprises. Meanwhile, the small and medium factories that make up 90%+ of manufacturers — 250,000+ in the United States alone — have barely been touched. Only about 12% of small manufacturers use any form of robotics, which leaves 220,000+ factories with zero robots. Two million US manufacturing jobs are projected to go unfilled. The labor shortage is acute, the need is real, and the deployment infrastructure doesn’t exist at scale.

    This is where the Robots-as-a-Service (RaaS) model enters. Think of it as the SaaS revolution applied to physical automation. Instead of a $160,000+ capital expenditure for a palletizing system, you pay monthly. The provider handles deployment, maintenance, software updates, and performance guarantees. If the robot doesn’t perform, you don’t pay.
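    The economics of that trade-off are easy to sketch. All prices below are illustrative assumptions — only the $160,000 system cost echoes the figure above:

```python
# Rough cash comparison of buying a palletizing cell outright vs.
# paying a Robots-as-a-Service subscription. Maintenance and
# subscription prices are illustrative assumptions.

def cumulative_cost(months, upfront=0, monthly=0):
    return upfront + monthly * months

CAPEX_UPFRONT = 160_000   # purchased system (figure cited above)
CAPEX_MONTHLY = 1_500     # assumed in-house maintenance
RAAS_MONTHLY = 8_000      # assumed all-in subscription

for months in (12, 24, 36):
    buy = cumulative_cost(months, CAPEX_UPFRONT, CAPEX_MONTHLY)
    rent = cumulative_cost(months, monthly=RAAS_MONTHLY)
    print(f"{months:>2} mo: buy ${buy:,} vs rent ${rent:,}")
```

    Under these assumptions the crossover lands a little past two years: RaaS wins early, ownership wins if the cell stays busy for years. That is exactly why the model appeals to factories that can’t stomach the upfront risk, and why providers bank on long renewals.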

    Formic, a Chicago-based startup, is the clearest example. They build zero robots — they source cobots from manufacturers like Yaskawa, design custom cells, deploy, maintain, and guarantee performance. Seventy-five percent of their customers had never used a robot before signing up. They’ve logged over 400,000 production hours across their fleet. The RaaS model eliminates the 18–24 month enterprise sales cycle for CapEx purchases and opens the vast majority of manufacturers who are too small or risk-averse for traditional automation.

    The global RaaS market hit roughly $27 billion in 2025 and is growing at 17–18% annually. Fleet deployments grew 31% in 2024. Within transportation logistics specifically, RaaS grew 42%.


    The companies that won weren’t built by roboticists

    Here’s the pattern I keep returning to: the biggest value creation in commercial robotics over the last decade came from operators, not inventors.

    Locus Robotics was born when Amazon’s 2012 Kiva acquisition stranded Quiet Logistics, a 3PL. Bruce Welty, an operations guy, started an internal robotics division. Rick Faulk joined as CEO in 2016 with zero robotics background — he was a career tech executive from Cisco and Intronis. His exact quote: “We look like a robot company, but we’re actually a software company.” Locus improved DHL’s pick speed from 78 to 150 units per hour — a 92% improvement. They now operate 13,000+ robots across 350+ sites in 18 countries, at nearly a $2 billion valuation.

    6 River Systems was founded by Jerome Dubois and Rylan Hamilton, former Kiva implementation executives — not inventors. Their “Chuck” robot was deliberately designed to look like existing pick carts, reducing training friction. It delivered 80% of goods-to-person productivity at 20% of the cost. Shopify acquired them for $450 million in 2019, four years after founding.

    Formic, founded in 2020 by Saman Farid — a former VC at Baidu Ventures with 15 years in robotics/AI investing — builds zero robots. They source, integrate, deploy, and manage. Their 98% contract renewal rate proves the model. Farid’s defining insight: “In the future, companies will either be building new robots, or operating them and generating value from them, but not both.”

    In every case, the competitive advantage wasn’t a novel actuator or a breakthrough algorithm. It was understanding the workflow, the user, the economics, and the deployment reality better than anyone else. The technology was assembled from available components. The value was in the integration and the operations.


    The global map is shifting — and it matters more than you think

    You can’t understand robotics without understanding geography, and the geography is shifting fast.

    China installed 295,000 new industrial robots in 2024 — more than every other country combined. They now have over 2 million robots in operational use. Robot density has exploded from 97 per 10,000 workers in 2017 to 470 in 2024. Guangdong province alone produces 40%+ of China’s industrial robots and 80% of service robots. Unitree, based in Hangzhou, sells a functional humanoid robot — the G1 — for $16,000. Their newest model, the R1, starts under $6,000. China controls an estimated 85–90% of global humanoid unit volume and dominates the component supply chain: motors, reduction gears, sensors, batteries. Inovance Technology shipped over 5 million robotic joint servo motors in 2025, achieving 70%+ domestic replacement of Japanese suppliers.

    Japan remains the historical anchor: FANUC holds ~17% global industrial robot market share with 260+ service locations in 100+ countries. Nabtesco controls approximately 60% of the global market for precision reduction gears — the critical component in every robot joint. Yaskawa has shipped 540,000+ robots.

    South Korea has the highest robot density on Earth: 1,012 per 10,000 workers. Hyundai owns Boston Dynamics and is building a 30,000-unit/year robotics facility. Samsung and Doosan Robotics are both making major moves in collaborative and humanoid platforms.

    Germany is anchored by KUKA (owned by China’s Midea Group), Franka Robotics, Festo, and the dense Mittelstand network of precision manufacturers. The European collaborative robot market is growing faster than any other region.

    In the United States, five robotics clusters are emerging:

    • Pittsburgh — Carnegie Mellon’s Robotics Institute, 250+ advanced technology companies, 7,300+ robotics jobs. Aurora, Argo AI’s successor companies, and a deep pipeline of CMU spinouts. The repurposed steel infrastructure means lab space at $15/sq ft.
    • Boston — MIT, MassRobotics innovation hub, Amazon Robotics (formerly Kiva), Locus Robotics, the legacy of Boston Dynamics and iRobot.
    • Bay Area — AI-native robotics startups, Figure AI, Physical Intelligence ($5.6B valuation building foundation models for robots), unmatched VC access.
    • Austin — Apptronik ($350M+ raise), Tesla’s Gigafactory and Optimus development, fast-growing deep tech scene.
    • Detroit/Ann Arbor — The legacy auto cluster is pivoting to robotics, with autonomous vehicle spinoffs and manufacturing automation startups.

    These hubs recently formed the USARC Alliance (United States Alliance of Robotics Clusters) to coordinate the national ecosystem.

    For someone in my position — running a company with Turkish supply chain relationships, Pennsylvania warehousing, and customers across multiple channels — this geography matters in a very practical way. The robots that might one day transform my operations will be assembled from components made in Shenzhen, running AI models developed in San Francisco, mounted on platforms designed in Japan, deployed using business models pioneered in Chicago. The supply chain for automation has its own supply chain, and it’s every bit as global and complex as the ones I navigate for rugs.


    What the AI moment means for robots (and why it’s harder than you think)

    The most important technology shift happening in robotics right now is the emergence of Vision-Language-Action (VLA) models — AI systems that see the world, understand natural language commands, and output physical actions. Physical Intelligence’s π0 model, running at 3–5 billion parameters, demonstrated a robot bussing tables and making espresso for 13 continuous hours. Figure AI’s Helix model powers robots at BMW’s Spartanburg plant doing 10-hour daily shifts.

    These models are following the same democratization arc as image generation. HuggingFace’s LeRobot library and open-source models like OpenVLA and Octo are making this the “Stable Diffusion moment” for robotics. The barrier to entry for AI-powered robot control has dropped from approximately $10 million to $10,000.

    But — and this is the critical caveat — the data gap between language AI and robotics AI may be unbridgeable for years. Ken Goldberg at UC Berkeley estimates the gap between robot foundation model training data and mature LLM data could be up to 120,000x. You can scrape the entire internet for text. You can’t scrape the physical world for robot training data.

    Simulation helps: NVIDIA’s Isaac Lab achieves 1.6 million frames per second across 8 GPUs, enabling robots to train walking and navigation in simulation and deploy to real hardware with zero real-world experience. Boston Dynamics’ Spot learned stair climbing entirely in simulation. But zero-shot sim-to-real transfer fails for dexterous manipulation — the physics of contact, deformation, and friction are too complex to simulate accurately. Real-world fine-tuning remains necessary for anything involving hands.

    Kalanick himself acknowledged this tension. The tech stack for physical AI, he wrote, “is not for the faint of heart.” But he also offered a crucial insight: no single company has to master every layer. The more cross-stack competence you develop, the better positioned you are — but you don’t need to do it all.


    Where I think this goes

    I don’t have a grand theory or a masterplan to announce. What I have is a growing conviction shaped by a decade of operating in the physical world: the next ten years in manufacturing, warehousing, and logistics are going to look radically different from the last ten. Not because the technology is brand new, but because the deployment layer is finally maturing.

    The patterns are clear. RaaS lowers the barrier. Foundation models make robots more adaptable. The operators who understand actual work — the cluttered environments, the weird SKU edge cases, the 87-minute MTBF reality — are going to matter as much as the engineers who build the arms. Maybe more.

    I keep coming back to my dad’s business in the ’90s. He wasn’t inventing networking technology. He was a guy with a van, cable, and deep knowledge of how a dry cleaner’s back office actually worked. He understood the deployment reality. The technology existed; the unknown truth was knowing exactly how to make it useful for the specific customer standing in front of you.

    Robotics in 2026 feels like networking in 1995. The hardware is real. The software is accelerating. The infrastructure is fragmentary. And the people who will build the most lasting companies might not be the ones building the robots at all — they might be the ones who understand where the robots need to go, and why.

    I’m just getting started thinking about this. More to come.

    — Adem


    This is the first in a series of posts exploring the robotics landscape from an operator’s perspective. Subscribe to follow along, or connect with me on LinkedIn.

    I’m the founder and CEO of Well Woven, and I write about technology, physical goods, and the future of operations at ademogunc.com.

  • How Face-to-Face Wilton Weaving Works

    If you’ve ever wondered what separates a quality machine-woven rug from a bargain-bin flatweave, the answer often comes down to three words: face-to-face Wilton. It’s one of the oldest and most ingenious weaving methods still in use today — and understanding it can completely change how you shop for rugs.

    A Little History: Where “Wilton” Comes From

    The Wilton loom traces its roots to the English town of Wilton in Wiltshire, where carpet production thrived as far back as the 1740s. What made the Wilton loom revolutionary was its use of a Jacquard mechanism — a system of punched cards that controlled which colored yarns were brought to the surface and which were buried in the backing. This gave weavers the ability to produce intricate, multi-color patterns at a speed that handweaving simply couldn’t match.

    The face-to-face variation took this a step further. Rather than weaving one carpet at a time, engineers figured out how to weave two simultaneously — doubling output without doubling the equipment. This method became the industry standard for producing high-quality machine-woven rugs, and it remains so today.

    How Face-to-Face Wilton Weaving Actually Works

    The concept is elegant in its simplicity. Two separate rug backings are set up on the same loom, one above the other, with their pile surfaces facing each other. Colored pile yarns travel back and forth between the two backings, forming loops that create the pattern on each face. The yarns that aren’t needed on the surface at any given point are carried along the back of the fabric — this is what gives Wilton rugs their characteristic thickness and durability.

    Here’s the key moment: a reciprocating knife blade passes continuously between the two fabrics, slicing through the shared pile yarns. This single cut separates the two rugs and simultaneously creates the cut-pile surface that gives Wilton carpets their plush, velvety feel.

    [Diagram: cross-section of a face-to-face Wilton loom showing how shared pile yarns (in multiple colors) connect the upper and lower backings before the cutting knife separates them into two rugs.]

    1. Dual Weave: Two fabrics are woven simultaneously on one loom, facing each other with pile surfaces inward.
    2. Shared Pile: Pile yarns travel between both backings, forming tufts on each face as they loop back and forth.
    3. Cut Apart: A reciprocating knife passes between the fabrics, cutting the shared pile to produce two identical rugs.

    Why This Method Matters for the Rug You’re Buying

    Density and Durability

    Because unused pile colors are carried along the back of the rug rather than being trimmed away, Wilton-woven rugs tend to be denser and heavier than tufted alternatives. That extra material acts as built-in padding and structural support. Walk on a Wilton rug and you’ll feel the difference underfoot — there’s a substantiality that thinner constructions simply can’t replicate.

    Pattern Precision

    The Jacquard mechanism that controls the Wilton loom allows for extremely precise color placement. Each row of pile is programmed individually, which means complex geometric patterns, intricate florals, and detailed borders come out crisp and well-defined. This is why Wilton weaving has long been the preferred method for traditional and Oriental-inspired designs.

    Consistency Between Rugs

    Since both rugs are woven from the same yarn at the same time, face-to-face production delivers a level of consistency that’s difficult to achieve any other way. If you’re furnishing multiple rooms with the same rug or need replacements down the road, Wilton-woven products are more likely to match precisely.

    Did you know? A typical Wilton loom can work with up to five or six different colored yarns in a single design. At any point in the pattern, only one color appears on the surface — the rest are hidden in the body of the rug, adding weight and cushion. This is one reason woven rugs feel so much more substantial than printed ones.

    Wilton vs. Other Rug Construction Methods

    Not all machine-made rugs are created equal. Here’s how Wilton stacks up against other common construction types you’ll see on the market.

    | Feature | Face-to-Face Wilton | Tufted | Flatwoven |
    | --- | --- | --- | --- |
    | Pile Type | Cut pile (woven in) | Cut or loop (punched in) | No pile |
    | Durability | Very high — yarn is integral to structure | Moderate — depends on adhesive backing | High for its thin profile |
    | Pattern Detail | Excellent — Jacquard controlled | Good with modern equipment | Limited by weave structure |
    | Thickness | Plush, substantial | Varies widely | Thin, firm |
    | Hidden Yarn? | Yes — adds density | No | Sometimes (dead yarn) |
    | Production Speed | Moderate (but 2 rugs at once) | Fast | Moderate |

    What to Look for When Shopping

    When you’re evaluating a machine-woven rug, flip it over. A Wilton rug will show you the pattern on the back — you can see each color clearly because the yarns run through the full structure of the fabric. Tufted rugs, by contrast, usually have a solid latex or fabric backing glued over the bottom, hiding the construction underneath.

    Weight is another good indicator. Wilton rugs are noticeably heavier per square foot because of all those buried yarns. If a rug feels light and flimsy relative to its size, it’s almost certainly not a woven construction.

    Finally, check the edges. Woven rugs often have a finished, bound edge that’s integral to the fabric rather than a separate binding tape glued or stitched on after the fact. This is a small detail, but it speaks to the structural integrity of the whole piece.

    The Bottom Line

    Face-to-face Wilton weaving is one of those manufacturing techniques that most people never think about — but it directly affects the look, feel, and lifespan of the rug under your feet. The next time you’re comparing rugs and wondering why one feels so much better than another at a similar price point, there’s a good chance the answer is in how it was made.

    A rug isn’t just a surface. It’s a piece of engineering. And the Wilton loom, with its elegant two-at-a-time approach, has been getting that engineering right for nearly three centuries.

  •  Beginner’s Guide to Working With Freight Brokers and Negotiating Lower Rates

    If you’ve never worked with a freight broker before, the process can seem intimidating. But the truth is—it’s actually pretty straightforward. Freight brokers are simply the middle people who connect shippers (like us) with trucking companies. Their job is to find a carrier to move your freight, and your job is to make sure you’re getting a fair rate.

    Step 1: Understand the Broker’s Role

    Think of a broker as a matchmaker. You tell them what you need moved—how many pallets, the weight, where it’s going, and when it needs to arrive—and they come back with a quote.

    Step 2: Know That Negotiating Is Normal

    This is important: in freight, negotiation isn’t rude—it’s expected. When a broker gives you a rate, your first response doesn’t need to be “yes.” Instead, you can ask questions like:

    • “Can you do better?”
    • “That seems high—what’s your best rate?”
    • “We usually pay around [X amount] for this lane. Can you match or beat that?”

    Often, just by asking, you’ll get a lower rate.

    Step 3: Keep It Polite, Clear, and Direct

    The key is to be professional and respectful while still pushing for better pricing. You don’t need long explanations—simple, confident questions usually work best.

    Step 4: Build Relationships Over Time

    The more often you work with a broker, the more likely they are to offer you better rates without being asked. They want repeat business, and they'll reward you for consistency.

    Step 5: Track Your Numbers

    Keep a record of what you’ve paid for similar shipments in the past. This gives you leverage when negotiating because you can confidently say what’s “normal” for that lane.
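    If you want to go one step beyond a notebook, that record is a few lines of code. Here's a minimal Python sketch of a lane-rate log (the lanes and prices below are made-up numbers, purely to show the idea):

```python
# Minimal lane-rate tracker: log what you paid per shipment,
# then pull the average for a lane before you negotiate.
# All lanes and prices here are illustrative, not real quotes.
from collections import defaultdict

shipments = [
    # (origin, destination, pallets, price_paid_usd)
    ("Dallas, TX", "Atlanta, GA", 6, 1450),
    ("Dallas, TX", "Atlanta, GA", 8, 1600),
    ("Dallas, TX", "Chicago, IL", 4, 1200),
]

# Group prices by (origin, destination) lane.
lane_prices = defaultdict(list)
for origin, dest, pallets, price in shipments:
    lane_prices[(origin, dest)].append(price)

def average_rate(origin, dest):
    """Average of what we've historically paid for this lane."""
    prices = lane_prices[(origin, dest)]
    return sum(prices) / len(prices)

print(f"Dallas->Atlanta average: ${average_rate('Dallas, TX', 'Atlanta, GA'):,.0f}")
# prints: Dallas->Atlanta average: $1,525
```

    Now "we usually pay around $1,525 for this lane" is a fact you can quote, not a guess.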


    Quick Takeaway

    Working with freight brokers comes down to three things:

    1. Tell them what you need shipped.
    2. Ask for a lower rate—it’s expected.
    3. Be polite but firm, and build a relationship for better deals in the future.

    It’s that simple.

  • Sunday Thoughts: We’re All Becoming Builders in the Decentralization Era

    I’m not a developer. Let me just get that out there. I’m a commerce guy who gets excited about technology, and what I’m seeing right now is blowing my mind.

    Six months ago, I couldn’t have imagined writing a line of code. Today? I’m playing around with Cursor, experimenting with no-code tools, and I’ve actually got an IDE running on my Mac. Am I building the next Facebook? No. But I’m creating small, useful software for my specific needs. And that’s the point – we’re entering an era where everyone can be a builder.

    The Great Unbundling

    Here’s what hit me this week: Everything is decentralizing, and it’s happening faster than most people realize.

    Rails World 2025 just wrapped up in Amsterdam (literally two days ago!), and while I wasn’t there, I’ve been catching up on all the buzz. Last year at Rails World 2024, DHH unveiled OMAKUB – a one-command script that turns a fresh Ubuntu Linux install into a fully configured development environment. But here’s what stuck with me: Linux runs on 96.3% of the top one million web servers. This free, open-source operating system that thousands of developers have worked on for 30+ years essentially is the internet.

    Then last year, 37signals dropped Campfire – a chat tool like Slack that you buy once for $299 and host yourself. No monthly fees. Forever. DHH literally ran a conference chat from a computer in his closet to prove the point. In his blog post about it, he said it best: “SaaS has been ruling the world of web-based software for two decades now… But there’s also a lot of SaaS that does not need to be a service.”

    Why does this matter? Because we’ve been sleepwalking into a world where we rent everything and own nothing.

    The “Why Are We Doing This?” Moment

    Let’s be real for a second. Why are five companies basically running the American economy? Why does every business decision, every communication, every piece of data flow through the same handful of Silicon Valley giants?

    I’m not saying these companies are evil. They built incredible things. But we’ve reached a point where the concentration of power doesn’t make sense anymore. The tools to build alternatives exist. The hardware is powerful enough. The networks are fast enough.

    So why are we still paying monthly fees forever just to chat with our coworkers?

    Chamath Palihapitiya, the former Facebook exec turned venture capitalist, has been saying this for years. He famously called social media tools “ripping apart the social fabric of how society works” and predicted big tech companies will be “taxed to death” and face regulatory breakup. Back in May 2024 on the All-In Podcast (at the 53:00 mark), he predicted Bitcoin could hit $500,000 by October 2025 – we’re just a month away from his deadline, and while we’re not quite there yet, the momentum toward decentralization he was talking about is undeniable.

    The Commerce Revolution I’m Actually Excited About

    As someone in commerce, this shift is personal for me. For too long, online selling meant choosing between Amazon, Shopify, or maybe one or two other platforms. A handful of retailers have controlled how products reach consumers.

    But that’s changing. Fast.

    We’re seeing:

    • Creator commerce where individuals build direct relationships with customers
    • Video commerce that’s more like QVC meets TikTok than traditional e-commerce
    • Decentralized marketplaces where sellers actually own their customer relationships
    • Token-based systems that let communities govern themselves while still maintaining order

    The DeFi market is projected to grow from $20.48B in 2024 to $231.19B by 2030 – that’s a 53.7% annual growth rate. This isn’t chaos – it’s organized decentralization. We can have rules and systems without having overlords.

    The Hardware Finally Catches Up (And My Personal Connection)

    Ok, here’s where it gets personal for me. I grew up in the 80s and 90s, and man, those were beautiful times for a computer nerd. From kindergarten through high school, I spent my days at my dad’s computer shop. Seven days a week, we were building PCs, servicing PCs, living and breathing PCs.

    I watched us go from DOS to Windows 3.1 (which was mind-blowing at the time), then to Windows 95, which felt like the future had arrived. We were constantly upgrading – swapping out sound cards (remember Sound Blaster?), video cards, adding memory. Going from a 256MB hard drive to 512MB felt like you’d just bought a mansion. Every component was a choice, an upgrade path, a possibility.

    Then laptops happened. And suddenly, you buy a MacBook, something breaks, you buy a new MacBook. Dell laptops? Same story. The whole beautiful ecosystem of modular computing just… died. It’s been insane, honestly.

    Enter Framework. They’re making laptops where you can swap out ANY component. Screen breaks? Replace the screen. Need a faster processor? Swap it out. Want better graphics? Pop in a new module – takes 5 minutes. They announced earlier this year you can upgrade from an RX 7700S to an RTX 5070 in their Framework 16.

    And here’s the beautiful part: AMD’s latest chips are now legitimately competitive with Apple’s M-series. You can get MacBook-level performance in a machine you actually own and can repair. For someone who grew up in the era of “open it up and tinker,” this feels like coming home.

    Linus Tech Tips invested $225,000 of his own money in Framework because he believes in this vision so strongly. The Framework laptop got a 10/10 repairability score from iFixit. For the past decade, laptops have been expensive disposable items. That’s ending.

    The Money Thing We Need to Talk About

    The U.S. is now actively moving on cryptocurrency regulation and adoption. The GENIUS Act (Guiding and Establishing National Innovation for U.S. Stablecoins of 2025) was just signed into law in July 2025 – creating the first federal regulatory framework for stablecoins. This bipartisan legislation, which passed 68-30 in the Senate and 308-122 in the House, requires stablecoins to be backed 1:1 by US dollars or low-risk assets.

    Major banks like JPMorgan Chase and retailers like Amazon and Walmart are already planning to issue their own stablecoins. The Act positions the US to lead the global digital currency revolution while strengthening the dollar’s reserve currency status – stablecoins will actually drive demand for US Treasuries. Additionally, 19 US states have introduced Bitcoin reserve legislation as of late 2024.

    Whether you love or hate crypto, you can’t ignore what this represents: even governments are recognizing that centralized control of money might not be the only way forward.

    This isn’t about becoming a crypto bro. It’s about recognizing that the fundamental structures of our economy are becoming more distributed. And that’s probably healthy.

    What This Actually Means for Regular People

    Here’s why I’m writing this on a Sunday morning, coffee in hand, genuinely excited about the future:

    We’re becoming creators, not just consumers.

    When I fire up Cursor and start building something, I’m not trying to create the next unicorn startup. I’m solving my own problems. Creating tools for my specific needs. The stats are wild – 76% of developers are using or planning to use AI coding assistants, and they’re seeing 26% productivity increases.

    But here’s the real kicker – you don’t need to be a developer anymore. I’m living proof. Six months ago, code was gibberish to me. Now I’m building stuff. Not perfect stuff, not production-ready stuff necessarily, but MY stuff. Tools that solve MY problems.

    This is the real revolution: The barriers between “technical” and “non-technical” people are dissolving. My kids will grow up in a world where creating software is as normal as creating a PowerPoint presentation is today.

    The Framework for Everything

    What we’re seeing isn’t just about Linux or laptops or Bitcoin. It’s a fundamental shift in how we think about ownership, creation, and power:

    • Own, don’t rent – Whether it’s software (Campfire for $299 once), hardware (Framework laptops), or digital assets
    • Build for yourself – The tools are accessible enough that you can solve your own problems (I’m doing it with Cursor!)
    • Participate, don’t just consume – Whether it’s open source, DAOs, or creator economies
    • Modularity over monoliths – In hardware, software, and business models

    Where We Go From Here

    The beautiful thing about this movement is that it’s not theoretical. You can participate today:

    • Try Linux: With tools like OMAKUB, it’s genuinely easy now – one command and you’re running
    • Build something: Even if you’re not technical, tools like Cursor make it possible (trust me, if I can do it…)
    • Buy hardware that lasts: Framework laptops exist today – and they’re actually good
    • Support decentralized commerce: Buy directly from creators when possible
    • Question the subscription model: Do you really need to rent that software forever?

    The Bottom Line

    We’re living through a profound shift. The age of five companies controlling everything is ending. Not because of regulation or revolution, but because better alternatives are emerging.

    The centralized model made sense when coordination was hard and resources were scarce. But we’re past that now. We have the tools, the networks, and the knowledge to build differently.

    What excites me most isn’t the technology itself – it’s what it enables. When everyone can build, when commerce is truly peer-to-peer, when we own our tools instead of renting them, we get a more vibrant, creative, resilient economy.

    That’s not just cool. That’s transformative.

    And the best part? We’re just getting started. Rails World 2025 just showed us that the community pushing for this change is stronger than ever. The momentum is real. The future is decentralized.

    These are my Sunday thoughts as a non-technical person watching the tech world transform. I’m curious – what changes are you seeing in your industry? How is decentralization showing up in your world?

    P.S. – If you’re like me and thought coding was impossible, seriously, try Cursor or another AI-assisted tool. You might surprise yourself with what you can build. I sure as hell did.

    Resources & Links

    Want to dive deeper? Here’s where I’ve been learning:

    • The Linux/Open Source Revolution
    • The ONCE Movement & Campfire
    • Framework & Modular Hardware
    • AI Democratizing Code
    • Bitcoin & Decentralized Finance
    • The Numbers Behind It All

    Note: Yeah, I went down quite the rabbit hole researching all this. But that’s what Sunday mornings are for, right?

  • Why I Built a Furniture Industry News Aggregator (Because 12 Browser Tabs Was Too Many)

    I’m sitting in our warehouse this Sunday morning, July 13th, surrounded by moving boxes. We just wrapped our team meeting, and I finally have time to share something I’ve been building in the evenings after the kids go to bed.

    If you work in home furnishings—whether you’re a buyer, designer, manufacturer, or retailer—you know the morning ritual. Open Furniture Today. Check Home Accents Today. Scan Architectural Digest. Pop over to Apartment Therapy. By the time your coffee’s lukewarm, you’ve got a dozen tabs open and you’re still not sure you caught everything important.

    Last month, I decided to come up with a solution. The result? FurniPulse—a simple tool that pulls 19+ furniture industry sources into one clean feed, updated every 20 minutes.

    [Screenshot: the furnipulse.com homepage]

    The Pulse of the Business

    In the furniture industry, any good company needs to know where the consumer mindset is and what’s happening in the competitive landscape. There’s SO much going on all the time—new collections dropping, trends shifting, competitors making moves, trade shows approaching.

    (Quick sidebar: you need to know what’s going on, but you also need your own vision. Like, I check what West Elm is doing, but I’m not trying to BE West Elm, you know? It’s more like… okay, they’re pushing curved sofas hard this season. Good to know. But maybe my customers are still loving their sectionals. You can’t just chase every trend—you’ll drive yourself crazy and lose what makes you unique.)

    If I’m being perfectly honest, there’s definitely some FOMO mixed in there too. You see a competitor mentioned in a trade pub and think “what are they up to?” Or a design blog features some trend you haven’t heard of and suddenly you’re wondering if you’re behind the curve.

    Every morning, I’d find myself with a dozen tabs open, trying to piece together the full picture. Trade news in one corner telling me about manufacturing changes. Consumer blogs in another showing me how those changes were being spun to the public. Industry announcements scattered across five different sites.

    This is just what we do. It’s part professional necessity, part curiosity, part not wanting to be the last one to know something important. But I kept thinking—there has to be a better way to stay connected to all this without the constant tab juggling.

    So I thought: why not create this for myself? One place where I could actually feel the rhythm of the industry without the morning scramble. A single feed that shows me both sides of the story—what manufacturers are saying and what consumers are reading.

    That’s how FurniPulse was born. Not from crisis or frustration. More from recognizing that this thing we all do—this mix of staying informed and maintaining our own direction—could be simpler.

    Building Without Being a Builder

    Here’s the part that still amazes me: I built this myself. Me. A furniture guy who couldn’t write a line of code six weeks ago.

    With all these AI coding tools dropping—Cursor, Claude, ChatGPT—I started wondering: could I actually build something to solve my own problem? Not hire someone. Not wait six months. Just… build it myself?

    So I did. Over the last few weeks, spending a couple hours each evening, I created FurniPulse. The process was surreal. I’m literally talking to these tools using Whisper AI, explaining what I want like I’m training a really smart intern. Share a screenshot here, describe a feature there, and somehow, working code appears.

    My wife would find me at 11 PM, muttering at my laptop about RSS feeds and Python scripts. “Are you becoming one of those tech guys?” she’d ask. Maybe I am.

    What FurniPulse Actually Does

    At its core, FurniPulse is simple:

    Every 20 minutes, it automatically:

    • Scans 19 trusted sources (and growing)
    • Collects fresh articles—currently tracking 239 articles
    • Organizes by relevance—Trade news for professionals, Consumer news for enthusiasts
    • Delivers instantly—accessible on any device at furnipulse.com

    Two viewing modes:

    • Trade Mode – Industry news, B2B updates, manufacturing insights, tariff announcements
    • Consumer Mode – Design trends, home decor, lifestyle content

    The magic is in seeing both perspectives side by side. You start to notice things.

    The Patterns You Can’t Unsee

    Once I had both trade and consumer feeds in one place, the matrix revealed itself:

    The 48-Hour Rule: Trade news hits consumer blogs 48-72 hours later, like clockwork. Tariff announced Monday? By Wednesday, Apartment Therapy has “5 Ways to Style Your Space Before Prices Rise!”

    The PR Playbook: Product launches follow a script—leaked photo, official announcement, influencer “surprise” post, consumer blog round-up. Every. Single. Time. Now I can practically predict which stories will blow up.

    Market Momentum: Vegas Market prep started ramping up exactly 6 weeks out across every publication. The cascade is so predictable, you could set your calendar by it.

    The Translation Gap: What manufacturers call “supply chain optimization” becomes “Why Your Favorite Sofa Might Cost More” in consumer media. Watching how industry jargon gets translated for consumers is a masterclass in communication.

    The Technical Journey (For Fellow Non-Techies)

    I spent three nights googling “how to parse RSS feeds” before realizing I was massively overthinking it. The fourth night, I had a working prototype.
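    For fellow non-techies who are curious what "not overthinking it" looks like, the core of that first prototype boils down to something like this. This is a sketch using only Python's standard library, not the actual FurniPulse code; real-world feeds are messier, which is why dedicated libraries like feedparser exist:

```python
# Sketch of the "parse an RSS feed" step using only the standard library.
# Assumes a well-formed RSS 2.0 document; real feeds are often messier.
import urllib.request
import xml.etree.ElementTree as ET

def parse_rss(xml_text):
    """Return a list of {title, link} dicts from an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [
        {
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
        }
        for item in root.iter("item")  # every <item> in the feed
    ]

def fetch_feed(url):
    """Download one feed and parse it (url would be a source's RSS address)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_rss(resp.read())
```

    Run that in a loop over a list of feed URLs every 20 minutes and you have the skeleton of an aggregator.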

    The tools I used:

    • Cursor on my Mac (it’s like VS Code but understands plain English)
    • Claude/ChatGPT for problem-solving
    • Whisper AI for voice-to-text (because typing is slow when you’re excited)
    • Python for the backend (which I learned as I went)
    • Some hosting service I still don’t fully understand but it works

    I almost gave up when I realized I’d have to manually categorize 47 different publications. Some use “furnishings,” others “furniture,” and don’t get me started on whether “décor” needs an accent. Then I remembered: done is better than perfect.
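    That categorization headache has a simple first-pass fix, by the way: normalize the text, then match keywords. Here's a sketch of the idea; the keyword lists are illustrative, not FurniPulse's actual rules:

```python
# First-pass headline categorization: lowercase, strip accents
# (so "Décor" matches "decor"), then check keyword sets.
# Keyword lists are illustrative examples only.
import unicodedata

TRADE_KEYWORDS = {"tariff", "manufacturer", "wholesale", "b2b", "logistics"}
CONSUMER_KEYWORDS = {"decor", "styling", "makeover", "diy", "cozy"}

def normalize(text):
    """Lowercase and strip accents so spelling variants match."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    return text.lower()

def categorize(headline):
    words = set(normalize(headline).split())
    if words & TRADE_KEYWORDS:
        return "trade"
    if words & CONSUMER_KEYWORDS:
        return "consumer"
    return "uncategorized"

print(categorize("New Tariff Hits Imported Rugs"))       # trade
print(categorize("Décor Ideas for a Cozy Living Room"))  # consumer
```

    It won't catch everything (the ottoman/Ottoman problem needs more context than single keywords), but done is better than perfect.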

    The biggest surprise? It’s ridiculously cheap to run. Like, less than my monthly coffee budget cheap.

    What This Means for Our Industry

    We’re at an inflection point. When a furniture guy can build software by talking to AI at 11 PM, what else becomes possible?

    Think about all the inefficiencies in our industry:

    • Inventory management spreadsheets that haven’t changed since 2003
    • Quote processes that still involve three emails and a PDF
    • Showroom experiences that ignore everything we know about digital retail

    If I can solve my news problem in a few weeks of evenings, what could you build to solve yours?

    The Reality Check

    FurniPulse isn’t perfect. My wife thinks the name is terrible (she’s probably right). I still manually check Dwell because their RSS feed is broken. Sometimes the categorization is wonky—apparently “ottoman” can mean furniture OR the empire, depending on context.

    But every morning when I open one tab instead of twelve, when I catch trends before they hit mainstream, when I save 30 minutes that I can spend on actual business—it’s worth it.

    What’s Next

    Right now, my goal is simple: get feedback. Is this useful beyond my own morning routine? What sources am I missing? How could it be better?

    I’m not thinking about monetization. Maybe sponsors down the line, but that’s a careful road. For now, it costs almost nothing to run, and if it helps others in our industry stay informed, that’s enough.

    Once we get through this warehouse move and into fall, I want to create more tools like this. Share what works. Build in public. Because if there’s one thing I’ve learned, it’s that the gap between “I wish someone would build…” and “I built…” is smaller than ever.

    Try It Yourself

    FurniPulse is live at furnipulse.com. It’s free, it’s basic, but it works.

    If you’re drowning in furniture industry news every morning, give it a try. And if you know of publications I’m missing, please let me know. Currently tracking 19 sources, but our industry is vast and I’m always looking to improve.

    More importantly, if you’ve got your own itch to scratch—that spreadsheet that drives you crazy, that process that wastes hours, that information gap that costs money—maybe it’s time to build your own solution.

    All of us are techies now, whether we admit it or not.

    Drop me a line at adam@woven.com. I’d love to hear what you think, what you’d build, or just commiserate about the state of RSS feeds in 2024.

    Time to get back to these moving boxes. But first, one more coffee and a quick check of FurniPulse. Old habits die hard—they just get more efficient.

    —Adem

    P.S. – Still looking for someone who can fix Dwell’s RSS feed. Coffee and eternal gratitude await.

  • AI, Meet Main Street: YC Startups I’m Watching

    Real-world tools for scrappy operators like us

    I run a rug company. Not a SaaS startup. Not a VC-backed AI darling. A company that makes beautiful, easy-to-clean rugs for real people with real messes — peanut butter on a runner, juice spills at a birthday party, you name it.

    But recently, I’ve been falling down the rabbit hole of AI-powered tools — not because I think they’re flashy, but because they’re finally getting useful. And nothing caught my eye more than the last couple of Y Combinator graduating classes.

    A massive chunk of these startups are building AI tools. But here’s what gets me excited: many of them aren’t chasing the moon. They’re solving the real, unsexy, painful problems small businesses like mine deal with every day. Logistics. Invoicing. Bookkeeping. Government paperwork. Refund fraud.

    This post isn’t meant to be a full YC recap. It’s just a short list of companies I think are worth watching — and why they matter to folks who are actually running businesses, not just building pitch decks.


    Five Startups I’m Keeping an Eye On

    1. Hazel

    What they do: Hazel helps small businesses win government contracts by automating the messy paperwork and compliance steps. Think of it like an AI-powered RFP assistant.

    Why it matters: Government work used to be a fortress unless you had connections, patience, and legal muscle. Hazel opens the door for small shops — contractors, designers, local manufacturers — to land serious clients like school districts and city agencies.

    Try it or reach out: hazeltech.ai | august@hazeltech.ai | elton@hazeltech.ai


    2. Oway

    What they do: Oway turns unused truck space into cheaper freight shipping. They basically “rideshare” pallet shipping — you toss your freight into someone else’s half-full truck going the same way.

    Why it matters: Freight is expensive. For physical product brands, it’s one of the top 3 cost drivers. This is a way to lower that cost without warehousing or bulk negotiating. I’d love something like this for our custom rugs.

    Try it or reach out: shipoway.com


    3. LedgerUp

    What they do: LedgerUp is like an AI revenue assistant. It automates your invoicing, follows up on late payments, and even answers questions like “What did Acme Co. pay last month?” in Slack.

    Why it matters: I can’t tell you how many times I’ve had to chase a payment that slipped through the cracks. This tool isn’t flashy — it’s just practical. And when you’re running lean, cash flow is everything.

    Try it or reach out: ledgerup.ai | founders@ledgerup.ai


    4. Rebolt

    What they do: Rebolt is building AI agents to replace back-of-house restaurant tasks. They dispute delivery app refunds, respond to customers, and even help hire and manage staff.

    Why it matters: One of our friends runs a restaurant and loses thousands every month to fake refund claims. If Rebolt can claw that back, that’s a game-changer. And honestly, I think their model will spread beyond restaurants.

    Try it or reach out: rebolt.ai | founders@rebolt.ai


    5. Abundant

    What they do: They provide on-demand human oversight for your AI agents. When the bot gets confused, a vetted human jumps in — and their work helps retrain your AI over time.

    Why it matters: We all want to automate more. But when AI fails silently, it costs real money. This gives you reliability and a smarter system over time.

    Try it or reach out: abundant.ai | founders@abundant.ai


    What This All Adds Up To

    A year ago, I thought AI was mostly for coders and researchers. Now, it’s clear: AI is becoming the new labor layer. Not the replacement for people — but the relief from all the stuff people hate doing anyway.

    That’s what these companies have in common. They’re not building “chatbots” — they’re building quiet systems that plug into the broken workflows we’ve all tolerated for too long.


    A Note to Founders

    If you’re working on something like this, I’d love to hear about it. I’m just a guy who sells rugs, but I think there’s power in sharing real tools with other operators. These posts aren’t paid. They’re just my way of making sense of what’s happening.

    And if you’re a small business owner like me — and you try one of these tools — let me know what you think. There’s something really special happening here, and I want to stay close to it.


    Tweet me @ademogu or drop me a line at adem@wellwoven.com. I’ll be posting more of these soon.