<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Disruption Point]]></title><description><![CDATA[An unfiltered look at how businesses actually navigate technology disruption, separating the hype of the sales pitch from the reality of digital transformation.]]></description><link>https://www.disruptionpoint.com</link><image><url>https://substackcdn.com/image/fetch/$s_!Bhmx!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbd2330b-3d95-48e0-9cc3-bebe22e7aabc_320x320.png</url><title>Disruption Point</title><link>https://www.disruptionpoint.com</link></image><generator>Substack</generator><lastBuildDate>Sun, 17 May 2026 04:44:19 GMT</lastBuildDate><atom:link href="https://www.disruptionpoint.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Mike Ianiro]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[mikeianiro@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[mikeianiro@substack.com]]></itunes:email><itunes:name><![CDATA[Mike Ianiro]]></itunes:name></itunes:owner><itunes:author><![CDATA[Mike Ianiro]]></itunes:author><googleplay:owner><![CDATA[mikeianiro@substack.com]]></googleplay:owner><googleplay:email><![CDATA[mikeianiro@substack.com]]></googleplay:email><googleplay:author><![CDATA[Mike Ianiro]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Cloud Won, then came the Token Economy]]></title><description><![CDATA[Why token pricing, rate limits, and local inference are making on-prem hardware relevant 
again.]]></description><link>https://www.disruptionpoint.com/p/the-cloud-won-then-came-the-token</link><guid isPermaLink="false">https://www.disruptionpoint.com/p/the-cloud-won-then-came-the-token</guid><dc:creator><![CDATA[Mike Ianiro]]></dc:creator><pubDate>Wed, 18 Mar 2026 23:01:02 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Tkf6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa205fb4-c004-4355-bc71-3ceece45706c_1512x788.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Tkf6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa205fb4-c004-4355-bc71-3ceece45706c_1512x788.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Tkf6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa205fb4-c004-4355-bc71-3ceece45706c_1512x788.png 424w, https://substackcdn.com/image/fetch/$s_!Tkf6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa205fb4-c004-4355-bc71-3ceece45706c_1512x788.png 848w, https://substackcdn.com/image/fetch/$s_!Tkf6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa205fb4-c004-4355-bc71-3ceece45706c_1512x788.png 1272w, https://substackcdn.com/image/fetch/$s_!Tkf6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa205fb4-c004-4355-bc71-3ceece45706c_1512x788.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Tkf6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa205fb4-c004-4355-bc71-3ceece45706c_1512x788.png" width="1512" height="788" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fa205fb4-c004-4355-bc71-3ceece45706c_1512x788.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:788,&quot;width&quot;:1512,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2772955,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.disruptionpoint.com/i/191408314?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94715ca0-2b44-42da-ac82-64dbbf6afade_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Tkf6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa205fb4-c004-4355-bc71-3ceece45706c_1512x788.png 424w, https://substackcdn.com/image/fetch/$s_!Tkf6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa205fb4-c004-4355-bc71-3ceece45706c_1512x788.png 848w, https://substackcdn.com/image/fetch/$s_!Tkf6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa205fb4-c004-4355-bc71-3ceece45706c_1512x788.png 1272w, https://substackcdn.com/image/fetch/$s_!Tkf6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa205fb4-c004-4355-bc71-3ceece45706c_1512x788.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>I spent years arguing with the people who never wanted to let go of their servers. You know the type. 
In every cloud migration, they had a reason a workload couldn&#8217;t move: security, latency, data residency, compliance, or some special dependency that somehow made their rack of aging hardware sacred. A lot of it boiled down to one instinct: they trusted what they could see with their own eyes. I had very little patience for that mindset. My view was simple: if a workload still needed local hardware, that usually meant the cloud hadn&#8217;t solved the problem yet. Give it time. The economics would win. For a long time, they did. Then AI changed the math.</p><h2>AI introduced a new bottleneck: TOKENS</h2><p>What I didn&#8217;t fully appreciate was that the next era of computing wouldn&#8217;t just be about storage, networking, or compute. It would be about access.</p><p>If you use frontier AI models heavily, you already know the feeling. Yes, the cost can add up. But the bigger frustration is often the rate limits, throttling, usage tiers, queueing, and the constant sense that someone else controls how much you&#8217;re allowed to do. I kept running into the same problem. I am not compute-constrained. I am token-constrained.</p><p>That matters because the standard model of &#8220;pay per token, accept the limits, and keep swiping the card&#8221; doesn&#8217;t feel great once AI becomes part of your daily workflow. When a pricing model starts to pinch, people look for another path. That is exactly what is happening now.</p><h2>The escape hatch is local inference</h2><p>That alternative is increasingly sitting on a desk. For many consumers and builders, it looks like a Mac mini. When Apple began designing its own chips, the story was battery life, thermal efficiency, and tighter integration. Almost nobody was talking about local AI inference. But Apple Silicon ended up with a feature that matters enormously for running language models: unified memory. 
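To put a rough number on what &#8220;enough memory&#8221; means here: a model&#8217;s weights take roughly parameter count times bytes per weight at the chosen quantization, plus headroom for the KV cache and runtime. The quantization level, overhead factor, and RAM reserve in this sketch are my own illustrative assumptions, not vendor specifications:

```python
# Rough sizing sketch: can a model fit in a machine's unified memory?
# All constants here are illustrative assumptions, not vendor figures.

def model_memory_gb(params_billions: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Approximate RAM needed: weights at the given quantization,
    plus ~20% headroom for KV cache and runtime buffers."""
    weight_gb = params_billions * (bits_per_weight / 8)  # 1B params at 8-bit ~ 1 GB
    return weight_gb * overhead

def fits(params_billions: float, bits_per_weight: float, ram_gb: float) -> bool:
    # Reserve some memory for the OS and other apps (assumed 8 GB).
    usable = ram_gb - 8
    return model_memory_gb(params_billions, bits_per_weight) <= usable

# A 32B-parameter model at 4-bit quantization on a 32 GB machine:
print(round(model_memory_gb(32, 4), 1))  # ~19.2 GB including overhead
print(fits(32, 4, 32))                   # fits in the usable 24 GB
print(fits(70, 4, 32))                   # a 70B model does not
```

By this crude estimate, a 32B-parameter model quantized to 4 bits squeezes into a 32 GB machine, which is part of why mid-range Apple Silicon configurations keep showing up in local-inference write-ups.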
Because the CPU and GPU share the same memory pool, these machines can run models locally in a way that feels surprisingly practical. A Mac mini with enough memory can run models like Qwen or DeepSeek through tools such as LM Studio or Ollama, with effectively zero marginal per-token cost after you buy the machine. No API billing. No rate limits. No cooldown timer. No provider decides you&#8217;ve had enough for the hour.</p><p>That does not mean local is always better. It is not. Cloud APIs still win when you need the very best frontier models, massive parallel scale, or a fully managed platform. But for many everyday use cases, such as summarization, coding help, extraction, drafting, classification, translation, and private internal workflows, local inference is now good enough to be economically compelling. And once &#8220;good enough&#8221; gets paired with predictable cost, people pay attention.</p><h2>The same economic logic is showing up everywhere</h2><p>The shift is not just happening at the hobbyist level. For heavier inference workloads, teams are already stitching together multiple machines into a small local cluster. What used to sound eccentric now sounds practical. Buy compute once, run models locally, and stop paying a toll every time someone asks a question. That is also the logic at play at hyperscale.</p><p>When Jensen Huang said the future was accelerated computing, he was right. What many people missed was the second-order effect. Once AI becomes foundational, nobody wants to depend entirely on rented access forever.</p><p>The world&#8217;s largest technology companies are responding accordingly. They are not just buying GPUs. They are designing their own silicon, building new datacenter capacity, and trying to control more of the stack. That is the clearest signal in the market.</p><h2>NVIDIA is winning, but its customers are learning the lesson</h2><p>NVIDIA remains the standard. 
Its hardware is exceptional, and its position is still incredibly strong. But when your annual AI infrastructure bill reaches into the billions, the same old economic conclusion starts to emerge: own more, rent less. Google has TPUs. Amazon built Trainium and Inferentia. Microsoft has Maia. Meta is investing in custom silicon. Apple has spent years perfecting its own approach to consumer hardware. That does not mean NVIDIA is in trouble tomorrow. It means the biggest buyers in the market are doing what big buyers always do when a supplier becomes too central to their cost structure. They seek leverage, alternatives, and vertical integration. That same instinct is now appearing at the individual level. The hyperscalers are solving it at datacenter scale. Developers and small teams are solving it at desk scale. Different budget. Same math.</p><h2>Open-weight models from China deserve to be taken seriously</h2><p>This is where some readers will disagree, but it is increasingly hard to ignore. Models such as Qwen, DeepSeek, and Kimi are better than many people assume. Not just &#8220;good for open source&#8221; or &#8220;good for the price.&#8221; In many practical workflows, they are simply good. No, they do not beat the very best proprietary models at everything. But the gap is narrower than many people think. For many business use cases, it is narrow enough that cost, privacy, and control become more important than absolute benchmark leadership. That is a meaningful shift. Much of the skepticism toward these models stems from valid concerns: governance, trust, and geopolitics. Those issues matter. But some of the dismissals are just reflexes from people who have not seriously tested the models in real workflows.</p><p>That reaction reminds me of the old resistance to cloud migration. Different technology, same pattern. People form strong opinions before they touch the tool. 
The broader point is not that every open-weight model is equal to Claude or ChatGPT. It is that model parity is arriving faster than many expected for a large share of business tasks. And when one option costs real money every time you use it, while another runs locally on hardware you already own, the economics start to matter a lot.</p><h2>Apple&#8217;s accidental AI advantage</h2><p>This might be the strangest twist in the whole story. Apple did not build Apple Silicon for the AI inference market. It built Apple Silicon to improve control, efficiency, and product performance. But that architecture turned out to be unusually well-suited to local AI. The same design choice that makes a Mac feel fast and efficient in normal use also makes it credible as a local inference machine. That is a strategic advantage Apple may not have fully intended, but one it now benefits from all the same.</p><p>NVIDIA sees this opportunity too. Products like DGX Spark point in that direction. But for most individuals and smaller teams, Apple&#8217;s advantage is simple: its machines are available, familiar, and comparatively affordable.</p><p>That is why so many local AI experiments are happening on Mac minis and Mac Studios. They are not perfect. They are not universal. But they are accessible, and accessibility matters more than elegance when a market is forming.</p><p>Projects like OpenClaw are a good example of where this is heading. Autonomous agents running locally on Apple hardware, with responsive inference, more privacy, and no per-token billing loop in the middle. Whether Apple planned that outcome or stumbled into it is almost beside the point. Strategic bets often pay off in unexpected ways. This is one of them.</p><h2>The cloud is not dead. But AI is pulling some workloads back</h2><p>So where does that leave us? The cloud still won the last era. It solved real problems: reliability, elasticity, managed services, global reach, and operational simplicity. 
None of that disappears because local inference got better. But AI introduced a new friction point. For many users, the bottleneck is no longer compute capacity. It is metered access to intelligence. That is why hardware is back in the conversation. Not because people suddenly miss blinking lights and server rooms. Not because on-prem is inherently superior. But because tokenized access changes behavior. When the toll booth becomes too expensive, people start looking for another road. That dynamic will likely accelerate the market for smaller, more efficient models designed for prosumer and enterprise edge hardware. For many workflows, tiny models running locally are not a toy. They are becoming the practical default. In that sense, the old server loyalists were wrong about the reason hardware mattered. But they may have been early in noticing that, sooner or later, ownership has a way of coming back.</p><h2>What this means in practice</h2><p>If you are spending meaningful money on API tokens, do the math. Compare your monthly usage against the one-time cost of local hardware. Look at where you truly need frontier performance and where you only need a capable, private, always-available model. The breakeven point may be closer than you think. If you have never tried local inference, start small. Run a good open-weight model on a machine you already own. Test real workflows, not just benchmarks. Pay attention to cost, latency, privacy, and how it feels to build without a meter running in the background. And if you lead a business, do not frame this as cloud versus on-prem. That is the wrong debate. The real question is which AI workloads you should keep renting, and which ones are now worth owning. 
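That &#8220;do the math&#8221; step can be sketched in a few lines. Every figure below, the token volume, the per-million-token price, and the hardware cost, is a made-up placeholder, not a quote from any provider or vendor:

```python
# Back-of-the-envelope breakeven: monthly API token spend vs. a one-time
# hardware purchase. All numbers are illustrative placeholders.

def monthly_api_cost(tokens_per_month: float, dollars_per_million_tokens: float) -> float:
    """Metered API spend for a month of usage."""
    return tokens_per_month / 1_000_000 * dollars_per_million_tokens

def breakeven_months(hardware_cost: float, monthly_spend: float) -> float:
    """How many months of API spend would pay for the machine."""
    return hardware_cost / monthly_spend

# Say a small team burns 300M tokens/month at a blended $5 per million tokens,
# and a hypothetical local-inference machine costs $2,000:
spend = monthly_api_cost(300_000_000, 5.0)   # $1,500/month
machine = 2_000
print(f"${spend:,.0f}/month, breakeven in {breakeven_months(machine, spend):.1f} months")
```

A real comparison would also fold in electricity, the quality gap versus frontier models, and the value of escaping rate limits, but even this crude version shows how short the payback period can be for heavy users.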
That answer is going to reshape many technology decisions over the next few years.</p>]]></content:encoded></item><item><title><![CDATA[Speak it into existence: Your next app won't be bought; it'll be built by AI]]></title><description><![CDATA[The shift from buying software to speaking it into existence is here, and small business is ground zero]]></description><link>https://www.disruptionpoint.com/p/speak-it-into-existence</link><guid isPermaLink="false">https://www.disruptionpoint.com/p/speak-it-into-existence</guid><dc:creator><![CDATA[Mike Ianiro]]></dc:creator><pubDate>Sun, 22 Feb 2026 22:47:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/Gnl833wXRz0" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A few years ago, people would send me software purchase requests. A project management tool here, a reporting dashboard there, a niche workflow app that solved one specific problem or another. I&#8217;d look at the price tag and think, &#8220;I&#8217;m not going to pay for this, when in eighteen months Microsoft will just bake it into my M365 subscription for free.&#8221;</p><p>I was usually right.</p><p>But what I didn&#8217;t predict was what came next. 
Because today, my instinct isn&#8217;t &#8220;Microsoft will build this eventually.&#8221; It&#8217;s &#8220;Why would I pay for this when I can ask an AI to build it?&#8221;</p><p>That&#8217;s not a hypothetical. That&#8217;s my Tuesday.</p><h4>The Moment Everyone Missed</h4><p>When Brad Gerstner sat down with Sam Altman and Satya Nadella on the BGsquared podcast episode 39, most of the attention was on Sam Altman&#8217;s abrupt departure after a series of tough questions from Gerstner. Sam&#8217;s combativeness was all anyone, including the media, could talk about for days, weeks, and months. That was the headline, but it wasn&#8217;t the story.</p><div id="youtube2-Gnl833wXRz0" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Gnl833wXRz0&quot;,&quot;startTime&quot;:&quot;3435s&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Gnl833wXRz0?start=3435&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>But what Nadella revealed later, almost as an aside, is a game-changing vision: everything evolves into a CRUD database with an AI application layer on top, a fundamental shift in the software industry.</p><p>In plain language, we will speak everything into existence.</p><p>Everyone was busy debating whether Altman was a genius or a liability. Meanwhile, Nadella was quietly describing the end of the software industry as we know it. That&#8217;s the moment that mattered, and almost nobody caught it.</p><h4>Three Eras of Getting the Software You Need</h4><p>Think about how small businesses have acquired software over the past two decades. 
It&#8217;s happened in three distinct phases, and we&#8217;re entering a fourth.</p><h5><strong>Era 1:</strong> Buy it off the shelf.</h5><p>You found a vendor, negotiated a contract, and deployed it on-premises. It was expensive, slow, and rigid.</p><h5><strong>Era 2:</strong> Subscribe to it.</h5><p>SaaS changed the game. Lower upfront costs, faster deployment, and automatic updates. But you were still adapting your workflow to someone else&#8217;s product. For some, the bills added up.</p><h5><strong>Era 3:</strong> Get it bundled.</h5><p>Microsoft, Google, and other platform players started absorbing point solutions into their suites. That project management tool? Now it&#8217;s Planner inside Teams. That form builder? Google Forms. That niche reporting tool? Power BI is included in your license. This is where my old instinct lived. Patiently wait, and the feature comes to you.</p><h5><strong>Era 4:</strong> Build it with AI.</h5><p>This is where tools like Claude Code, GitHub Copilot, Cursor, Replit, and other AI coding assistants come in, allowing you to describe your needs in plain English and receive functional software in minutes, not just mockups or prototypes.</p><h4>Vibe Coding Is Real, and It&#8217;s Not Just for Developers</h4><p>Andrej Karpathy, a founding member of OpenAI, coined the term &#8220;vibe coding&#8221; in early 2025 to describe a new approach to software development. You describe what you want, and the AI figures out the rest: no coding, just intent.</p><p>It sounded like a joke, but it wasn&#8217;t.</p><p>By mid-2025, Y Combinator reported that 25% of its current batch startups had codebases that were 95% AI-generated. GitHub&#8217;s own data showed that Copilot users accepted AI-generated code in nearly 30% of all suggestions. Replit reported millions of users building full applications without writing traditional code.</p><p>For SMB owners, this isn&#8217;t about becoming a developer. It&#8217;s about something bigger. 
The gap between what your business needs and what it actually gets is about to close.</p><p>Think about how software decisions have worked until now. Someone on your team has a problem. They either submit a request to IT and wait months, or they go rogue and buy some tool on a credit card that nobody else knows about. Shadow IT. Every business has it because the people closest to the problems have never had the power to build their own solutions.</p><p>That is about to change if it hasn&#8217;t already. Your operations manager doesn&#8217;t need to file a ticket or sneak a subscription past accounting anymore. They describe what they need to an AI and have a working version by lunch. Not a workaround. Not a spreadsheet held together with duct tape. An actual tool, built exactly for their workflow, integrated into the business, that does precisely what they envisioned.</p><p>The business is finally closer to what it needs than ever before. And for the first time, the people who understand the problems best are the same people building the solutions.</p><h4>The Next Leap: Software That Builds Itself</h4><p>Here&#8217;s where it gets really interesting, and where Nadella&#8217;s vision starts to feel real.</p><p>Today, you ask an AI to build something. You describe the problem, and it writes the code. That&#8217;s powerful, but it still requires you to know what you need and ask for it.</p><p>The next phase eliminates even that step.</p><p>AI agents that live inside your business, monitoring workflows and understanding bottlenecks, will start building solutions proactively. For SMB owners, that means automation that anticipates and solves problems before anyone files a request.</p><p>Your CRM shows that follow-up emails sent within two hours of a meeting close deals 40% more often. The AI builds an automated follow-up workflow and deploys it. Your accounting data reveals that invoice disputes spike in a particular client segment. 
The AI creates a pre-approval checklist for those accounts.</p><p>You didn&#8217;t ask for these features. You didn&#8217;t even know you needed them. They just appear because the intelligence understood your business well enough to act.</p><p>This isn&#8217;t science fiction. Platforms like OpenClaw are already moving in this direction, building AI systems that don&#8217;t just respond to commands but anticipate needs and ship solutions autonomously.</p><p>The implications for SMB spending are enormous: a future where owners feel more in control, less dependent on costly vendors, and more confident about their finances.</p><p>If AI can build custom software on demand, tailored exactly to your workflow, integrated directly into your existing stack, then the value proposition of most SaaS products inverts. You&#8217;re no longer paying for capability. You&#8217;re paying for not having to describe what you want.</p><p>That premium is shrinking fast.</p><p>I&#8217;m not saying every SaaS product is dead. Tools with massive network effects (the Slacks, Salesforces, and SAPs of the world), software with deep regulatory requirements (payroll, healthcare), and software with proprietary data moats will survive. But the long tail of niche business software, the tools that do one thing well for one type of business, is in serious trouble.</p><p>Whatever the average small business spends on SaaS today, a meaningful chunk of it is going to evaporate. Not because the tools stop working, but because building your own version becomes trivially easy.</p><h4>The Three-Year Shift</h4><p>Here&#8217;s how I see the next three years playing out for SMBs:</p><h5>Now</h5><p>AI coding tools are useful but require someone technical-adjacent to drive them. Early adopters are building internal tools and automations. 
Cost savings are real but require effort.</p><h5>Year Two</h5><p>AI agents handle more of the specification and deployment. You describe a business problem in a message, and a working tool appears by morning. The &#8220;builder&#8221; in the loop becomes optional.</p><h5>Year Three</h5><p>Proactive AI systems monitor your business and build what you need before you ask. Software becomes a living layer that evolves with your company in real time.</p><p>Nadella saw this coming in the middle of a corporate crisis that had nothing to do with it. While everyone was debating boardroom politics, he was describing the end of software as a product category and the beginning of software as a capability of intelligence itself.</p><h4>What to Do About It</h4><p>In practice, you need to audit your software stack, experiment with AI coding tools, and stop thinking in terms of which software you need to procure. The tool you need next probably doesn&#8217;t exist yet, and it doesn&#8217;t have to. You&#8217;ll speak it into existence.</p>]]></content:encoded></item></channel></rss>