&lt;?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Kubernetes — Lakshmi Narasimhan</title><link>https://lakshminp.com/tags/kubernetes/</link><description>I help developers build, deploy, and distribute their SaaS without hiring a team. Long-running notes on systems, AI internals, Carnatic music, fiction craft, and whatever else collides interestingly.</description><generator>Hugo + lakshminp theme</generator><language>en-us</language><lastBuildDate>Fri, 24 Apr 2026 00:00:00 +0000</lastBuildDate><managingEditor>Lakshmi Narasimhan</managingEditor><webMaster>Lakshmi Narasimhan</webMaster><copyright>© 2026 Lakshmi Narasimhan</copyright><atom:link href="https://lakshminp.com/tags/kubernetes/feed.xml" rel="self" type="application/rss+xml"/><item><title>Your Cloud Bill Is A Tax On Someone Else's Resume</title><link>https://lakshminp.com/2026/04/cloud-bill-resume-tax/</link><pubDate>Fri, 24 Apr 2026 00:00:00 +0000</pubDate><author>Lakshmi Narasimhan</author><guid isPermaLink="true">https://lakshminp.com/2026/04/cloud-bill-resume-tax/</guid><category>essays</category><category>kubernetes</category><description>There’s an insurance company somewhere — real, working, profitable — with 100,000 monthly users and a peak concurrent load of about 5,000.
They spend high six figures a month on Kubernetes.
They employ twenty people to keep it running.
This story surfaced this week in the Hacker News thread on David Crawshaw’s cloud essay, and the comments section turned into a confessional. Engineer after engineer describing the same pattern: cluster adopted, cluster “optimized,” cloud spend doubled, incidents doubled, and somehow the only thing anyone can agree on is that they need to hire a platform engineer.</description><content:encoded>&lt;![CDATA[<p>There’s an insurance company somewhere — real, working, profitable — with 100,000 monthly users and a peak concurrent load of about 5,000.</p><p>They spend high six figures a month on Kubernetes.</p><p>They employ twenty people to keep it running.</p><p>This story surfaced this week in the Hacker News thread on David Crawshaw’s cloud essay, and the comments section turned into a confessional. Engineer after engineer describing the same pattern: cluster adopted, cluster “optimized,” cloud spend doubled, incidents doubled, and somehow the only thing anyone can agree on is that they need to hire a platform engineer.</p><p>You don’t. You never did. Your entire application would run on a laptop.</p><h2 id="the-incentive-nobody-likes-to-say-out-loud"><strong>The incentive nobody likes to say out loud</strong></h2><p>Here’s the quiet part: your DevOps team does not choose infrastructure based on what your application needs.</p><p>They choose it based on what their next job will pay for.</p><p>Kubernetes on a resume is worth more than Docker Compose on a resume. Terraform on a resume is worth more than “I SSH’d into the box.” Managed EKS on a resume is worth more than “I run a VM.” Every procurement decision in a modern engineering org is being made by someone who, at some level, is also writing the next page of their LinkedIn.</p><p>And management, god bless them, trusts the sales and marketing departments of Datadog and AWS and HashiCorp more than they trust their own engineers. 
So when someone internally says “we could do this on one server,” and someone externally sends a deck titled <em>Scaling Your Platform For The Future</em>, guess which one wins the meeting.</p><p>The decision was never technical. You just paid the technical price for it.</p><h2 id="kubernetes-is-not-the-villain-the-scale-is"><strong>Kubernetes is not the villain. The scale is.</strong></h2><p>Let’s be precise, because “Kubernetes” is doing a lot of work in this essay.</p><p>Full enterprise Kubernetes — managed control planes, service meshes, operators for everything, a dedicated platform team, Helm charts nested inside Helm charts like Russian dolls of YAML — that thing was built for Google’s problem. Multi-tenant, multi-region, thousands of services, teams that don’t talk to each other.</p><p>If your org does not look like that, you are wearing a costume.</p><p>K3s on a single VPS is not the same animal. Docker Compose on a single VPS is not the same animal. Kamal shipping containers to one Debian box is not the same animal. Those are orchestration for people who want one sane way to deploy a container, not a career in platform engineering.</p><p>The HN thread is full of <a href="https://blog.lakshminp.com/p/kubernetes-indie-dev-alternative" rel="external nofollow noopener" class="lnp-link">engineers who moved from full K8s to one of these simpler setups<span class="lnp-link-ext" aria-hidden="true"> ↗</span></a>. The reports are boringly consistent: costs collapsed, incidents dropped, debugging became possible again. Nobody was shocked. Everyone had been waiting for permission to say it.</p><h2 id="the-solo-founders-version-of-this-trap"><strong>The solo founder’s version of this trap</strong></h2><p>You are not the insurance company. You do not have twenty people. You have you, and maybe a contractor, and a credit card that is getting nervous.</p><p>And yet — you will read the AWS Well-Architected Framework. 
You will follow a tutorial that starts with “first, let’s set up your VPC.” You will pay $80/month for a managed database to store 200 rows. You will provision a load balancer in front of one server. You will copy the shape of infrastructure you saw at your day job, because that shape felt legitimate, and you want to feel legitimate too.</p><p>This is how solo founders end up with a <a href="https://blog.lakshminp.com/p/aws-is-overrated" rel="external nofollow noopener" class="lnp-link">$600/month AWS bill for an app that has six users<span class="lnp-link-ext" aria-hidden="true"> ↗</span></a>.</p><p>The shape of legitimacy is the trap. Nobody cares what your infrastructure looks like until you have customers, and once you have customers, <a href="https://blog.lakshminp.com/p/30-dollar-saas-stack" rel="external nofollow noopener" class="lnp-link">“my app runs on one $12 VPS”<span class="lnp-link-ext" aria-hidden="true"> ↗</span></a> is a story people <em>love</em>. It’s the opposite of suspicious. It’s proof that the thing works.</p><h2 id="what-to-actually-do"><strong>What to actually do</strong></h2><ol><li><p><strong>One machine until you can’t.</strong> One VPS. One Postgres on that VPS. One reverse proxy. Docker Compose or Kamal to deploy. You are allowed to stop here for years.</p></li><li><p><strong>Scale vertically first.</strong> Hetzner will rent you a 48-core EPYC machine with 256 GB of RAM for €199/month. A mid-tier managed Kubernetes cluster on AWS starts at more than that before you’ve run a single pod. 
Most apps die from bad unit economics, not from running out of CPU.</p></li><li><p><strong>When you outgrow that — and you might not —<a href="https://blog.lakshminp.com/p/when-diy-beats-managed-kubernetes" rel="external nofollow noopener" class="lnp-link">K3s on a few boxes gives you orchestration without the org chart<span class="lnp-link-ext" aria-hidden="true"> ↗</span></a>.</strong> This is the actual sweet spot for a solo operator who needs more than one machine but less than a platform team.</p></li><li><p><strong>Treat every infrastructure recommendation as a resume artifact until proven otherwise.</strong> Ask who benefits if you adopt this. If the answer is “the person telling me to adopt it,” weigh accordingly.</p></li><li><p><strong>Your cloud bill is a leading indicator of how much time you are spending on things that do not make your product better.</strong> Watch it like you watch your weight.</p></li></ol><p>The cloud was supposed to be leverage. For most people, most of the time, it has become the opposite: a recurring invoice for someone else’s credibility.</p><p>You are allowed to just run the server.</p>
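<p>Step 1 in the list above is small enough to write down in full. A minimal sketch, assuming a containerized web app behind Caddy; the image name, credentials, and Caddyfile are placeholders, not a prescription:</p><pre><code># docker-compose.yml -- one VPS, one Postgres, one reverse proxy
services:
  app:
    image: ghcr.io/you/app:latest    # your application image (placeholder)
    environment:
      DATABASE_URL: postgres://app:change-me@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data   # data survives container restarts
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"                    # Caddy terminates TLS automatically
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
volumes:
  pgdata:</code></pre><p>Three containers, one file, <code>docker compose up -d</code> to deploy. That is the entire platform until you outgrow it.</p>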
]]></content:encoded></item><item><title>The $30/Year Stack for Launching Small Bets</title><link>https://lakshminp.com/2026/01/30-dollar-saas-stack/</link><pubDate>Mon, 19 Jan 2026 00:00:00 +0000</pubDate><author>Lakshmi Narasimhan</author><guid isPermaLink="true">https://lakshminp.com/2026/01/30-dollar-saas-stack/</guid><category>essays</category><category>saas</category><category>kubernetes</category><description>Every time I launch a new small bet, I need the same boring stuff: professional email, a chat widget, uptime monitoring. The kind of infrastructure that’s completely unsexy but makes you look like you have your act together.
For years, I overcomplicated this. Custom SMTP servers. Self-hosted monitoring. Elaborate setups that took days to configure and broke whenever I looked at them wrong.
Then I realized something: I was spending more time on infrastructure than on validating whether anyone wanted my product.</description><content:encoded>&lt;![CDATA[<p>Every time I launch a new small bet, I need the same boring stuff: professional email, a chat widget, uptime monitoring. The kind of infrastructure that’s completely unsexy but makes you look like you have your act together.</p><p>For years, I overcomplicated this. Custom SMTP servers. Self-hosted monitoring. Elaborate setups that took days to configure and broke whenever I looked at them wrong.</p><p>Then I realized something: I was spending more time on infrastructure than on validating whether anyone wanted my product.</p><p>So I built a repeatable stack. Total cost: about $30-42 per year, per small bet. Here’s the whole thing.</p><h2 id="domain--hosting-cloudflare-free"><strong>Domain &amp; Hosting: Cloudflare (Free)</strong></h2><p>Buy your domain wherever you want, but point the nameservers to Cloudflare immediately.</p><p>Cloudflare’s free tier is absurd:</p><ul><li><p>DNS management (fast, reliable)</p></li><li><p>Free SSL certificates (automatic)</p></li><li><p>DDoS protection</p></li><li><p>CDN caching</p></li><li><p>Cloudflare Pages (unlimited sites, unlimited bandwidth)</p></li></ul><p>That last one is key. Your landing page goes on Cloudflare Pages. Connect your repo, push to main, it deploys. No servers. No bills. No thinking about infrastructure when you should be thinking about whether anyone wants your product.</p><p>I run every small bet’s landing page on CF Pages. Zero hosting cost.</p><h2 id="email-google-workspace-the-india-pricing-hack"><strong>Email: Google Workspace (The India Pricing Hack)</strong></h2><p>You want professional email. <code>hello@yourdomain.com</code>, not <code>yourdomain.help@gmail.com</code> like some kind of digital nomad running a dropshipping scam.</p><p>Google Workspace direct pricing: $6/month. 
Painful when you’re running multiple bets.</p><p>Google Workspace through an Indian reseller: Rs.125/month. That’s roughly $1.50.</p><p>Same product. Same Gmail experience. Same everything. Just&hellip; cheaper, because regional pricing exists and Google apparently forgot to close this loophole.</p><p>Recommended resellers: Medha Cloud, Host IT Smart, Shivaami. They’re authorized, they’re legit, and they’ll save you $50+/year per domain.</p><p>Setup takes 30 minutes: verify domain, add MX records, configure SPF/DKIM/DMARC so your emails don’t land in spam. Done.</p><h2 id="support-crisp-chat-free"><strong>Support: Crisp Chat (Free)</strong></h2><p>Intercom wants $74/month. For a small bet that might make $0.</p><p>Crisp’s free tier gives you:</p><ul><li><p>2 team seats (it’s just you anyway)</p></li><li><p>Unlimited conversations</p></li><li><p>Mobile app for notifications</p></li><li><p>A widget that doesn’t look like it was designed in 2008</p></li></ul><p>Copy-paste their script tag into your landing page. Five minutes.</p><p>Upgrade trigger: when you have so many support conversations that you need automation. Which means you have customers. Which means you can afford to pay for things.</p><h2 id="monitoring-betterstack-free"><strong>Monitoring: BetterStack (Free)</strong></h2><p>Your app will go down at 3am on a Sunday. This is not a prediction, it’s a guarantee.</p><p>BetterStack’s free tier:</p><ul><li><p>10 uptime monitors</p></li><li><p>1GB logs/month</p></li><li><p>Email and Slack alerts</p></li><li><p>3-day log retention</p></li></ul><p>Is 3-day retention enough? For a small bet you’re validating? Yes. You’re not running a bank.</p><p>Alternative: Axiom gives you 500GB ingest and 30-day retention if you’re logging more aggressively. Also free.</p><h2 id="error-tracking-sentry-free"><strong>Error Tracking: Sentry (Free)</strong></h2><p>Your code will throw exceptions in production that never happened locally. 
Classic.</p><p>Sentry’s free tier:</p><ul><li><p>5K errors/month</p></li><li><p>10K performance transactions</p></li><li><p>1 user</p></li><li><p>90-day retention</p></li></ul><p>For a small bet, 5K errors/month is plenty. If you’re hitting that limit, either your app is broken or you have enough users to pay for it.</p><h2 id="database-supabase-free-tier-or-self-hosted"><strong>Database: Supabase (Free Tier or Self-Hosted)</strong></h2><p>Every small bet needs a database. Supabase’s free tier is genuinely useful:</p><ul><li><p>500MB database</p></li><li><p>1GB file storage</p></li><li><p>50K monthly active users</p></li><li><p>Unlimited API requests</p></li></ul><p>That’s enough to validate most ideas. The catch: you get 2 free projects total. After that, it’s $25/month per project.</p><p>For small bets that graduate to real products, I self-host Supabase on a $6/month Hetzner VPS. Full Postgres, auth, storage, realtime — no project limits, no usage caps. (I’m building a service called <a href="https://supabyoi.com/" rel="external nofollow noopener" class="lnp-link">Supabyoi<span class="lnp-link-ext" aria-hidden="true"> ↗</span></a> to make this dead simple. 
More on that soon.)</p><h2 id="the-complete-stack"><strong>The Complete Stack</strong></h2><ul><li><p><strong>Domain</strong> — ~$10-15/year</p></li><li><p><strong>Cloudflare (DNS + Pages)</strong> — Free</p></li><li><p><strong>Google Workspace (India)</strong> — ~$1.50/month (~$18/year)</p></li><li><p><strong>Crisp</strong> — Free</p></li><li><p><strong>BetterStack</strong> — Free</p></li><li><p><strong>Sentry</strong> — Free</p></li><li><p><strong>Supabase</strong> — Free</p></li></ul><p><strong>Total: ~$1.50/month, $30-42/year</strong></p><p>That’s DNS, hosting, professional email, live chat, uptime monitoring, error tracking, and a database for less than a single month of most “startup” tools.</p><h2 id="the-rules"><strong>The Rules</strong></h2><p><strong>Don’t upgrade until you have paying customers.</strong> Free tiers exist for validation. Use them.</p><p><strong>Keep the setup identical across bets.</strong> Same tools, same patterns, same DNS records. You should be able to launch a new bet’s infrastructure in an afternoon, not a weekend.</p><p><strong>Resist the urge to self-host.</strong> Yes, you <em>can</em> run your own mail server. You can also perform your own dental surgery. Neither is advisable.</p><h2 id="when-to-actually-upgrade"><strong>When To Actually Upgrade</strong></h2><ul><li><p><strong>Google Workspace</strong> — You need &gt;30GB storage → $7/mo</p></li><li><p><strong>Crisp</strong> — You need chatbots or &gt;2 team members → $25/mo</p></li><li><p><strong>BetterStack</strong> — You’re pushing &gt;1GB logs/month → $24/mo</p></li><li><p><strong>Sentry</strong> — You’re hitting 5K errors/month → $26/mo</p></li><li><p><strong>Supabase</strong> — You need &gt;2 projects or more storage → $25/mo (or self-host)</p></li></ul><p>Notice a pattern? These are all “you have real traction” problems. 
Good problems to have.</p><h2 id="whats-not-covered-yet"><strong>What’s Not Covered (Yet)</strong></h2><p>This is the skeleton — the basic infrastructure every small bet needs from day one.</p><p>I’ll cover these in separate posts:</p><ul><li><p><strong>Tech stack choices</strong> (frameworks, languages, deployment)</p></li><li><p><strong>Payment processing</strong> (Stripe, Lemon Squeezy, regional considerations)</p></li><li><p><strong>CI/CD pipelines</strong> (GitHub Actions, deployment automation)</p></li><li><p><strong>Landing page patterns</strong> (what actually converts)</p></li></ul><p>One thing at a time.</p><h2 id="the-point"><strong>The Point</strong></h2><p>Infrastructure should be invisible. It should cost almost nothing while you’re validating. It should scale up only when you have revenue to pay for it.</p><p>$30/year per bet means you can run 10 small bets for less than most people pay for a single Notion subscription.</p><p>Stop building infrastructure. Start shipping products.</p><hr><p><em>This is part of my “Deploy” series — simple infrastructure patterns for solo operators who’d rather build products than manage servers.</em></p>
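<p>The email setup above is, concretely, four DNS records at Cloudflare. A sketch with placeholder values: the DKIM key comes from the Google Workspace admin console, and the DMARC address is whichever inbox should receive reports:</p><pre><code>; MX -- route mail to Google
yourdomain.com.     MX    1  smtp.google.com.
; SPF -- authorize Google to send on your behalf
yourdomain.com.     TXT   "v=spf1 include:_spf.google.com ~all"
; DKIM -- public key under selector "google" (value truncated, yours will differ)
google._domainkey   TXT   "v=DKIM1; k=rsa; p=MIIB..."
; DMARC -- ask receivers to quarantine failures and send you reports
_dmarc              TXT   "v=DMARC1; p=quarantine; rua=mailto:you@yourdomain.com"</code></pre><p>Miss the SPF or DKIM records and your first cold email lands in spam, which is a bad way to validate anything.</p>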
]]></content:encoded></item><item><title>I Found a Cryptominer in My Client's Production Cluster. Claude Code Found the Attacker.</title><link>https://lakshminp.com/2026/01/cryptominer-production-cluster/</link><pubDate>Sat, 03 Jan 2026 00:00:00 +0000</pubDate><author>Lakshmi Narasimhan</author><guid isPermaLink="true">https://lakshminp.com/2026/01/cryptominer-production-cluster/</guid><category>essays</category><category>kubernetes</category><category>claude-code</category><description>New Year’s Day. Coffee in hand. Ready to ease back into work.
Then I saw the logs.
2026-01-02T06:34:27 GET xmrig-6.24.0-linux-static-x64.tar.gz 2026-01-02T06:34:30 GET http://37.32.6.33:7979/m 2026-01-02T06:34:30 spawn /opt/systemf/m ENOENT xmrig. In production. Someone was mining Monero on my client’s Kubernetes cluster.
The horror.
The Investigation I had a few hundred megabytes of JSON logs and approximately zero patience for manually correlating timestamps. So I did what any reasonable person would do: I asked Claude Code to analyze the logs and figure out what triggered the miner download.</description><content:encoded>&lt;![CDATA[<p>New Year’s Day. Coffee in hand. Ready to ease back into work.</p><p>Then I saw the logs.</p><pre><code>2026-01-02T06:34:27 GET xmrig-6.24.0-linux-static-x64.tar.gz
2026-01-02T06:34:30 GET http://37.32.6.33:7979/m
2026-01-02T06:34:30 spawn /opt/systemf/m ENOENT</code></pre><p>xmrig. In production. Someone was mining Monero on my client’s Kubernetes cluster.</p><p>The horror.</p><h2 id="the-investigation"><strong>The Investigation</strong></h2><p>I had a few hundred megabytes of JSON logs and approximately zero patience for manually correlating timestamps. So I did what any reasonable person would do: I asked Claude Code to analyze the logs and figure out what triggered the miner download.</p><p>Within seconds, it built a timeline:</p><table><thead><tr><th>Time</th><th>Event</th></tr></thead><tbody><tr><td>06:34:26</td><td>Normal request to /onboarding</td></tr><tr><td>06:34:27</td><td>xmrig downloaded from GitHub</td></tr><tr><td>06:34:30</td><td>Secondary payload from sketchy IP</td></tr><tr><td>06:34:57</td><td>Container OOMKilled</td></tr></tbody></table><p>The cryptominer was so resource-hungry it consumed 2GB of memory in 30 seconds and crashed the container. Ironic. The attacker’s greed saved us from a prolonged compromise.</p><p>But how did they get in?</p><h2 id="chasing-red-herrings"><strong>Chasing Red Herrings</strong></h2><p>Claude Code’s first suspect: a low-version npm package called <code>device-unique-keygen</code>. Added by a developer whose email matched the package maintainer. Classic supply chain attack pattern.</p><p>I got excited. Maybe too excited.</p><p>Claude Code fetched the GitHub repo, analyzed the source code, checked for postinstall scripts, looked for obfuscated code, searched for eval() calls.</p><p>Nothing. The package was clean. Just a browser fingerprinting library. Boring. Legitimate.</p><p>We moved on.</p><p>No malicious init containers. No sidecars. No .bashrc shenanigans. The Dockerfile was clean. The pod spec was clean.</p><p>Everything was clean except someone was definitely mining crypto on our infrastructure.</p><h2 id="the-actual-answer"><strong>The Actual Answer</strong></h2><p>Claude Code ran <code>npm audit</code> on the codebase.</p><pre><code>critical │ Next.js is vulnerable to RCE in React flight protocol
Package │ next
Patched │ &gt;=15.3.6
Your ver │ 15.3.4
CVSS │ 10.0</code></pre><figure><a href="https://substackcdn.com/image/fetch/$s_!FrM8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c5bcacd-085e-465e-85f7-03955737439e_1247x765.png" class="image-link image2 is-viewable-img" target="_blank" data-component-name="Image2ToDOM"/><img src="https://substack-post-media.s3.amazonaws.com/public/images/9c5bcacd-085e-465e-85f7-03955737439e_1247x765.png" class="sizing-normal" data-attrs='{"src":"https://substack-post-media.s3.amazonaws.com/public/images/9c5bcacd-085e-465e-85f7-03955737439e_1247x765.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":765,"width":1247,"resizeWidth":null,"bytes":128814,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://lakshminp.substack.com/i/183314492?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c5bcacd-085e-465e-85f7-03955737439e_1247x765.png","isProcessing":false,"align":null,"offset":false}' srcset="https://substackcdn.com/image/fetch/$s_!FrM8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c5bcacd-085e-465e-85f7-03955737439e_1247x765.png 424w, https://substackcdn.com/image/fetch/$s_!FrM8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c5bcacd-085e-465e-85f7-03955737439e_1247x765.png 848w, https://substackcdn.com/image/fetch/$s_!FrM8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c5bcacd-085e-465e-85f7-03955737439e_1247x765.png 1272w, https://substackcdn.com/image/fetch/$s_!FrM8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c5bcacd-085e-465e-85f7-03955737439e_1247x765.png 1456w" sizes="100vw" 
loading="lazy" width="1247" height="765"/></figure><p>CVSS 10. The maximum possible score. The “your house is actively on fire” of security ratings.</p><p>The app was running Next.js 15.3.4. A publicly disclosed RCE vulnerability. No authentication required. An attacker could run arbitrary commands on the server by sending a crafted request.</p><p>That’s exactly what happened. 
They sent a request, ran wget twice, downloaded the miner, and started extracting crypto value from compute cycles they weren’t paying for.</p><p>The container’s memory limit stopped them. A $20/month Kubernetes resource limit prevented what could have been ongoing theft.</p><h2 id="what-claude-code-actually-did"><strong>What Claude Code Actually Did</strong></h2><p>I want to be clear about what happened here. I didn’t single-handedly unravel a sophisticated attack. I didn’t manually correlate log timestamps or reverse-engineer obfuscated npm packages.</p><p>I said “check these logs” and Claude Code:</p><ul><li><p>Built a timeline from JSON log entries</p></li><li><p>Identified the malware artifacts and C2 server</p></li><li><p>Traced git blame to find who added suspicious packages</p></li><li><p>Fetched and analyzed source code from GitHub</p></li><li><p>Ruled out attack vectors one by one</p></li><li><p>Found the actual vulnerability via npm audit</p></li><li><p>Correlated the OOMKill timing with the attack</p></li><li><p>Suggested remediation and forensic preservation steps</p></li></ul><p>The entire investigation took under an hour. Not because I’m fast. Because Claude Code is.</p><h2 id="the-fix"><strong>The Fix</strong></h2><pre><code>pnpm update next@^15.3.6</code></pre><p>One command. 
That’s the remediation for a CVSS 10.0 vulnerability.</p><p>We also orphaned the compromised pods for forensic analysis, rotated secrets, and added proper security contexts to prevent future wget adventures.</p><figure><a href="https://substackcdn.com/image/fetch/$s_!bCvw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654c9829-43d6-46a6-8405-abfd7b305b80_1344x772.png" class="image-link image2 is-viewable-img" target="_blank" data-component-name="Image2ToDOM"/><img src="https://substack-post-media.s3.amazonaws.com/public/images/654c9829-43d6-46a6-8405-abfd7b305b80_1344x772.png" class="sizing-normal" data-attrs='{"src":"https://substack-post-media.s3.amazonaws.com/public/images/654c9829-43d6-46a6-8405-abfd7b305b80_1344x772.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":772,"width":1344,"resizeWidth":null,"bytes":121268,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://lakshminp.substack.com/i/183314492?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654c9829-43d6-46a6-8405-abfd7b305b80_1344x772.png","isProcessing":false,"align":null,"offset":false}' srcset="https://substackcdn.com/image/fetch/$s_!bCvw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654c9829-43d6-46a6-8405-abfd7b305b80_1344x772.png 424w, https://substackcdn.com/image/fetch/$s_!bCvw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654c9829-43d6-46a6-8405-abfd7b305b80_1344x772.png 848w, https://substackcdn.com/image/fetch/$s_!bCvw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654c9829-43d6-46a6-8405-abfd7b305b80_1344x772.png 1272w, 
https://substackcdn.com/image/fetch/$s_!bCvw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654c9829-43d6-46a6-8405-abfd7b305b80_1344x772.png 1456w" sizes="100vw" loading="lazy" width="1344" height="772"/></figure><h2 id="the-lesson"><strong>The Lesson</strong></h2><p>Two things saved us:</p><ol><li><p>Centralized logging (couldn’t investigate without the logs)</p></li><li><p>Memory limits (the attacker’s miner killed 
itself)</p></li></ol><p>One thing would have prevented this entirely: running <code>npm audit</code> before deployment.</p><p>The attacker exploited a vulnerability that was publicly disclosed and patched. We just hadn’t updated yet.</p><p>Godspeed with your own dependency updates.</p><hr><p>My Medium friends can read this <a href="https://medium.com/@lakshminp/i-found-a-cryptominer-in-my-clients-production-cluster-claude-code-found-the-attacker-ae6148ec0514" rel="external nofollow noopener" class="lnp-link">over there<span class="lnp-link-ext" aria-hidden="true"> ↗</span></a> as well.</p>
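<p>The two guardrails that mattered here, the memory limit and a stricter security context, fit in a few lines of pod spec. A sketch with placeholder names, not the client’s actual manifest:</p><pre><code># Deployment snippet: resource limits plus a locked-down security context
    spec:
      containers:
        - name: web
          image: registry.example.com/web:latest   # placeholder image
          resources:
            limits:
              memory: "2Gi"     # a cap like the one that OOMKilled the miner
              cpu: "1"
          securityContext:
            readOnlyRootFilesystem: true    # no writing payloads to /opt
            allowPrivilegeEscalation: false
            runAsNonRoot: true
            capabilities:
              drop: ["ALL"]</code></pre><p>A read-only root filesystem alone would have turned this incident into a failed wget. It costs nothing to set.</p>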
]]></content:encoded></item><item><title>I spent years on Kubernetes. Now I'm betting against it.</title><link>https://lakshminp.com/2025/12/kubernetes-indie-dev-alternative/</link><pubDate>Thu, 04 Dec 2025 00:00:00 +0000</pubDate><author>Lakshmi Narasimhan</author><guid isPermaLink="true">https://lakshminp.com/2025/12/kubernetes-indie-dev-alternative/</guid><category>essays</category><category>kubernetes</category><description>I’ve spent years in the Kubernetes ecosystem. I wrote about K3s. I ran production clusters. I know my way around kubectl, Helm charts, and the CNCF landscape.
And I’m building a deployment tool that doesn’t use any of it.
Here’s why.
Kubernetes solves problems you don’t have K8s is incredible engineering. It solves real problems:
Multi-team deployments without stepping on each other
Automatic failover across dozens of nodes
Fine-grained resource allocation at massive scale</description><content:encoded>&lt;![CDATA[<p>I’ve spent years in the Kubernetes ecosystem. I wrote about K3s. I ran production clusters. I know my way around kubectl, Helm charts, and the CNCF landscape.</p><p>And I’m building a deployment tool that doesn’t use any of it.</p><p>Here’s why.</p><h2 id="kubernetes-solves-problems-you-dont-have"><strong>Kubernetes solves problems you don’t have</strong></h2><p>K8s is incredible engineering. It solves real problems:</p><ul><li><p>Multi-team deployments without stepping on each other</p></li><li><p>Automatic failover across dozens of nodes</p></li><li><p>Fine-grained resource allocation at massive scale</p></li><li><p>Rolling updates for services with thousands of instances</p></li></ul><p>If you’re Spotify, you need this. If you’re running a 50-person engineering org, you need this.</p><p>If you’re a solo dev with one FastAPI app and a Celery worker? You don’t.</p><p>As one dev put it: “Do you want to build a product, or do you want to build an infrastructure team? Kubernetes makes sense for the latter, but it’s often overkill for the former.”</p><p>You need:</p><ul><li><p>git push → app is live</p></li><li><p>Rollback when you break something</p></li><li><p>Logs you can actually read</p></li><li><p>Alerts when the site goes down</p></li></ul><p>That’s it. Everything else is ceremony.</p><h2 id="the-hidden-cost-isnt-the-cluster"><strong>The hidden cost isn’t the cluster</strong></h2><p>“But K3s is lightweight! You can run it on a $6 VPS!”</p><p>True. I’ve done it. 
Here’s what they don’t tell you:</p><p>A solo dev <a href="https://www.reddit.com/r/kubernetes/comments/1p2k9xd/solo_dev_tired_of_k8s_churn_what_are_my_options/" rel="external nofollow noopener" class="lnp-link">recently posted<span class="lnp-link-ext" aria-hidden="true"> ↗</span></a> on r/kubernetes with a title that said it all: “Solo dev tired of K8s churn&hellip; What are my options?”</p><p>His pain point wasn’t learning Kubernetes. It was the maintenance:</p><blockquote><p>“I don’t mind learning the topics and writing the config, I do mind having to deal with a lot of work out of nowhere just because the underlying tools are beyond my control and requiring breaking updates.”</p></blockquote><p>He’d been burned by Bitnami charts pulling the rug and by NGINX ingress breaking changes. Things that worked stopped working — not because he changed anything, but because the ecosystem did.</p><blockquote><p>“It all felt very straightforward, and it worked so well for a bit, but it starts to crumble even when I haven’t changed anything on my side.”</p></blockquote><p>This is the hidden cost. Not the setup — the churn.</p><p><strong>The YAML tax</strong>: Every change requires editing manifests. Add an env var? YAML. Change a port? YAML. Want a cron job? That’s a whole new CronJob resource. One team had a production outage caused by an improperly indented YAML line. A single space broke prod.</p><p><strong>The debugging tax</strong>: Something’s wrong. Is it the pod? The service? The ingress? The network policy? The PVC? Hope you remember how to read <code>kubectl describe</code>.</p><p><strong>The upgrade tax</strong>: K3s made this easier, but you’re still running a distributed system. A 2024 report found over 77% of Kubernetes practitioners still have issues running their clusters — up from 66% in 2022.
It’s getting harder, not easier.</p><p><strong>The cognitive tax</strong>: Part of your brain is always allocated to “how does Kubernetes work” instead of “how do I ship features.”</p><p>As one commenter put it: “Choose your churn.” There’s always something.</p><p>The Reddit OP’s conclusion? He gave up on K8s entirely. Settled on plain NixOS on a single Hetzner VPS. Accepted that 99.9% uptime from one server is good enough. Skipped the redundancy he thought he needed.</p><blockquote><p>“I am trying to write my software, I just want a reliable thing to host it with the freedom and reliability that one would expect from a system that stays out of your way.”</p></blockquote><p>That’s the real ask. A system that stays out of your way.</p><p>For teams, the Kubernetes tax is worth paying. You split it across people, you build expertise, you amortize the cost.</p><p>Solo? You pay it all yourself, every time.</p><h2 id="what-actually-works-for-solo-devs"><strong>What actually works for solo devs</strong></h2><p>So if not Kubernetes, what?</p><p>The same Reddit OP nailed the PaaS problem too:</p><blockquote><p>“These ‘managed-docker’ services charge per container/pod and force the user to over-provision. Your pod doesn’t run on 250mb RAM? Ok pay for 1GB even though you only need 500mb.”</p></blockquote><p>I’ve tried everything:</p><ul><li><p>Heroku (great until the bill hits)</p></li><li><p>Railway/Render (same story, nicer UX — $50-100/mo for what costs $5 on a VPS)</p></li><li><p>Dokku (solid, but showing its age)</p></li><li><p>Coolify (powerful, but now you’re babysitting another server)</p></li><li><p>K3s (overkill for most solo projects)</p></li><li><p>Raw Docker + nginx (works but tedious)</p></li></ul><p>The best setup I’ve found: <strong>Kamal</strong>.</p><p>It’s from 37signals. They run Basecamp and HEY on it. It’s just Docker + SSH. No cluster, no orchestrator, no YAML manifests.</p><pre><code>kamal deploy</code></pre><p>That’s it.
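</p><p>For concreteness, the rest of the loop is a handful of commands too. A sketch, assuming Kamal’s stock CLI (run <code>kamal help</code> to confirm the subcommands on your version):</p><pre><code># one-time per server: install Docker and boot the first container
kamal setup

# revert to a previously deployed container version
kamal rollback [VERSION]

# tail the app’s logs over SSH
kamal app logs -f</code></pre><p>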
It SSHs into your server, pulls your container, does a zero-downtime swap. Rollback is one command. Logs are one command.</p><p>It’s boring. It works.</p><h2 id="my-bet-ai-interface--dashboards--cli--yaml"><strong>My bet: AI interface &gt; dashboards &gt; CLI &gt; YAML</strong></h2><p>Here’s where it gets interesting.</p><p>Kamal solved the “deploy” problem. But ops is more than deploy:</p><ul><li><p>Why is the app slow right now?</p></li><li><p>What happened at 3am?</p></li><li><p>Should I upgrade my VM or optimize my code?</p></li><li><p>Show me the errors from the last hour</p></li></ul><p>These questions require jumping between tools. SSH into the box, grep the logs, check Grafana, cross-reference with your deploy history.</p><p>My bet: you shouldn’t need to do any of that.</p><p>You should just ask.</p><p>“Why is memory usage spiking?” → Here’s what’s using RAM, and here’s the trend over the last week.</p><p>“Roll back to yesterday’s deploy” → Done. Here’s what changed.</p><p>“Show me errors from the /api/checkout endpoint” → Found 47 errors, here’s the pattern.</p><p>This isn’t science fiction. LLMs are good at this now. The interface just doesn’t exist yet.</p><h2 id="what-im-building"><strong>What I’m building</strong></h2><p>VMKit is my attempt at this interface.</p><ul><li><p>Bring your own VPS (Hetzner, DigitalOcean, whatever)</p></li><li><p>It handles Kamal, Traefik, SSL, monitoring</p></li><li><p>The interface is conversation — web chat or MCP server in Claude Code</p></li></ul><p>No Kubernetes. No YAML manifests. No 47-screen dashboards.</p><p>Just say what you want.</p><p>I might be wrong. Maybe solo devs actually love clicking through Render’s UI. Maybe the Kubernetes complexity is worth it for everyone.</p><p>But I don’t think so. 
I think the right answer for one person running one to three apps is radically simpler than what we have today.</p><p><a href="https://vmkit.dev/" rel="external nofollow noopener" class="lnp-link">vmkit.dev<span class="lnp-link-ext" aria-hidden="true"> ↗</span></a> if you want to follow along.</p><h2 id="the-uncomfortable-truth"><strong>The uncomfortable truth</strong></h2><p>I’m not anti-Kubernetes. I’m anti-complexity-for-its-own-sake.</p><p>K8s is a tool. An incredibly powerful one. But tools have contexts where they make sense and contexts where they don’t.</p><p>Solo dev shipping a SaaS? You don’t need pod autoscaling. You need deploys that work and a way to debug when they don’t.</p><p>That’s the bet.</p>
]]></content:encoded></item><item><title>AWS Is Overrated</title><link>https://lakshminp.com/2025/10/aws-is-overrated/</link><pubDate>Sat, 11 Oct 2025 00:00:00 +0000</pubDate><author>Lakshmi Narasimhan</author><guid isPermaLink="true">https://lakshminp.com/2025/10/aws-is-overrated/</guid><category>essays</category><category>kubernetes</category><description>If you’re an indie dev building your first SaaS, AWS is not your friend.
It’s a maze of services, dashboards, and acronyms pretending to make you productive while quietly billing you for curiosity.
Sure, it’s “the industry standard.” But here’s the thing: you’re not Netflix. You’re not Stripe. You don’t need fifteen managed services to ship an MVP. You just need one working prototype in front of users.
When I started shipping my own SaaS projects, I defaulted to AWS too. Everyone said it was the “serious” choice. I spun up EC2s, tinkered with VPCs, IAM roles, and CloudWatch dashboards.</description><content:encoded>&lt;![CDATA[<p>If you’re an indie dev building your first SaaS, AWS is not your friend.</p><p>It’s a maze of services, dashboards, and acronyms pretending to make you productive while quietly billing you for curiosity.</p><p>Sure, it’s “the industry standard.” But here’s the thing: you’re not Netflix. You’re not Stripe. You don’t need fifteen managed services to ship an MVP. You just need one working prototype in front of users.</p><p>When I started shipping my own SaaS projects, I defaulted to AWS too. Everyone said it was the “serious” choice. I spun up EC2s, tinkered with VPCs, IAM roles, and CloudWatch dashboards.</p><p>Two weeks later, my app still wasn’t live. But my bill was.</p><p>That’s when it clicked. AWS is optimized for <em>scale</em>, not <em>speed</em>. It’s designed for teams with DevOps pipelines, budgets, and compliance officers.
Indie devs have none of those.</p><p><strong>Here’s the real problem:</strong></p><blockquote><p>AWS makes you <em>feel</em> productive because it has a service for everything.</p></blockquote><p>But it slows you down because you end up <em>assembling</em> infrastructure instead of shipping software.</p><p>You’re busy wiring VPCs while your users are waiting for a login page.</p><p>If you’re building your first SaaS, you’re better off with:</p><ul><li><p><strong>Render</strong> or <strong>Fly.io</strong> for fast deploys.</p></li><li><p><strong>Railway</strong> or <strong>Supabase</strong> if you love simplicity.</p></li><li><p>DigitalOcean App Platform.</p></li><li><p>Or even your own <strong>K3s</strong> box on a $30 DigitalOcean droplet if you like to tinker. (More on this in future posts.)</p></li></ul><p>You’ll have full control, predictable costs, and a deploy story you can explain in a single sentence.</p><p>That’s what matters at your stage — not five-nines availability across three regions.</p><p>AWS will always have its place. It’s incredible at running serious workloads, regulated systems, and multi-tenant platforms at scale.</p><p>But for indie devs trying to launch, learn, and iterate fast — it’s <em>overkill</em>.</p><p>Use the simplest stack that lets you ship.</p><p>Add complexity only when success forces you to.</p><p>Because nothing kills momentum faster than debugging IAM policies instead of building features.</p><h2 id="tldr">TL;DR</h2><p>If you’re a solo founder or small team, your advantage isn’t scale — it’s speed.</p><p>Don’t trade that away for a cloud that was never built for you.</p><p>I share one short post daily-ish for productive indie developers — how to ship faster, cheaper, and saner. Subscribe if that’s your vibe.</p>
]]></content:encoded></item></channel></rss>