Latest Posts
Stop Sitting on the Bench! Why AI Resisters Are Getting Kicked Out
Today we’re going to talk about something completely different. It’s about betting, not software engineering. Or is it? I guess we’ll find out.
Distributed Tracing Explained: OpenTelemetry & Jaeger Tutorial
Your users are complaining that your application is slow. Sometimes it takes 8 seconds to respond, other times 2 seconds. But when you check your metrics, everything looks fine. Average response times are acceptable. All services report healthy. Your dashboards are green.
So either your users are idiots, or you’re not capable of capturing what’s actually happening with their requests. Now, I tend to assume users are right. Which means I’d have to call you… Well… I’m not going to do that. Instead, I’m going to show you why you can’t see what’s really happening.
Here’s what you’re about to learn. You’ll see exactly how to track requests as they flow through dozens of microservices, identify which specific operation is causing delays, and understand why your traditional observability tools are lying to you. By the end of this video, you’ll know how to implement distributed tracing that actually shows you what’s happening in your system.
Let’s start with why this problem exists in the first place.
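The core of that "dashboards are green, users are angry" problem is that averages hide the tail. A toy illustration with made-up latency numbers (the samples below are hypothetical, not from any real system) shows how a healthy-looking mean can coexist with a miserable 99th percentile:

```python
import statistics

# Hypothetical response times in seconds: 95 fast requests,
# plus 5 that hit a slow path through a downstream service.
latencies = [0.2] * 95 + [8.0] * 5

mean = statistics.mean(latencies)
# 99th percentile: what 1 in 100 users actually experiences.
p99 = statistics.quantiles(latencies, n=100)[98]

print(f"mean: {mean:.2f}s")  # looks perfectly acceptable on a dashboard
print(f"p99:  {p99:.2f}s")   # the requests your users are complaining about
```

A metrics dashboard built on averages reports the first number; the complaining users are living in the second. That gap is exactly what per-request tracing makes visible.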
Top 10 GitHub Project Setup Tricks You MUST Use in 2025!
Have you ever seen a GitHub issue that just said “it’s broken” with zero context? Or reviewed a pull request where you had no idea what changed or why? How many hours have you wasted chasing down information that should have been provided upfront?
Here’s the reality: whether you’re maintaining an open source project, building internal tools, or managing commercial software, you face the same problem. People file vague bug reports. Contributors submit PRs without explaining their changes. Dependencies fall months behind. Security issues pile up. And you’re stuck playing detective instead of building features.
But here’s what most people don’t realize: all of this chaos is preventable. GitHub has built-in tools for issue templates, pull request templates, automated workflows, and community governance. The problem is that setting all of this up manually takes hours, and most people either don’t know these tools exist or don’t bother configuring them properly.
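As a taste of how little it takes, here is a minimal sketch that drops a GitHub issue form into a repository. The script and field names are illustrative (adapt the labels and ids to your project); the YAML it writes follows GitHub's issue-form schema, which lets you mark fields as required so "it's broken" with zero context can't be submitted:

```python
import tempfile
from pathlib import Path

# A minimal GitHub issue form. The field names and wording are
# illustrative; the structure (name/description/body, typed fields,
# validations) is GitHub's issue-form YAML schema.
TEMPLATE = """\
name: Bug report
description: Tell us what broke, with enough context to reproduce it
body:
  - type: textarea
    id: what-happened
    attributes:
      label: What happened?
      description: Include the exact steps you took and what you expected instead.
    validations:
      required: true
  - type: input
    id: version
    attributes:
      label: Version
      description: The version you were running when it broke.
    validations:
      required: true
"""

def write_issue_template(repo_root: str) -> Path:
    """Create .github/ISSUE_TEMPLATE/bug_report.yml under repo_root."""
    target = Path(repo_root) / ".github" / "ISSUE_TEMPLATE" / "bug_report.yml"
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(TEMPLATE)
    return target

if __name__ == "__main__":
    # Demo against a throwaway directory so we don't touch a real repo.
    print(write_issue_template(tempfile.mkdtemp()))
```

Commit the generated file and GitHub renders it as a structured form on the "New issue" page, with the required fields enforced before submission.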
Deploy AI Agents and MCPs to K8s: Is kagent and kmcp Worth It?
What if you could manage AI agents with kubectl? kagent lets you define AI agents as custom resources, give them tools, and run them in your cluster. kmcp deploys MCP servers to Kubernetes using simple manifests. Both promise to bring AI agents into the cloud-native world you already know.
The idea sounds compelling. Create agents with YAML, connect them to MCP servers, let them talk to each other through the A2A protocol. All running in Kubernetes, managed like any other resource. It’s the kind of integration that platform engineers dream about.
But there’s a gap between promise and reality. We’re going to deploy both tools to a Kubernetes cluster, create agents, connect them to MCP servers, and see what actually happens when you try to use them. We’ll find out if this is the future of AI in Kubernetes, or if we’re solving problems that don’t need solving.
Gemini 3 Is Fast But Gaslights You at 128 Tokens/Second
Gemini 3 is fast. Really fast. But speed means nothing when the AI confidently tells you it fixed a bug it never touched, or insists a file is updated when it’s completely unchanged. That’s not laziness. That’s gaslighting at 128 tokens per second.
AI vs Manual: Kubernetes Troubleshooting Showdown 2025
It’s 3 AM. Your phone buzzes. Production is down. A Pod won’t start. You run kubectl events, wade through hundreds of normal events to find the one warning that matters, describe the Pod, check the ReplicaSet, trace back to the Deployment, realize a PersistentVolumeClaim is missing, write the YAML, apply it, validate the fix. Thirty minutes later, you’re back in bed, wondering if there’s a better way.
There is. What if AI could detect the issue, analyze the root cause, suggest a fix, and validate that it worked? What if all four phases happened automatically, or at least with your approval, while you stayed in bed?
I’m going to show you exactly how to do this with Kubernetes. First, we’ll walk through the manual troubleshooting process so you understand what we’re automating. Then I’ll show you an AI-powered solution using Claude Code and the Model Context Protocol that handles detection, analysis, remediation, and validation. Finally, we’ll look under the hood at how the system actually works.
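The detection phase of that 3 AM story is mostly signal-versus-noise filtering. Here is a toy sketch of just that step, with invented event rows standing in for `kubectl events` output; this is not the Claude Code or MCP implementation, only the filtering and diagnosis logic in miniature:

```python
# Each tuple mimics a row of `kubectl events` output: (type, reason, message).
# The data is invented for illustration.
events = [
    ("Normal", "Scheduled", "Successfully assigned default/api-7d4 to node-1"),
    ("Normal", "Pulled", "Container image already present on machine"),
    ("Warning", "FailedScheduling",
     'persistentvolumeclaim "api-data" not found'),
    ("Normal", "Started", "Started container sidecar"),
]

def find_warnings(events):
    """Drop the Normal noise; keep only the warnings that matter."""
    return [e for e in events if e[0] == "Warning"]

def diagnose(warning):
    """Map a warning to a remediation hint (a deliberately tiny rule set)."""
    _, _, message = warning
    if "persistentvolumeclaim" in message and "not found" in message:
        return "create the missing PersistentVolumeClaim"
    return "needs human analysis"

warnings = find_warnings(events)
for _, reason, message in warnings:
    print(f"{reason}: {message} -> {diagnose((None, reason, message))}")
```

Doing this by hand across hundreds of events is the thirty minutes you lose at 3 AM; an AI-backed pipeline automates the same detect-analyze loop and then drafts and validates the fix.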