Artificial Intelligence (AI)

Stop Blaming AI: Vector DBs + RAG = Game Changer

Let me guess. You tried AI, it hallucinated something completely wrong, and now you’re back to doing everything manually while complaining that “AI doesn’t work.”

Maybe you’re a developer who asked it about your codebase, and it confidently explained functions that don’t exist. Or suggested using deprecated APIs your team abandoned two years ago. Perhaps it recommended architectural patterns that directly contradict your team’s decisions.

Or you’re in ops, and it gets even worse. You asked about your backup policies, and it invented procedures you’ve never implemented. You requested help with a Kubernetes deployment, and it suggested configurations that violate every security standard you have. You wanted it to troubleshoot a production issue, and it gave you generic advice that would take down your entire cluster.

Stop Wasting Time: Turn AI Prompts Into Production Code

I spent three hours writing the perfect prompt. Three. Damn. Hours. And you know what? The AI still screwed it up. Not because the AI was bad, but because I was doing it completely wrong. I was treating prompts like throwaway commands when I should have been treating them like production code.

Here’s what nobody tells you about AI prompts: they’re not just instructions. They’re your team’s collective knowledge, encoded in a way that AI can execute. And if you’re not treating them as first-class citizens in your codebase, you’re wasting everyone’s time. Today, I’m going to show you how to turn your prompts into a shared asset that evolves with your team, deploys like any other code, and actually makes AI useful instead of frustrating.

We’ll start by understanding why context is everything in AI, then I’ll show you the evolution of a real prompt from 5 words to 500, and finally reveal how MCP changes the entire game for prompt distribution. Let’s dive in.

AI Will Replace Coders - But Not the Way You Think

I’ve been in tech for over three decades, and I’ve never seen developers this scared. Not during the dot-com crash. Not during outsourcing waves. Not even during the layoffs. This time it’s different because the threat isn’t coming from other humans. It’s coming from AI that can already write code faster than us. And here’s what should really worry you: we’re on a trajectory where soon it might write better code too.

But here’s what pisses me off: Everyone’s panicking about the wrong thing. They’re worried AI will take their jobs because it can write code. That’s like a chef worrying about losing their job because someone invented a better knife. You’re missing the point entirely.

In this post, I’m going to tell you what your job actually is, why most developers have been doing it wrong for years, and how AI is about to expose that brutal reality. But I’ll also show you exactly how to adapt, because those who get this right won’t just survive. They’ll thrive. And the questions I keep hearing prove that most people don’t get it yet.

Can AI Replace Your Terraform Modules? Infrastructure's New Future

My AI agent just failed to create a database. It forgot the resource group, messed up the credentials, and made three attempts before getting it right.

But here’s the plot twist: That’s the best thing that could have happened.

Today, I’m going to show you why AI agents might make your carefully crafted golden paths obsolete. Why those Terraform modules or similar abstractions you spent months building might be holding AI back, and why letting AI fail and learn might be the future of infrastructure management.

AI vs Developer: Can GitHub Copilot or Claude Replace My Job?

I just gave two AI agents the same complex coding task. One completely failed. The other… honestly shocked me.

But here’s the twist: I’m not sure if I want the better one to succeed. Because if these autonomous coding agents can really implement full features from scratch, write tests, and handle edge cases like an experienced developer… well, we might all be looking for new careers sooner than we think.

Today, I’m putting GitHub Copilot’s new coding agent head-to-head with Claude’s autonomous agent. Same PRD, same requirements, zero hand-holding. And the results? Let’s just say one of these agents just changed everything I thought I knew about AI coding.

Vibe Coding Explained: AI Coding Best Practices

Vibe coding is probably the hottest trend right now in the software industry. Instead of manually writing every line of code, we can now describe what we need in plain language and let AI agents generate code, run tests, perform some, if not all, SDLC operations, and whatever else we might need.

That’s great since it means that now it is socially acceptable to talk to your computer. “Hello, precious. Can you write a Web app based on the OpenAI schema defined over there?” “Please be so kind as to fix this issue for me and make a patch release.” “Would you mind working on that problem for a while longer?”

There’s even a button now in VS Code that enables us to literally talk to AI through a microphone.

Vibe coding is, in a way, still coding. We generate instructions for machines and those instructions are interpreted into binary code. The major difference is in the language we use. With “normal” coding, we do not become proficient just by writing code. We become proficient once we learn the rules, understand best practices, and practice (a lot). It’s the same with vibe coding. The more we practice it and the better we understand the rules, the more proficient we become.

So, today I want to share my best practices for vibe coding. They are mine. They work well for me, and I am very curious to hear your thoughts.

Better Code Reviews with AI? GitHub Copilot and Qodo Merge Tested

Second opinions are important. We get them from doctors, as well as from software engineers. We want “stuff” to be reviewed and we want feedback. Today, however, we will not talk about second opinions and suggestions from doctors and software engineers. We’ll talk about one AI reviewing the work of another AI, with us being managers of both.

Today we’ll explore the possibilities of using a few AI agents to do code review. We’ll see how they integrate into pull requests in GitHub, whether they can find issues in code written by a different AI in an IDE, and how we can incorporate those reviews into our development workflow.

My Workflow With AI: How I Code, Test, and Deploy Faster Than Ever

Today I want to share my development workflow with AI. I want to share how I start working on a new feature, how I manage product requirement documents, or PRDs, how I write code and test it, and how I move through the development lifecycle. The way I approach all that today is very different from the way I did it in the past. There is a whole team working on each feature, with me being the only human involved.

Kubernetes AI: The Good, The Bad, and The Disappointing (kubectl-ai)

I will make an assumption by saying that you work, in some capacity or another, with Kubernetes and that you are interested in making management of your clusters much easier, better, and faster with AI. If that’s the case, I have a treat for you. We’ll explore how to do just that. We’ll take a look at an AI agent specialized in management of Kubernetes clusters. An agent that comes from the company that made Kubernetes. An agent that is open source. An agent that has the potential to be one of the most important tools in your toolbelt.

From Shame to Fame: How I Fixed My Lazy Vibe Coding Habits with Taskmaster

AI does not work, or, to be more precise, works poorly when trying to accomplish larger tasks that require many steps.

Imagine that we have a Product Requirements Document, or a PRD, that requires some major development, or a major refactoring. We might have spent hours or even days defining that PRD, and even more time defining all the tasks such a PRD should contain. Once we have it all set, we can start writing the code that implements that PRD, and that is likely to take even more time.

That situation presents one problem and one opportunity for improvement.

The Missing Link: How MCP Servers Supercharge Your AI Coding Assistant

We got Large Language Models (LLMs), but they were not enough. Then we got AI agents, but they were not enough either. Now we got Model Context Protocol (MCP).

Is that it? Is that what was needed to make AI for software engineers truly useful?

Let’s see.

Claude Code: AI Agent for DevOps, SRE, and Platform Engineering

If you are a software engineer, you are probably already using an AI agent like GitHub Copilot, Cursor, Windsurf, Cline, or something similar. If you are, you probably have an opinion about which one of those is the best one out there. Or you might have been disappointed with the results AI agents provide and chose to use none of them.

Today I’ll tell you which one is the best AI agent for any type of software engineer, especially for those focused on operations. If you call yourself DevOps, or SRE, or Platform Engineer, you’ll find out which one you should use.

Ready?

Outdated AI Responses? Context7 Solves LLMs' Biggest Flaw

LLMs are always behind. They do not contain up-to-date information and examples for programming languages, libraries, tools, and whatever else we, software engineers, are using. Depending on when an LLM was trained, it might be days, weeks, or months behind. As such, its examples will be using older libraries, outdated APIs, and deprecated versions of the tools.

Moreover, since LLMs are, in a way, databases of the whole Internet, they might give us code examples taken from places other than, for example, official documentation. They might give us generic answers that do not match the versions we’re working with.

We are going to fix that today in a very simple, yet effective way. We are going to teach our agents how to get the up-to-date information they might need to come to the right conclusion and perform correct actions.

By the end of this post, the likelihood of your AI agent doing the right thing will increase dramatically.

Unlock the Power of GPUs in Kubernetes for AI Workloads

Here’s a question. Where do we run AI models? Everyone knows the answer to that one. We run them in servers with GPUs. GPUs are much more efficient at processing AI models or, to be more precise, at inference.

Here’s another question. How do we manage models across those servers? The answer to that question is… Kubernetes.
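As a minimal sketch of what that looks like in practice, a Kubernetes Pod can request GPUs through extended resources, assuming the NVIDIA device plugin is installed on the cluster (the Pod name and image below are illustrative, not from any specific setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inference-server          # illustrative name
spec:
  containers:
  - name: model
    image: registry.example.com/inference:latest  # hypothetical inference image
    resources:
      limits:
        nvidia.com/gpu: 1         # ask the scheduler for one GPU
```

The scheduler then places the Pod only on nodes that advertise an available `nvidia.com/gpu` resource, which is how Kubernetes ends up managing models across GPU servers.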