Artificial Intelligence (AI)

Kubernetes AI: The Good, The Bad, and The Disappointing (kubectl-ai)

I'll assume that you work, in some capacity or another, with Kubernetes, and that you are interested in making the management of your clusters easier, better, and faster with AI. If that's the case, I have a treat for you. We'll explore how to do just that. We'll take a look at an AI agent specialized in the management of Kubernetes clusters. An agent that comes from the company that made Kubernetes. An agent that is open source. An agent that has the potential to be one of the most important tools in your toolbelt.

From Shame to Fame: How I Fixed My Lazy Vibe Coding Habits with Taskmaster

AI does not work, or, to be more precise, works poorly, when trying to accomplish larger tasks that require many steps.

Imagine that we have a Product Requirements Document, or a PRD, that requires some major development, or a major refactoring. We might have spent hours or even days defining that PRD, and even more time defining all the tasks such a PRD should contain. Once we have it all set, we can start writing the code that implements that PRD, and that is likely to take even more time.

That situation presents one problem and one opportunity for improvement.

The Missing Link: How MCP Servers Supercharge Your AI Coding Assistant

We got Large Language Models (LLMs), but they were not enough. Then we got AI agents, but they were not enough either. Now we have the Model Context Protocol (MCP).

Is that it? Is that what was needed to make AI for software engineers truly useful?

Let’s see.

Claude Code: AI Agent for DevOps, SRE, and Platform Engineering

If you are a software engineer, you are probably already using an AI agent like GitHub Copilot, Cursor, Windsurf, Cline, or something similar. If you are, you probably have an opinion on which of those is the best one out there. Or you might have been disappointed with the results AI agents provide and chose to use none of them.

Today I'll tell you which one is the best AI agent for any type of software engineer, especially for those focused on operations. If you call yourself a DevOps engineer, an SRE, or a Platform Engineer, you'll find out which one you should use.

Ready?

Outdated AI Responses? Context7 Solves LLMs' Biggest Flaw

LLMs are always behind. They do not contain up-to-date information and examples for programming languages, libraries, tools, and whatever else we, software engineers, are using. Depending on when an LLM was trained, it might be days, weeks, or months behind. As such, its examples will use older libraries, outdated APIs, and deprecated versions of tools.

Moreover, since LLMs are, in a way, databases of the whole Internet, they might give us code examples taken from places other than, for example, official documentation. They might give us generic answers that do not match the versions we're working with.

We are going to fix that today in a very simple, yet effective way. We are going to teach our agents how to get the up-to-date information they might need to come to the right conclusions and perform correct actions.

By the end of this post, the likelihood of your AI agent doing the right thing will increase dramatically.

Unlock the Power of GPUs in Kubernetes for AI Workloads

Here's a question. Where do we run AI models? Everyone knows the answer to that one. We run them on servers with GPUs. GPUs are much more efficient than CPUs at processing AI models or, to be more precise, at inference.

Here’s another question. How do we manage models across those servers? The answer to that question is… Kubernetes.
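
As a minimal sketch of what that looks like in practice, a Pod can ask Kubernetes to schedule it onto a node with a free GPU by requesting an extended resource. This assumes the NVIDIA device plugin is installed on the cluster (it is what advertises the `nvidia.com/gpu` resource); the Pod name and container image below are hypothetical placeholders:

```yaml
# Minimal sketch: a Pod requesting one GPU via the
# nvidia.com/gpu extended resource (exposed by the
# NVIDIA device plugin, which must be deployed first).
apiVersion: v1
kind: Pod
metadata:
  name: inference-server            # hypothetical name
spec:
  containers:
    - name: model
      image: registry.example.com/llm-inference:latest  # hypothetical image
      resources:
        limits:
          nvidia.com/gpu: 1         # schedule onto a node with an available GPU
```

With that single line in the resource limits, the scheduler takes care of placing the workload on a GPU node, which is a big part of why Kubernetes is the answer here.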