Artificial Intelligence (AI)

The Missing Link: How MCP Servers Supercharge Your AI Coding Assistant

First we got Large Language Models (LLMs), but they were not enough. Then we got AI agents, but they were not enough either. Now we have the Model Context Protocol (MCP).

Is that it? Is that what was needed to make AI for software engineers truly useful?

Let’s see.

Claude Code: AI Agent for DevOps, SRE, and Platform Engineering

If you are a software engineer, you are probably already using an AI agent like GitHub Copilot, Cursor, Windsurf, Cline, or something similar. If you are, you probably have an opinion on which of those is the best one out there. Or you might have been disappointed with the results AI agents provide and chose to use none of them.

Today I’ll tell you which one is the best AI agent for any type of software engineer, especially for those focused on operations. If you call yourself a DevOps engineer, an SRE, or a platform engineer, you’ll find out which one you should use.

Ready?

Outdated AI Responses? Context7 Solves LLMs' Biggest Flaw

LLMs are always behind. They do not contain up-to-date information and examples for the programming languages, libraries, tools, and whatever else we, software engineers, are using. Depending on when an LLM’s training data was collected, it might be days, weeks, or months behind. As a result, its examples use older libraries, outdated APIs, and deprecated versions of tools.

Moreover, since LLMs are, in a way, databases of the whole Internet, they might give us code examples taken from places other than, say, the official documentation. They might give us generic answers that do not match the versions we’re working with.

We are going to fix that today in a very simple, yet effective way. We are going to teach our agents how to fetch the up-to-date information they need to reach the right conclusions and perform the correct actions.

By the end of this post, the likelihood of your AI agent doing the right thing will increase dramatically.
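To give a preview of the mechanics: Context7 is distributed as an MCP server, so hooking it up usually amounts to one entry in your agent’s MCP configuration. The snippet below is a minimal sketch assuming an agent that reads the common mcpServers JSON format (Claude Desktop, Cursor, and others use a variant of it) and Context7’s published @upstash/context7-mcp package; check your agent’s docs for the exact file name and location.

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

Once registered, the agent can call Context7’s tools to pull current, version-specific documentation into its context instead of relying on whatever was frozen into the model’s weights.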

Unlock the Power of GPUs in Kubernetes for AI Workloads

Here’s a question. Where do we run AI models? Everyone knows the answer to that one. We run them on servers with GPUs. GPUs are much more efficient than CPUs at running AI models or, to be more precise, at inference.

Here’s another question. How do we manage models across those servers? The answer to that question is… Kubernetes.
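To make that concrete: Kubernetes treats GPUs as extended resources, so a workload claims one through its resource limits. Here is a minimal sketch, assuming a cluster with the NVIDIA device plugin installed and using a hypothetical model-server image.

```yaml
# Minimal Pod that requests a single NVIDIA GPU.
# Assumes the NVIDIA device plugin is running on the cluster,
# which advertises GPUs as the nvidia.com/gpu extended resource.
apiVersion: v1
kind: Pod
metadata:
  name: inference
spec:
  containers:
  - name: model-server
    image: ghcr.io/example/model-server:latest  # hypothetical image
    resources:
      limits:
        nvidia.com/gpu: 1  # schedule onto a node with a free GPU
```

The scheduler places the Pod only on nodes that advertise a free GPU, which is the starting point for managing models across a whole fleet of servers.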