Artificial Intelligence (AI)

AI vs Developer: Can GitHub Copilot or Claude Replace My Job?

I just gave two AI agents the same complex coding task. One completely failed. The other… honestly shocked me.

But here’s the twist: I’m not sure if I want the better one to succeed. Because if these autonomous coding agents can really implement full features from scratch, write tests, and handle edge cases like an experienced developer… well, we might all be looking for new careers sooner than we think.

Today, I’m putting GitHub Copilot’s new coding agent head-to-head with Claude’s autonomous agent. Same PRD, same requirements, zero hand-holding. And the results? Let’s just say one of these agents changed everything I thought I knew about AI coding.

Vibe Coding Explained: AI Coding Best Practices

Vibe coding is probably the hottest trend right now in the software industry. Instead of manually writing every line of code, we can now describe what we need in plain language and let AI agents generate code, run tests, perform some, if not all, software development life cycle (SDLC) operations, and whatever else we might need.

That’s great since it means that now it is socially acceptable to talk to your computer. “Hello, precious. Can you write a Web app based on the OpenAI schema defined over there?” “Please be so kind as to fix this issue for me and make a patch release.” “Would you mind working on that problem for a while longer?”

There’s even a button now in VS Code that enables us to literally talk to AI through a microphone.

Vibe Coding is, in a way, still coding. We generate instructions for machines and those instructions are interpreted into binary code. The major difference is in the language we use. With “normal” coding, we do not become proficient just by writing code. We become proficient once we learn the rules, understand best practices, and practice (a lot). It’s the same with Vibe Coding. The more we practice it and the better we understand the rules, the more proficient we become.

So, today I want to share my best practices for vibe coding. They are mine. They work well for me, and I am very curious to hear your thoughts.

Better Code Reviews with AI? GitHub Copilot and Qodo Merge Tested

Second opinions are important. We get them from doctors, as well as from software engineers. We want “stuff” to be reviewed and we want feedback. Today, however, we will not talk about second opinions and suggestions from doctors and software engineers. We’ll talk about one AI reviewing the work of another AI, with us being the managers of both.

Today we’ll explore the possibilities of using a few AI agents to do code review. We’ll see how they integrate into pull requests in GitHub, whether they can find issues in code written by a different AI in an IDE, and how we can incorporate those reviews into our development workflow.

My Workflow With AI: How I Code, Test, and Deploy Faster Than Ever

Today I want to share my development workflow with AI: how I start working on a new feature, how I manage product requirement documents, or PRDs, how I write and test code, and how I move through the development lifecycle. The way I approach all that today is very different from the way I did it in the past. There is now a whole team working on each feature, with me being the only human involved.

Kubernetes AI: The Good, The Bad, and The Disappointing (kubectl-ai)

I will assume that you work, in some capacity or another, with Kubernetes, and that you are interested in making the management of your clusters easier, better, and faster with AI. If that’s the case, I have a treat for you. We’ll explore how to do just that. We’ll take a look at an AI agent specialized in the management of Kubernetes clusters. An agent that comes from the company that made Kubernetes. An agent that is open source. An agent that has the potential to be one of the most important tools in your toolbelt.
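To give a taste of what such an agent looks like in practice, here is a hypothetical session sketch. It assumes kubectl-ai is installed, on the PATH, and configured with an LLM provider; the deployment and namespace names are made up, and exact flags may differ between releases:

```shell
# Ask the agent, in plain language, to diagnose a workload
kubectl-ai "why are the pods in the 'checkout' deployment crash-looping?"

# Or ask it to perform an action on your behalf
kubectl-ai "scale the 'checkout' deployment in the 'shop' namespace to 5 replicas"
```

The point is the interface: you describe intent in natural language, and the agent translates it into the right `kubectl` operations against your cluster.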

From Shame to Fame: How I Fixed My Lazy Vibe Coding Habits with Taskmaster

AI does not work, or, to be more precise, works poorly, when trying to accomplish larger tasks that require many steps.

Imagine that we have a Product Requirements Document, or a PRD, that requires some major development, or a major refactoring. We might have spent hours or even days defining that PRD, and even more time defining all the tasks such a PRD should contain. Once we have it all set, we can start writing the code that implements that PRD, and that is likely to take even more time.

That situation presents one problem and one opportunity for improvement.

The Missing Link: How MCP Servers Supercharge Your AI Coding Assistant

We got Large Language Models (LLMs), but they were not enough. Then we got AI agents, but they were not enough either. Now we got Model Context Protocol (MCP).

Is that it? Is that what was needed to make AI for software engineers truly useful?

Let’s see.

Claude Code: AI Agent for DevOps, SRE, and Platform Engineering

If you are a software engineer, you are probably already using an AI agent like GitHub Copilot, Cursor, Windsurf, Cline, or something similar. If you are, you probably have an opinion about which one of those is the best one out there. Or you might have been disappointed with the results AI agents provide and chose to use none of them.

Today I’ll tell you which one is the best AI agent for any type of software engineer, especially for those focused on operations. If you call yourself a DevOps engineer, an SRE, or a Platform Engineer, you’ll find out which one you should use.

Ready?

Outdated AI Responses? Context7 Solves LLMs' Biggest Flaw

LLMs are always behind. They do not contain up-to-date information and examples for the programming languages, libraries, tools, and whatever else we, software engineers, are using. Depending on when an LLM’s training data was gathered, it might be days, weeks, or months behind. As a result, its examples will use older libraries, outdated APIs, and deprecated versions of tools.

Moreover, since LLMs are, in a way, databases of the whole Internet, they might give us code examples taken from places other than, for example, official documentation. They might give us generic answers that do not match the versions we’re working with.

We are going to fix that today in a very simple, yet effective way. We are going to teach our agents how to get the up-to-date information they might need to reach the right conclusions and perform the correct actions.
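One common way to wire this up is through an MCP server such as Context7, which serves current library documentation to the agent on demand. A minimal configuration sketch, assuming your agent reads an `mcp.json`-style file and that the `@upstash/context7-mcp` package name is still current:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

With that in place, the agent can query Context7 for version-accurate docs instead of relying on whatever was frozen into its training data.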

By the end of this post, the likelihood of your AI agent doing the right thing will increase dramatically.

Unlock the Power of GPUs in Kubernetes for AI Workloads

Here’s a question. Where do we run AI models? Everyone knows the answer to that one. We run them on servers with GPUs. GPUs are much more efficient than CPUs at processing AI models or, to be more precise, at inference.

Here’s another question. How do we manage models across those servers? The answer to that question is… Kubernetes.
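In Kubernetes, a GPU is requested like any other extended resource. A minimal sketch, assuming the NVIDIA device plugin is installed on the cluster and `my-registry/model-server:latest` stands in for a hypothetical inference image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inference
spec:
  containers:
  - name: model-server
    image: my-registry/model-server:latest   # hypothetical inference image
    resources:
      limits:
        nvidia.com/gpu: 1   # schedules the pod onto a node with a free GPU
```

The scheduler then places the pod only on nodes that advertise an available `nvidia.com/gpu` resource, which is exactly the kind of placement decision Kubernetes handles for us.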