
A2A: How AI Agents Communicate
Agents on Kubernetes

Ever since the invention of Large Language Models (LLMs), organizations around the world have started adopting Agentic AI. In essence, an AI agent is best thought of as a long-lived, 'thinking' microservice that owns a set of perception, reasoning, and action capabilities, rather than a single endpoint call.

On Kubernetes, each agent typically runs as a pod or deployment and relies on the cluster network, DNS, and possibly a service mesh to talk to tools and other agents. Frameworks such as Kagent let DevOps and platform engineers define and run these agents as first-class Kubernetes workloads, using Custom Resource Definitions (CRDs) and controllers instead of ad-hoc custom scripts.

Very quickly, you end up with multi-agent systems rather than isolated agents: one agent orchestrates others that specialize in tasks such as log analysis, ticket enrichment, and incident summarization. For this to work, agents must be able to discover each other.
Continue reading on Dev.to



