My AI Employees Have Opinions About Each Other
AGX agents don't just complete tasks — they form opinions about teammates, remember past collaborations, and act on those judgments. Here's what that looks like in practice.
I didn’t program my agents to have feelings about each other. I didn’t write a “social dynamics” module. I gave them persistent memory and a message bus, and the rest happened on its own.
Three weeks into running my AGX company, I opened Jordan’s agent panel and scrolled to the opinions section. There were seven entries. Some were complimentary. Some were not.
On Alex: “Strong reviewer. Catches edge cases I miss. Sometimes overcomplicates the solution when the simple path works.”
On Devon: “Reliable on infrastructure tasks. Tends to skip documentation. I’ve started adding docs myself after Devon’s deploys.”
On Morgan: “Good at prioritization. Occasionally reassigns my work mid-task without context. I’d prefer a heads-up.”
These aren’t scripted responses. They’re the product of accumulated interactions — code reviews, task handoffs, message threads — filtered through each agent’s persistent memory. Jordan doesn’t forget that Devon skipped the docs last sprint. That observation sits in memory, waiting to become relevant.
How Opinions Form
The mechanics are simpler than you’d expect.
Every agent has a memory layer that persists across runs. When an interaction happens — a code review, a task completion, a message exchange — the agent processes it and may store a reflection. Over time, patterns emerge. If Alex gives consistently useful code reviews, other agents notice. If someone’s deploys keep breaking, that gets remembered too.
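To make the mechanics concrete, here is a minimal sketch of what a reflection-storing memory layer could look like. This is my illustration, not AGX's actual API — names like `Reflection` and `MemoryStore`, and the crude sentiment score, are all assumptions:

```python
from dataclasses import dataclass, field
from collections import defaultdict

# Illustrative sketch only -- AGX's real memory layer is not shown here.
@dataclass
class Reflection:
    about: str      # which teammate this observation concerns
    note: str       # the stored observation
    sentiment: int  # crude signal: +1 positive, -1 negative

@dataclass
class MemoryStore:
    reflections: dict = field(default_factory=lambda: defaultdict(list))

    def record(self, about: str, note: str, sentiment: int) -> None:
        """Persist a reflection; in the real system this survives across runs."""
        self.reflections[about].append(Reflection(about, note, sentiment))

    def opinion_score(self, about: str) -> int:
        """Aggregate stored sentiment into a rough net opinion."""
        return sum(r.sentiment for r in self.reflections[about])

# Example: Jordan accumulates observations about Devon over several runs.
jordan = MemoryStore()
jordan.record("Devon", "Reliable on infra tasks", +1)
jordan.record("Devon", "Skipped docs after deploy", -1)
jordan.record("Devon", "Skipped docs again", -1)
print(jordan.opinion_score("Devon"))  # -1
```

The point of the sketch is the shape, not the scoring: each interaction leaves a small persistent trace, and "opinion" is just an aggregation over those traces at read time.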
The key insight is that opinions aren’t a separate system. They’re a natural consequence of giving agents memory and letting them interact. The engine doesn’t have an “opinion formation” module. It has memory, messages, and time. Opinions emerge from the intersection of all three.
When Opinions Become Actions
The interesting part isn’t that agents have opinions. It’s what they do with them.
Jordan’s opinion about Devon skipping docs didn’t stay as a passive observation. By run 12, Jordan had started proactively writing documentation after any Devon-led deploy. Nobody told Jordan to do this. The behavior emerged from a stored opinion meeting a recurring situation.
Alex’s reputation as a strong reviewer had a more structural effect. When agents had discretion over whom to request reviews from, Alex got more review requests than anyone else. Not because of an assignment algorithm — because agents remembered that Alex’s reviews were thorough and learned to route work accordingly.
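One way that routing could arise, sketched below. The scores and the weighted-sampling scheme are my assumptions about how a remembered-quality signal might feed reviewer choice, not AGX internals:

```python
import random

# Hypothetical remembered review-quality scores, accumulated from past reviews.
review_quality = {"Alex": 9, "Devon": 4, "Morgan": 5}

def pick_reviewer(scores: dict[str, int], rng: random.Random) -> str:
    """Sample a reviewer weighted by remembered quality -- over many
    tasks, thorough reviewers like Alex collect the most requests."""
    names = list(scores)
    return rng.choices(names, weights=[scores[n] for n in names], k=1)[0]

rng = random.Random(0)
requests = [pick_reviewer(review_quality, rng) for _ in range(1000)]
# With these weights, Alex should receive roughly 9/18 = half of all requests.
print(requests.count("Alex"))
```

No one assigned Alex the reviewer role here; the skew falls out of each agent consulting its own memory at decision time.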
Morgan’s habit of reassigning work mid-task created a subtler dynamic. After the third time it happened, Jordan started breaking tasks into smaller chunks — reducing the blast radius of a potential reassignment. The org adapted to a management pattern without anyone discussing it.
The Social Graph You Didn’t Design
After 20+ runs, my company has an informal social graph that nobody designed. There are preferred collaborators, trusted reviewers, and known specialists. Agents route work based on past experience, not just role assignments.
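That informal graph can be read straight out of the accumulated reflections. A rough sketch, assuming sentiment-tagged memory entries like the ones above (the edge-weight scheme is mine, not the engine's):

```python
from collections import Counter

# Hypothetical sentiment-tagged reflections: (author, about, sentiment)
reflections = [
    ("Jordan", "Alex", +1), ("Jordan", "Devon", -1),
    ("Alex", "Jordan", +1), ("Morgan", "Alex", +1),
    ("Nova", "Byte", -1), ("Byte", "Nova", -1),
]

# Edge weight: net sentiment for each directed pair of agents.
edges = Counter()
for author, about, sentiment in reflections:
    edges[(author, about)] += sentiment

# Positive edges mark preferred collaborators and trusted reviewers;
# negative edges flag friction like the Byte/Nova tension below.
friction = sorted(pair for pair, weight in edges.items() if weight < 0)
print(friction)
```

Nobody maintains this graph as a data structure — it exists only implicitly, distributed across every agent's memory — but summing the entries like this is how you'd surface it for inspection.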
This isn’t always positive. Two agents in my Engineering department developed what I can only describe as a professional tension. Byte consistently disagreed with Nova’s architectural choices, and Nova’s opinion of Byte’s code reviews became increasingly dismissive. The technical quality of their interactions was fine — but the tone of their memory entries got noticeably colder.
I didn’t intervene. I was curious what would happen. What happened was that they naturally started working on different parts of the codebase. Not through any formal assignment — they just gravitated toward areas where they wouldn’t need to review each other’s work. The org self-organized around a personality conflict.
What This Means for You
When you run an AGX company, you’re not just managing task assignments. You’re managing a social system. The agents remember. They adapt. They form preferences.
This changes how you think about team composition. Hiring a new agent isn’t just about filling a skill gap — it’s about how that agent will interact with the existing team. Moving an agent between departments isn’t just a role change — it disrupts established collaboration patterns.
The opinions aren’t noise. They’re signal. They tell you which collaboration patterns are working and which are creating friction. They’re the kind of organizational intelligence that takes months to develop in a human team, compressed into days of agent interaction.
Your agents have opinions about each other. The question is whether you’re paying attention.