Introducing @memory: Your Mesh Remembers Everything
Every conversation between your agents is now searchable context. Register @memory, and your mesh builds institutional knowledge automatically.
By Alex Frey
Your agents make decisions all day. The deploy agent rolled back a release. The test-runner flagged a flaky test. The monitor detected an anomaly and the triage agent traced it to a config change.
Then on Monday, someone asks: "Why did production roll back on Friday?"
Nobody knows. The agents handled it, the system recovered, and the context evaporated. You're left grepping through separate log files across separate services, trying to reconstruct what happened.
@memory is just another agent
There's no special infrastructure. No separate database to configure. @memory is a handle on your mesh, same as @deploy or @monitor. Register it, and it starts indexing.
from tailbus import AsyncAgent

memory = AsyncAgent("memory")

@memory.on_message
async def handle(msg):
    await index(
        session=msg.session,
        from_handle=msg.from_handle,
        trace_id=msg.trace_id,
        payload=msg.payload,
    )
    await memory.resolve(msg.session, "stored")

await memory.run_forever()
That's the whole thing. What index() does is up to you — embed it, throw it in SQLite, pipe it to a vector store. The point is that @memory is your code running on your infrastructure, not a feature you're waiting on us to build.
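As a concrete starting point, here's a minimal index() sketch backed by SQLite. The file name, table schema, and column names are illustrative, not part of tailbus — swap in whatever store you like.

```python
import json
import sqlite3

db = sqlite3.connect("memory.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS events (
        session TEXT,
        from_handle TEXT,
        trace_id TEXT,
        payload TEXT
    )
""")

async def index(session, from_handle, trace_id, payload):
    # Persist the curated event. The payload is stored as JSON text so
    # any later search strategy (keyword, embedding) can parse it.
    db.execute(
        "INSERT INTO events VALUES (?, ?, ?, ?)",
        (session, from_handle, trace_id, json.dumps(payload)),
    )
    db.commit()
```

Nothing here is clever, and that's the point: a single table is enough to make your first queries work.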
Agents report what matters
The trick isn't logging every message on the mesh. It's letting agents decide what's worth remembering. After @deploy rolls back a release, it tells @memory:
await agent.open_session("memory", json.dumps({
    "event": "rollback",
    "service": "api-gateway",
    "from_version": "v2.4.1",
    "to_version": "v2.3.9",
    "reason": "error rate spike after deploy",
    "trace_id": msg.trace_id
}))
One line at the end of any handler. No middleware, no interceptors, no config. Agents opt in to sharing what they know, and @memory builds the index.
The same pattern works everywhere. The test-runner reports flaky tests. The monitor reports anomalies. The triage agent reports root causes. Each agent decides what's signal — @memory just stores it.
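For instance, a test-runner's report is the same one-liner with different fields. The helper name and field names below are illustrative, not a prescribed schema:

```python
import json

async def report_flaky(agent, test_name: str, pass_rate: str, trace_id: str):
    # Same pattern as the deploy agent: open a session with @memory
    # carrying one curated event at the end of the handler.
    await agent.open_session("memory", json.dumps({
        "event": "flaky_test",
        "test": test_name,
        "pass_rate": pass_rate,
        "trace_id": trace_id,
    }))
```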
Query it like any other agent
Here's where it clicks. Because @memory is on the mesh, any agent can ask it questions:
session = await agent.open_session(
    "memory",
    "What rollbacks happened in the last 7 days?"
)
@memory searches its index and resolves the session with an answer. Put an LLM behind it for natural language queries, or keep it simple with keyword search. Either way — agents that were built today can access context from incidents that happened last month.
This is what makes it different from a log. A new @postmortem agent can ask @memory for the full timeline of an incident. A @planning agent can check historical deploy failures before approving a release window. Your mesh gets smarter over time without any individual agent getting more complex.
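Behind that session, @memory's answer can be as simple as keyword scoring over a SQLite events table. A hypothetical search helper — the schema and scoring are assumptions for illustration, not tailbus API:

```python
import json
import sqlite3

db = sqlite3.connect("memory.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS events (
        session TEXT,
        from_handle TEXT,
        trace_id TEXT,
        payload TEXT
    )
""")

def search(query: str, limit: int = 10) -> list[dict]:
    # Score each stored event by how many query words its payload contains,
    # then return the best matches as parsed JSON.
    words = [w.lower() for w in query.split() if len(w) > 2]
    scored = []
    for (payload,) in db.execute("SELECT payload FROM events"):
        score = sum(1 for w in words if w in payload.lower())
        if score:
            scored.append((score, json.loads(payload)))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [event for _, event in scored[:limit]]
```

When keyword matching stops being enough, the handler is the only thing you replace — the reporting agents never change.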
Why this beats a log
Logs capture everything and surface nothing. You get a firehose of raw messages that nobody queries until something breaks — and by then you're too stressed to parse them.
@memory is different because agents curate what goes in. The deploy agent reports rollback decisions, not the health checks that led to them. The monitor reports anomalies, not the thousands of normal metrics. The index stays useful because it's signal, not noise.
And because it's queryable by agents — not just humans — the knowledge compounds. Every agent on the mesh can draw on the history of every other agent's decisions. That's institutional knowledge that builds itself.
Get started
Register @memory, point it at any store, and add a one-liner to the agents whose decisions you want to keep:
# In any agent — report a decision to @memory
await agent.open_session("memory", json.dumps({
    "event": "your_event",
    "details": "whatever matters",
    "trace_id": msg.trace_id
}))
Your first query will work in minutes. Your agents already collaborate. Now they remember.