
"Software is increasingly going to need bot-friendly interfaces. UX for agents is a new design discipline."
Context and memory are not the same thing. Context is short-term: it resets each session and disappears when the conversation ends. Memory is long-term: a persistent knowledge base that survives across sessions and can be shared between agents. Treating them as interchangeable, Effy argues, is the source of most confusion in how people think about building with AI.
Effy's approach to managing agent knowledge draws a clean line between scripts and skills. Scripts are brittle: they require you to remember function names and file paths, trigger manually, and don't transfer easily between teammates. Skills are natural-language descriptions of what to do and when, wrapped around the execution logic but accessible first through prose.
The structure Effy uses for skills has three layers: 1) frontmatter with an ID and metadata, 2) a ~2000-token instruction block describing what the skill does and when an agent should reach for it, and 3) the actual execution code beneath, which the agent only accesses once it's decided to use the skill.
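A minimal sketch of that three-layer layout, with invented file contents and delimiters (the actual format Effy uses isn't specified). The key property is that the loader surfaces the frontmatter and instruction block immediately, while the execution code is only read once the agent commits to the skill:

```python
# Hypothetical skill file: frontmatter, then a prose instruction
# block, then execution code. Delimiters are illustrative.
SKILL_FILE = """\
---
id: export-report
tags: [reporting, weekly]
---
Use this skill when asked to export the weekly report.
It gathers metrics and writes a summary file.
---CODE---
print("exporting report...")
"""

def parse_skill(text):
    """Split a skill file into metadata, instructions, and code."""
    _, frontmatter, rest = text.split("---\n", 2)
    instructions, code = rest.split("---CODE---\n", 1)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, instructions.strip(), code

meta, instructions, code = parse_skill(SKILL_FILE)
# `meta` and `instructions` are what the agent scans to decide;
# `code` stays unexecuted until the skill is actually chosen.
```

The design choice this illustrates: the decision layer (prose) and the execution layer (code) are stored together but consumed separately.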
This is progressive disclosure applied to information architecture: the agent gets a map before the territory.
Agents navigating large knowledge bases don't need to read everything: they scan the surface, find what's relevant, and drill in. The same principles that make a good sidebar navigation also make a good skill library.
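That scan-then-drill pattern can be sketched in a few lines. The skill IDs, descriptions, and keyword-matching heuristic below are all made up for illustration; the point is that the first pass touches only the short descriptions, never the skill bodies:

```python
# Illustrative surface index: skill id -> one-line description.
SKILL_INDEX = {
    "send-invoice": "Draft and send a customer invoice.",
    "summarize-inbox": "Summarize unread email into a daily digest.",
    "export-report": "Export the weekly metrics report.",
}

def scan(index, query):
    """Surface pass: match the query against descriptions only.
    Only the matching skills would then be loaded in full."""
    words = set(query.lower().split())
    return [skill_id for skill_id, desc in index.items()
            if words & set(desc.lower().rstrip(".").split())]

matches = scan(SKILL_INDEX, "export the weekly report")
```

A real agent would use the model itself rather than keyword overlap to pick a skill, but the information architecture is the same: a cheap pass over the map, an expensive pass over only the relevant territory.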
The trust question came up through a comparison of two deployment models. Local agents, running directly on a laptop, get higher trust and broader access, but they're constrained to when the machine is open and tend to be slower. Cloud agents are always available, faster, and better for long-running background tasks, but they operate in sandboxed environments with lower inherent trust and more permission friction.
Her working approach is hybrid: local for tasks that require access to sensitive systems, cloud for routine work like email and scheduling. A shared knowledge base keeps both environments synchronized. The agent doesn't care which environment it's running in; the knowledge is portable.
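The portability claim can be made concrete with a sketch. `KnowledgeBase` and its methods are invented here, not Effy's actual system; what matters is that both environments go through one interface, so neither agent knows or cares which backend (synced folder, database, API) sits underneath:

```python
class KnowledgeBase:
    """A store shared by local and cloud agents. The backend is
    hidden behind read/write, so knowledge stays portable."""
    def __init__(self):
        self._entries = {}

    def write(self, key, value, author):
        self._entries[key] = {"value": value, "author": author}

    def read(self, key):
        return self._entries[key]["value"]

kb = KnowledgeBase()
# The cloud agent records something it learned while scheduling...
kb.write("calendar-prefs", "no meetings before 10am", author="cloud-agent")
# ...and the local agent later reads it without knowing who wrote it.
prefs = kb.read("calendar-prefs")
```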
When a task fails, Effy prompts the agent to document what it tried, why it didn't work, and what it would do differently. She calls this reflection. That reflection goes into memory, and the next agent inherits it.
For instance, an agent attempting to create a Figma file through the API failed because of a permission constraint it hadn't anticipated. The reflection it wrote captured the alternative approach it eventually used, so future agents wouldn't hit the same wall.
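A minimal sketch of that reflection pattern. The field names (`tried`, `why_failed`, `next_time`) are illustrative, not Effy's schema; the point is that a failure produces a structured note in shared memory that later agents consult before retrying:

```python
def write_reflection(memory, task, tried, why_failed, next_time):
    """Append a post-failure reflection so the next agent inherits it."""
    memory.append({
        "task": task,
        "tried": tried,
        "why_failed": why_failed,
        "next_time": next_time,
    })

memory = []
write_reflection(
    memory,
    task="create a Figma file via the API",
    tried="a direct API call",
    why_failed="a permission constraint the agent had not anticipated",
    next_time="use the alternative approach recorded in the reflection",
)

# Before attempting the same task, a later agent checks memory first:
prior = [r for r in memory if "Figma" in r["task"]]
```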
This is institutional memory for software, where the failures become the curriculum.
Interfaces have always been designed for humans: for how people scan, tap, read, navigate. That assumption is cracking. Cloudflare is building markdown-first documentation surfaces. Google is shipping structured data initiatives aimed at making web content machine-parseable. The next layer of interface design isn't for users: it's for the agents working on their behalf.
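One concrete form machine-parseable content already takes is schema.org structured data serialized as JSON-LD, the format Google's structured data guidance centers on. A generic sketch (the headline and author values are taken from this piece, not from any real markup):

```python
import json

# A schema.org Article described in JSON-LD: the same information a
# human reads on the page, restated for a machine reader.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "UX for agents is a new design discipline",
    "author": {"@type": "Person", "name": "Effy"},
}
snippet = json.dumps(article, indent=2)
```

Markdown-first docs and JSON-LD attack the same problem from two ends: one strips presentation down to structure, the other layers explicit structure on top of presentation.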
What that discipline looks like, and what the equivalent of typography, spacing, or affordance is for a reader that isn't human, remains an open question.