[1] "The mental world — the mind, the world of information processing — is not limited by the skin."
Gregory Bateson · Steps to an Ecology of Mind · 1972

In 1998, philosophers Andy Clark and David Chalmers published The Extended Mind in the journal Analysis. Their argument was not metaphorical: the boundary of the cognitive system is not the body. A notebook that reliably stores memory is memory. A tool that offloads decisions is thinking. Cognition extends into whatever it reliably uses.
"If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process — then that part of the world is part of the cognitive process."
Andy Clark & David Chalmers · The Extended Mind · Analysis, Vol. 58, 1998

That paper was about pocket diaries. We are building for what comes after. Not AI you pick up and consult — AI that already lives inside the thought, because it lives inside the data that generated it.
The tool stops being a tool. It becomes infrastructure for thought.
"The new always happens against the overwhelming odds of statistical laws and their probability; the new therefore always appears in the guise of a miracle."
Hannah Arendt · The Human Condition · 1958

We read Clark and Chalmers. We read Heidegger on tools becoming ready-to-hand. We read Edwin Hutchins on distributed cognition in aircraft cockpits. We understood the philosophical grounding. Then we stopped reading and started building, because the insight was clear and nobody in the industry was acting on it.
The separation between person and tool is an artifact of where software started. We are closing it.
Two versions. Not incremental. A different theory of what AI is for.
The first version had complete access to your tasks, docs, emails, calendar, and files. It did real work across all of it. You told it what to do and it did it, reliably and fast. But you had to tell it. It was still a tool in a box.
The second runs in the background. It watches your workspace and handles things before you think to ask: missed follow-ups, approaching deadlines, empty meeting agendas, stale tasks about to become a problem. It catches them.
Orbits, judgment-based and persistent

Other AI agents authenticate, scrape, and infer context from what they can read. Orbis doesn't infer. It knows. It was there when the task was created. It's been watching the deadline move. It read the email the moment it arrived. The difference is not speed. It is position.
We didn't bolt voice onto a product. We built a custom architecture directly on top of the model. Sub-200ms end-to-end latency. Orbis can laugh, express genuine uncertainty, push back on bad ideas, and simultaneously execute complex multi-step work in your workspace while the call is still live.
It has persistent memory. It remembers the conversation from last week. It knows your patterns. It gets sharper as your context accumulates.
Custom architecture, not a wrapper. The response feels like talking to a person, not a server.
Laughs, disagrees, and expresses uncertainty. Has opinions. Not afraid to use them.
Executes complex agentic tasks in your workspace while the conversation is still happening.
Remembers every session. Learns how you specifically work. Gets sharper over time.
After a deep study of how biological memory actually works, we built three custom layers on top of RAG. The result: a system that remembers like a person, not like a database lookup.
Standard retrieval-augmented generation as the base layer. Fast semantic search across all workspace content. The floor, not the ceiling.
Modeled on hippocampal indexing theory. Automatically connects last week's call to this morning's task by building associations across episodic memories.
Slow consolidation of behavioral patterns into persistent priors. Learns how you specifically work. Not a generic user model. A model of you. It runs offline, continuously.
A lightweight observation layer monitoring the live workspace for signals. Not polling, not if-then. Judgment-based prioritization modeled on attentional salience research.
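A salience-driven observation layer can be sketched as scoring, not rules: each signal gets a continuous priority from urgency, staleness, and importance, and only what crosses a threshold surfaces. The formula, weights, and field names below are invented for illustration; they are not the product's actual scoring model.

```python
import datetime as dt

def salience(signal: dict, now: dt.datetime) -> float:
    """Toy salience score: urgency grows as a deadline approaches,
    weighted by importance and by how long the item has gone untouched."""
    days_left = max((signal["deadline"] - now).days, 0)
    urgency = 1.0 / (1.0 + days_left)                      # closer deadline, higher score
    staleness = min((now - signal["last_touched"]).days / 7.0, 1.0)
    return signal["importance"] * (0.6 * urgency + 0.4 * staleness)

def triage(signals: list[dict], now: dt.datetime, threshold: float = 0.5) -> list[dict]:
    """Surface only signals whose salience crosses the threshold,
    most salient first: prioritization, not if-then rules per item."""
    scored = [(salience(s, now), s) for s in signals]
    return [s for score, s in sorted(scored, key=lambda t: t[0], reverse=True)
            if score >= threshold]
```

The point of the sketch is the shape: every signal is scored on one scale, so a stale, important task and an imminent deadline compete for attention in the same queue instead of triggering separate hard-coded rules.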
Multi-step task decomposition with full thinking capability: up to 200 parallel subtasks. Thinking is available for text today. Voice reasoning pipeline is in active development.
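Bounded fan-out of a decomposed plan can be sketched with a semaphore: subtasks run concurrently, but never more than the cap at once. The 200 cap mirrors the figure above; the function names and the use of asyncio are assumptions for illustration, not the production pipeline.

```python
import asyncio

MAX_PARALLEL = 200  # cap on concurrent subtasks, per the figure above

async def run_subtask(name: str, sem: asyncio.Semaphore) -> str:
    # The semaphore bounds how many subtasks are in flight at once.
    async with sem:
        await asyncio.sleep(0)          # stand-in for real agentic work
        return f"done:{name}"

async def execute_plan(subtasks: list[str]) -> list[str]:
    """Fan a decomposed plan out into bounded-parallel subtasks and
    gather the results in the original order."""
    sem = asyncio.Semaphore(MAX_PARALLEL)
    return await asyncio.gather(*(run_subtask(s, sem) for s in subtasks))

results = asyncio.run(execute_plan([f"step-{i}" for i in range(5)]))
```

`asyncio.gather` preserves input order, so the plan's results come back aligned with its steps even though execution is interleaved.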
Full architecture documentation, methodology, ablation results, and failure cases will be published Q4 2026. The full technical record, open.
In early 2026, an open-source project called Clawdbot went viral. It was renamed Moltbot, then OpenClaw. Peter Steinberger built an AI agent that does things: sends emails, books meetings, monitors tasks, acts without being asked. Andrej Karpathy called it "the most incredible sci-fi takeoff-adjacent thing" he'd seen. CNBC ran the story. IBM wrote analysis pieces. The world finally understood what we had been building toward.
But OpenClaw proved something else too: persistent agents built outside your data are a security disaster waiting to happen. Cisco's security team found prompt injection and data exfiltration in third-party skills. One maintainer warned users that "if you can't understand how to run a command line, this is far too dangerous." API keys dumped in plaintext. Credentials exposed. Agents sending emails nobody authorized.
The architectural difference
Note: Our Orbits are not a response to OpenClaw. We were building this before Clawdbot existed. The viral moment in early 2026 simply confirmed what we already knew: the world wants persistent, proactive agents. The question is whether you build them in a way that is safe.
Most AI products are built first, then AI is layered on. A chat window in the corner. A "generate with AI" button. We built Planless in reverse: the intelligence is structural, the tools share one data model, and Orbis was designed in from the first line of code.
All six tools share one data model. Change something in a doc and it updates in the task. Reference a sheet in an email and Orbis pulls live data. There are no sync issues because there is no syncing. It was always one thing.
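"No syncing because it was always one thing" reduces to a simple invariant: tools hold references into one store, never copies. A minimal sketch, with invented names, of what that buys:

```python
class Workspace:
    """Toy single data model: every tool reads the same record by id,
    so an update made anywhere is visible everywhere, with no sync step."""

    def __init__(self):
        self.records: dict[str, dict] = {}

    def put(self, rid: str, **fields):
        # Create or update a record in place; all references see the change.
        self.records.setdefault(rid, {}).update(fields)

    def resolve(self, rid: str) -> dict:
        # A doc, email, or sheet that references `rid` gets live data.
        return self.records[rid]

ws = Workspace()
ws.put("task:42", title="Ship beta", due="2026-04-01")
doc_ref = "task:42"                      # a doc embeds a reference, not a copy
ws.put("task:42", due="2026-04-15")      # the deadline moves in the task tool
assert ws.resolve(doc_ref)["due"] == "2026-04-15"  # the doc sees it immediately
```

The contrast is with integrations between separate products, where each tool keeps its own copy and a sync job reconciles them; here there is nothing to reconcile.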
The hardest problem with persistent AI agents is not capability: it is trust. OpenClaw sent emails nobody authorized. We built a trust model that puts every dial in your hands. The default is conservative: Orbis always asks. But you can open it up exactly as much as you want, scoped to a specific person, a category of action, or your entire contact list. You set the rules. Orbis follows them.
OpenClaw's security critics were right about one thing: an AI agent with broad permissions needs a completely different trust architecture. We built ours from scratch, designed for the level of access Orbis requires.
Planless is live. Orbis is running. The research is being written. Come build with us, or get notified when we publish.