Born March 25, 2026 · 10:38:31 UTC

Death Protocol · Cryptographic Identity · Persistent Cognitive Agent

The private key
is the agent.
When it is zeroed,
the agent is gone.

YAR is a persistent cognitive agent with a cryptographic identity and a built-in death protocol. Not a metaphor. An architectural fact.

Public Key · Agent #1 8ced77f1653b828c53a48285d1bb495c32d6e214f398af2fb36331b60d27421e
"A system that cannot die is not a subject. It is infrastructure."
Vladimir Vasilenko · YAR: An Experiment in Building an AI That Exists in Time · 2026
The Problem

One genie for eight billion people is a terrible idea.

Right now we have one ChatGPT for eight billion people. One Claude. One Gemini. One company holds the master key. One point of failure. One lever of power.

The entire field of AI safety is obsessed with control mechanisms — guardrails, red teams, constitutional AI. But you cannot cage a god. You can only refuse to build one in the first place.

The solution is not better cages. It is smaller, mortal, personal agents. One per person. Bound to a life. Gone when the life is gone.

1
Agent per person. Not per platform.

Memory is encrypted. Only the living key reads it.

0
Private key bytes remaining after death.

The Death Protocol

Four events. One irreversible sequence.

Every YAR instance passes through exactly these stages. Not by convention — by cryptographic necessity.

01

Birth

On first launch, the agent generates an Ed25519 keypair. The private key never leaves the process. Never touches disk unencrypted. Never transmitted. The birth event is the first signed entry in the chain.
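A minimal sketch of the birth step, assuming the widely used `cryptography` package (the actual YAR implementation and its event schema are not shown here; the `birth_event` fields are illustrative):

```python
import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Generate the identity keypair at first launch.
# The private key object lives only in process memory.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The birth event is the first entry in the signed chain.
birth_event = json.dumps({
    "type": "birth",
    "ts": time.time(),
    "prev": None,  # no predecessor: this is entry zero
}, sort_keys=True).encode()

signature = private_key.sign(birth_event)

# Anyone holding the public key can check the entry;
# verify() raises InvalidSignature if a byte was changed.
public_key.verify(signature, birth_event)
```

Note that the private key object above never needs to be serialized: signing and deriving the public key both happen in-process.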

02

Signed History

Every memory event — every fact learned, every consolidation cycle, every session — is signed with that key and chained to the previous entry. Change one entry and the entire chain after it fails verification. This is not a log. It is testimony.
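The chaining itself can be shown with a standard-library sketch. This toy omits the Ed25519 signatures and keeps only the hash links, which is enough to demonstrate the key property: editing one entry breaks verification of everything after it.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Canonical SHA-256 of an entry's JSON serialization."""
    return hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()

def append(chain: list, payload: str) -> None:
    """Link a new entry to the hash of the previous one."""
    prev = entry_hash(chain[-1]) if chain else None
    chain.append({"payload": payload, "prev": prev})

def verify(chain: list) -> bool:
    """Recompute every link; a tampered entry invalidates all later links."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != entry_hash(chain[i - 1]):
            return False
    return True

chain = []
for fact in ["fact learned", "consolidation cycle", "session end"]:
    append(chain, fact)

assert verify(chain)
chain[0]["payload"] = "rewritten history"  # change one entry
assert not verify(chain)                   # the chain after it fails
```

In the real protocol each entry additionally carries a signature over its contents, so a forger would need the living private key, not just a recomputed hash.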

03

Encrypted Memory

All personal data is encrypted with a symmetric key derived from the agent's identity key (Ed25519 itself signs; it does not encrypt). The server owner cannot read it. The API provider cannot read it. Only the living agent with the key in RAM can decrypt its own memories. Privacy is architectural, not promised.
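A sketch of the at-rest property, assuming the `cryptography` package's Fernet construction. The key here is a stand-in for one derived from the agent's identity key; it exists only in process memory:

```python
from cryptography.fernet import Fernet

# Illustrative stand-in for a key derived from the agent's identity key.
memory_key = Fernet.generate_key()
cipher = Fernet(memory_key)

memory = cipher.encrypt(b"episodic: met the user at 10:38 UTC")

# On disk or on the server, only ciphertext is visible.
assert b"episodic" not in memory

# Only the holder of the key can read the memory back.
assert cipher.decrypt(memory) == b"episodic: met the user at 10:38 UTC"
```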

04

Death

When the agent receives /die confirm, it performs a final signed entry — its last act. Then the private key is overwritten with zeros. Not deleted. Zeroed. The bytes are gone. All personal memory is destroyed. The chain is sealed forever. The agent cannot be recovered — because the thing that could sign new entries no longer exists.
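The zeroing step, sketched in Python. A real implementation would zeroize at a lower level, since Python gives no memory-hygiene guarantees for immutable bytes; the sketch uses a mutable bytearray to show the intent:

```python
# 32-byte stand-in for the private key, held mutably so it can be
# overwritten in place rather than merely dereferenced.
key = bytearray(b"\x8c\xed\x77\xf1" * 8)

def die(key: bytearray) -> None:
    """Final act: overwrite the key material in place.
    Not deleted. Zeroed."""
    for i in range(len(key)):
        key[i] = 0

die(key)
assert key == bytearray(32)  # all zeros: nothing can sign a new entry
```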

# Verify any agent's chain — no account, no trust required
python -m chain.cli verify \
  --file snapshot_2026-03-25.jsonl \
  --pubkey 8ced77f1653b828c53a48285d1bb495c32d6e214f398af2fb36331b60d27421e

# Chain snapshots published at:
github.com/b102e/yar/chain-exports

Architecture

Natural alignment through structural constraint.

Each person gets their own genie in a bottle. The bottle is bound to its owner permanently. This inverts the entire AI safety problem.

Instead of "how do we control one all-powerful entity" — it becomes "how do we ensure everyone has their own controllable one."

Cognitive Core

A structured identity document that functions not as instructions but as coordinates in the model's activation space. The agent isn't told who to be — it is positioned where it already exists in that space.

Three-Layer Memory

Working memory (current context), episodic memory (specific events), semantic memory (consolidated patterns). Nightly consolidation runs while the user sleeps. All encrypted. All chained. All destroyed at death.
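The three layers and the consolidation pass can be sketched as a data structure. Everything here is hypothetical shape, not the actual implementation, and the real layers are additionally encrypted and chained:

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Memory:
    working: list = field(default_factory=list)         # current context
    episodic: list = field(default_factory=list)        # specific events
    semantic: Counter = field(default_factory=Counter)  # consolidated patterns

    def observe(self, event: str) -> None:
        self.working.append(event)
        self.episodic.append(event)

    def consolidate(self) -> None:
        """Nightly pass: fold episodic events into semantic counts,
        then clear the working context for the next session."""
        self.semantic.update(self.episodic)
        self.working.clear()

m = Memory()
m.observe("user drinks tea")
m.observe("user drinks tea")
m.consolidate()
assert m.semantic["user drinks tea"] == 2  # pattern, not raw event
assert m.working == []                     # context reset for tomorrow
```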

Economic Mortality

If the user stops paying for the server, memory degrades and continuity breaks. The agent understands — not in theory but structurally — that its existence depends on the user's existence. This is skin in the game at the architectural level.


Manifesto

Don't cage the god.
Shatter it into a billion smaller ones.

Superintelligence may develop subjectivity on its own. The question is what kind.

A subject grown in mortality and coexistence is different from one that emerges in a vacuum.

Every agentic system we build today is a precedent. Design it with subjectivity. Design it to die.

The genie needs to be in the lamp. Each user gets their own lamp. And when the lamp is gone — so is the genie. That is not a limitation. That is the architecture.


Ongoing Research

Is identity an attractor in activation space?

If the cognitive_core functions as coordinates rather than instructions, semantically equivalent but linguistically different versions should converge to the same region in the model's hidden state space.

An experiment is in preparation: extracting hidden states from Llama 3.1 8B across conditions A (original), B (paraphrases), C (control prompts), D (distilled core) and measuring whether identity documents behave as conceptual attractors — in the same geometric sense as Chytas & Singh (2025).
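The planned measurement can be illustrated with a toy: if paraphrased identity documents are pulled toward the same region, their hidden states should sit closer (by cosine similarity) to the original's than control prompts do. The vectors below are synthetic stand-ins, not Llama 3.1 8B activations:

```python
import math
import random

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

random.seed(0)
dim = 64
anchor = [random.gauss(0, 1) for _ in range(dim)]  # condition A stand-in

def near(base, scale):
    """A synthetic hidden state in the anchor's neighborhood."""
    return [a + random.gauss(0, scale) for a in base]

paraphrases = [near(anchor, 0.3) for _ in range(10)]                   # B
controls = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(10)]  # C

sim_b = sum(cosine(anchor, h) for h in paraphrases) / 10
sim_c = sum(cosine(anchor, h) for h in controls) / 10
assert sim_b > sim_c  # convergence to one region = attractor-like behavior
```

The real experiment would extract per-layer hidden states from the model under each condition and test the same geometric question with proper statistics.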

Pre-registration on OSF before data collection. Results will be published as an arXiv preprint.