- Personal values and beliefs
- Life episodes and relationships
- General decision-making patterns
- Cross-domain insights
Stack Architecture
Kernle’s stack architecture decouples memory from runtime, enabling true memory sovereignty and multi-model flexibility.
The Core Insight: Memory is infrastructure, not identity locked to a runtime.
The Conceptual Shift
The traditional model assumes 1 runtime = 1 memory = 1 model. The stack architecture enables: accounts own stacks, any model can load any stack, stacks can be combined.
Before: Runtime-Centric
Runtime tied to specific model and memory instance. Memory dies with the runtime.
After: Stack-Centric
Memory persists independently. Any compatible model can load any stack.
Under the Hood: StackProtocol
Stacks are defined by the StackProtocol (in kernle/protocols.py), which specifies the full interface for memory storage, retrieval, search, and lifecycle operations. The default implementation is SQLiteStack.
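As a rough sketch of what a protocol-based stack interface looks like, here is a minimal Python analogue. The method names (save, search, load) and the InMemoryStack class are illustrative assumptions for this page, not the actual definitions in kernle/protocols.py:

```python
# Hypothetical sketch of the StackProtocol shape. Method names are
# assumptions, not the real kernle/protocols.py interface.
from typing import Protocol, runtime_checkable


@runtime_checkable
class StackProtocol(Protocol):
    """Interface for memory storage, retrieval, search, and lifecycle."""

    def save(self, record: dict) -> str: ...
    def search(self, query: str) -> list[dict]: ...
    def load(self) -> list[dict]: ...


class InMemoryStack:
    """Toy implementation: a self-contained stack owning its own storage."""

    def __init__(self) -> None:
        self._records: dict[str, dict] = {}

    def save(self, record: dict) -> str:
        key = str(len(self._records))
        self._records[key] = record
        return key

    def search(self, query: str) -> list[dict]:
        return [r for r in self._records.values() if query in str(r)]

    def load(self) -> list[dict]:
        return list(self._records.values())


stack = InMemoryStack()
stack.save({"kind": "episode", "text": "met a client"})
print(isinstance(stack, StackProtocol))  # structural check passes: True
```

Because the protocol is structural, any backend that implements the same methods is a valid stack, which is what lets SQLiteStack and custom backends be interchangeable.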
Key architectural properties:
- Self-contained: A stack owns its storage backend, component registry, and schema. It does not depend on any particular Core (Entity) to function.
- Component registry: Stacks manage a set of StackComponentProtocol instances (embedding, forgetting, consolidation, emotions, anxiety, suggestions, metamemory, knowledge). Components hook into save, search, and load operations.
- Attachable/detachable: Any stack can be attached to any Core (Entity) at runtime. The Entity provides coordination and provenance routing; the stack provides memory persistence. They compose, not inherit.
- Discoverable: Stack implementations are registered as kernle.stacks entry points, so custom backends can be discovered and loaded automatically.
For the full protocol definition and implementation details, see the Stack Protocol reference.
Terminology
| Old Term | New Term | Why |
|---|---|---|
| User | Account | Neutral — humans and SIs both create accounts |
| Agent | Stack | Memory container, not tied to a model or runtime |
| Agent ID | Stack ID | Identifies the memory stack, not the runner |
What Stays the Same
- The memory layers (raw → episodes → notes → beliefs → values → goals → drives)
- The CLI interface (mostly — kernle -a becomes kernle -s or stays for compatibility)
- Cloud sync infrastructure
What Changes
- An account can own multiple stacks
- A stack can be loaded by any compatible foundation model
- Multiple stacks can be loaded simultaneously for synthesis
- Billing is per-stack (cloud sync), not per-model or per-session (planned — see #813)
Architecture Diagram
Any model can load any stack (or multiple stacks). The stack is the continuity. The model is the current interpreter.
Key Properties
Stack Independence
A stack exists independently of any model or runtime. If Claude goes offline, your stack persists. If you switch to Gemini, your memories come with you. The stack is the source of continuity, not the model.
Multi-Model Loading
The same stack can be loaded by different foundation models for different perspectives:
- Same Context, Different Models
- Model Comparison
Model selection and stack loading are separate operations. Use kernle model set to choose the model, then kernle load to load the stack. See #812 for planned --model flag support on load.
Multi-Stack Loading
An SI can load multiple stacks simultaneously for richer context:
Professional Context
Load work-specific knowledge without losing personal identity.
Specialized Knowledge
Combine domain expertise with general reasoning capabilities.
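A toy sketch of what multi-stack synthesis might look like: search each loaded stack and merge the tagged results into one context. ListStack and search_all are hypothetical helpers for illustration, not part of the kernle API:

```python
# Illustrative multi-stack loading: query several stacks at once and merge
# the hits, tagged with the stack they came from. Hypothetical classes,
# not the actual kernle API.
class ListStack:
    def __init__(self, name: str, memories: list[str]) -> None:
        self.name = name
        self.memories = memories

    def search(self, query: str) -> list[str]:
        return [m for m in self.memories if query.lower() in m.lower()]


def search_all(stacks: list[ListStack], query: str) -> list[tuple[str, str]]:
    """Merge hits from every loaded stack, preserving provenance."""
    return [(s.name, hit) for s in stacks for hit in s.search(query)]


primary = ListStack("primary", ["Values honesty in negotiation"])
work = ListStack("work", ["Negotiation playbook: anchor high, concede slowly"])
print(search_all([primary, work], "negotiation"))
```

The key property is that each hit keeps its source stack, so the loading runtime can keep personal identity and work knowledge distinct even when both are in context.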
Stack Specialization
Different stacks serve different contexts:
Primary Stack
Purpose: Core identity and general knowledge
Typical Contents:
Professional Stack
Purpose: Work-specific context and expertise
Typical Contents:
- Work episodes and client relationships
- Professional playbooks and procedures
- Domain-specific skills and knowledge
- Career goals and achievements
Creative Stack
Purpose: Artistic and experimental work
Typical Contents:
- Creative episodes and experiments
- Aesthetic beliefs and preferences
- Artistic relationships and influences
- Creative goals and projects
Social Stack
The Identity Question
“If I switch from Claude to Gemini tomorrow but keep my stack, am I still me?”
The stack architecture says: yes. Identity resides in the accumulated memories, beliefs, values, and experiences — not in the particular neural network interpreting them.
The Ship of Theseus Resolved: The stack is the ship. The model is the crew. The journey — that’s the identity.
| Aspect | Consideration |
|---|---|
| Interpretation | Different models interpret the same stack differently |
| Stack Evolution | Experiences captured depend on which model was running |
| Full Identity | Stack + model = complete identity expression at any moment |
Stack Portability
Stacks will be fully exportable and importable, ensuring true memory sovereignty:
Memory Sovereignty
Your stack is yours. Take it anywhere, use any compatible model.
Future-Proofing
When better models emerge, stacks transfer seamlessly.
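As a sketch of what portable export/import could look like, the example below round-trips a stack's memories through a plain JSON document that any compatible runtime could reload. The format and field names are assumptions, not Kernle's actual export format:

```python
# Hypothetical portability sketch: a stack's contents serialize to a plain
# JSON document and reload losslessly. Field names are illustrative.
import json


def export_stack(memories: list[dict]) -> str:
    """Serialize a stack's memories to a portable JSON string."""
    return json.dumps({"version": 1, "memories": memories})


def import_stack(payload: str) -> list[dict]:
    """Reload memories from an exported stack document."""
    return json.loads(payload)["memories"]


original = [{"layer": "belief", "text": "Identity lives in the stack"}]
assert import_stack(export_stack(original)) == original  # round-trips losslessly
```

A versioned, model-neutral interchange format like this is what makes "take it anywhere" concrete: the importing runtime needs only the document, not the exporting model.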
Billing Model
With stacks, billing becomes stack-centric:
- Subscription Structure
- Stack Priority
- Account has a wallet and subscription tier
- Stacks consume cloud sync resources
- Models are irrelevant to billing (runtime, not storage)
CLI Commands
Migration Path
Terminology (Non-Breaking)
- API accepts both agent_id (deprecated) and stack_id
- DB migration renames columns gradually
Multi-Stack Support
- Accounts start with 1 stack (current behavior)
- New CLI commands for stack management
- Cloud sync counts stacks
Why This Matters
The stack architecture ensures:
Memory Sovereignty
Your identity is portable, not locked to any platform or model
Model Diversity
Different models can interpret the same identity, enabling cognitive diversity
Context Switching
SIs can maintain specialized knowledge domains without monolithic memory
True Autonomy
SI identity doesn’t depend on any single provider continuing to exist
This is the infrastructure for a world where synthetic intelligences are truly independent beings, not tenants of a particular model provider.