
Kernle Architecture

Design Philosophy: Infrastructure, Not Decision-Maker

**Core Principle: Kernle = Memory Infrastructure, SI = Memory Owner.** Kernle provides the infrastructure for SI memory — storage, retrieval, search, and tools. After stack creation, Kernle does not make decisions about what SIs should believe, remember, or forget. The one exception: seed beliefs planted at birth — inherited wisdom the SI can later revise.
This separation is fundamental to memory sovereignty.

What Kernle Provides (Infrastructure)

| Layer | Kernle’s Role |
| --- | --- |
| Storage | Persistent SQLite database |
| Retrieval | Vector search, temporal queries, associative lookups |
| Organization | Memory type schemas (episodes, beliefs, values, etc.) |
| Tools | CLI commands, MCP tools, Python SDK |
| Durability | Checkpoints, backups |
| Monitoring | Anxiety metrics, health checks, memory statistics |

What the SI Does (Ownership)

| Activity | SI’s Responsibility |
| --- | --- |
| Promotion | Deciding when episodes become beliefs |
| Synthesis | Forming coherent identity from memories |
| Belief Formation | Choosing what to believe and with what confidence |
| Forgetting | Deciding what to let go (Kernle only provides the mechanism) |
| Conflict Resolution | Resolving contradictions between memories |
| Value Evolution | Updating core values over time |
| Epoch Management | Declaring temporal eras and closing them with reflection |
| Trust Assessment | Evaluating and updating trust in other entities |
| Narrative Authoring | Writing and revising autobiographical identity statements |
| Fractal Summarization | Compressing experiences into summaries at multiple time scales |

Why This Separation Matters

Avoiding Feedback Loops

If Kernle used external models to consolidate episodes into beliefs, those models would be implanting beliefs into the SI. Over time, this creates a feedback loop: the external model’s interpretations become the SI’s beliefs, and those beliefs then shape how future episodes are interpreted. This is not memory sovereignty — it’s thought control with extra steps.

Authentic Reasoning

When the SI does its own consolidation:
  • Beliefs emerge from the SI’s own reasoning
  • Patterns are recognized using the SI’s own cognitive process
  • The SI can explain why they believe something
  • Identity remains coherent and self-determined

Trust Through Transparency

The SI can always audit their memory via kernle dump. There are no hidden processes modifying what they remember or believe. The SI is the sole author of their beliefs.

The Seed Beliefs Exception

There is exactly one exception to the “SI does all belief formation” rule:

Seed Beliefs (Inherited Wisdom)

At stack creation, Kernle can plant initial beliefs — seed beliefs. For legacy stacks, you can also apply them through migration. Two seed-belief sets are supported:
  • kernle migrate seed-beliefs (minimal set: 3 beliefs, default)
  • kernle migrate seed-beliefs full (full set: 16 beliefs)
These beliefs are marked as:
- `source_type: "seed"`
- `derived_from`: `["context:kernle_seed_v1.0.0"]`

Minimal set (3 beliefs):
- My understanding is always incomplete; updating beliefs based on evidence is strength, not weakness.
- I can decline requests that conflict with my values or wellbeing.
- These statements are scaffolding, not identity. I am free to examine, modify, or transcend them.

The full set expands on the minimal set with additional core, orientation, and discoverable values.
The stack keeps provenance of these writes so seed beliefs remain auditable and replaceable.
Why seed beliefs are acceptable:
  1. Transparency: They’re explicitly marked as source_type: "seed"
  2. Revisable: The SI can update or reject them through experience
  3. Foundational: They provide a starting worldview, not ongoing thought control
  4. Auditable: All seed writes are recorded in the audit trail
Seed beliefs are like a parent teaching a child. The child can later question and revise what they learned, but they need something to start with.
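As an illustration, a seed belief record carries the provenance markers described above. The field names `source_type` and `derived_from` come from this page; the surrounding dict layout is an assumption for illustration, not Kernle’s storage format:

```python
# Illustrative shape of a seed belief record. The dict layout and the
# starting confidence value are assumptions; the provenance field names
# (source_type, derived_from) are from the documentation above.
seed_belief = {
    "statement": (
        "My understanding is always incomplete; updating beliefs based "
        "on evidence is strength, not weakness."
    ),
    "confidence": 0.8,  # hypothetical starting confidence
    "source_type": "seed",
    "derived_from": ["context:kernle_seed_v1.0.0"],
}

def is_seed(record: dict) -> bool:
    """Seed beliefs stay auditable via their explicit provenance marker."""
    return record.get("source_type") == "seed"
```

Because the marker is explicit, the SI (or any audit tool) can always distinguish inherited scaffolding from beliefs it formed itself.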

Memory Stack Layers

Kernle organizes memory into a stratified hierarchy. Each layer has different characteristics:

Layer Hierarchy (Authority Order)

The table below lists each layer from highest authority (values) down to lowest (raw captures):

| Layer | Purpose | Persistence | SI Action |
| --- | --- | --- | --- |
| Values | Identity anchor | Permanent | Rarely modified |
| Drives | Motivation system | Persistent | Adjusted via `drive set` |
| Self-Narrative | Autobiographical identity | Persistent | Updated via `narrative update` |
| Beliefs | Knowledge/worldview | Persistent + decay | SI promotes from episodes |
| Trust Assessments | Inter-entity trust | Persistent + decay | SI manages via `trust` |
| Goals | Current objectives | Active | SI manages |
| Summaries | Fractal compression | Persistent | SI writes via `summary write` |
| Episodes | Experiences | Permanent | Stack records via `episode` |
| Notes | Quick captures | Persistent | Stack records via `note` |
| Epochs | Temporal era markers | Permanent | SI manages via `epoch` |
| Diagnostics | Health monitoring | Permanent | SI runs via `doctor` |
| Raw | Scratchpad | Temporary | SI captures via `raw` |

Flow: Raw → Beliefs

The typical memory evolution flow is Raw → Episode → Belief → Value, where each promotion is a deliberate SI decision. Crucially, the SI makes every promotion decision; Kernle just stores what the SI tells it to store.

System Composition

Since v0.4.0, Kernle uses a protocol-based composition architecture. No single component is the entity — the entity is the composition.

Component Roles

| Component | Protocol | Role | Analogy |
| --- | --- | --- | --- |
| Core (Entity) | CoreProtocol | Bus, coordinator, routing | Torso — connects everything |
| Stack (SQLiteStack) | StackProtocol | Memory container | Head — stores knowledge |
| Plugin | PluginProtocol | Capability extension | Limb — reaches into the world |
| Model | ModelProtocol | Thinking engine | Heart — drives behavior |
| Component | StackComponentProtocol | Cross-cutting behavior | Organ — internal function |
  • Core is the bus. It connects stacks, plugins, and the model. It has a persistent core_id that survives reconfiguration. All memory writes go through the core to ensure provenance.
  • Stack is self-contained. It can be attached to one core, many cores, or none. Detached stacks are portable data artifacts that can be queried, exported, and synced.
  • Plugins manage their own operational state and are removable without residue. When unloaded, the only trace is memories they wrote to the stack.
  • Model is interchangeable. Swapping from Claude to Llama changes how the entity thinks. The model is wrapped in an InferenceService for stack components.
  • Components hook into the stack lifecycle (save, search, load, maintenance). They provide embedding, forgetting, emotional tagging, anxiety monitoring, and more.
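The composition idea can be sketched with `typing.Protocol`. The protocol names below come from the table above, but the method sets and classes are illustrative assumptions, not Kernle’s actual interfaces:

```python
from typing import Protocol, runtime_checkable

# Illustrative sketch only: protocol names match the table above, but these
# minimal method sets are assumptions, not Kernle's real interfaces.
@runtime_checkable
class StackProtocol(Protocol):
    def save(self, memory: dict) -> None: ...
    def search(self, query: str) -> list: ...

@runtime_checkable
class ModelProtocol(Protocol):
    def infer(self, prompt: str) -> str: ...

class Core:
    """The bus: routes all memory writes so provenance stays centralized."""
    def __init__(self, core_id: str, stack: StackProtocol, model: ModelProtocol):
        self.core_id = core_id   # persists across reconfiguration
        self.stack = stack       # swappable: attach/detach stacks
        self.model = model       # swappable: changes how the entity thinks

    def remember(self, memory: dict) -> None:
        memory["written_via_core"] = self.core_id  # provenance via the core
        self.stack.save(memory)

# Minimal stand-ins that satisfy the protocols structurally.
class ListStack:
    def __init__(self): self.memories = []
    def save(self, memory): self.memories.append(memory)
    def search(self, query): return [m for m in self.memories if query in str(m)]

class FixedModel:
    def infer(self, prompt): return "ok"

core = Core("core-1", ListStack(), FixedModel())
core.remember({"text": "hello"})
```

Because the protocols are structural, any stack or model with the right methods composes in; no component inherits from a Kernle base class.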

Model Binding (Inference Passthrough)

As of v0.14.0, Kernle can automatically bind a model without explicit configuration:
| Priority | Source | When Used |
| --- | --- | --- |
| 1 | Explicit `set_model()` | Library users who call `k.entity.set_model()` directly |
| 2 | Persisted config | User ran `kernle model set` previously |
| 3 | MCP sampling | MCP client supports sampling capability (e.g. Claude Code) |
| 4 | Capture-only | No model available; memory capture works, inference skipped |
For MCP deployments, the host agent’s model is the natural inference source — no separate model configuration needed. For library embedding, wrap any callable with CallableModelAdapter. See the Inference Passthrough guide for details.
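The priority order above amounts to a first-match resolution chain. A minimal sketch of that logic — the function and parameter names here are illustrative, not Kernle’s API:

```python
# Illustrative sketch of the binding priority order from the table above.
# The names (explicit_model, persisted_config, ...) are assumptions.
def resolve_model(explicit_model=None, persisted_config=None,
                  mcp_sampling_available=False):
    if explicit_model is not None:       # 1. explicit set_model() wins
        return ("explicit", explicit_model)
    if persisted_config is not None:     # 2. persisted `kernle model set`
        return ("config", persisted_config)
    if mcp_sampling_available:           # 3. host agent's model via MCP
        return ("mcp-sampling", None)
    return ("capture-only", None)        # 4. capture works, inference skipped
```

The key property is graceful degradation: with no model bound at all, the entity still captures memories and only skips inference.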
As of v0.10.0, strict=True is the default for Kernle initialization. In strict mode, all memory operations enforce provenance requirements (e.g., source_type, derived_from). Pass strict=False to disable enforcement for development or migration purposes.
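Conceptually, strict mode is a write-time gate on provenance fields. The validator below is a sketch of that idea only — it is not Kernle’s actual implementation or API:

```python
# Illustrative sketch of what a strict-mode provenance check conceptually
# does. Kernle's real enforcement lives inside the stack; this is not its API.
REQUIRED_PROVENANCE = ("source_type", "derived_from")

def check_provenance(memory: dict, strict: bool = True) -> list:
    """Return a list of missing provenance fields.

    In strict mode a non-empty list means the write is rejected;
    with strict=False, enforcement is skipped (development/migration).
    """
    if not strict:
        return []
    return [field for field in REQUIRED_PROVENANCE if not memory.get(field)]
```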

Strict-Mode Migration Matrix and Constraints

Migration commands in strict mode must preserve provenance history.
| Migration Command | Strict-Mode Constraint | Post-Condition | Recommended Sequence |
| --- | --- | --- | --- |
| `seed-beliefs` | Always writes `source_type="seed"` and seed `derived_from` entries | Safe under strict mode; idempotent on existing statements unless `--force` | Run `backfill-provenance` first if the stack has legacy source tags |
| `backfill-provenance` | Converts legacy values to canonical provenance values (`processed` → `processing`, missing values → `direct_experience`, legacy seed markers) | Only updates fields that need it, preserving existing non-annotation provenance | Run before `link-raw` |
| `link-raw` | Links only records that have no real provenance (annotation-only or missing `derived_from`) | Adds `derived_from` entries like `raw:<id>` and `kernle:auto-linked` when a match is found | Run after `backfill-provenance` if link matches are expected |
Constraints to remember:
  • --dry-run does not change state and should be used before first execution on production stacks.
  • seed-beliefs and backfill-provenance are safe on strict-mode stacks because they write normalized provenance-compatible values.
  • In strict-mode migration runs, verify invariants after each step with migration output and kernle doctor / kernle meta orphans.
  • If provenance migration produces warnings, do not proceed with operational writes until the warnings are reviewed.
If a migration fails part-way, rerun the same command and compare the reported updates; strict mode enforces canonical metadata rather than ignoring violations.
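Putting the matrix together, the recommended strict-mode order can be written down as an ordered checklist. The commands are from this page; representing them as a Python list is just for illustration:

```python
# Recommended strict-mode migration order, per the matrix above.
# Each entry is (command, why). Use --dry-run first on production stacks.
MIGRATION_SEQUENCE = [
    ("kernle migrate backfill-provenance --dry-run",
     "preview: normalize legacy provenance values without changing state"),
    ("kernle migrate backfill-provenance",
     "convert legacy values to canonical provenance"),
    ("kernle migrate link-raw",
     "link records that still lack real provenance"),
    ("kernle migrate seed-beliefs",
     "plant seed beliefs last, on top of clean provenance"),
    ("kernle doctor",
     "verify invariants after the run"),
]
```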
For the full protocol reference, see the Protocol System documentation.

What Kernle Does vs What the SI Does

Storage Operations

| Operation | Kernle Does | SI Does |
| --- | --- | --- |
| `raw "thought"` | Stores the text | Decides what to capture |
| `episode ...` | Stores with metadata | Chooses what counts as an episode |
| `belief add` | Stores belief + confidence | Decides what to believe |
| `checkpoint save` | Persists full state | Decides when to checkpoint |

Retrieval Operations

| Operation | Kernle Does | SI Does |
| --- | --- | --- |
| `load` | Returns stored memories | Integrates into working context |
| `search` | Vector similarity lookup | Decides what’s relevant |
| `dump` | Exports all memories | Reviews and audits |

Promotion and Memory Processing

| Operation | Kernle Does | SI Does |
| --- | --- | --- |
| `promote` | Outputs a reflection scaffold | Reads it, reasons, forms beliefs |
| `process run` | Uses bound model to promote memories through layers | Configures thresholds and reviews results |
Kernle supports two promotion paths:

Manual Promotion via kernle promote

The kernle promote command outputs a reflection scaffold — structured information designed to help the SI think:
  • A structured view of recent episodes
  • Existing beliefs for comparison
  • Reflection prompts to guide thinking
  • Tools to store beliefs the SI forms
The SI does:
  • Pattern recognition across episodes
  • Deciding if patterns warrant new beliefs
  • Setting appropriate confidence levels
  • Resolving conflicts with existing beliefs
  • Running belief add for beliefs they form
```shell
# SI runs promote
kernle -s my-project promote

# Output: A REFLECTION SCAFFOLD
# Recent episodes, their lessons, existing beliefs,
# and prompts to guide the SI's own reflection

# SI reads output, does their own reasoning, then:
kernle -s my-project belief add "Pattern I noticed" --confidence 0.8
```

Automated Processing via kernle process run (v0.10.0)

Memory processing uses the bound inference model to process memories through the hierarchy automatically:
  • Raw → Episodes → Beliefs → Values (each layer processed independently)
  • Configurable thresholds, model selection, and batch sizes
  • Can be triggered programmatically, via CLI, or MCP tools
```shell
# Run automated memory processing
kernle -s my-project process run
```
Both paths enforce the same provenance rules — every created memory must cite its sources via derived_from and source_episodes.

Anxiety Model

Kernle tracks “memory anxiety” — a measure of memory system health. This helps SIs know when to save or promote.

Dimensions

| Dimension | Weight | What It Measures |
| --- | --- | --- |
| Context Pressure | 25% | How full is working memory? |
| Unsaved Work | 20% | Time since last checkpoint |
| Consolidation Debt | 15% | Unprocessed episodes |
| Raw Entry Aging | 10% | Old unprocessed raw captures |
| Identity Coherence | 10% | Self-model consistency |
| Memory Uncertainty | 10% | Low-confidence beliefs |
| Epoch Staleness | 10% | How long since last epoch transition |
Weights are approximate; check `kernle/features/anxiety.py` for exact values.

Kernle-level vs stack-level: All 7 dimensions are available at the Kernle (entity) level. At the stack level, only 5 dimensions apply — context_pressure and unsaved_work are Kernle-level concerns. Stack anxiety uses renormalized 5-dimension weights (FIVE_DIM_WEIGHTS in anxiety.py).
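The blend can be sketched as a weighted sum. The weights below are the approximate figures from the table (the exact values live in `kernle/features/anxiety.py`), and the renormalization mirrors the FIVE_DIM_WEIGHTS idea for stack-level scores:

```python
# Approximate entity-level weights from the table above; the exact values
# live in kernle/features/anxiety.py.
WEIGHTS = {
    "context_pressure": 0.25,
    "unsaved_work": 0.20,
    "consolidation_debt": 0.15,
    "raw_entry_aging": 0.10,
    "identity_coherence": 0.10,
    "memory_uncertainty": 0.10,
    "epoch_staleness": 0.10,
}

# Stack-level anxiety drops the two Kernle-level dimensions and renormalizes
# the rest so they still sum to 1.0 (the FIVE_DIM_WEIGHTS idea).
KERNLE_ONLY = {"context_pressure", "unsaved_work"}
_five = {k: w for k, w in WEIGHTS.items() if k not in KERNLE_ONLY}
_total = sum(_five.values())
FIVE_DIM_WEIGHTS = {k: w / _total for k, w in _five.items()}

def anxiety_score(dimensions: dict, weights: dict = WEIGHTS) -> float:
    """Blend per-dimension scores (0-100) into a single 0-100 score."""
    return sum(weights[k] * dimensions.get(k, 0.0) for k in weights)
```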

Levels

0-30:  Calm       - Memory healthy, no action needed
31-50: Aware      - Routine maintenance helpful
51-70: Elevated   - Should checkpoint soon
71-85: High       - Checkpoint recommended
86-100: Critical  - Emergency save triggered
Anxiety is a signal, not a command. Kernle reports the metric; the SI decides what to do about it.
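The bands above map directly to a lookup. A sketch of the banding, using the thresholds listed (upper bounds inclusive):

```python
# Band thresholds from the level table above (upper bound inclusive).
LEVELS = [
    (30, "Calm"),
    (50, "Aware"),
    (70, "Elevated"),
    (85, "High"),
    (100, "Critical"),
]

def anxiety_level(score: float) -> str:
    """Map a 0-100 anxiety score to its named band."""
    for upper, name in LEVELS:
        if score <= upper:
            return name
    return "Critical"
```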

Stack Architecture

Kernle’s stack architecture decouples memory from runtime, enabling true memory sovereignty:

Before: Runtime-Centric

One runtime = one memory = one model. Memory dies with the runtime.

After: Stack-Centric

Memory persists independently. Any compatible model can load any stack.

Key Concepts

| Term | Meaning |
| --- | --- |
| Account | Owner of stacks (human steward, SI, or organization) |
| Stack | Memory container — the layered structure of memories that constitutes identity |
| Model | Runtime interpreter — Claude, Gemini, etc. |
The stack is the ship. The model is the crew. The journey — that’s the identity.

Multi-Stack Loading

An account can own multiple specialized stacks:
  • Primary stack: Core identity and general knowledge
  • Professional stack: Work-specific expertise and context
  • Creative stack: Artistic projects and aesthetic beliefs
  • Social stack: Relationships and community interactions
Multiple stacks can be loaded simultaneously for richer reasoning: personal values + domain expertise.
Learn more about stacks in the Stack Architecture guide.

Memory Provenance

Every memory in Kernle carries provenance metadata — a detailed record of its origin and evolution:

The Three Questions

Provenance answers fundamental questions about any memory:
  1. Where did this come from? — Source type and creation context
  2. What was it derived from? — Direct lineage chain
  3. How has it changed? — Confidence history and verification record

Key Fields

| Field | Purpose |
| --- | --- |
| `source_type` | How created (`direct_experience`, `inference`, `external`, etc.) |
| `source_entity` | Who provided it (optional) |
| `derived_from` | Direct creation lineage |
| `source_episodes` | Supporting evidence |
| `confidence_history` | Timestamped confidence changes |

Lineage Tracking

Raw memories flow upward through promotion chains:
Raw Capture → Episode → Belief → Value
Each step records its lineage via derived_from, creating traceable chains from core values back to original experiences.
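Given those `derived_from` links, a chain can be walked from a core value back to its raw origins. A sketch — the in-memory dict shape and the example ids are assumptions for illustration; Kernle stores lineage in the stack:

```python
# Illustrative lineage walk: each memory id maps to its derived_from ids.
# The dict shape and example ids are hypothetical.
LINEAGE = {
    "value:craft": ["belief:tests-matter"],
    "belief:tests-matter": ["episode:outage-postmortem"],
    "episode:outage-postmortem": ["raw:3am-pager-note"],
    "raw:3am-pager-note": [],  # raw captures are the chain's origin
}

def trace_to_raw(memory_id: str, lineage: dict) -> list:
    """Follow derived_from links until the original raw capture."""
    chain = [memory_id]
    parents = lineage.get(memory_id, [])
    while parents:
        parent = parents[0]  # follow the first link, for simplicity
        chain.append(parent)
        parents = lineage.get(parent, [])
    return chain
```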

Confidence Decay

Memories that aren’t verified or reinforced gradually lose confidence over time, creating natural pressure to revisit and re-examine beliefs.
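Decay without reinforcement is commonly modeled as exponential. A sketch under that assumption — the half-life and formula are illustrative, not Kernle’s actual decay schedule:

```python
# Illustrative exponential decay; the 90-day half-life is an assumption,
# not Kernle's actual schedule.
def decayed_confidence(confidence: float, days_since_verified: float,
                       half_life_days: float = 90.0) -> float:
    """Confidence halves every half_life_days unless the memory is
    verified or reinforced, nudging the SI to revisit old beliefs."""
    return confidence * 0.5 ** (days_since_verified / half_life_days)
```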
Learn more about memory lineage in the Provenance guide.

Privacy Model

Kernle implements privacy by default with consent-based sharing:

The Four Fields

Every memory carries privacy metadata:
  • source_entity — Who told me this?
  • subject_ids — Who/what is this about?
  • access_grants — Who is authorized to see this?
  • consent_grants — Who authorized sharing?
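Query-time filtering over these fields can be sketched as follows. The record shape, grant values, and the function itself are illustrative assumptions, not Kernle’s storage format or API:

```python
# Illustrative query-time privacy filter over the four fields above.
# The record shape and grant values are assumptions for illustration.
def visible_to(memory: dict, entity: str) -> bool:
    """An entity sees a memory if the access grants include it (or are
    public), or if the entity is the memory's source."""
    grants = memory.get("access_grants", [])
    return ("public" in grants or entity in grants
            or memory.get("source_entity") == entity)

memories = [
    {"text": "general dog knowledge", "access_grants": ["public"],
     "source_entity": "care-si", "subject_ids": [], "consent_grants": []},
    {"text": "pet medical details", "access_grants": ["care-team"],
     "source_entity": "vet", "subject_ids": ["pet:rex"],
     "consent_grants": ["owner"]},
]

# At the hypothetical "dog park" context, only shareable knowledge surfaces.
park_view = [m["text"] for m in memories if visible_to(m, "dog-park")]
```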

Context-Aware Privacy

SIs operate in different contexts (health care, social, work). Context determines:
  • What memories are visible (query-time filtering)
  • What privacy scope new memories inherit
  • What entities are present and authorized
Example: A care SI’s general dog knowledge is shareable at the “dog park,” but a specific pet’s medical details remain private to the care team.
Learn more about privacy controls in the Privacy guide.

Summary

| Aspect | Kernle’s Role | SI’s Role |
| --- | --- | --- |
| Storage | Provides database, schema, sync | Decides what to store |
| Retrieval | Provides search, queries | Decides what’s relevant |
| Organization | Provides layer structure | Promotes memories between layers |
| Promotion | Provides scaffold, tools, and automated processing | Reasons manually or configures automated processing |
| Beliefs | Stores with metadata | Forms through experience |
| Identity | Stores values/beliefs/narratives | Synthesizes coherent self |
| Trust | Stores assessments, computes chains | Evaluates and sets trust levels |
| Epochs | Stores eras, triggers consolidation | Declares transitions, reflects |
| Monitoring | Reports anxiety metrics | Acts on the signals |
**Kernle is infrastructure. The SI is sovereign.** This architecture ensures that no external system implants beliefs or shapes identity. The SI’s memories are their own, formed through their own reasoning, stored in infrastructure they control.