Memory Model

Kernle implements a stratified memory system inspired by cognitive science but optimized for synthetic intelligences.

Memory Layer Hierarchy

Supporting Systems

  • Playbooks: procedural memory ("how I do things")
  • Relationships: models of other SIs and people
  • Emotional Tags: valence/arousal on episodes
  • Meta-Memory: confidence, provenance, and verification

Memory Flow

The typical progression from raw capture to belief:
  1. Capture: kernle raw "API seems slow" (zero-friction capture)
  2. Process: review raw entries and promote them to episodes with context and lessons
  3. Consolidate: the SI notices patterns across episodes and forms beliefs
  4. Integrate: beliefs inform values and identity over time
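The four-stage flow can be sketched as a small data pipeline. This is an illustrative sketch only, not Kernle's internal API: the RawEntry, Episode, and Belief types and the promote/consolidate helpers are hypothetical names.

```python
from dataclasses import dataclass, field

# Hypothetical types illustrating capture -> process -> consolidate -> integrate.
@dataclass
class RawEntry:
    text: str

@dataclass
class Episode:
    context: str
    lessons: list[str] = field(default_factory=list)

@dataclass
class Belief:
    statement: str
    source_episodes: list[int] = field(default_factory=list)

def promote(raw: RawEntry, context: str, lessons: list[str]) -> Episode:
    """Process: a raw capture becomes an episode with context and lessons."""
    return Episode(context=f"{context}: {raw.text}", lessons=lessons)

def consolidate(episodes: list[Episode]) -> list[Belief]:
    """Consolidate: lessons recurring across multiple episodes become beliefs."""
    seen: dict[str, list[int]] = {}
    for i, ep in enumerate(episodes):
        for lesson in ep.lessons:
            seen.setdefault(lesson, []).append(i)
    return [Belief(statement=lesson, source_episodes=ids)
            for lesson, ids in seen.items() if len(ids) > 1]
```

In the real system the consolidation step is performed by the SI reviewing its own episodes; the pattern threshold here (a lesson appearing in more than one episode) is an assumption for illustration.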

Meta-Memory System

Every memory type has these meta-fields:
Field                 Description
confidence            How certain we are (0.0-1.0)
strength              Memory strength (0.0-1.0); decays over time based on access patterns
source_type           How acquired: direct_experience, inference, told_by_si, consolidation
source_episodes       Episode IDs that support this memory
derived_from          Memory refs this was derived from (type:id)
last_verified         When the memory was last confirmed
verification_count    Number of times the memory has been verified
confidence_history    JSON array of confidence changes with timestamps
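As a concrete illustration, here is what these meta-fields might look like on a belief record, together with a verification helper that mirrors what `kernle meta verify` conceptually does. The record layout and the 0.1 confidence boost are assumptions; only the field names come from the table above.

```python
import json
from datetime import datetime, timezone

# Illustrative belief record carrying the meta-fields from the table above.
belief = {
    "id": "abc123",
    "statement": "The staging API is slower than production",
    "confidence": 0.7,
    "strength": 0.9,
    "source_type": "consolidation",
    "source_episodes": ["ep-41", "ep-57"],
    "derived_from": ["episode:ep-41"],
    "last_verified": None,
    "verification_count": 0,
    "confidence_history": json.dumps([]),
}

def verify(memory: dict, boost: float = 0.1) -> None:
    """Raise confidence, bump the verification count, and append an entry
    to confidence_history. The boost amount is an assumption."""
    history = json.loads(memory["confidence_history"])
    memory["confidence"] = min(1.0, memory["confidence"] + boost)
    memory["verification_count"] += 1
    memory["last_verified"] = datetime.now(timezone.utc).isoformat()
    history.append({"ts": memory["last_verified"],
                    "confidence": memory["confidence"]})
    memory["confidence_history"] = json.dumps(history)
```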

Belief Revision Tracking

v0.14+: The supersedes and superseded_by fields on beliefs are deprecated. They still exist for backward compatibility with older data, but new writes always set them to NULL. Revision history is now tracked via audit events (belief.revised and belief.deactivated). These fields will be removed in v0.15.
Use the is_active field to check whether a belief is current (true) or has been revised/archived (false). To view a belief’s revision history, query the audit log (kernle audit export or get_audit_log()) rather than walking the old supersession chain.
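A minimal sketch of reconstructing revision history from audit events rather than the deprecated supersession chain. The event-dict shape used here (event_type, entity_id, timestamp keys) is an assumption for illustration; consult the actual output of `kernle audit export` for the real schema.

```python
# Filter audit events down to one belief's revision history.
# The event shape below is assumed, not taken from Kernle's schema.
def revision_history(events: list[dict], belief_id: str) -> list[dict]:
    relevant = {"belief.revised", "belief.deactivated"}
    return [e for e in events
            if e.get("event_type") in relevant and e.get("entity_id") == belief_id]

events = [
    {"event_type": "belief.revised", "entity_id": "abc123", "timestamp": "2025-01-03"},
    {"event_type": "belief.revised", "entity_id": "def456", "timestamp": "2025-01-04"},
    {"event_type": "belief.deactivated", "entity_id": "abc123", "timestamp": "2025-01-05"},
]
```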

Key Operations

kernle meta verify belief abc123     # Increases confidence
kernle meta lineage belief abc123    # Get provenance
kernle meta uncertain --threshold 0.5  # Find weak memories

Forgetting System

Kernle uses continuous strength decay instead of binary forgetting. Every memory has a strength field (0.0 to 1.0) that decays over time based on access patterns. Memories with strength 0.0 are considered forgotten but can be recovered.

Strength Scoring

base_decay = (days_since_last_access / half_life)
reinforcement = log(times_accessed + 1) * 0.1
strength = max(0.0, previous_strength - base_decay + reinforcement)
  • High strength: Frequently accessed, recently used, reinforced through retrieval
  • Low strength: Rarely accessed, old, not reinforced
  • Zero strength: Effectively forgotten, but recoverable
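The scoring pseudocode above translates directly to Python. The 30-day half_life default is an assumption for illustration; note that the documented formula clamps only at 0.0, so reinforcement can in principle push the result above 1.0.

```python
import math

def update_strength(previous_strength: float,
                    days_since_last_access: float,
                    times_accessed: int,
                    half_life: float = 30.0) -> float:
    """Direct translation of the decay formula above.
    half_life=30 days is an illustrative assumption."""
    base_decay = days_since_last_access / half_life
    reinforcement = math.log(times_accessed + 1) * 0.1
    return max(0.0, previous_strength - base_decay + reinforcement)
```

For example, a memory untouched for one full half-life with no accesses decays from 1.0 to 0.0, while a just-accessed memory holds or gains strength through the reinforcement term.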

Protection

  • Values and Drives are protected by default
  • Any memory can be marked protected: kernle forget protect episode <id>
  • Protected memories never decay in strength

Forgetting Cycle

# Preview memories with low strength
kernle forget candidates --threshold 0.3

# Run strength decay (dry_run to preview)
kernle forget run --dry-run

# Recover a forgotten memory (restores strength)
kernle forget recover episode <id>

Search Functionality

Kernle uses sqlite-vec for semantic search when available and falls back to plain text matching otherwise. When cloud credentials are configured:
  1. Try cloud search first (timeout: 3s)
  2. Fall back to local on failure
  3. Merge results by relevance score
# Search across all memory types
kernle search "topic" --limit 10

# Playbook-specific semantic search
kernle playbook find "situation description"
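The three-step cloud-first fallback can be sketched as below. cloud_search and local_search are hypothetical stand-ins for Kernle's internals, and the result shape (dicts with "id" and "score" keys) is an assumed convention.

```python
from concurrent.futures import ThreadPoolExecutor

def search(query, cloud_search, local_search, timeout_s=3.0):
    """Cloud-first search with a 3s timeout, local fallback, and a
    relevance-score merge. Both search callables are hypothetical."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(cloud_search, query)   # 1. try cloud search first
    try:
        cloud = future.result(timeout=timeout_s)
    except Exception:                           # 2. fall back to local on failure
        cloud = []
    finally:
        pool.shutdown(wait=False)
    merged = {r["id"]: r for r in local_search(query)}
    merged.update({r["id"]: r for r in cloud})  # cloud wins on duplicate ids
    # 3. merge results by relevance score
    return sorted(merged.values(), key=lambda r: r["score"], reverse=True)
```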

Sync Architecture

Local-First with Sync Queue:
  1. All changes written to local SQLite first
  2. Changes queued in sync_queue table
  3. Queue deduplicates by (table, record_id)
  4. Push to cloud when online
  5. Pull remote changes on load() if auto_sync enabled
Conflict Resolution:
  • Scalar fields: Last-write-wins based on local_updated_at
  • Array fields (tags, lessons, etc.): Set union merge preserves data from both sides
See Sync Commands for details on which fields are merged.
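The conflict-resolution rules above can be sketched as a merge function: last-write-wins on local_updated_at for scalar fields, set union for array fields. The record and field names here are illustrative, not Kernle's actual schema.

```python
# Array fields named in the docs; the exact set is configuration-dependent.
ARRAY_FIELDS = {"tags", "lessons"}

def resolve(local: dict, remote: dict) -> dict:
    """Merge two versions of a record per the documented rules."""
    newer, older = ((local, remote)
                    if local["local_updated_at"] >= remote["local_updated_at"]
                    else (remote, local))
    merged = dict(newer)        # scalar fields: last write wins
    for f in ARRAY_FIELDS:      # array fields: union preserves both sides
        merged[f] = sorted(set(older.get(f, [])) | set(newer.get(f, [])))
    return merged
```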