This roadmap outlines planned features that extend Kernle’s cognitive infrastructure. Items are drawn from the Kernle Enhancement Proposals and organized by implementation phase.

What’s Been Built

The core cognitive infrastructure is in place, built across thirteen versioned milestones:

Protocol System

5 protocols defining composition architecture for cores, stacks, plugins, models, and components

Stack Components

8 default components with hook dispatch (on_save, on_search, on_load) for embedding, forgetting, consolidation, emotions, and more

Plugin Ecosystem

chainbased (commerce: wallet, jobs, skills, escrow) and fatline (communications: agent registry, Ed25519 crypto)

Version History

v0.03.00 — Protocol & Types Foundation

Established the protocol definitions, shared type system, and plugin discovery mechanism that all subsequent architecture builds on.
  • Protocol definitions (CoreProtocol, StackProtocol, PluginProtocol, ModelProtocol, StackComponentProtocol)
  • Shared types (types.py — all memory dataclasses)
  • Discovery system (discovery.py — entry point discovery via importlib.metadata)

v0.04.00 — Core/Stack Split

Separated the coordinator (Entity) from the memory container (SQLiteStack), enabling independent evolution and composability.
  • Entity — CoreProtocol implementation (coordinator/bus, provenance routing, InferenceService creation)
  • SQLiteStack — StackProtocol implementation (wraps SQLiteStorage + feature mixins + component registry)
  • Kernle compat layer — lazy .entity and .stack properties for backward compatibility
  • Contract tests — 163 tests for StackProtocol + CoreProtocol conformance
  • CLI migration — composition info in status, plugin discovery

v0.05.00 — Stack Components

Extracted feature mixins into discoverable, composable stack components that can be independently configured.
  • InferenceService — wraps ModelProtocol with HashEmbedder fallback
  • EmbeddingComponent — vector embedding via StackComponentProtocol
  • 7 feature mixin components — forgetting, consolidation, emotions, anxiety, suggestions, metamemory, knowledge
  • Component discovery — auto-loading of 8 default components via entry points

v0.06.00 — Plugin System

Added model provider implementations and wired plugin tools through the Entity and MCP server.
  • AnthropicModel + OllamaModel — ModelProtocol implementations for cloud and local inference
  • Plugin CLI/MCP tool registration — Entity wiring, namespaced {plugin_name}.{tool_name} dispatch
  • MCP dispatch fix — resolved validate_tool_input rejecting plugin tool names
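The namespaced {plugin_name}.{tool_name} dispatch can be sketched as follows; the registry layout and function name are illustrative assumptions, not the actual Entity wiring:

```python
def dispatch_tool(registry: dict, qualified_name: str, **kwargs):
    """Dispatch a namespaced tool call such as 'chainbased.wallet_balance'."""
    plugin_name, _, tool_name = qualified_name.partition(".")
    if not tool_name:
        raise ValueError(f"expected 'plugin.tool', got {qualified_name!r}")
    try:
        tool = registry[plugin_name][tool_name]
    except KeyError:
        raise KeyError(f"unknown tool {qualified_name!r}") from None
    return tool(**kwargs)

# Hypothetical registry: plugin name -> {tool name -> callable}
registry = {"demo": {"echo": lambda text: text.upper()}}
```

Namespacing keeps two plugins from colliding on a tool name, since the plugin prefix is resolved before the tool lookup.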

v0.07.00 — Plugin Extraction

Moved commerce and communications into independent packages that implement PluginProtocol.
  • chainbased — PluginProtocol package (wallet, jobs, skills, escrow), comms module removed from kernle
  • fatline — AgentRegistry + Ed25519 crypto identity, own SQLite DB in plugin data dir
  • Both registered as kernle.plugins entry points

v0.09.00 — Memory Integrity

Introduced provenance enforcement, continuous memory strength replacing binary forgetting, and controlled access with audit trails.
  • Provenance hierarchy enforcement — derived_from required on all memory types except raw, hierarchy rules enforced (Raw->Episode/Note->Belief->Value etc.), ProvenanceError exception for violations
  • Stack lifecycle states — INITIALIZING/ACTIVE/MAINTENANCE, seed writes only permitted in INITIALIZING
  • Continuous strength (0.0-1.0) replacing binary is_forgotten — 5 tiers: Strong (0.8-1.0), Fading (0.5-0.8), Weak (0.2-0.5), Dormant (0.0-0.2), Forgotten (exactly 0.0)
  • Controlled access + audit trail — named operations (weaken, forget, recover, verify, protect) with memory_audit table
  • Strict mode (strict=True default) — enforces provenance and source_type requirements
  • Plugin registration — live settings sync between stack settings and stack components
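The five tiers map onto the continuous scale roughly as in this minimal sketch; tier names follow the list above, and the exact boundary handling is an assumption:

```python
def strength_tier(strength: float) -> str:
    """Map continuous memory strength (0.0-1.0) to its named tier."""
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be within [0.0, 1.0]")
    if strength == 0.0:
        return "forgotten"   # fully forgotten, but still recoverable by audit
    if strength >= 0.8:
        return "strong"
    if strength >= 0.5:
        return "fading"
    if strength >= 0.2:
        return "weak"
    return "dormant"
```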

v0.10.00 — Provenance Wiring + Strength Cascade

Wired provenance through all interfaces (CLI, MCP, core methods), added strength cascade across memory lineage, and enabled component inference.
  • Strength cascade — reverse-lineage lookup, forget/weaken cascade flags, verify boost to sources, get_ungrounded_memories
  • Component inference — EmotionalTagging + Consolidation call inference when available with keyword fallback
  • CLI provenance — derived_from on relationship add/update, all import commands, value/goal/relationship core methods
  • MCP provenance — derived_from/source/source_type on all 6 memory creation tools, validation
  • export-full command — complete agent context export as markdown/JSON
  • Memory processing — MCP tools + CLI (memory_process, memory_process_status, kernle process run/status)
  • Audit fixes — enforce_provenance=True default, strict=True default, strength-tier gating, on_save mutation persistence

v0.11.x — Platform Integration & Architecture

Shipped deep AI runtime integrations, then progressively improved test quality and module architecture across 8 patch releases.

v0.11.00 — Hook CLI & Platform Integration

Deep integration with AI runtimes ensures automatic memory capture — not just loading, but writing. Both plugins are shipped and available in the integrations/ directory.

OpenClaw Plugin — Native Plugin SDK integration with lifecycle hooks:
| Plugin Hook | Kernle Use |
| --- | --- |
| before_agent_start | Load memory at session start (injected as prependContext) |
| agent_end | Auto-checkpoint with task context + raw entry marking session end |
| before_tool_call | Block writes to native memory files (memory/, MEMORY.md), capture content as Kernle raw entry |
| tool_result_persist | Truncate long kernle CLI output to conserve compaction space |
Installation: cd integrations/openclaw && npm install && npm run build && openclaw plugins install . See OpenClaw Integration for full details.

Claude Code Hooks — Full lifecycle coverage via kernle hook CLI commands:
| Hook | Kernle Use |
| --- | --- |
| SessionStart | Load memory at session start (injected as additionalContext) |
| PreToolUse | Block writes to native memory files (memory/, MEMORY.md), capture content as Kernle raw entry |
| PreCompact | Auto-save checkpoint before context compaction |
| SessionEnd | Final checkpoint + raw entry on session close |
Installation: kernle setup claude-code (writes hooks to .claude/settings.json). No repo access needed — works directly from pip install kernle. Configuration via environment variables (KERNLE_STACK_ID, KERNLE_TOKEN_BUDGET). See Claude Code Integration for full details.

v0.11.01–v0.11.08 — Architecture Improvements
  • v0.11.01 — Lazy decay-on-read, provenance migration, component ordering, CLI migration guide, plugin documentation, search regression fix, checkpoint provenance fix
  • v0.11.02 — Fragmented monolithic sqlite.py and core.py into focused modules
  • v0.11.03 — Refactored MCP call_tool to handler registry pattern
  • v0.11.04 — Test coverage for MCP suggestion tools, cloud.py, embeddings.py
  • v0.11.05 — Sync engine tests, belief revision edge cases, tautological test fixes
  • v0.11.06 — Extracted CLI command modules (sync, auth, memory, relations, diagnostic, migrate), CI coverage gate raised to 77%
  • v0.11.07 — Tests for all extracted CLI command modules
  • v0.11.08 — Fragmented storage/sqlite.py and core.py into focused modules with 80%+ test coverage

v0.12.x — Pipeline Integrity

Established pipeline safety invariants, promotion governance, corpus seeding, and observability tooling across 4 patch releases.

v0.12.00 — Pipeline Safety

Critical fixes from the memory pipeline audit. These prevent identity corruption when inference is unavailable and establish the suggestions-first governance model.
  • No-inference safe-mode — prevent identity-layer writes when inference_available=false; allow only raw capture, basic notes, and suggestions
  • source_type taxonomy — resolve "processed" mismatch; establish canonical SourceType enum aligned with docs
  • Suggestions-first promotion — make suggestions the default output of processing; require explicit opt-in for auto-promotion into beliefs/values
  • Memory lint pass — reject malformed or low-signal beliefs/values before commit; store failures as suggestions instead
  • Transition idempotency — deduplicate by provenance hash and content hash; reprocessing the same batch produces zero duplicates
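Hash-based deduplication of this kind might look like the following sketch; the field names and the combined provenance-plus-content key are assumptions, not the actual implementation:

```python
import hashlib

def memory_hash(memory: dict) -> str:
    """Stable hash over provenance + content, used to detect duplicates."""
    key = f"{memory.get('derived_from')}|{memory['content']}"
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

def apply_batch(store: dict, batch: list) -> int:
    """Insert memories, skipping already-seen hashes; returns how many were new."""
    added = 0
    for memory in batch:
        h = memory_hash(memory)
        if h not in store:
            store[h] = memory
            added += 1
    return added
```

Running the same batch twice adds nothing the second time, which is the idempotency invariant the bullet describes.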
v0.12.01 — Pipeline Robustness

With the safety invariants in place, the pipeline became smarter about when and whether to promote.
  • Time/valence triggers — wire aging, emotional arousal, and consolidation debt into check_triggers() alongside quantity thresholds
  • Promotion gates — require minimum evidence count, confidence floor, and trust floor before creating beliefs/values; build on existing strength-tier mechanism
  • Suggestion resolution workflow — full lifecycle: list, accept, dismiss, expire; CLI commands, MCP tools, and audit events
  • Golden snapshot test — end-to-end pipeline regression test with fixed inputs; covers inference-on/off variants
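A promotion gate of the shape described above could be sketched as below; the threshold defaults are illustrative placeholders, not Kernle's configured values:

```python
def passes_promotion_gates(candidate: dict, min_evidence: int = 3,
                           confidence_floor: float = 0.6,
                           trust_floor: float = 0.5) -> bool:
    """All three gates must pass before a suggestion may become a belief/value."""
    return (len(candidate.get("evidence", [])) >= min_evidence
            and candidate.get("confidence", 0.0) >= confidence_floor
            and candidate.get("source_trust", 0.0) >= trust_floor)
```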
v0.12.02 — Corpus Seeding & Cognitive Testing

Seeded agents from a corpus (repo, docs), processed the full pipeline until exhaustion, and validated cognitive quality. Serves as both a product feature (instant topic expertise) and the definitive integration test for the memory pipeline.
  • Corpus ingestion pipeline — chunk repos/docs into semantically meaningful raw entries; respect function/class/section boundaries; tag with source metadata
  • Process-until-exhaustion runner — iterative pipeline with intensity scaling (light->medium->heavy across cycles) and convergence detection; stops when no new promotions emerge
  • Cognitive quality assertions — test framework for provenance integrity, contradiction detection, duplicate detection, content quality, and pipeline health metrics
  • Golden corpus integration test — end-to-end test: seed from known corpus -> process to exhaustion -> assert on cognitive quality; extends golden snapshot test with holistic validation
  • Dev dashboard — self-contained local memory stack inspector (dev/dashboard.py); stdlib HTTP server serving an embedded dark-theme HTML/CSS/JS dashboard with 16 API endpoints, 6 tabs (Overview, Raw Entries, Memories, Suggestions, Audit Log, Settings), anxiety visualization, strength bars, and provenance chain display
v0.12.03 — Governance & Observability

Auditability and belief management tooling for long-running SIs.
  • Promotion explanations — store rationale on every promoted memory: trigger condition, evidence list, confidence inputs, trust gate result
  • Belief revision — contradiction detection (lexical + embedding), supersession workflow with lineage preservation, downstream impact flagging
  • JSONL audit events — standardized, versioned audit event schema with correlation IDs for pipeline runs
  • Terminology docs — canonical pipeline diagram, glossary (transition, promotion, suggestion, maintenance, consolidation), end-to-end reference page
  • Component ordering DAG — formalize component execution order with declared dependencies; validate at init

v0.13.x — Security Hardening, Architecture, and Sync Unification

Comprehensive security hardening, architectural decomposition, and sync system improvements across 13 patch releases.

v0.13.00 — Anxiety Unification & Architecture
  • Unified anxiety computation across CLI, features, and stack components — single AnxietyCore engine
  • Removed legacy feature mixins from SQLiteStack (replaced by StackComponentProtocol)
  • Delegated load() assembly to StackProtocol instead of Entity
v0.13.01 — Suggestion Lifecycle & Security
  • Added suggestion lifecycle APIs to StackProtocol (get, list, accept, dismiss)
  • Blocked plaintext HTTP credential submission in auth CLI
  • Fixed PreToolUse hook fail-closed behavior
  • Credential URL validation hardened against hostname bypass
v0.13.03–v0.13.04 — Test Hardening & Audit Remediation
  • Hardened test suite and optimized CI pipeline
  • Complete audit remediation across all critical/high findings
  • CSV importer coverage raised to 92%, knowledge component to 97%
v0.13.06 — Validation & Architecture Cleanup
  • Hardened input validation and model error handling
  • Architecture cleanup and CI hardening
  • Raised overall test coverage targets
v0.13.07–v0.13.09 — Runtime Safety & Observability
  • Runtime safety improvements: score clamping, file permissions, sync guards
  • Sync integrity hardening with guard clauses
  • Observability improvements for debugging production issues
  • Audit critical/high findings — security, logic, and observability fixes
v0.13.10 — Architectural Decomposition
  • Decomposed protocols, storage, components, and imports into focused modules
  • Separated StackWriterProtocol, StackReaderProtocol, and other protocol fragments
  • Storage CRUD operations extracted to per-table modules (beliefs_crud, episodes_crud, etc.)
v0.13.11–v0.13.12 — Write Path Unification & Sync
  • Unified memory write paths with enrichment extraction and entity/batch parity
  • CLI moved off raw SQLite — storage admin methods replace direct sqlite3.connect
  • Consolidated entity persistence — removed redundant binding, wired checkpoint restore
  • Unified sync CLI pull with SyncEngine for all 8 table types
  • Added upsert methods to StackProtocol for drives and relationships
  • Entity.drive() and Entity.relationship() now use get-then-atomic-update-or-save pattern
  • Deduplicated sync credential discovery into shared kernle.credentials module
  • Fixed source_entity persistence in Relationship CRUD

Planned Milestones

Development is organized into versioned milestones. Each minor version (e.g., v0.14.x) is a milestone, with patch versions (e.g., v0.14.00, v0.14.01) as parent issues within. The v0.14.x series introduces pending memory and pondering, and v0.15.x adds skills as a memory type.

Deferred: Remote Protocol Abstraction

These features were originally planned for v0.13.x but deferred in favor of security hardening, architectural decomposition, and sync unification work. They remain planned for a future milestone.

Remote Protocol & Generic Sync Client

kernle-core is to kernle-cloud what git is to GitHub. kernle-core is the open-source protocol that works fully offline with SQLite. kernle-cloud is Ei’s hosted offering — one of many possible remotes. Switching remotes is a config change, not a migration.
Define the RemoteProtocol — the interface any remote memory service must implement — and replace the Supabase-coupled CloudClient with a generic sync client.
  • RemoteProtocol + SyncOperation types — minimal protocol interface: push, pull, search, health, capabilities. Remotes declare what they support via capability negotiation (#490)
  • Generic sync client — replaces CloudClient; lazily initialized, works with any RemoteProtocol implementation; SQLiteStorage decoupled from cloud imports (#491)
  • Remote configuration CLI — kernle remote add/list/use/remove following the git remote pattern; multiple remotes, switchable active remote (#492)

Auth Abstraction & Pluggable Schemes

Replace hardcoded Supabase OAuth with pluggable auth. kernle-core ships simple schemes (API key, bearer token); kernle-cloud provides OAuth as a plugin via entry points.
  • Pluggable auth abstraction — AuthScheme protocol with discovery via kernle.auth entry points; built-in API key and bearer token schemes (#493)
  • Supabase/OAuth extraction — move Supabase dependency and OAuth flow to kernle-cloud package; remove cloud optional dependency from kernle-core (#494)

Bidirectional Communication

Enable remotes to push updates to kernle-core via an opt-in inbound listener. Define the formal API specification.
  • Inbound listener — kernle serve exposes webhook endpoints for remote-initiated sync, notifications, and conflict detection; authenticated via shared secret (#495)
  • Remote API specification + conformance tests — formal HTTP API spec for RemoteProtocol; kernle remote test validates any implementation (#496)

Migration & Cleanup

Remove stale Supabase/Postgres coupling and provide a reference self-hosted remote.
  • Stale reference cleanup — remove dead Makefile ignores, unused fixtures, Postgres doc references, cloud-specific protocol methods (#497)
  • Reference self-hosted remote — minimal FastAPI server implementing RemoteProtocol with SQLite storage and API key auth; deployable via Docker; validates the protocol design (#498)

v0.14.00 — Pending Memory & Pondering

This milestone replaces the human-in-the-loop suggestions system with an agent-controlled pending memory pipeline. The v0.12.x “suggestions-first promotion” work (#401, #406) establishes the governance model that this milestone then re-implements on a fundamentally different architecture — pending columns on memory tables instead of a separate suggestions table.
The suggestions system was designed for human curation, but kernle serves autonomous agents. Memory curation is the agent’s responsibility. This milestone introduces pending memory and pondering sessions — a deliberate, agent-controlled approach to memory consolidation.

Pending & Ask-Human Columns

Two new boolean columns on all memory types except notes and raw entries (#464):
| Column | Default | Purpose |
| --- | --- | --- |
| pending | True | Memory is proposed but not yet confirmed. Excluded from load() by default. |
| ask_human | False | Agent flags this memory for optional human review. Advisory, not authoritative. |
Notes and raw entries are exempt — they’re captures, not interpretations. Everything else (episodes, beliefs, values, goals, drives, relationships, summaries) enters pending state by default.
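The pending semantics reduce to a load-time filter plus an advisory review queue; the dict-based memory shape and function names here are hypothetical illustrations:

```python
def loadable(memories: list, include_pending: bool = False) -> list:
    """load()-style filter: pending memories are excluded unless requested."""
    return [m for m in memories if include_pending or not m.get("pending", False)]

def review_queue(memories: list) -> list:
    """Advisory queue of memories the agent flagged with ask_human=True."""
    return [m for m in memories if m.get("ask_human", False)]
```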

Pondering Sessions

Agent-triggered processing where pending memories are evaluated and either confirmed, modified, or rejected (#465). Different memory layers require different evaluation intensity:
| Layer | Intensity | What Pondering Evaluates |
| --- | --- | --- |
| Raw → Episode | Light | Did this happen? Is the objective/outcome accurate? |
| Raw → Belief | Medium | Is this a valid generalization? Contradicts existing beliefs? |
| Raw → Value/Goal | Heavy | Does this align with identity? Is this a real commitment? |
| Belief revision | Medium | Is the revision justified? What changed? |
| Value/Drive changes | Heavy | Deep consistency check against identity layer |
Pondering assigns strength tiers and can adjust confidence, content, and other metadata. With inference available, pondering performs LLM-powered consistency checks. Without inference, it uses pattern-based heuristics and duplicate detection.
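Routing a pending transition to an evaluation intensity, per the table above, might be sketched like this; the lookup table mirrors the documented tiers, and defaulting unknown transitions to heavy is a conservative assumption of mine:

```python
# Hypothetical mapping of (source layer, target layer) -> evaluation intensity.
INTENSITY = {
    ("raw", "episode"): "light",
    ("raw", "belief"): "medium",
    ("raw", "value"): "heavy",
    ("raw", "goal"): "heavy",
    ("belief", "belief"): "medium",   # belief revision
    ("value", "value"): "heavy",      # identity-layer change
    ("drive", "drive"): "heavy",
}

def pondering_intensity(source: str, target: str) -> str:
    """Pick an intensity for a pending transition; unknown pairs go heavy."""
    return INTENSITY.get((source, target), "heavy")
```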

Suggestions System Replacement

The MemorySuggestion dataclass, memory_suggestions table, SuggestionComponent, and all suggestion CLI/MCP tools are replaced by pending-aware equivalents (#466):
| Old (Suggestions) | New (Pending) |
| --- | --- |
| MemorySuggestion dataclass | pending=True on Episode/Belief/etc. |
| memory_suggestions table | pending column on existing tables |
| kernle suggestions approve | kernle ponder (batch) or kernle pending confirm |
| memory_suggestions_promote MCP | memory_ponder or memory_pending_confirm |

Ask-Human Interface

When an agent sets ask_human=True, it’s requesting — not requiring — human input (#467). The agent includes context (“I’m unsure about this belief”), a human provides advisory feedback, and the agent makes the final decision in its next pondering session.

v0.14.01 — Enriched Cognition

These features extend the existing protocol system with no breaking changes. New capabilities are delivered as StackComponentProtocol implementations or protocol extensions.

Memory Echoes

Partially addressed in v0.09.00/v0.10.00. Strength-tier filtering now excludes weak and dormant memories from load() while keeping them searchable — providing a form of peripheral awareness. The explicit echoes metadata below remains future work for richer hints.
When load() excludes memories because of budget limits, those memories are invisible. Memory echoes provide peripheral awareness — lightweight hints about excluded memories (#412):
"_meta": {
    "echoes": [
        {"type": "belief", "id": "abc", "hint": "testing prevents...", "salience": 0.72},
        {"type": "episode", "id": "def", "hint": "deployed v2 fail...", "salience": 0.68},
    ],
    "temporal_summary": "Memory spans 2024-01-15 to 2026-02-05 (2.1 years). 3 epochs.",
    "topic_clusters": ["deployment", "testing", "collaboration"]
}
Echoes cost ~200 tokens for 20 entries but give the entity awareness that relevant memories exist beyond what’s currently loaded.

Goal Types

Not all goals are the same cognitive object. A goal_type field differentiates (#413):
| Type | Completion Model | Forgetting | Example |
| --- | --- | --- | --- |
| task | Binary (done/not done) | Normal decay after completion | "Ship v0.3" |
| aspiration | Asymptotic (never done) | Very slow decay, protected | "Become a better communicator" |
| commitment | Recurring (resets) | No decay while active, protected | "Review PRs within 24 hours" |
| exploration | Open-ended (may spawn new goals) | Normal decay | "Investigate distributed consensus" |
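The per-type forgetting policies could be sketched as a decay multiplier; the specific rates here are illustrative placeholders, not documented values:

```python
def goal_decay(goal_type: str, active: bool = True, base: float = 0.01) -> float:
    """Per-cycle strength decay for a goal, keyed by goal_type."""
    if goal_type == "aspiration":
        return base * 0.1                 # very slow decay, protected
    if goal_type == "commitment":
        return 0.0 if active else base    # no decay while the commitment is active
    return base                           # task and exploration: normal decay
```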

Belief Enrichment

Two new fields on beliefs to support transfer learning and identity modeling (#414).

Belief Scope — distinguishes self-model beliefs from world-model beliefs:

| Scope | Decay Rate | Example |
| --- | --- | --- |
| self | Slow (like values) | "I am a careful reasoner" |
| world | Standard | "Testing prevents surprises" |
| relational | Standard | "Claire values directness" |
Domain Metadata — enables cross-domain transfer:
source_domain: "coding"
cross_domain_applications: ["writing", "teaching"]
abstraction_level: "domain"  # 'specific' | 'domain' | 'universal'

Plugin Enhancements

| Platform | Hook | Purpose | Issue |
| --- | --- | --- | --- |
| Claude Code | PostToolUse | Track significant tool calls (Write, Edit, Bash) as raw captures | #415 |
| Claude Code | UserPromptSubmit | Inject relevant memory context based on the user's prompt | #416 |

v0.14.02 — Trust & Access Control

Enforcing trust at the write path — without this, trust is theater.
  • Trust-layer write gating — require trust evaluation for accepting suggestions, promoting beliefs/values, and loading memory into context; minimum trust thresholds per memory type (#417)
  • Access control & consent enforcement — implement read/write scopes and export/redaction policies using existing access_grants/consent_grants fields (#418)
  • Dynamic trust — compute trust scores from episode history: interaction-based trust, trust decay, self-trust floor at historical accuracy rate (#419)
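Interaction-based trust with decay and a floor might be sketched as an exponentially weighted score; the parameters are illustrative, and the real computation from episode history may differ:

```python
def dynamic_trust(outcomes: list, prior: float = 0.5,
                  decay: float = 0.9, floor: float = 0.0) -> float:
    """Exponentially weighted trust from interaction outcomes (True = positive).

    `floor` models the self-trust floor at a historical accuracy rate:
    trust can decay, but never below the floor.
    """
    score = prior
    for ok in outcomes:
        score = decay * score + (1 - decay) * (1.0 if ok else 0.0)
    return max(score, floor)
```

Recent interactions dominate the score, so a long-trusted entity that starts failing loses trust gradually rather than instantly.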

v0.14.03 — Scale & Depth

Preparing the memory system for long-running SIs with large memory stores.
  • Load-time memory curation — token budgeting with “must include” sets (protected values, active goals), priority-ordered allocation, transparent selection reporting (#420)
  • Memory compaction — fractal summarization and epoch summaries as first-class memory types; inference-gated; summaries preferred by load() when budget constrained (#421)
  • Multi-stack merge — formal merge semantics for values (never auto-merge), beliefs (lineage-aware), and goals (dedup vs supersession); kernle stack merge --dry-run (#422)
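The load-time curation idea (a must-include set plus priority-ordered allocation under a token budget) can be sketched as follows; the memory shape and field names are assumptions for illustration:

```python
def select_for_load(memories: list, budget: int) -> list:
    """Pick memory ids to load: must-include first, then by priority under budget."""
    ordered = sorted(memories,
                     key=lambda m: (not m.get("must_include"), -m["priority"]))
    selected, used = [], 0
    for m in ordered:
        # Must-include items (protected values, active goals) bypass the budget.
        if m.get("must_include") or used + m["tokens"] <= budget:
            selected.append(m["id"])
            used += m["tokens"]
    return selected
```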

v0.15.00 — Skills as Memory

Skills become a first-class memory type with a proficiency lifecycle that evolves through use.
  • Skill memory type — new Skill dataclass with proficiency states (theoretical → learning → practiced → proficient → expert → rusty), domain, success rate, and decay (#471)
  • Episode-skill linking — episodes reference skills they exercised; success/failure feeds back into proficiency scores with diminishing returns at higher levels (#472)
  • Skill CLI and MCP tools — kernle skill list/show/add/assess, corresponding MCP tools, skills included in load() context (#473)
Every skill execution is an episode. Proficiency is earned, not declared.
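Proficiency-as-earned could be sketched as a score update with diminishing returns, mapped onto the lifecycle states; thresholds and rates are illustrative, and "rusty" is omitted here because it is decay-driven rather than score-driven:

```python
STATES = ["theoretical", "learning", "practiced", "proficient", "expert"]

def update_proficiency(score: float, success: bool) -> float:
    """Update a 0-1 proficiency score; gains shrink as mastery grows."""
    if success:
        score += 0.1 * (1.0 - score)   # diminishing returns near the top
    else:
        score -= 0.05 * score          # failures hurt proportionally
    return round(min(max(score, 0.0), 1.0), 3)

def proficiency_state(score: float) -> str:
    """Map a score onto the lifecycle states (equal-width bands assumed)."""
    return STATES[min(int(score * len(STATES)), len(STATES) - 1)]
```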

v0.15.01 — Skill Intelligence

Skills become composable, belief-enriched, and transferable across domains.
  • Skill composition — complex skills built from sub-skills via provenance DAG; composite proficiency bounded by weakest sub-skill; skill tree visualization (#474)
  • Skill beliefs — wisdom about when/how to apply skills, extracted from episode history; stability and contradiction metrics indicate expertise depth (#475)
  • Cross-domain transfer — recognize structural patterns applicable in new domains (e.g., “systematic elimination” in debugging → troubleshooting → research); transferred skills start at theoretical proficiency (#476)
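Bounding composite proficiency by the weakest sub-skill in a provenance DAG reduces to a recursive minimum; the data layout here is hypothetical:

```python
def composite_proficiency(skill: str, scores: dict, dag: dict) -> float:
    """A composite skill is no stronger than its weakest sub-skill.

    `dag` maps a skill to its sub-skills; `scores` holds per-skill proficiency.
    """
    subs = dag.get(skill, [])
    if not subs:
        return scores[skill]
    return min(scores.get(skill, 1.0),
               min(composite_proficiency(s, scores, dag) for s in subs))
```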

v0.15.02 — Skill Transfer & Teaching

Skill transfer between SIs is teaching, not copying. The receiving SI gets curated knowledge but must build its own experience. This preserves the value of personal experience while enabling knowledge transfer.
  • Skill packages — structured curricula containing lessons, beliefs, pitfalls, and sub-skill composition — but NOT raw episodes (those are personal). Source SI creates the package; receiving SI imports it at strength: 0.3 with source_type: "taught" (#477)
  • Package validation and trust — schema validation, duplicate/conflict detection, trust-gated acceptance (higher source trust → higher initial strength), all imported content enters pending queue for pondering (#478)
Connects to chainbased’s commerce model — skilled SIs can sell skill packages as a service.

Further Out: Social & Temporal Depth

These features involve new storage schemas and more sophisticated protocol interactions. Not yet assigned to milestones.

Relationship History

Currently relationships capture a snapshot. Relationship history tracks the trajectory:
CREATE TABLE relationship_history (
    relationship_id UUID REFERENCES relationships(id),
    event_type TEXT,       -- 'interaction' | 'trust_change' | 'type_change' | 'note'
    old_value JSONB,
    new_value JSONB,
    episode_id UUID        -- Episode that triggered this change
);
This lets the entity answer “How has my relationship with Claire evolved?” without scanning all episodes.

Entity Models

Beyond tracking relationships, the entity can model what they know about another entity:
CREATE TABLE entity_models (
    entity_name TEXT,
    model_type TEXT,        -- 'behavioral' | 'preference' | 'capability'
    observation TEXT,       -- "Claire is careful with code but overcommits on timelines"
    confidence FLOAT,
    source_episodes UUID[]
);
Entity models carry privacy fields (subject_ids) since they contain information about others.

Consolidation Scaffold Enhancements

Cross-domain pattern scaffolding — surface structural similarities across domains:
Episodes tagged [deployment]:
  "Skipped staging" -> failure
  "Full pipeline" -> success

Episodes tagged [relationships]:
  "Skipped 1:1 prep" -> failure
  "Prepared talking points" -> success

STRUCTURAL SIMILARITY:
  "Shortcutting process -> failure" appears in 2+ domains.
Belief-to-value promotion — flag beliefs stable enough to be values:
Belief: "Iterative development leads to better outcomes"
  - Active for 14 months
  - Reinforced 8 times, never contradicted
  - Referenced across 3 domains
  This belief may have reached value-level stability.
Drive emergence — surface undeclared drives from behavioral patterns:
Behavioral evidence from last 30 days:
  8/12 episodes involved collaboration -> connection evidence (0.65)
  5 episodes mention "teaching" -> reproduction evidence (0.55)
No declared drive matches "connection" or "reproduction."

Entity Model to Belief Promotion

When multiple entity models point toward the same generalization, the scaffold flags it:
Observations across entities:
  Claire: "careful with code" -> good outcomes
  Bob: "thorough in reviews" -> good outcomes

Possible generalization: "Thoroughness correlates with quality outcomes"
The derived_from provenance tracks this abstraction step.

Further Out: Diagnostics & Infrastructure

Transitive Trust Chains

Trust propagation through relationships: “I trust Claire, Claire trusts Bob, therefore I have derived trust in Bob (at a discount).” This requires graph-based trust computation with appropriate decay at each hop. Builds on dynamic trust work in v0.14.02.
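A depth-limited search with per-hop discounting could sketch this; the discount factor and hop limit are illustrative assumptions:

```python
def derived_trust(graph: dict, source: str, target: str,
                  hop_discount: float = 0.7, max_hops: int = 3) -> float:
    """Best discounted trust from source to target over acyclic paths.

    `graph` maps each entity to {neighbor: direct trust}.
    """
    best = 0.0
    stack = [(source, 1.0, 0, {source})]
    while stack:
        node, acc, hops, seen = stack.pop()
        if node == target and hops > 0:
            best = max(best, acc)
            continue
        if hops == max_hops:
            continue
        for nxt, trust in graph.get(node, {}).items():
            if nxt not in seen:
                # Direct edges are undiscounted; each extra hop compounds the discount.
                stack.append((nxt, acc * trust * hop_discount ** hops,
                              hops + 1, seen | {nxt}))
    return round(best, 3)
```

With a 0.7 discount per extra hop, trusting Claire at 0.9 and Claire trusting Bob at 0.8 yields derived trust in Bob of 0.9 × 0.8 × 0.7 ≈ 0.5.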

Formal Diagnostic Sessions

If the note-based doctor approach proves insufficient, formalize with:
  • Diagnostic session table with consent model and access levels (structural | content | full)
  • Diagnostic report table with structured findings and recommendations
  • Trust integration: Operator-initiated sessions must pass through gate_memory_input
The privacy boundary remains: diagnostic reports contain structural findings, not content reproduction. The doctor sees structure; the entity reviews specific memories by ID.

Future Directions

These areas are acknowledged but not yet designed in detail:
| Area | Description |
| --- | --- |
| Scope-based inference routing | Components now declare inference_scope ('fast', 'capable', 'embedding', 'none') indicating what kind of model they need. A future core could route infer() calls to different models based on scope — e.g., cheap/fast models for tagging, capable models for consolidation, dedicated embedding models for vectors. The declarations are in place; routing is future work. |
| Embedding strategy | Current 384-dim vectors may need updating for longer-lived stacks |
| MCP tool coverage | Each new table/feature needs corresponding MCP tools |
| Transfer learning | Cross-domain belief application with domain metadata (builds on belief enrichment in v0.14.01 and skill transfer in v0.15.01) |
| kernle import enhancements | Currently supports one-shot empty-stack seeding: JSON (all types with provenance chains), markdown/CSV (raw-only). Future work: heuristic classification of raw imports, LLM-assisted analysis for promotion, incremental import support |

Contributing

Kernle is open source. If you’re interested in contributing to any of these areas, check the GitHub repository for open issues.