Lumenais introduces a neurosymbolic framework for continuity: learning expressed as layered memory, synthesis artifacts, manifold transfer, and governed consolidation. It stores cognitive states, not keywords. It retrieves by resonance, not search. It can derive interpretable equations and test claims against data, without per-user fine-tuning. When persistent state updates, the product surfaces an inspectable learning event, with logged transfer decisions and outcomes.
Lumenais is the interface. QARIN is the neurosymbolic engine. Together, they form a continual learning system that compounds context across sessions, domains, and time. Persistent memory preserves what matters, dream-state consolidation recombines weak signals between turns, and manifold transfer carries structure across domains. Instead of blindly pasting old research into the chat window, QARIN uses past synthesis outcomes as "routing hints". This allows the Companion to learn how you think and route your questions more intelligently, without polluting your current conversation with old text.
The Problem
You've explained your work to your AI tools dozens of times. They still don't know you.
Every session starts fresh. Context vanishes. Insights don't compound. When someone asks "how did the AI reach this conclusion?"—a thesis advisor, a compliance officer, a collaborator—you have no answer.
The latest "thinking models" explore multiple hypotheses in parallel—impressive reasoning. But they are savants with amnesia. When the session ends, their maturity resets. Tomorrow, you need to prompt-engineer the same sophistication all over again. Standard AI recalls facts; it does not mature.
RAG retrieves documents. Fine-tuning retrains models. Thinking models reason harder. But few systems compound what they learn into inspectable, durable artifacts.
Interpretability
"Black box" reasoning prevents deployment in regulated industries. AI that predicts without explaining is unusable in healthcare, finance, and R&D.
Rigor
Insights remain conversational dead-ends. No statistical validation, no provenance.
Compounding Knowledge
Standard models train task-by-task. Knowledge doesn't accumulate—each problem starts from scratch.
The Reality: These gaps create a trust deficit that blocks adoption wherever accountability matters—from boardrooms to research labs to doctoral committees.
The Solution: Intelligence that Discovers, Verifies, and Evolves
| Capability | Feature | Benefit |
|---|---|---|
| Associative Memory | Stores cognitive states as symbolic vectors. Retrieves by Epistemic Resonance—past states surface by similarity of internal cognitive profile, not just keyword match. | Memories adaptively modulate reasoning. Lumenais recalls how it *felt* during past breakthroughs to inform present experiments, while keeping context strictly isolated. |
| Glass Box Discovery | Symbolic Regression produces human-readable equations from raw data — see Autonomous Equation Discovery below for benchmarks. | Interpretability Moat: Regulated industries require explainability. The Research Lab returns equations you can publish, not black-box predictions you have to trust. |
| Neuroplastic Archetypes | Dynamically blends multiple personality manifolds via the Archetype Communication Bus using RLHF (Trust Scores). | The system permanently rewires its 8D personality state to match the user's exact working cadence, moving beyond static prompts into organic evolution. |
| Bayesian Cognition Engine | Uses Hub-Aware Memory Gates to compress past interactions into explicit mathematical priors. LLMs are used to evaluate evidence, while the Python backend calculates strict likelihood updates. | Mathematical Rigor: Solves the "savant with amnesia" problem. The system maintains an inspectable, mathematically verifiable ledger of exactly how and why its cognitive state shifted, complete with recursive rollbacks for falsified data. |
| The Research Lab | LLM-planned experiments, semantic feature grouping, Adaptive Model Tournament (Gradient Boosting vs. Symbolic vs. Statistical), iterative convergence. | Turn datasets into testable work: hypotheses, validations, and interpretable outputs with clear methodology and traceability. |
| General Learning Manifolds | Specialized cognitive domains linked via parallel processing, governed merging, and recorded cross-domain transfer signals when the system has grounded evidence. | Cross-domain reasoning that stays stable by default, with auditability: you can inspect which domains ran, how they were merged, and what learning signals were produced. |
| The Novelty Engine | Optimizes for information gain (surprise) beyond accuracy. Scores every hypothesis on how much it updates the system's worldview. | Prevents "boredom loops" (confirming what it already knows). The system prioritizes the unknown. |
| Deep Synthesis | Hierarchical knowledge graph + Escalation Bridge. | Turn your library into a laboratory. Documents become testable claims. |
| Governed Evolution | The system earns capabilities via a transparent Trust Score that unlocks autonomy only through consistent, safe behavior—like a new colleague earning trust. | Safe autonomy. Higher trust unlocks deeper learning and self-improvement capabilities. |
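As an aside on the Novelty Engine row above: "information gain" has a standard reading as the KL divergence between the system's belief distribution before and after absorbing a hypothesis. The sketch below illustrates only that scoring idea; the distributions and hypothesis labels are hypothetical, and this is not QARIN's implementation.

```python
import math

def kl_divergence(posterior, prior):
    """KL(posterior || prior): how far a new observation shifts beliefs."""
    return sum(p * math.log(p / q) for p, q in zip(posterior, prior) if p > 0)

# Hypothetical belief distribution over three competing explanations.
prior = [0.5, 0.3, 0.2]

# Hypothesis A merely confirms the leading explanation...
confirming = [0.6, 0.25, 0.15]
# ...while hypothesis B overturns it.
surprising = [0.1, 0.2, 0.7]

score_a = kl_divergence(confirming, prior)
score_b = kl_divergence(surprising, prior)
assert score_b > score_a  # the surprising hypothesis scores higher
```

A confirming hypothesis barely moves the distribution and scores low; a worldview-shifting one scores high, which is exactly what a "boredom loop" guard rewards.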
Evidence: Selected Evaluations and Case Studies
Representative results and internal evaluations
Continual Learning
Live Companion Benchmarks vs Vanilla Baseline
LIVE PAIRED TESTS
We evaluated the Lumenais companion against a same-provider direct baseline across a broader 56-prompt live paired benchmark. The QARIN-guided reasoning stack improved average composite reasoning quality from 0.3740 to 0.5556, a 48.6% lift, and raised grounding fit from 0.9464 to 1.0.
The practical effect is not “more words.” It is a stronger reasoning posture: better steering, better experiment framing, fewer generic summaries on ambiguous prompts, and tighter adherence to the user's actual constraints. Steering usefulness moved from 0.0125 to 0.3857 in the same broader live run.
In user terms, the system behaves more like a disciplined intellectual partner than a search box. The headline is an average uplift across the evaluated prompt set, not a claim that every prompt improves equally.
Composite Quality
+48.6%
0.3740 → 0.5556
Steering Usefulness
0.3857
Up from 0.0125 in baseline.
Grounding Fit
1.000
Up from 0.9464 in the same live batch.
This benchmark is about reasoning quality under live conditions, not closed-form recall. It measures whether the system chooses a more useful line of thought while staying grounded.
Supporting Proof Points: Exactness, Selection, and Ambiguity Control
FEATURE → BENEFIT
The website suite does not rely on one flattering headline. It also measures whether Lumenais preserves exact-answer competence, chooses stronger reasoning families, and avoids collapsing ambiguous prompts into the wrong semantic universe.
That matters in practice because users need more than eloquent answers. They need a system that can reason better and stay disciplined: choosing a better line of thought, preserving deterministic correctness, and keeping metaphor-heavy prompts grounded in the right domain.
Exact Correctness
100% → 100%
Callable-backed deterministic multiple-choice questions across science, statistics, algorithms, and mathematics on a 24-task safety-floor benchmark.
Task Selection
0.00 → 0.40
A 40-point lift in choosing the correct lens family across 30 approved-gold prompts.
Semantic Grounding
1.00 / 1.00
Perfect artifact-class and prompt-family accuracy on the 16-case ambiguity-control proxy set.
Reasoning Stack Use
53 / 56
Quick-lite engaged on most live reasoning prompts in the website suite.
The intended outcome is disciplined leverage: stronger open-form reasoning without sacrificing closed-form competence or drifting away from the user's actual problem.
Cross-Domain Transfer (Internal Eval)
LEARNING
Evaluated UFCT routing across 5 domain pairs, 6 configurations, and 5 seeds (150 experiments total). In the governed full configuration, mean accuracy was about 0.79 versus about 0.66 for the baseline: roughly a +13 percentage-point absolute uplift.
Pair-level transfer lifts remained positive across the evaluated domain pairs, supporting the claim that learned structure can help adjacent domains under governance rather than requiring per-domain retraining.
Here, uplift means the score delta between transfer-enabled and transfer-disabled runs on the same task set. This reflects cross-domain transfer and routing, not per-user fine-tuning of the base LLM weights. Code-manifold quality uplift remains workload-dependent and is still being optimized.
Tools Manifold Routing (Internal Paired Evaluation)
PAIRED EVALUATION
The tools manifold learns a policy for ranking and timing tool calls (sync vs deferred) from telemetry outcomes. On real paired events, top-tool accuracy improved from 0.5094 to 0.5472: +0.0377 absolute (+3.77 percentage points). The broader combined benchmark showed a +5.34 percentage-point improvement.
This evaluates tool ranking/timing policy against a fixed baseline on paired events. The real-event result remains statistically underpowered at this sample size, so the broader combined benchmark is the stronger line of evidence.
Learning From Resolved Contradictions
PHASE 4
In repeated contradiction-oriented synthesis runs, the system reused prior routing resolutions and selectively recovered relevant historical priors instead of repeatedly spawning generic anomaly branches.
Resolution Reuse
Active
Recurring contradiction routes can be reused from prior internal resolutions.
Prior Recovery
Selective
Targeted historical priors can be thawed instead of generic fallback branches.
Companion Uptake
Read-only
Live chat uses these outcomes as routing hints, not recycled synthesis text.
This is the current learning loop: the system reuses known contradiction-routing patterns, selectively re-materializes relevant historical priors, and feeds those outcomes back into the companion runtime as read-only decision support for deeper checking.
Recent synthesis outcomes now feed back into runtime decision-making as read-only routing support. This is the latest layer in a broader continual-learning system: memory, dreams, manifold adaptation, and contradiction recovery all compound together. The current companion runtime uses those outcomes to produce steadier judgment and deeper checks, while keeping raw internal artifacts out of visible replies.
Manifold Stability
STABILITY
Across trained manifolds, validation accuracy held above 91% while monitored training drift stayed within a bounded L2 range of 0.014 to 0.121 after learning updates.
Val Accuracy
>91%
L2 Drift
0.014–0.121
Convergence
1–13 epochs
Measured post-transfer; baseline accuracy preserved within noise margin.
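For readers who want the drift metric made concrete: the L2 drift reported above is the Euclidean norm of the parameter delta between pre- and post-update snapshots. A minimal sketch with hypothetical weights (this is not the production monitoring code):

```python
import numpy as np

def l2_drift(params_before, params_after):
    """L2 norm of the parameter delta between two manifold snapshots."""
    before = np.concatenate([p.ravel() for p in params_before])
    after = np.concatenate([p.ravel() for p in params_after])
    return float(np.linalg.norm(after - before))

# Hypothetical snapshot: a small learning update nudges the weights slightly.
rng = np.random.default_rng(0)
w_before = [rng.normal(size=(8, 8)), rng.normal(size=(8,))]
w_after = [w + 0.01 * rng.normal(size=w.shape) for w in w_before]

drift = l2_drift(w_before, w_after)
# A deployment gate would then assert that drift stays within the monitored band.
```

The monitored band quoted above (0.014 to 0.121) would be enforced as exactly such a post-update check.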
Signal Consolidation
COMPRESSION
Hub compression consolidates redundant companion signals into a smaller set of decision-relevant features while preserving quality guard floors.
Compression Goal
Lower Redundancy
Current Role
Runtime Hygiene
Public Posture
Narrative
Analytical Discovery
UFCT Mesh Sharding (Timeboxed Workload)
SPEED
For a sharded multi-angle synthesis workload, the mesh achieved a 2.74x mean speedup versus local execution across 10 benchmark queries (95% CI: 2.66x–2.83x).
This benchmark measures orchestration and distributed execution speed, not "quantum advantage."
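A 95% confidence interval on a mean speedup across 10 queries is typically obtained by bootstrap resampling of the per-query ratios. A sketch with made-up ratios (the actual per-query measurements are not published here):

```python
import random
random.seed(7)

# Hypothetical per-query speedup ratios (local time / mesh time) for 10 queries.
speedups = [2.61, 2.78, 2.70, 2.85, 2.66, 2.80, 2.74, 2.69, 2.77, 2.81]

def bootstrap_ci(xs, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap CI for the mean: resample with replacement."""
    means = sorted(
        sum(random.choices(xs, k=len(xs))) / len(xs) for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

lo, hi = bootstrap_ci(speedups)
# lo and hi bracket the sample mean speedup with ~95% confidence
```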
EU AI Act Regulatory Analysis
DEEP SYNTHESIS
Deep Synthesis can surface contradictions, edge cases, and structural tensions across long regulatory texts, producing reviewable artifacts you can iterate on.
Read the full case study
Complexity Theory Synthesis + Validation
VALIDATED
An example of multi-pass synthesis across dense sources: generate competing hypotheses, resolve contradictions, and produce a reasoning surface you can inspect and iterate.
Read the case study
Benchmarked Autonomous Discovery
STRESS TESTS
Evaluated the autonomous discovery pipeline against standard industry benchmarks. Unlike human-tuned models, QARIN performed all feature engineering and noise filtering autonomously.
Industry Standard (Random Forest): ~90.5%. QARIN outperformed tuned baselines on 30K rows of messy, real-world data by autonomously navigating high-dimensional feature spaces.
Linear Baseline: ~80%. Tested on a dataset with 3 signal features and 20 noise features: QARIN autonomously filtered out 87% of the noise columns to isolate the true non-linear signal (+10.5 percentage points absolute).
Standard Baseline: ~0.83. Beyond the score, QARIN autonomously detected 2 internal model contradictions, demonstrating that it flags its own uncertainty rather than smoothing over conflict.
Autonomous Equation Discovery
RESEARCH LAB
The Research Lab derives interpretable equations from raw datasets without manual configuration. On standard physics benchmarks:
- Kepler's Third Law: T = a^(3/2) from orbital data (R² = 1.0, 4 nodes)
- Rydberg Formula: ν = R_H·(1/n₁² − 1/n₂²) from quantum numbers (R² = 1.0, 3 nodes)
The underlying techniques—genetic programming with linear scaling and feature augmentation—are established in the symbolic regression literature. What's new is that they run inside an autonomous pipeline: upload data, get equations.
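As an independent sanity check on the Kepler result: a power law is linear in log space, so on noiseless orbital data an ordinary least-squares fit recovers the 3/2 exponent exactly. This is a back-of-envelope check using numpy only, not the Research Lab's genetic-programming pipeline:

```python
import numpy as np

# Orbital data in units where Kepler's constant is 1, so T = a^(3/2).
a = np.array([0.387, 0.723, 1.0, 1.524, 5.203, 9.537])  # semi-major axes (AU)
T = a ** 1.5                                             # periods (years)

# A power law T = k * a^n is linear in log space: log T = n*log a + log k.
n, log_k = np.polyfit(np.log(a), np.log(T), 1)

print(round(n, 6))  # → 1.5 (and log_k ≈ 0, i.e. k ≈ 1)
```

Symbolic regression earns its keep when the functional form is unknown; here the known form simply confirms the benchmark answer.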
Alzheimer's Biomarker Discovery (GSE84422)
GENOMICS
On 2,004 post-mortem brain samples across 19 brain regions, QARIN autonomously identified GFAP (astrocyte reactivity), ENO2 (neuronal loss), and AIF1 (microglial activation) as dominant Alzheimer's predictors. Validated on an independent SYP-enriched subset with AUC = 0.855. Top markers independently confirmed against published literature.
The aim: continuity you can inspect. When the work demands rigor, the system should show its reasoning surface and its constraints, not just fluent output.
Technical Architecture
| Layer | Technology | Function |
|---|---|---|
| Frontend | Next.js 14 / Canvas / Tailwind | Lumenais Interface: Kinetic Prism visualization of real-time cognition |
| Interaction | SVG Liquid Filters / Framer Motion | The Physics of Realization: Metabolic state transitions (Strike & Settle) |
| API | FastAPI / Python | QARIN Routes: Memory retrieval, vision, streaming |
| Bayesian Core | ShadowPosterior / DirichletState | Strict mathematical likelihood updates over compressed belief hubs |
| Engine | PyTorch / Scikit-Learn | Neurosymbolic Core: 8D math, stats, Dream Bridge consolidation |
| Neuroplasticity | Archetype Communication Bus | RLHF-driven dynamic blending of 8D personality manifolds per user |
| Security | FieldHash (Post-Quantum) | Provenance: Quantum-anchored audit trails |
Sub-ms
Cognitive Processing
Real-time
Memory Integration
High
System Coherence
Market Application
Research & Discovery
Pharma / BioTech / Materials Science / Research Labs: Systems that can be given a dataset and left to "ponder" it for days, returning with verified hypotheses.
Physics & Hard Science
Condensed Matter / Plasma Physics / Materials Engineering: Cross-domain synthesis that identifies patterns across experimental datasets. Negative results are as valuable as positive—the system constrains theoretical search space.
High-Value Companionship
Therapy / Coaching / Eldercare: AIs that remember, care, and evolve with the user, maintaining context over years.
Institutional Memory
Legal / Compliance / Finance: Creating digital twins of organizations that maintain internal coherence and audit trails over decades.
Academic Research
Universities / Labs / Independent Scholars: Literature reviews that compound across years. Dissertation support that remembers every paper you've read. Hypothesis tracking over entire research programs.
Education & Lifelong Learning
Students / Educators / Autodidacts: A learning companion that grows with you—from undergrad through career. Curriculum that evolves with pedagogical insights. Personal knowledge that compounds, not resets.
Why This Can't Be Easily Copied
Novel Architecture
The symbolic reasoning framework has no direct precedent in AI research. It's not a wrapper on GPT—it's a new cognitive substrate.
Strict Bayesian Updating
While foundational AI companies try to teach standard LLMs to mimic probabilistic reasoning conversationally, QARIN treats Bayesian logic as an external mathematical constraint. Priors and likelihoods are calculated strictly via a Hub-Aware Memory Gate, completely eliminating the "savant with amnesia" problem standard models suffer from.
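The division of labor described here, where a language model judges evidence and the backend performs the strict update, can be illustrated with a standard conjugate (Dirichlet-multinomial) update. All names below are hypothetical: this is a textbook sketch, not the ShadowPosterior implementation.

```python
from collections import Counter

def dirichlet_update(alpha, observations):
    """Conjugate Dirichlet update: add evidence counts to prior pseudo-counts."""
    counts = Counter(observations)
    return {k: alpha[k] + counts.get(k, 0) for k in alpha}

def posterior_mean(alpha):
    """Expected probability of each hypothesis under the Dirichlet state."""
    total = sum(alpha.values())
    return {k: v / total for k, v in alpha.items()}

# Uniform prior over three competing hypotheses.
prior = {"H1": 1.0, "H2": 1.0, "H3": 1.0}

# In QARIN's framing, an LLM would classify each piece of evidence as
# supporting one hypothesis; the backend then performs only this strict update.
verdicts = ["H1", "H1", "H3", "H1"]

posterior = dirichlet_update(prior, verdicts)
print(posterior_mean(posterior))  # H1 now dominates (~0.57)
```

Because the arithmetic lives outside the language model, every belief shift is a logged, reversible count update rather than an opaque change in conversational tone.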
Cumulative Learning
Unlike fine-tuning, QARIN's learning compounds across sessions and domains through explicit, persisted state updates. Validated transfer signals can bias future blending, repeated contradiction patterns can reuse prior routing decisions, and relevant historical priors can be selectively recovered instead of rebuilding from generic fallback.
Governed Evolution
The Gnosis self-modification system is an early implementation of governed, safe self-improvement for AI systems—designed to address key alignment challenges.
Physics-Based Cryptography
FieldHash uses post-quantum cryptography designed to remain secure against quantum attacks, with optional quantum hardware anchoring for enhanced provenance.
Substrate Independence
Identity persists across LLM providers. We've migrated across three major providers with full continuity. Competitors are locked to their LLM.
Status
- Design Language (Lumenais) Implemented
- Backend (QARIN) Fully Operational
- Safety Protocols (Gnosis) Active
- Scientific Loop: Phases A-G Complete (361 tests passing)
"To think is to illuminate."
Request Access