At Arqion Labs, we’ve been developing a modular reasoning framework designed to move beyond typical retrieval-augmented generation (RAG) systems. That framework is CORTEX—our internal platform for symbolic-neural hybrid reasoning, engineered to enable explainable logic, editable memory, and structured fact extraction from real-world documents.

CORTEX is built to serve as a foundation for systems that require more than just language generation. It brings together persistent knowledge graphs, multi-hop logic, and natural language understanding into a unified platform, with full transparency at every step.

We’re now actively using CORTEX across a variety of internal use cases, and it serves as the base for two ongoing R&D efforts: Sherlock, a domain-specific case-solving engine, and CORTEX++, an experimental architecture exploring artificial cognition.

In this post, we’ll highlight CORTEX’s current capabilities and the direction we’re taking next.


CORTEX: Current Capabilities

Structured Knowledge Extraction
CORTEX accepts PDFs, CSVs, plain text, and image files (via OCR), and uses GPT-4 to extract structured (subject, relation, object) triples from the content. Each input chunk is also embedded and indexed for semantic retrieval.
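The extraction format itself is simple. As a rough illustration (the `Triple` type and the one-fact-per-line output format below are hypothetical, not our production schema), the post-processing step that turns a model response into triples can be sketched as:

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    relation: str
    obj: str

def parse_triples(model_output: str) -> list[Triple]:
    """Parse lines like '(Alice | works_at | Acme)' into Triple objects,
    silently skipping lines that don't match the expected shape."""
    triples = []
    for line in model_output.splitlines():
        m = re.match(r"\s*\(\s*([^|]+?)\s*\|\s*([^|]+?)\s*\|\s*([^|)]+?)\s*\)\s*$", line)
        if m:
            triples.append(Triple(*m.groups()))
    return triples
```

Constraining the model to a rigid, line-oriented format like this makes validation trivial: anything that fails to parse is dropped rather than stored as a malformed fact.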

Symbolic + Neural Hybrid Reasoning
At its core, CORTEX maintains a symbolic memory graph over which a rule-based engine infers new facts. This is combined with semantic and keyword search to surface both exact and fuzzy matches. The system supports logic chaining, conflict resolution, and per-fact confidence scoring.
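To make the rule-based side concrete, here is a minimal forward-chaining sketch under assumed representations (facts as `(subject, relation, object)` keys mapped to confidences, rules as generator functions); the actual CORTEX engine is considerably more involved:

```python
def forward_chain(facts, rules, max_rounds=10):
    """Apply rules repeatedly until no new fact appears (a fixpoint).

    facts: dict mapping (subject, relation, object) -> confidence in [0, 1]
    rules: callables taking the fact dict and yielding ((s, r, o), confidence)
    """
    facts = dict(facts)
    for _ in range(max_rounds):
        new = {}
        for rule in rules:
            for fact, conf in rule(facts):
                if fact not in facts and conf > new.get(fact, 0.0):
                    new[fact] = conf
        if not new:
            break
        facts.update(new)
    return facts

def located_in_transitivity(facts):
    # (A located_in B) and (B located_in C) => (A located_in C),
    # with confidence as the product of the two premises.
    for (s1, r1, o1), c1 in facts.items():
        if r1 != "located_in":
            continue
        for (s2, r2, o2), c2 in facts.items():
            if r2 == "located_in" and s2 == o1:
                yield (s1, "located_in", o2), c1 * c2
```

Multiplying premise confidences is one simple policy for scoring chained inferences; it ensures derived facts are never more confident than the evidence they rest on.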

Transparent, Explainable Outputs
Unlike most LLM-based systems, CORTEX provides clear reasoning traces. Answers are grounded in retrieved facts, with chunk-level citations and explicit justifications for inferences and extrapolations. Contradictions between sources are detected and highlighted during the reasoning process.
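A reasoning trace is essentially an answer bundled with the retrieved and inferred facts behind it. A simplified data model (the field names here are illustrative, not our internal schema) might look like:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class TraceStep:
    kind: str                     # "retrieved" or "inferred"
    statement: str                # the fact in natural language
    source_chunk: str | None = None          # chunk id, for retrieved facts
    premises: list[str] = field(default_factory=list)  # for inferred facts

@dataclass
class TracedAnswer:
    answer: str
    steps: list[TraceStep]

    def citations(self) -> list[str]:
        """Chunk-level citations: every retrieved step with a source id."""
        return [s.source_chunk for s in self.steps
                if s.kind == "retrieved" and s.source_chunk]
```

Keeping the trace as structured data, rather than free text, is what lets citations and justifications be rendered alongside the answer instead of being generated after the fact.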

Interactive Visualization
All knowledge is visualized in a directed graph interface, allowing users to explore entity relationships, detect information gaps, and directly edit memory. Conflicts, symbolic inferences, and fact confidence levels are represented visually.
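For readers who want to prototype something similar, a directed graph of facts exports naturally to Graphviz DOT; the 4-tuple shape below, with a trailing confidence score, is just an assumption for this sketch:

```python
def to_dot(triples):
    """Render (subject, relation, object, confidence) facts as Graphviz DOT,
    labeling each edge with its relation and confidence."""
    lines = ["digraph memory {"]
    for s, r, o, conf in triples:
        lines.append(f'  "{s}" -> "{o}" [label="{r} ({conf:.2f})"];')
    lines.append("}")
    return "\n".join(lines)
```

The resulting text can be piped to the standard `dot` tool to produce an image, which is a quick way to eyeball gaps and clusters in an extracted graph.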

Flexible Inference Modes
CORTEX classifies questions into investigative, business, strategic, and general reasoning categories, dynamically adjusting its search and prompting strategies to fit the context of the inquiry.
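In practice the classification is done by the model itself; a purely keyword-based fallback (with made-up keyword lists and retrieval settings) conveys the dispatch idea:

```python
# Per-mode retrieval strategies; the settings are illustrative placeholders.
MODES = {
    "investigative": {"search": "multi_hop", "top_k": 20},
    "business":      {"search": "hybrid",    "top_k": 10},
    "strategic":     {"search": "semantic",  "top_k": 15},
    "general":       {"search": "keyword",   "top_k": 5},
}

# Toy keyword cues; a real classifier would use the LLM, not substrings.
KEYWORDS = {
    "investigative": ("who", "when", "timeline", "evidence"),
    "business":      ("revenue", "market", "customer"),
    "strategic":     ("should", "plan", "risk", "long-term"),
}

def classify(question: str) -> str:
    q = question.lower()
    for mode, words in KEYWORDS.items():
        if any(w in q for w in words):
            return mode
    return "general"

def strategy_for(question: str) -> dict:
    """Pick the search and prompting settings for a question's mode."""
    return MODES[classify(question)]
```

The point of the dispatch table is that question type, not a single global default, decides how wide and how deep retrieval should go.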


Designed for Precision, Built for Expansion

We engineered CORTEX as a testbed for advanced reasoning. Its design balances interpretability with depth, combining the structured rigor of knowledge graphs with the generative flexibility of large language models. All triples are editable in real time, and all reasoning steps are auditable, giving developers and analysts full control over the system’s memory and logic.

This makes CORTEX especially well-suited to domains that demand traceability and trust—such as research workflows, intelligence gathering, and analytical decision support.


What’s Next

CORTEX is already enabling high-quality reasoning over document sets and structured data. The next phase of development is focused on two strategic tracks:

Sherlock – A Case-Solving Reasoning Engine
Sherlock is being built on top of CORTEX for investigative use cases. It introduces scoped memory by case, timeline modeling, motive and hypothesis inference, and automated report generation. Sherlock aims to assist analysts in identifying relationships, inconsistencies, and causality across complex narratives.
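Case scoping is conceptually a namespacing layer over the shared memory graph. A toy version follows; Sherlock's real mechanism is still in development, so treat this interface as hypothetical:

```python
from collections import defaultdict

class CaseMemory:
    """Namespace facts by case so one investigation never leaks into another."""

    def __init__(self):
        # case_id -> {(subject, relation, object): confidence}
        self._cases = defaultdict(dict)

    def add(self, case_id, triple, confidence=1.0):
        self._cases[case_id][triple] = confidence

    def facts(self, case_id):
        # Return a copy so callers can't mutate a case's memory in place.
        return dict(self._cases[case_id])
```

Isolating memory per case matters for investigative work: evidence from one matter must not silently bias reasoning about another.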

CORTEX++ – Toward Artificial-Consciousness-Style Cognition
CORTEX++ explores the cognitive dimension of AI. This includes internal self-talk, recursive reasoning, belief state modeling, and memory evolution over time. Our goal is to investigate what it takes to build a system that not only answers questions but reflects on its own knowledge and decision processes.


CORTEX is more than a tool for reasoning—it’s a step toward transparent, accountable, and adaptive AI. We’re excited to share more as both Sherlock and CORTEX++ continue to develop, and we look forward to releasing technical updates, architecture walkthroughs, and public demos in the coming months.

Stay tuned.

— The Arqion Labs Research Team