Official codebase for ORMind: A Cognitive-Inspired End-to-End Reasoning Framework for Operations Research.
This repository keeps the paper's five-module cognitive backbone:
Semantic Encoder → Formalization Thinking → Executive Compiler → Metacognitive Supervisor → System 2 Reasoner
On top of that backbone, the current implementation adds several research-oriented upgrades that target novelty, robustness, and interview/demo value:
- Adaptive Search Controller: allocates candidate count, repair budget, and deliberation depth based on problem complexity
- Dual-View Formalization: builds both constraint-first and objective-first formulations, then measures their consistency
- Reflective Formalization Synthesis: when ambiguity or low consistency is detected, a third synthesis formalization is generated
- Online Preference Verifier: a lightweight trainable ranker that updates after each run and learns which candidate programs are more reliable
- Experience Distiller: turns successful and failed traces into reusable memory with family-aware retrieval
- Execution-Triggered Repair: combines supervisor revision, runtime feedback, and System 2 diagnosis into a closed repair loop
- Cleaner OpenRouter-native model integration with lower dependency complexity than the earlier LangChain-style prototype
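The dual-view consistency check above can be sketched as a set-overlap measure between the two formulations. This is a minimal illustration, not the repository's implementation: the `formulation` dict shape, the atom encoding, and the use of Jaccard similarity are all assumptions made here for clarity.

```python
def formulation_signature(formulation: dict) -> set:
    """Flatten a formulation into comparable atoms.

    `formulation` is a hypothetical dict such as
    {"vars": [...], "constraints": [...], "objective": "min ..."}.
    """
    atoms = {f"var:{v}" for v in formulation["vars"]}
    atoms |= {f"con:{c}" for c in formulation["constraints"]}
    atoms.add(f"obj:{formulation['objective']}")
    return atoms


def view_consistency(constraint_first: dict, objective_first: dict) -> float:
    """Jaccard similarity between the two views' signatures; 1.0 means full agreement."""
    a = formulation_signature(constraint_first)
    b = formulation_signature(objective_first)
    return len(a & b) / len(a | b) if (a | b) else 1.0
```

A low score under any such measure is what would trigger the third, synthesized formalization described above.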
The runtime still follows the paper's main flow, but now adds an adaptive reflective layer around it:
1. Semantic Encoder extracts entities, parameters, ambiguities, decomposition axes, and problem family.
2. Experience Distiller retrieves family-aware lessons from prior runs and feeds them back into the semantic stage.
3. Formalization Thinking produces one or more optimization formulations under different reasoning lenses.
4. Adaptive Search Controller decides whether to spend extra budget on dual-view modeling, synthesis, and repair.
5. Executive Compiler generates multiple candidate programs with distinct implementation styles.
6. Online Preference Verifier scores candidates with uncertainty-aware ranking.
7. Metacognitive Supervisor selects and normalizes the best candidate, then revises code after failures.
8. System 2 Reasoner diagnoses runtime errors and performs counterfactual consistency checks.
9. Experience Distiller writes the final lessons back into memory, creating a lightweight self-improving loop.
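The Adaptive Search Controller step can be sketched as a simple complexity-to-budget mapping. The thresholds and budget values below are illustrative assumptions, not the repository's actual configuration:

```python
from dataclasses import dataclass


@dataclass
class SearchBudget:
    num_candidates: int      # candidate programs the Executive Compiler emits
    max_repair_rounds: int   # closed-loop repair attempts after execution failures
    deliberation_depth: int  # how many formalization views to build (1 = single view)


def allocate_budget(complexity: float) -> SearchBudget:
    """Map a normalized complexity score in [0, 1] to a search budget.

    Thresholds here are hypothetical placeholders.
    """
    if complexity < 0.3:   # simple problems: single view, minimal repair
        return SearchBudget(num_candidates=2, max_repair_rounds=1, deliberation_depth=1)
    if complexity < 0.7:   # moderate problems: enable dual-view modeling
        return SearchBudget(num_candidates=3, max_repair_rounds=2, deliberation_depth=2)
    # hard or ambiguous problems: full synthesis and repair budget
    return SearchBudget(num_candidates=4, max_repair_rounds=3, deliberation_depth=3)
```

Downstream stages then read the budget instead of using fixed constants, which is what lets the pipeline spend more deliberation only where it is likely to pay off.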
The algorithm figure is provided in:
docs/ormind_algorithm_flow.excalidraw
The main runtime entry points are:
- main.py
- agent_team/ormind_pipeline.py
git clone https://github.com/XiaoAI1989/ORMind.git
cd ORMind
pip install -r requirements.txt

Create env.local in the project root. The runtime loads it automatically.
Example:
OPENROUTER_API_KEY=your_key_here
OPENROUTER_BASE_URL=https://openrouter.ai/api/v1
ORMIND_MODEL=deepseek/deepseek-chat-v3.1
ORMIND_FALLBACK_MODEL=openai/gpt-5-nano

For LPWP:
python run_exp.py

For ComplexOR:
python run_exp_ComplexOR.py

Useful options:
python run_exp.py --problem prob_12 --max_repair_rounds 3
python run_exp_ComplexOR.py --problem steel3
python run_exp.py --model openai/gpt-5-nano
python run_exp.py --problem prob_12 --num_candidates 4

Core dependencies:
openai>=1.30.0
numpy>=1.24.0
tqdm>=4.66.0
pulp==2.8.0
gurobipy==10.0.2
pypdf>=5.4.0
No heavyweight training stack is required for the current self-improving components. The trainable preference verifier updates online from execution outcomes and is persisted as a small JSON weight file.
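The JSON-persisted online verifier described above can be sketched as a logistic ranker over a few candidate features, updated by one SGD step per execution outcome. The feature names, learning rate, and file layout are assumptions for illustration; only "linear model, online updates, small JSON weight file" comes from the text:

```python
import json
import math
from pathlib import Path


class OnlinePreferenceVerifier:
    """Minimal sketch of a trainable candidate ranker persisted as JSON."""

    FEATURES = ["num_constraints", "uses_big_m", "code_length"]  # hypothetical features

    def __init__(self, weight_path: str = "verifier_weights.json"):
        self.path = Path(weight_path)
        if self.path.exists():
            self.weights = json.loads(self.path.read_text())
        else:
            self.weights = {f: 0.0 for f in self.FEATURES}

    def score(self, features: dict) -> float:
        """Estimated probability that a candidate program is reliable."""
        z = sum(self.weights.get(k, 0.0) * v for k, v in features.items())
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, features: dict, succeeded: bool, lr: float = 0.1) -> None:
        """One SGD step on the logistic loss, then persist the weights."""
        err = (1.0 if succeeded else 0.0) - self.score(features)
        for k, v in features.items():
            self.weights[k] = self.weights.get(k, 0.0) + lr * err * v
        self.path.write_text(json.dumps(self.weights))
```

Because the whole state is one small JSON file, the verifier survives between runs with no training framework, checkpointing library, or GPU involved.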
This repo includes:
- LPWP
- ComplexOR
The experiment scripts preserve the original dataset readers and evaluation interface.
The following modules are kept mainly for historical comparison and ablation reproducibility, not for the default pipeline:
- Conductor
- Terminology Interpreter
- Code Reviewer
This matches the paper's own ablation conclusion that these additions did not improve the final system.
@inproceedings{wang-etal-2025-ormind,
title = "{ORM}ind: A Cognitive-Inspired End-to-End Reasoning Framework for Operations Research",
author = "Wang, Zhiyuan and
Chen, Bokui and
Huang, Yinya and
Cao, Qingxing and
He, Ming and
Fan, Jianping and
Liang, Xiaodan",
booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.acl-industry.10/",
doi = "10.18653/v1/2025.acl-industry.10",
pages = "104--131"
}