
ORMind


Official codebase for ORMind: A Cognitive-Inspired End-to-End Reasoning Framework for Operations Research.

Read the paper

What This Repo Implements

This repository keeps the paper's five-module cognitive backbone:

  • Semantic Encoder
  • Formalization Thinking
  • Executive Compiler
  • Metacognitive Supervisor
  • System 2 Reasoner

On top of that backbone, the current implementation adds several research-oriented upgrades designed to improve novelty, robustness, and demonstration value:

  • Adaptive Search Controller: allocates candidate count, repair budget, and deliberation depth based on problem complexity
  • Dual-View Formalization: builds both constraint-first and objective-first formulations, then measures their consistency
  • Reflective Formalization Synthesis: when ambiguity or low consistency is detected, a third synthesis formalization is generated
  • Online Preference Verifier: a lightweight trainable ranker that updates after each run and learns which candidate programs are more reliable
  • Experience Distiller: turns successful and failed traces into reusable memory with family-aware retrieval
  • Execution-Triggered Repair: combines supervisor revision, runtime feedback, and System 2 diagnosis into a closed repair loop
  • Cleaner OpenRouter-native model integration with lower dependency complexity than the earlier LangChain-style prototype
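To make the Adaptive Search Controller concrete, here is a minimal sketch of complexity-based budget allocation. The function name, `SearchBudget` fields, complexity score, and thresholds are all illustrative assumptions, not the repo's actual API:

```python
from dataclasses import dataclass

@dataclass
class SearchBudget:
    num_candidates: int   # candidate programs to generate
    repair_rounds: int    # execution-triggered repair budget
    use_dual_view: bool   # spend extra budget on dual-view modeling

def allocate_budget(num_constraints: int, num_ambiguities: int) -> SearchBudget:
    """Map a crude complexity score to candidate, repair, and deliberation budget."""
    complexity = num_constraints + 2 * num_ambiguities  # ambiguity weighted higher
    if complexity < 5:    # simple problem: single pass, no dual-view modeling
        return SearchBudget(num_candidates=1, repair_rounds=1, use_dual_view=False)
    if complexity < 15:   # moderate: small candidate pool plus dual-view check
        return SearchBudget(num_candidates=2, repair_rounds=2, use_dual_view=True)
    return SearchBudget(num_candidates=4, repair_rounds=3, use_dual_view=True)
```

The actual controller also sets deliberation depth; this sketch only shows the shape of the complexity-to-budget mapping.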

Architecture

The runtime still follows the paper's main flow, but now adds an adaptive reflective layer around it:

  1. Semantic Encoder extracts entities, parameters, ambiguities, decomposition axes, and problem family.
  2. Experience Distiller retrieves family-aware lessons from prior runs and feeds them back into the semantic stage.
  3. Formalization Thinking produces one or more optimization formulations under different reasoning lenses.
  4. Adaptive Search Controller decides whether to spend extra budget on dual-view modeling, synthesis, and repair.
  5. Executive Compiler generates multiple candidate programs with distinct implementation styles.
  6. Online Preference Verifier scores candidates with uncertainty-aware ranking.
  7. Metacognitive Supervisor selects and normalizes the best candidate, then revises code after failures.
  8. System 2 Reasoner diagnoses runtime errors and performs counterfactual consistency checks.
  9. Experience Distiller writes the final lessons back into memory, creating a lightweight self-improving loop.
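Steps 5 through 8 form the execution-triggered repair loop, which can be summarized as a closed cycle over generate, execute, and repair. The sketch below uses hypothetical callables and is not the actual interface in agent_team/ormind_pipeline.py:

```python
def run_with_repair(generate, execute, repair, max_repair_rounds=3):
    """Closed repair loop: execute a candidate program and feed runtime
    feedback back into revision until it succeeds or the budget runs out."""
    code = generate()
    feedback = None
    for round_idx in range(max_repair_rounds + 1):
        ok, feedback = execute(code)      # run the candidate program
        if ok:
            return code, feedback         # success: return working code
        if round_idx == max_repair_rounds:
            break                         # repair budget exhausted
        code = repair(code, feedback)     # revise using runtime diagnosis
    return None, feedback
```

In the full pipeline, `repair` would combine supervisor revision with System 2 diagnosis rather than a single call.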

The algorithm figure is provided in:

  • docs/ormind_algorithm_flow.excalidraw

The main runtime entry points are:

  • main.py
  • agent_team/ormind_pipeline.py

Setup

git clone https://github.com/XiaoAI1989/ORMind.git
cd ORMind
pip install -r requirements.txt

Create env.local in the project root. The runtime loads it automatically.

Example:

OPENROUTER_API_KEY=your_key_here
OPENROUTER_BASE_URL=https://openrouter.ai/api/v1
ORMIND_MODEL=deepseek/deepseek-chat-v3.1
ORMIND_FALLBACK_MODEL=openai/gpt-5-nano

Running Experiments

For LPWP:

python run_exp.py

For ComplexOR:

python run_exp_ComplexOR.py

Useful options:

python run_exp.py --problem prob_12 --max_repair_rounds 3
python run_exp_ComplexOR.py --problem steel3
python run_exp.py --model openai/gpt-5-nano
python run_exp.py --problem prob_12 --num_candidates 4

Dependencies

Core dependencies:

  • openai>=1.30.0
  • numpy>=1.24.0
  • tqdm>=4.66.0
  • pulp==2.8.0
  • gurobipy==10.0.2
  • pypdf>=5.4.0

No heavyweight training stack is required for the current self-improving components. The trainable preference verifier updates online from execution outcomes and is persisted as a small JSON weight file.

Datasets

This repo includes:

  • LPWP
  • ComplexOR

The experiment scripts preserve the original dataset readers and evaluation interface.

Notes On Ablations

The following modules are kept mainly for historical comparison and ablation reproducibility, not for the default pipeline:

  • Conductor
  • Terminology Interpreter
  • Code Reviewer

This matches the paper's own ablation conclusion that these additions did not improve the final system.

Citation

@inproceedings{wang-etal-2025-ormind,
    title = "{ORM}ind: A Cognitive-Inspired End-to-End Reasoning Framework for Operations Research",
    author = "Wang, Zhiyuan and
      Chen, Bokui and
      Huang, Yinya and
      Cao, Qingxing and
      He, Ming and
      Fan, Jianping and
      Liang, Xiaodan",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.acl-industry.10/",
    doi = "10.18653/v1/2025.acl-industry.10",
    pages = "104--131"
}

About

Official implementation of "ORMind: A Cognitive-Inspired End-to-End Reasoning Framework for Operations Research" (ACL 2025 Industry Track).
