
# 🧠 Chain of Mindset: Reasoning with Adaptive Cognitive Modes

Paper License Python 3.9+

If you like our project, please give us a star ⭐ on GitHub to follow the latest updates.

## 📣 Latest News

- 02/11/2026: 🎉 The code for Chain-of-Mindset has been released! You can now apply CoM to enhance your LLM reasoning.

## 💡 Overview

Human problem-solving is never the repetition of a single mindset: when tackling a complex task, we do not rely on a single cognitive mode, but dynamically switch between mindsets as the problem state evolves. Existing LLM reasoning methods, however, fall into a common trap: they apply the same fixed mindset at every step, overlooking that different stages of the same problem call for fundamentally different cognitive approaches.

## ✨ The Meta-Cognition Gap

Humans make cognitive decisions in milliseconds, unconsciously switching between calculation, visualization, exploration, and focused analysis. LLMs cannot do this implicitly. Chain-of-Mindset (CoM) bridges this gap by providing an explicit framework for step-level adaptive mindset orchestration.

## ✨ Key Innovation

Unlike previous methods that are limited to a single mindset or select a strategy only at task onset, CoM enables dynamic, state-dependent cognitive switching: it recognizes when to transition between mindsets based on the progress of reasoning.

**Framework Architecture:**

- **Meta-Agent**: Operates as a meta-cognitive orchestrator, iteratively generating cognitive decisions, dispatching subtasks to specialized mindsets, and internalizing key insights.
- **Four Heterogeneous Mindsets**: Divergent, Algorithmic, Convergent, and Spatial, each providing distinct cognitive capabilities.
- **Bidirectional Context Gate**: Mediates information flow between modules, filtering relevant history for mindset execution and distilling verbose traces into concise results.
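The decide → dispatch → distill cycle described above can be sketched in miniature. All names here are hypothetical and the gate/distill logic is deliberately trivial; the actual implementation lives in `core/orchestrator.py` and `core/gate.py`:

```python
# Hypothetical sketch of the CoM orchestration loop (not the actual API).
from typing import Callable, Dict, List

def context_gate(history: List[str], limit: int = 3) -> List[str]:
    """Forward pass of the gate: keep only the most recent insights."""
    return history[-limit:]

def distill(trace: str) -> str:
    """Backward pass of the gate: compress a verbose mindset trace
    into a concise result (here, simply its final line)."""
    return trace.splitlines()[-1]

def orchestrate(task: str,
                decide: Callable[[str, List[str]], str],
                mindsets: Dict[str, Callable[[str, List[str]], str]],
                max_steps: int = 10) -> List[str]:
    """Meta-agent loop: decide a mindset, dispatch the subtask,
    internalize the distilled insight, repeat until done."""
    history: List[str] = []
    for _ in range(max_steps):
        choice = decide(task, history)      # meta-cognitive decision
        if choice == "finish":
            break
        trace = mindsets[choice](task, context_gate(history))
        history.append(distill(trace))      # internalize the key insight
    return history
```

Here `decide` stands in for the meta-agent's LLM call and each entry of `mindsets` for one specialized mindset module; the sketch only illustrates the control flow, not the prompting.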

## 🧠 The Four Cognitive Mindsets

| Mindset | Trigger | Cognitive Shift | When to Use |
|---------|---------|-----------------|-------------|
| 💻 Algorithmic | `<call_algorithmic>` | Estimation → Precise Verification | Hypothesis needs objective verification through code execution |
| 🖼️ Spatial | `<call_spatial>` | Verbal → Visual-Spatial Representation | Problem has geometric structure or benefits from visualization |
| 🌳 Divergent | `<call_divergent>` | Convergent → Parallel Exploration | Uncertain which approach is correct; need to explore multiple paths |
| 🔍 Convergent | `<call_convergent>` | Scattered → Deep Focused Analysis | Need to reason deeply through one specific logical thread |

## 🔧 Installation

### 1. Environment Setup

```bash
# Clone the repository
git clone https://github.com/QuantaAlpha/chain-of-mindset.git
cd chain-of-mindset

# Create conda environment
conda create -n com python=3.9
conda activate com

# Install requirements
pip install -r requirements.txt
```

### 2. Configure API Keys

Edit the config files in the `configs/` directory:

```bash
cd configs

# For API mode (OpenAI, Azure, OpenRouter, etc.)
# Edit: meta_llm_config_api.json, mindset_llm_config_api.json, gate_config_api.json

# For Local mode (vLLM, Ollama, etc.)
# Edit: meta_llm_config_local.json, mindset_llm_config_local.json, gate_config_local.json
```

See `configs/README.md` for detailed configuration options.

## 🚀 Quick Start

### API Mode (OpenAI, Azure, OpenRouter, etc.)

```bash
# Run with default query
python main_api.py

# Run with custom query
python main_api.py --query "A train leaves Station A at 80 km/h. Two hours later, another train leaves at 120 km/h in the same direction. When does the second train catch up?"

# With image input
python main_api.py --query "Analyze this geometry problem" --images path/to/diagram.png
```

### Local Mode (vLLM, Ollama, etc.)

```bash
# Run with default query
python main_local.py

# Run with custom query
python main_local.py --query "Your question here"
```

### Without Spatial Mindset (Text-only)

For scenarios where image generation is not needed:

```bash
# API mode without Spatial mindset
python main_without_spatial.py --mode api --query "Your question here"

# Local mode without Spatial mindset
python main_without_spatial.py --mode local --query "Your question here"
```

### Parameters

| Parameter | Description | Default |
|-----------|-------------|---------|
| `--query` | The question to solve | Default test query |
| `--images` | Path to input images (optional) | None |
| `--meta_llm_conf` | Path to meta LLM config | `configs/meta_llm_config_*.json` |
| `--mindset_llm_conf` | Path to mindset LLM config | `configs/mindset_llm_config_*.json` |
| `--gate_conf` | Path to gate config | `configs/gate_config_*.json` |
| `--img_conf` | Path to image generation config | `configs/api_config_image.json` |

## 📤 Output

All reasoning traces and generated images are saved to the `workspace/` directory, organized by session.

οΏ½πŸ“ Project Structure

```
chain-of-mindset/
├── main_api.py              # API mode entry point
├── main_local.py            # Local mode entry point
├── main_without_spatial.py  # Entry point without Spatial mindset
├── config.py                # Configuration management
│
├── core/
│   ├── orchestrator.py      # Meta-cognitive orchestrator
│   ├── llm_client.py        # LLM client wrapper
│   ├── gate.py              # Bidirectional context gate
│   ├── sandbox.py           # Code execution sandbox
│   ├── protocol.py          # Mindset token protocol
│   └── image_client.py      # Image generation client
│
├── paradigms/
│   ├── base.py              # Base paradigm class
│   ├── registry.py          # Paradigm registry
│   ├── convergent.py        # Convergent analysis mindset
│   ├── algorithmic/         # Algorithmic (code execution) mindset
│   ├── divergent/           # Divergent exploration mindset
│   └── spatial/             # Spatial visualization mindset
│
├── prompts/
│   └── system.py            # System prompts for meta-agent
│
├── utils/                   # Utility functions
├── configs/                 # Configuration files
└── assets/                  # Figures and images
```

## 💻 Code Execution (Algorithmic Mindset)

The Algorithmic mindset executes Python code through a Generate → Execute → Fix → Retry loop.

Two execution modes:

- 🐳 **Docker Mode**: Auto-detected when Docker is available. Runs code in an isolated container with automatic dependency installation.
- 💻 **Local Mode**: Fallback when Docker is unavailable. Runs code in a subprocess with basic security checks (no auto-install).

```bash
# Docker mode (recommended) - ensure Docker is running
docker --version
```
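As a rough illustration of local-mode execution, the Generate → Execute → Fix → Retry loop could be sketched like this. The helper names are hypothetical and the sketch omits the security checks; the real sandbox lives in `core/sandbox.py`:

```python
import subprocess
import sys
from typing import Callable, Tuple

def run_locally(code: str, timeout: int = 10) -> Tuple[bool, str]:
    """Execute a code string in a fresh subprocess; return (ok, output)."""
    proc = subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True, timeout=timeout)
    ok = proc.returncode == 0
    return ok, proc.stdout if ok else proc.stderr

def generate_execute_fix(generate: Callable[[str], str],
                         fix: Callable[[str, str], str],
                         subtask: str, max_retries: int = 3) -> str:
    """Generate code for a subtask, execute it, and feed any error
    back for repair until it succeeds or the retry budget is spent."""
    code = generate(subtask)
    for _ in range(max_retries):
        ok, output = run_locally(code)
        if ok:
            return output
        code = fix(code, output)  # ask the model to repair the code
    raise RuntimeError("exceeded retry budget")
```

Here `generate` and `fix` stand in for LLM calls; in the actual framework the error message is fed back to the Algorithmic mindset rather than to a plain callback.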

## 📄 Citation

If you find this work useful, please cite our paper:

```bibtex
@article{jiang2026chain,
  title   = {Chain of Mindset: Reasoning with Adaptive Cognitive Modes},
  author  = {Tianyi Jiang and Arctanx An and Hengyi Feng and Naixin Zhai and Haodong Li and Xiaomin Yu and Jiahui Liu and Hanwen Du and Shuo Zhang and Zhi Yang and Jie Huang and Yuhua Li and Yongxin Ni and Huacan Wang and Ronghao Chen},
  journal = {arXiv preprint arXiv:2602.10063},
  year    = {2026}
}
```

## 📄 License

This project is released under the MIT License.

## 📬 Contact

For any questions or feedback, please open an issue or reach out to us at tianyijiang0219@gmail.com.

πŸ™ Acknowledgments

- Inspired by cognitive science research on working memory and executive function
- The Algorithmic mindset implementation is based on Chain of Code (Li et al., 2023)
- Built on the OpenAI API specification for broad compatibility
- Thanks to the open-source community for foundational tools and models
