
# Molq: Molcrafts Queue Interface

Tests PyPI version Python 3.10+ License: MIT

Molq is a unified and flexible job queue system designed for both local execution and cluster computing environments. It provides a clean, decorator-based API that makes it easy to submit, monitor, and manage computational tasks across different execution backends.

## ✨ Key Features

  • 🎯 Unified Interface: Single API for local and cluster execution
  • 🐍 Decorator-Based: Simple, Pythonic syntax using decorators
  • ⚡ Generator Support: Advanced control flow with generator-based tasks
  • 🔌 Multiple Backends: Support for local execution, SLURM clusters, and more
  • 📊 Job Monitoring: Built-in status tracking and error handling
  • 💾 Resource Management: Flexible resource allocation and cleanup
  • 🔄 Job Dependencies: Chain jobs and manage complex workflows
  • 📧 Notifications: Email alerts for job status changes

## 🚀 Quick Start

### Installation

```bash
pip install molq
```

### Basic Usage

```python
from molq import submit

# Create submitters for different environments
local = submit('dev', 'local')           # Local execution
cluster = submit('hpc', 'slurm')         # SLURM cluster

@local
def hello_world(name: str):
    """A simple local job."""
    job_id = yield {
        'cmd': ['echo', f'Hello, {name}!'],
        'job_name': 'greeting'
    }
    return job_id

@cluster
def train_model():
    """A GPU training job on the cluster."""
    job_id = yield {
        'cmd': ['python', 'train.py'],
        'cpus': 16,
        'memory': '64GB',
        'time': '08:00:00',
        'gpus': 2,
        'partition': 'gpu'
    }
    return job_id

# Run jobs
hello_world("Molq")
job_id = train_model()
```
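To see how `yield` can hand back a job id, here is a minimal stand-in for a submitter using only the standard library. `local_submit` and its incrementing job ids are illustrative, not molq's actual internals:

```python
import subprocess

def local_submit(func):
    """Minimal stand-in for a molq-style submitter: run each yielded
    job spec, then resume the generator with a job id."""
    def wrapper(*args, **kwargs):
        gen = func(*args, **kwargs)
        next_id = 0
        try:
            spec = next(gen)                 # first yielded job spec
            while True:
                subprocess.run(spec['cmd'], check=True)
                next_id += 1
                spec = gen.send(next_id)     # job id flows back to `yield`
        except StopIteration as stop:
            return stop.value                # the generator's return value
    return wrapper

@local_submit
def hello_world(name: str):
    job_id = yield {'cmd': ['echo', f'Hello, {name}!']}
    return job_id

print(hello_world('Molq'))  # 1
```

The key mechanic is `gen.send(next_id)`: it resumes the generator at the `yield`, making the job id its value, and `StopIteration.value` carries the generator's `return` back to the caller.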

### Command Line Integration

```python
from molq import cmdline

@cmdline
def get_system_info():
    """Execute command and capture output."""
    result = yield {'cmd': ['uname', '-a']}
    return result.stdout.decode().strip()

system_info = get_system_info()
print(system_info)
```

## 📖 Documentation

## 🎯 Supported Backends

| Backend | Description | Status |
| --- | --- | --- |
| Local | Local machine execution | ✅ Full support |
| SLURM | HPC cluster scheduler | ✅ Full support |
| PBS/Torque | Legacy cluster scheduler | 🚧 Basic support |
| LSF | IBM cluster scheduler | 🚧 Basic support |

## 🔧 Advanced Features

### Multi-Step Workflows

```python
@cluster
def analysis_pipeline():
    # Step 1: Preprocessing
    prep_job = yield {
        'cmd': ['python', 'preprocess.py'],
        'cpus': 8, 'memory': '32GB', 'time': '02:00:00'
    }

    # Step 2: Analysis (runs after preprocessing completes)
    analysis_job = yield {
        'cmd': ['python', 'analyze.py'],
        'cpus': 16, 'memory': '64GB', 'time': '08:00:00',
        'dependency': prep_job  # wait for preprocessing
    }

    return [prep_job, analysis_job]
```
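Because each `yield` only returns once its job has been submitted, dependencies on earlier job ids compose naturally. A hedged sketch of the idea, where `pipeline_driver` is an illustrative sequential driver rather than molq's API:

```python
def pipeline_driver(func):
    """Illustrative sequential driver: each yielded spec completes
    before the next is yielded, so a 'dependency' on an earlier
    job id is always already satisfied."""
    def wrapper(*args, **kwargs):
        gen = func(*args, **kwargs)
        completed = []
        try:
            spec = next(gen)
            while True:
                dep = spec.get('dependency')
                if dep is not None and dep not in completed:
                    raise RuntimeError(f'job {dep} has not finished')
                job_id = len(completed) + 1   # pretend the job ran here
                completed.append(job_id)
                spec = gen.send(job_id)
        except StopIteration as stop:
            return stop.value
    return wrapper

@pipeline_driver
def analysis_pipeline():
    prep_job = yield {'cmd': ['python', 'preprocess.py']}
    analysis_job = yield {'cmd': ['python', 'analyze.py'],
                          'dependency': prep_job}
    return [prep_job, analysis_job]

print(analysis_pipeline())  # [1, 2]
```

On a real scheduler the dependency would instead be translated into something like SLURM's `--dependency=afterok:<id>`, letting the cluster enforce the ordering.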

### Error Handling

```python
@cluster
def robust_job():
    try:
        # parentheses are required: `return yield ...` is a syntax error
        return (yield {'cmd': ['python', 'risky_script.py']})
    except Exception:
        # Fall back to a safer approach
        return (yield {'cmd': ['python', 'safe_script.py']})
```
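This pattern works because a submitter can raise a job failure *inside* the generator with `gen.throw`, where the `try/except` catches it. A simplified sketch; the `driver` decorator, the `fail` flag, and the `'risky'`/`'safe'` commands are made up for illustration:

```python
def driver(func):
    """Simplified driver: a failed job is thrown back into the
    generator, so its own try/except can submit a fallback."""
    def wrapper(*args, **kwargs):
        gen = func(*args, **kwargs)
        try:
            spec = next(gen)
            while True:
                try:
                    if spec.get('fail'):          # simulate a job failure
                        raise RuntimeError('job failed')
                    spec = gen.send('ok:' + spec['cmd'])
                except RuntimeError as exc:
                    spec = gen.throw(exc)         # surfaces at the `yield`
        except StopIteration as stop:
            return stop.value
    return wrapper

@driver
def robust_job():
    try:
        result = yield {'cmd': 'risky', 'fail': True}
    except RuntimeError:
        result = yield {'cmd': 'safe'}            # fallback path
    return result

print(robust_job())  # ok:safe
```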

## 🤝 Contributing

We welcome contributions! Please see our Contributing Guide for details.

## 📝 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 🙏 Acknowledgments

  • Inspired by Hamilton for dataflow patterns
  • Built for the scientific computing and HPC community
