Molq is a unified and flexible job queue system designed for both local execution and cluster computing environments. It provides a clean, decorator-based API that makes it easy to submit, monitor, and manage computational tasks across different execution backends.
- 🎯 Unified Interface: Single API for local and cluster execution
- 🐍 Decorator-Based: Simple, Pythonic syntax using decorators
- ⚡ Generator Support: Advanced control flow with generator-based tasks
- 🔌 Multiple Backends: Support for local execution, SLURM clusters, and more
- 📊 Job Monitoring: Built-in status tracking and error handling
- 💾 Resource Management: Flexible resource allocation and cleanup
- 🔄 Job Dependencies: Chain jobs and manage complex workflows
- 📧 Notifications: Email alerts for job status changes
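The generator-based control flow above can be sketched in plain Python: a decorator drives the task generator, submitting each yielded job spec and sending the resulting job id back into the `yield` expression. This is an illustrative mock of the pattern, not Molq's actual implementation; `fake_submit`, `submitter`, and the driver loop are assumptions for the sketch.

```python
# Illustrative sketch of the generator-driving pattern (not Molq's real code):
# the decorator consumes each yielded job spec, "submits" it, and sends the
# resulting job id back into the generator at the yield expression.
import functools
import itertools

_ids = itertools.count(1)

def fake_submit(spec):
    """Stand-in for a real backend: returns a fresh job id."""
    return f"job-{next(_ids)}"

def submitter(gen_func):
    @functools.wraps(gen_func)
    def wrapper(*args, **kwargs):
        gen = gen_func(*args, **kwargs)
        result = None
        try:
            spec = next(gen)             # run until the first yield
            while True:
                job_id = fake_submit(spec)
                spec = gen.send(job_id)  # resume the generator with the job id
        except StopIteration as stop:
            result = stop.value          # the generator's return value
        return result
    return wrapper

@submitter
def hello(name):
    job_id = yield {'cmd': ['echo', f'Hello, {name}!']}
    return job_id

print(hello('Molq'))  # job-1
```

Because the job id flows back through `yield`, a task function can submit several jobs in sequence and use earlier ids in later specs, which is what enables dependency chaining.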
Install from PyPI:

```bash
pip install molq
```

Then define jobs with the decorator API:

```python
from molq import submit

# Create submitters for different environments
local = submit('dev', 'local')    # Local execution
cluster = submit('hpc', 'slurm')  # SLURM cluster

@local
def hello_world(name: str):
    """A simple local job."""
    job_id = yield {
        'cmd': ['echo', f'Hello, {name}!'],
        'job_name': 'greeting'
    }
    return job_id

@cluster
def train_model():
    """A GPU training job on the cluster."""
    job_id = yield {
        'cmd': ['python', 'train.py'],
        'cpus': 16,
        'memory': '64GB',
        'time': '08:00:00',
        'gpus': 2,
        'partition': 'gpu'
    }
    return job_id

# Run jobs
hello_world("Molq")
job_id = train_model()
```

Run shell commands and capture their output with `cmdline`:

```python
from molq import cmdline

@cmdline
def get_system_info():
    """Execute a command and capture its output."""
    result = yield {'cmd': ['uname', '-a']}
    return result.stdout.decode().strip()

system_info = get_system_info()
print(system_info)
```

- Tutorial - Step-by-step guide
- API Reference - Complete API documentation
- Recipes - Real-world examples
- Examples - Practical code examples
| Backend | Description | Status |
|---|---|---|
| Local | Local machine execution | ✅ Full support |
| SLURM | HPC cluster scheduler | ✅ Full support |
| PBS/Torque | Legacy cluster scheduler | 🚧 Basic support |
| LSF | IBM cluster scheduler | 🚧 Basic support |
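Selecting a backend by its string name, as `submit('hpc', 'slurm')` does, is commonly implemented with a registry that maps names to backend classes. The sketch below is a hypothetical illustration of that pattern; the class names and registry are assumptions, not Molq's actual internals.

```python
# Hypothetical sketch of string-keyed backend selection
# (class names and registry layout are assumptions, not Molq internals).
class LocalBackend:
    def submit(self, spec):
        return f"local:{spec['cmd'][0]}"

class SlurmBackend:
    def submit(self, spec):
        return f"slurm:{spec['cmd'][0]}"

BACKENDS = {
    'local': LocalBackend,
    'slurm': SlurmBackend,
}

def get_backend(name):
    try:
        return BACKENDS[name]()
    except KeyError:
        raise ValueError(f"unknown backend: {name!r}") from None

backend = get_backend('slurm')
print(backend.submit({'cmd': ['python', 'train.py']}))  # slurm:python
```

A registry like this keeps the public API stable while new schedulers are added: supporting another backend only means registering another class under a new name.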
Chain multi-step pipelines by passing earlier job ids as dependencies:

```python
@cluster
def analysis_pipeline():
    # Step 1: Preprocessing
    prep_job = yield {
        'cmd': ['python', 'preprocess.py'],
        'cpus': 8, 'memory': '32GB', 'time': '02:00:00'
    }

    # Step 2: Analysis (depends on preprocessing)
    analysis_job = yield {
        'cmd': ['python', 'analyze.py'],
        'cpus': 16, 'memory': '64GB', 'time': '08:00:00',
        'dependency': prep_job  # Wait for preprocessing
    }

    return [prep_job, analysis_job]
```

Handle failures with an ordinary `try`/`except` around the `yield`:

```python
@cluster
def robust_job():
    try:
        return (yield {'cmd': ['python', 'risky_script.py']})
    except Exception:
        # Fall back to a safer approach
        return (yield {'cmd': ['python', 'safe_script.py']})
```

We welcome contributions! Please see our Contributing Guide for details.
This project is licensed under the MIT License - see the LICENSE file for details.
- Inspired by Hamilton for dataflow patterns
- Built for the scientific computing and HPC community