# Schedule Enhancer: Reinforcement Learning Task Optimizer
An intelligent daily planning assistant that leverages Deep Q-Learning (DQN) to optimize task scheduling decisions based on contextual user inputs and feedback. Unlike rule-based planners, this system learns dynamically from interactions and improves its scheduling strategy over time.
## Project Vision
Modern productivity tools rely on static prioritization rules. Schedule Enhancer introduces a reinforcement learning framework that adapts to user behavior and continuously refines task timing recommendations.
The system evaluates each task using contextual signals and suggests whether it should be:
- Scheduled Earlier
- Kept at Same Time
- Scheduled Later
The model evolves through user feedback.
## Core Architecture

```
User Input (Task Parameters)
        ↓
State Vector Construction
        ↓
DQN Agent (Policy Network)
        ↓
Action Selection (Earlier / Same / Later)
        ↓
User Feedback (Reward Signal)
        ↓
Experience Logging
        ↓
Policy Improvement
```
## State Representation
Each task is converted into a structured state vector including:
- Task priority
- Time until deadline
- Estimated duration
- Current energy level
This numerical representation enables the DQN agent to learn scheduling policies.
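The exact feature encoding is not specified in this README; a minimal sketch of one plausible state vector construction (the normalization ranges and the `build_state` helper are assumptions, not taken from the project) could look like:

```python
def build_state(priority, hours_to_deadline, duration_hours, energy):
    """Encode task parameters as a state vector with all features in [0, 1].

    Assumed ranges (illustrative only): priority 1-5, deadline within 72 h,
    duration up to 8 h, energy level 1-10.
    """
    return [
        (priority - 1) / 4,               # task priority
        min(hours_to_deadline, 72) / 72,  # time until deadline
        min(duration_hours, 8) / 8,       # estimated duration
        (energy - 1) / 9,                 # current energy level
    ]

state = build_state(priority=4, hours_to_deadline=24, duration_hours=2, energy=7)
```

Normalizing every feature to a common scale keeps any one signal (e.g. raw hours) from dominating the network's inputs.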
## Reinforcement Learning Model

**Algorithm:** Deep Q-Network (DQN)
- A neural network approximates Q-values
- Learns optimal scheduling actions
- Uses an exploration–exploitation strategy (ε-greedy)
- Stores interactions for learning
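The architecture inside `dqn_model.py` is not shown here; a minimal NumPy sketch of a Q-network forward pass (the layer sizes and weight initialization are assumptions) illustrates the value-approximation idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: 4 state features in, 16 hidden units, 3 Q-values out.
W1, b1 = rng.normal(size=(4, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.normal(size=(16, 3)) * 0.1, np.zeros(3)

def q_values(state):
    """Forward pass: state vector -> one Q-value per scheduling action."""
    h = np.maximum(0.0, state @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2                    # linear output: Q(s, a) for each a

q = q_values(np.array([0.75, 0.33, 0.25, 0.66]))  # one value per action
```

In practice the project would train these weights (e.g. with a deep learning framework) by regressing toward the Bellman target; the sketch only shows the shape of the computation.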
## Action Space

| Action | Meaning       |
|--------|---------------|
| 0      | Shift Earlier |
| 1      | Keep Same     |
| 2      | Shift Later   |
The agent selects the action with the highest predicted Q-value.
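Combined with the ε-greedy strategy mentioned above, selection is simple; a standard-library sketch (the `q_values` argument here is a stand-in for the network's output):

```python
import random

def select_action(q_values, epsilon):
    """With probability epsilon, explore with a random action; otherwise
    exploit by taking the action with the highest predicted Q-value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                    # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])   # exploit

# 0 = Shift Earlier, 1 = Keep Same, 2 = Shift Later
action = select_action([0.2, 0.9, 0.1], epsilon=0.1)
```

Decaying ε over training shifts the agent from exploration toward exploitation as its Q-estimates improve.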
## Interactive Web Interface (Streamlit)
The application includes a clean UI built using Streamlit.
Features:

- Task input form
- Real-time suggestion display
- Agent confidence (epsilon) visibility
- Accept / Reject feedback buttons
- Continuous logging of decisions

Runs locally in the browser (Streamlit's default address is `http://localhost:8501`).
## Learning Mechanism
User feedback acts as the reward signal:
- Accept → positive reward
- Reject → negative reward

All interactions are stored in `logs/interactions.csv`, which enables:

- Policy improvement
- Performance analysis
- Behavior tracking
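The log's column layout is not specified beyond the file path; a sketch of how feedback could be mapped to rewards and appended to `logs/interactions.csv` (the field names and ±1 reward values are assumptions):

```python
import csv
import os
from datetime import datetime, timezone

LOG_PATH = "logs/interactions.csv"
FIELDS = ["timestamp", "state", "action", "reward"]

def log_interaction(state, action, accepted, path=LOG_PATH):
    """Map Accept/Reject to a +1/-1 reward and append one row to the log."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()  # write the header once, on first use
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "state": state,
            "action": action,
            "reward": 1 if accepted else -1,  # Accept -> +1, Reject -> -1
        })
```

Append-only CSV keeps the logging lightweight while preserving the full interaction history for later policy improvement and analysis.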
## Project Structure

```
Schedule_Enhancer/
│
├── trainer.py
├── ui_streamlit.py
├── dqn_model.py
├── logs/
│   └── interactions.csv
├── requirements.txt
└── README.md
```
## Installation

```shell
pip install -r requirements.txt
```

## Running the Project

1. Train the agent:

   ```shell
   python trainer.py
   ```

2. Launch the web app:

   ```shell
   streamlit run ui_streamlit.py
   ```

## Functional Testing Guide
### Test 1: Input Task Parameters
- Enter a task name
- Adjust the sliders (priority, deadline, duration, energy)

Verifies:

- UI rendering
- State vector construction
- Data handling
### Test 2: Generate Suggestion
- Click **Get Suggestion**

Expected:

- A recommended scheduling action
- The current epsilon value displayed
- No runtime errors
Validates:

- Model loading
- Forward pass through the DQN
- Action decoding
### Test 3: Provide Feedback
- Click **Accept Suggestion** or **Reject Suggestion**

Expected:

- A confirmation message
- A new entry in `logs/interactions.csv`
Validates:

- Reward assignment
- Logging system
- CSV persistence
## Technical Highlights
- Deep Reinforcement Learning (DQN)
- Stateful Decision-Making
- ε-Greedy Exploration Strategy
- Real-Time Feedback Loop
- Interactive Web Deployment
- Lightweight Local Logging
## Key Learning Outcomes
- Designing RL-based decision systems
- Translating productivity problems into an MDP formulation
- Building interactive AI systems with Streamlit
- Logging and reward shaping
- Integrating ML models with UI applications
## Potential Applications
- Personal productivity assistants
- AI calendar systems
- Adaptive study planners
- Corporate task optimization tools
- Behavioral scheduling research
## Future Enhancements
- Experience Replay Buffer
- Target Network Stabilization
- Multi-task optimization
- Integration with the Google Calendar API
- Personalized reward modeling
- Cloud deployment
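The experience replay buffer listed among the enhancements can be sketched with the standard library alone (the capacity and batch size are arbitrary choices for illustration):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of (state, action, reward, next_state) transitions.

    Sampling random minibatches breaks the correlation between consecutive
    interactions, which stabilizes DQN training.
    """

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted first

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for i in range(150):
    buf.push([i], i % 3, 1, [i + 1])  # beyond capacity, old items drop off
batch = buf.sample(32)
```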
## Summary

AI Schedule Optimizer using Deep Q-Learning (DQN): a reinforcement learning-based task scheduling assistant that learns optimal timing decisions from user feedback via a Streamlit interface, with ε-greedy exploration and persistent interaction logging.