A comprehensive Python application that automates the job search and application process by scraping job listings from multiple job boards and submitting applications automatically.
- Job Scraping: Scrape job listings from Indeed and Glassdoor
- Smart Filtering: Filter jobs based on keywords, location, salary, and company preferences
- Automated Applications: Submit job applications automatically with customized resumes
- Resume Management: Customize resumes for each job application
- Database Tracking: Store and track all applications in SQLite database
- Statistics: Generate application statistics and track application status
- Logging: Comprehensive logging of all activities
```
job-automation/
├── main.py                  # Main application entry point
├── config/
│   └── config.json          # Configuration file
├── src/
│   ├── config_manager.py    # Configuration management
│   ├── job_scraper.py       # Job scraping module
│   ├── job_applicator.py    # Automated application submission
│   ├── resume_handler.py    # Resume management and customization
│   └── database_manager.py  # Database operations
├── resumes/                 # Your resume files
├── data/
│   └── jobs.db              # SQLite database
├── logs/
│   └── automation.log       # Application logs
└── requirements.txt         # Python dependencies
```
- Python 3.8 or higher
- Chrome/Chromium browser (for Selenium automation)
- ChromeDriver (managed automatically by Selenium 4)
1. Clone or download the project:

   ```bash
   cd job-automation
   ```

2. Install the required dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Configure your settings by editing `config/config.json` with your preferences:
   - Job search keywords and locations
   - Resume path
   - Job filters
   - Application settings

4. Prepare your resume:
   - Place your resume file in the `resumes/` directory
   - Update the `base_resume_path` in `config.json`
Edit `config/config.json` to customize:

```json
"job_search": {
  "keywords": ["python developer", "software engineer"],
  "locations": ["Remote", "New York, NY"],
  "job_sources": ["indeed", "glassdoor"],
  "pages_to_scrape": 5
}
```

```json
"application_settings": {
  "auto_apply": true,
  "max_applications_per_day": 20,
  "delay_between_applications": 3
}
```

```json
"filters": {
  "exclude_keywords": ["contract", "temporary"],
  "required_keywords": ["python", "rest api"],
  "min_salary": 80000,
  "exclude_companies": ["Company X"]
}
```

Run the automation:

```bash
python main.py                               # full run: scrape, filter, apply
python main.py --no-apply                    # scrape and filter only, no applications
python main.py --test                        # test mode for debugging
python main.py --max-apply 10                # cap applications for this run
python main.py --config path/to/config.json  # use an alternate config file
```

Handles job scraping from various sources:
- `scrape_indeed()`: Scrape Indeed.com
- `scrape_glassdoor()`: Scrape Glassdoor.com
- `scrape_all_sources()`: Scrape all configured sources
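The scrapers above typically fetch listing pages and parse them with BeautifulSoup (a listed dependency). A minimal sketch of the parsing step, using hypothetical markup and selectors — real Indeed/Glassdoor pages use different, frequently changing class names:

```python
from bs4 import BeautifulSoup

# Hypothetical listing markup for illustration only.
SAMPLE_HTML = """
<div class="job_card"><h2 class="title">Python Developer</h2>
  <span class="company">Acme Corp</span><span class="location">Remote</span></div>
<div class="job_card"><h2 class="title">Software Engineer</h2>
  <span class="company">Globex</span><span class="location">New York, NY</span></div>
"""

def parse_listings(html):
    """Extract title/company/location dicts from listing cards."""
    soup = BeautifulSoup(html, "html.parser")
    jobs = []
    for card in soup.select("div.job_card"):
        jobs.append({
            "title": card.select_one(".title").get_text(strip=True),
            "company": card.select_one(".company").get_text(strip=True),
            "location": card.select_one(".location").get_text(strip=True),
        })
    return jobs

print(parse_listings(SAMPLE_HTML))
```

In practice each scraper pairs a fetch step (requests or Selenium for JavaScript-rendered pages) with a parser like this one.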
Handles automated job applications:
- `apply_to_job()`: Apply to a single job
- `batch_apply()`: Apply to multiple jobs
- `get_application_summary()`: Get application statistics
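`batch_apply()` presumably loops over jobs while honoring `max_applications_per_day` and `delay_between_applications` from the config. A hedged sketch — the return shape and the injectable `sleep` parameter are assumptions, not the project's actual API:

```python
import time

def batch_apply(jobs, apply_fn, max_applications=20, delay_seconds=3, sleep=time.sleep):
    """Apply to jobs one by one, honoring the daily cap and inter-application delay.

    `apply_fn(job)` returns True on success; `sleep` is injectable for testing.
    """
    results = {"applied": 0, "failed": 0, "skipped": 0}
    for i, job in enumerate(jobs):
        if results["applied"] >= max_applications:
            results["skipped"] += len(jobs) - i  # everything past the cap is skipped
            break
        if apply_fn(job):
            results["applied"] += 1
        else:
            results["failed"] += 1
        sleep(delay_seconds)  # throttle between applications
    return results

summary = batch_apply([{"id": n} for n in range(5)],
                      apply_fn=lambda job: True,
                      max_applications=3, sleep=lambda s: None)
print(summary)  # {'applied': 3, 'failed': 0, 'skipped': 2}
```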
Manages resume operations:
- `load_resume()`: Load a resume from file
- `customize_resume()`: Customize for a specific job
- `save_resume_version()`: Save a customized version
- `extract_keywords()`: Extract keywords from a job description
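`extract_keywords()` could be as simple as frequency counting over non-stopword terms; this sketch is an assumption, not the project's actual implementation:

```python
import re
from collections import Counter

# Tiny illustrative stopword list; a real implementation would use a fuller one.
STOPWORDS = {"and", "or", "the", "a", "an", "to", "of", "in", "with", "for", "we", "you"}

def extract_keywords(job_description, top_n=5):
    """Return the most frequent non-stopword terms in a job description."""
    words = re.findall(r"[a-z]+", job_description.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]

desc = ("We need a Python developer with Python and REST API experience. "
        "REST skills required.")
print(extract_keywords(desc))
```

The extracted terms can then feed `customize_resume()`, e.g. to reorder skill bullets toward what the posting emphasizes.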
Database operations:
- `add_job()`: Add a job to the database
- `add_application()`: Record an application
- `update_application_status()`: Update an application's status
- `get_application_statistics()`: Get statistics
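A minimal sketch of what the schema and `add_job()` might look like; the real column set in `data/jobs.db` may differ:

```python
import sqlite3

# Illustrative schema only; the actual tables may have more columns.
SCHEMA = """
CREATE TABLE IF NOT EXISTS jobs (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    title TEXT NOT NULL,
    company TEXT NOT NULL,
    source TEXT NOT NULL,
    url TEXT UNIQUE
);
CREATE TABLE IF NOT EXISTS applications (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    job_id INTEGER NOT NULL REFERENCES jobs(id),
    status TEXT NOT NULL DEFAULT 'pending',
    applied_at TEXT DEFAULT CURRENT_TIMESTAMP
);
"""

def add_job(conn, title, company, source, url):
    """Insert a job row and return its id."""
    cur = conn.execute(
        "INSERT INTO jobs (title, company, source, url) VALUES (?, ?, ?, ?)",
        (title, company, source, url),
    )
    conn.commit()
    return cur.lastrowid

conn = sqlite3.connect(":memory:")  # the real app opens data/jobs.db
conn.executescript(SCHEMA)
job_id = add_job(conn, "Python Developer", "Acme Corp", "indeed",
                 "https://example.com/job/1")
print(job_id)
```

The `UNIQUE` constraint on `url` is one way to avoid storing the same listing twice across scraping runs.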
Configuration management:
- `load_config()`: Load the configuration
- `save_config()`: Save the configuration
- `get()`: Get a configuration value
- `set()`: Set a configuration value
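A sketch of a `ConfigManager` with `get()`; the dotted-key lookup and the `from_file()` helper are assumptions beyond what this README documents:

```python
import json

class ConfigManager:
    """Minimal sketch: load config.json and read values by dotted key path."""

    def __init__(self, config):
        self.config = config

    @classmethod
    def from_file(cls, path):
        with open(path, encoding="utf-8") as f:
            return cls(json.load(f))

    def get(self, key, default=None):
        """Walk nested dicts, e.g. get("application_settings.auto_apply")."""
        node = self.config
        for part in key.split("."):
            if not isinstance(node, dict) or part not in node:
                return default
            node = node[part]
        return node

cfg = ConfigManager({"application_settings": {"auto_apply": True,
                                              "max_applications_per_day": 20}})
print(cfg.get("application_settings.max_applications_per_day"))  # 20
```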
The SQLite database tracks three kinds of records:
- Jobs: all scraped job listings, with source information
- Applications: submitted applications, with their current status
- Application history: a history of application events and updates
Application status values:
- `pending`: Application submitted
- `applied`: Successfully applied
- `rejected`: Application rejected
- `interviewed`: Interview scheduled
- `offered`: Job offer received
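`update_application_status()` should reject values outside this set; a sketch using an in-memory dict in place of the database (the function signature here is an assumption):

```python
VALID_STATUSES = {"pending", "applied", "rejected", "interviewed", "offered"}

def update_application_status(applications, app_id, new_status):
    """Move an application to a new status, rejecting unknown values."""
    if new_status not in VALID_STATUSES:
        raise ValueError(f"unknown status: {new_status!r}")
    applications[app_id]["status"] = new_status
    return applications[app_id]

apps = {1: {"job": "Python Developer", "status": "pending"}}
print(update_application_status(apps, 1, "interviewed"))
```

Validating status strings at the boundary keeps the statistics queries simple, since every row is guaranteed to hold one of the five known values.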
- Start Small: Use `--max-apply 5` to test before full automation
- Monitor Logs: Check `logs/automation.log` for details
- Customize Resumes: Enable resume customization in the config
- Set Delays: Use appropriate delays between applications (3-5 seconds)
- Regular Backups: Back up your database and resume files
- Update Filters: Adjust filters based on application results
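The filters section of `config/config.json` might be applied per job like this (a sketch; the `job` field names, especially `salary`, are assumptions):

```python
def passes_filters(job, filters):
    """Return True if a job survives the configured filters."""
    text = (job["title"] + " " + job["description"]).lower()
    if any(kw.lower() in text for kw in filters.get("exclude_keywords", [])):
        return False
    if not all(kw.lower() in text for kw in filters.get("required_keywords", [])):
        return False
    if job["company"] in filters.get("exclude_companies", []):
        return False
    min_salary = filters.get("min_salary")
    # Only enforce the salary floor when the listing actually states a salary.
    if min_salary and job.get("salary") is not None and job["salary"] < min_salary:
        return False
    return True

filters = {
    "exclude_keywords": ["contract", "temporary"],
    "required_keywords": ["python", "rest api"],
    "min_salary": 80000,
    "exclude_companies": ["Company X"],
}
job = {"title": "Python Developer",
       "description": "Build REST API services in Python.",
       "company": "Acme Corp", "salary": 95000}
print(passes_filters(job, filters))  # True
```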
```bash
# Selenium 4 downloads ChromeDriver automatically.
# If issues persist, install webdriver-manager:
pip install webdriver-manager
```

- Check your internet connection
- Verify the job websites are accessible
- Try running in test mode first
- Verify resume file path in config
- Check logs for specific error messages
- Try manual application first to ensure credentials work
- Respect website terms of service
- Use appropriate delays between requests
- Some websites may block automated access
- LinkedIn has strict anti-scraping policies
- Consider using LinkedIn API instead
- LinkedIn scraping is disabled by default
- Support for more job boards (LinkedIn, ZipRecruiter, etc.)
- Email notification on application success
- Job description analysis and keyword matching
- Proxy support for large-scale scraping
- Web UI dashboard for monitoring
- Machine learning for job matching
- Interview preparation suggestions
See requirements.txt for all dependencies:
- beautifulsoup4: HTML parsing
- selenium: Browser automation
- requests: HTTP requests
- python-docx: Word document handling
This project is provided as-is for personal use.
For issues or questions:
- Check the logs in `logs/automation.log`
- Review the configuration in `config/config.json`
- Test with the `--test` flag for debugging
This tool is provided for educational and personal use. Users are responsible for:
- Complying with website terms of service
- Respecting website scraping policies
- Not using this tool for spam or harassment
- Verifying applications before submission