Watch the full product demo to see DevHire in action:
Applying to jobs on LinkedIn is tedious, time-consuming, and repetitive. Professionals spend hours:
- Manually searching for relevant job postings
- Reading job descriptions to find matches
- Customizing resumes for each application
- Filling out LinkedIn forms repeatedly
- Tracking which jobs they've applied to
DevHire automates this entire workflow, enabling job seekers to apply to dozens of relevant positions in minutes instead of hours, with AI-powered resume customization tailored to each job.
DevHire is a full-stack AI automation platform that:
- Extracts Your Profile → Uses a Chrome extension to capture LinkedIn credentials, cookies, and browser fingerprint
- Parses Your Resume → Analyzes your resume using Google Gemini AI to extract skills, experience, and qualifications
- Searches for Jobs → Scrapes LinkedIn using Playwright (headless browser) based on parsed job titles and keywords from your resume
- Tailors Resumes → Uses Google Gemini to dynamically generate job-specific resumes for each application with LaTeX rendering to PDF (supports 4 professional templates)
- Auto-Applies → Programmatically fills out LinkedIn's Easy Apply forms and submits applications with tailored resumes
- Generates Portfolios → 🆕 AI-powered portfolio website generator creates professional HTML/CSS portfolio pages from your resume (supports 5 templates)
- Tracks Progress → Provides real-time progress tracking and application history in the web dashboard
- Persistent Sessions → 🆕 Stores LinkedIn browser context in the database for seamless session resumption
Result: Submit 50+ tailored applications in the time it used to take to apply to 5.
```
┌──────────────────────────────────────────────────────
│  Frontend (Next.js 15 + React 19)
│   ✨ Dashboard, Login, Job Selection, Apply Flow
│   🆕 Portfolio Builder, Pricing Plans, Password Reset
│   Supabase Auth + Prisma ORM
└──────────────────────────────────────────────────────
                      ↓ (API calls)
┌──────────────────────────────────────────────────────
│  Backend (Python FastAPI + Uvicorn)
│   • /get-jobs     → Parse resume & scrape LinkedIn
│   • /apply-jobs   → Apply with tailored resumes
│   • /tailor       → AI resume customization (Gemini)
│   • /store-cookie → Receive auth from extension
│   • 🆕 /portfolio → AI portfolio website generator
│   • 🆕 /logout    → Clear LinkedIn context
│   • 🆕 /debug/*   → Visual debugging dashboard
└──────────────────────────────────────────────────────
                      ↓ (Browser control)
┌──────────────────────────────────────────────────────
│  Chrome Extension (Manifest V3)
│   • Captures LinkedIn cookies & localStorage
│   • Sends credentials to backend
│   • Injects content scripts for data collection
│   • 🆕 Browser fingerprint collection
└──────────────────────────────────────────────────────
                      ↓
┌──────────────────────────────────────────────────────
│  Agents (Async Python Tasks)
│   • Scraper Agent → LinkedIn job search (Playwright)
│   • Parser Agent  → Resume analysis (Gemini AI)
│   • Tailor Agent  → Resume generation (Gemini)
│   • Apply Agent   → Form filling & submission
│   • 🆕 Portfolio Agent → Website generation (Groq)
└──────────────────────────────────────────────────────
                      ↓
┌──────────────────────────────────────────────────────
│  Databases
│   • PostgreSQL (Prisma ORM) → Users, applied jobs
│   • 🆕 LinkedIn Context Storage → Session persistence
│   • Supabase → Authentication & Auth helpers
└──────────────────────────────────────────────────────
```
| Component | Technology | Purpose |
|---|---|---|
| Frontend | Next.js 15, React 19, TypeScript | User dashboard, job browsing, application tracking, portfolio builder |
| Backend | Python FastAPI, Uvicorn | Job scraping, resume tailoring, form automation, portfolio generation |
| AI Engines | Google Gemini 2.5 Flash + Groq GPT | Resume parsing, job-resume matching, tailored resume generation, portfolio creation |
| Browser Automation | Playwright (async) | LinkedIn login, job scraping, form filling |
| Auth | Supabase + JWT + AES-256 | User registration, login, session management, credential encryption |
| Database | PostgreSQL + Prisma | User profiles, applied jobs, resume URLs, LinkedIn context |
| Extension | Chrome Manifest V3 | Cookie/fingerprint capture, credential sync, session persistence |
```
DevHire/
├── my-fe/                          # Next.js Frontend
│   ├── app/
│   │   ├── Components/
│   │   │   ├── Home.tsx            # Landing page hero section
│   │   │   ├── Jobs.tsx            # Job search & listing
│   │   │   ├── Apply.tsx           # Application progress tracker
│   │   │   ├── Tailor_resume.tsx   # Resume tailoring UI
│   │   │   ├── Login.tsx           # Authentication
│   │   │   ├── Register.tsx        # 🆕 User registration with validation
│   │   │   ├── ResetPass.tsx       # 🆕 Password reset flow
│   │   │   ├── JobCards.tsx        # Job card display
│   │   │   ├── PortfolioPage.tsx   # 🆕 AI portfolio builder UI
│   │   │   ├── Navbar.tsx          # Navigation with session management
│   │   │   ├── Features.tsx        # Feature showcase with animations
│   │   │   ├── Linkedin.tsx        # LinkedIn credentials input
│   │   │   ├── About.jsx           # About page
│   │   │   ├── Pricing/            # 🆕 Pricing plans (Basic/Pro)
│   │   │   └── Landing-Page/       # Landing page components
│   │   ├── api/                    # Backend API routes (Next.js)
│   │   │   ├── store/              # 🆕 Encrypted credential storage
│   │   │   ├── get-data/           # 🆕 Retrieve encrypted credentials
│   │   │   └── User/               # User CRUD operations
│   │   ├── Jobs/                   # Job pages
│   │   │   ├── LinkedinUserDetails/ # LinkedIn login page
│   │   │   ├── tailor/             # Resume tailoring page
│   │   │   └── portfolio/          # 🆕 Portfolio builder page
│   │   ├── apply/                  # Application flow pages
│   │   ├── login/                  # Login page
│   │   ├── register/               # 🆕 Registration page
│   │   ├── reset-password/         # 🆕 Password reset page
│   │   ├── pricing/                # 🆕 Pricing page
│   │   ├── about/                  # About page
│   │   └── utiles/
│   │       ├── agentsCall.ts       # API calls to Python backend
│   │       ├── supabaseClient.ts   # Supabase initialization
│   │       ├── getUserData.ts      # User profile management
│   │       ├── useUploadResume.ts  # 🆕 Resume upload hook
│   │       ├── database.ts         # 🆕 Prisma client
│   │       └── api.ts              # 🆕 API URL configuration
│   ├── prisma/
│   │   └── schema.prisma           # Database schema (PostgreSQL)
│   ├── context/                    # React context providers
│   ├── store/                      # Redux store configuration
│   └── package.json
│
├── backend_python/                 # Python FastAPI Backend
│   ├── main/
│   │   ├── main.py                 # FastAPI app setup
│   │   ├── progress_dict.py        # 🆕 Shared progress tracking
│   │   └── routes/
│   │       ├── list_jobs.py        # GET /get-jobs endpoint
│   │       ├── apply_jobs.py       # POST /apply-jobs endpoint
│   │       ├── get_resume.py       # POST /tailor endpoint
│   │       ├── cookie_receiver.py  # POST /store-cookie endpoint
│   │       ├── progress_route.py   # Progress tracking
│   │       ├── portfolio_generator.py # 🆕 Portfolio generation endpoint
│   │       ├── logout.py           # 🆕 LinkedIn context cleanup
│   │       ├── debug_routes.py     # 🆕 Visual debugging dashboard
│   │       └── templates/          # 🆕 Resume template images
│   ├── agents/
│   │   ├── scraper_agent.py        # LinkedIn job scraper
│   │   ├── scraper_agent_optimized.py # Optimized async scraper
│   │   ├── apply_agent.py          # Form filling & submission
│   │   ├── tailor.py               # Resume AI customization (4 templates)
│   │   ├── parse_agent.py          # Resume parsing
│   │   └── portfolio_agent.py      # 🆕 AI portfolio generator (5 templates)
│   ├── database/
│   │   ├── db_engine.py            # SQLAlchemy connection
│   │   ├── SchemaModel.py          # User model (SQLAlchemy)
│   │   └── linkedin_context.py     # 🆕 LinkedIn session persistence
│   ├── lib/
│   │   └── session_manager.py      # 🆕 Session management utilities
│   ├── data_dump/                  # 🆕 Playwright persistent storage
│   ├── config.py                   # API keys & environment variables
│   ├── requirements.txt            # Python dependencies
│   ├── Dockerfile                  # 🆕 Docker containerization
│   ├── run.sh                      # Linux startup script
│   └── run.bat                     # 🆕 Windows startup script
│
├── extension/                      # Chrome Extension
│   ├── manifest.json               # Extension configuration
│   ├── background.js               # Service worker (cookie/storage sync)
│   └── content.js                  # Content script (fingerprint collection)
│
├── next-app/                       # 🆕 Alternative Next.js app
│   ├── app/
│   │   ├── api/
│   │   ├── components/
│   │   ├── signin/
│   │   ├── signup/
│   │   └── UserDetails/
│   └── prisma/
│
├── schema.sql                      # 🆕 Database SQL schema
└── prisma/                         # Prisma schema (shared)
    └── schema.prisma               # Database models
```
1. User logs in with LinkedIn on DevHire web app
2. Chrome extension captures LinkedIn cookies & localStorage
3. Extension sends data to backend (/store-cookie)
4. Backend stores encrypted credentials for session
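The hand-off in step 3 is a plain HTTP POST from the extension's service worker to the backend. A minimal sketch of what such a receiver could look like (the route body and field names here are illustrative, not the actual `cookie_receiver.py`):

```python
# Minimal sketch of a /store-cookie receiver (illustrative only; the real
# implementation lives in backend_python/main/routes/cookie_receiver.py and
# its field names may differ).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CookiePayload(BaseModel):
    email: str               # user the session belongs to
    cookies: list[dict]      # cookies captured by the extension
    local_storage: dict = {} # optional localStorage snapshot
    fingerprint: dict = {}   # optional browser fingerprint

@app.post("/store-cookie")
async def store_cookie(payload: CookiePayload):
    # In DevHire the payload is encrypted and persisted per user; here we
    # only acknowledge receipt to show the shape of the endpoint.
    return {"status": "stored", "cookie_count": len(payload.cookies)}
```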
1. User uploads resume (PDF/DOCX)
2. Backend extracts text using PyMuPDF + OCR (pytesseract)
3. Gemini AI parses the resume → extracts job titles, skills, and experience
4. Output: `["Full Stack Developer", "Backend Engineer"]` + `["Python", "React", "PostgreSQL"]`
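A short sketch of the text-extraction half of this step, assuming the PyMuPDF/pdf2image/pytesseract combination listed in the tech stack (the helper name is illustrative):

```python
# Sketch of the resume text-extraction step; assumes PyMuPDF (fitz),
# pdf2image, and pytesseract are installed. The function name is illustrative.
import fitz  # PyMuPDF
from pdf2image import convert_from_path
import pytesseract

def extract_resume_text(pdf_path: str) -> str:
    doc = fitz.open(pdf_path)
    text = "\n".join(page.get_text() for page in doc)
    if text.strip():
        return text
    # Scanned/image-only PDF: fall back to OCR page by page.
    images = convert_from_path(pdf_path)
    return "\n".join(pytesseract.image_to_string(img) for img in images)
```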
1. Frontend calls POST /get-jobs with:
- user_id, password, resume_url
2. Backend uses Playwright to:
- Login to LinkedIn with credentials
- Search for each parsed job title
- Scrape job listings (title, company, description, link)
3. Jobs returned to frontend, user selects 10-50 jobs
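A condensed sketch of what the Playwright scraping step can look like; the selectors and the returned job shape are guesses for illustration, not the project's actual `scraper_agent.py`:

```python
# Illustrative async Playwright scraper; selectors and the job dict shape
# are guesses, not the real scraper_agent.py.
import asyncio
from typing import Optional
from urllib.parse import quote
from playwright.async_api import async_playwright

async def scrape_jobs(keyword: str, storage_state: Optional[dict] = None) -> list:
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        # Reusing a saved storage_state skips the LinkedIn login form.
        context = await browser.new_context(storage_state=storage_state)
        page = await context.new_page()
        await page.goto(f"https://www.linkedin.com/jobs/search/?keywords={quote(keyword)}")
        jobs = []
        for card in await page.query_selector_all("div.job-card-container"):
            anchor = await card.query_selector("a")
            jobs.append({
                "title": (await card.inner_text()).split("\n")[0],
                "link": await anchor.get_attribute("href") if anchor else None,
            })
        await browser.close()
        return jobs

# asyncio.run(scrape_jobs("Backend Engineer"))
```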
1. For each selected job:
- Frontend calls POST /tailor with:
- original_resume_url, job_description
2. Backend:
- Downloads original resume PDF
- Sends to Gemini: "Tailor this resume for this JD"
- Gemini generates LaTeX with highlighted relevant skills
- LaTeX rendered to PDF
- PDF converted to Base64
3. Base64 PDFs sent to frontend, stored in sessionStorage
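The LaTeX → PDF → Base64 hand-off in step 2 can be sketched roughly like this (assumes a local `pdflatex` install; file names are illustrative):

```python
# Sketch of rendering Gemini's LaTeX output to a Base64-encoded PDF;
# assumes pdflatex is available on PATH.
import base64, pathlib, subprocess, tempfile

def latex_to_base64_pdf(latex_source: str) -> str:
    with tempfile.TemporaryDirectory() as tmp:
        tex_path = pathlib.Path(tmp) / "resume.tex"
        tex_path.write_text(latex_source, encoding="utf-8")
        # -interaction=nonstopmode keeps pdflatex from blocking on prompts.
        subprocess.run(
            ["pdflatex", "-interaction=nonstopmode", "-output-directory", tmp, str(tex_path)],
            check=True, capture_output=True,
        )
        pdf_bytes = (pathlib.Path(tmp) / "resume.pdf").read_bytes()
    return base64.b64encode(pdf_bytes).decode("ascii")
```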
1. Frontend calls POST /apply-jobs with:
- jobs: [{job_url, job_description}]
- user_id, password, resume_url
2. Backend applies in parallel batches (15 jobs per batch):
- Open LinkedIn Easy Apply modal via Playwright
- Detect form fields (experience, skills, resume upload)
- Fill fields with tailored answers + upload tailored PDF
- Click Submit
3. Real-time progress updates via /progress endpoint
4. Applications tracked in PostgreSQL (User.applied_jobs)
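A rough sketch of the batched apply loop and the shared progress state it reports (names are illustrative; the real logic lives in `apply_agent.py` and `progress_dict.py`):

```python
# Illustrative batching pattern: 15 concurrent applications per batch,
# with a shared dict the /progress endpoint can poll.
import asyncio

BATCH_SIZE = 15
progress: dict = {}  # polled by the /progress endpoint

async def apply_to_job(job: dict, user_email: str) -> None:
    # Placeholder for the Playwright Easy Apply automation.
    await asyncio.sleep(0)
    progress[user_email]["done"] += 1

async def apply_in_batches(jobs: list, user_email: str) -> None:
    progress[user_email] = {"done": 0, "total": len(jobs)}
    for i in range(0, len(jobs), BATCH_SIZE):
        batch = jobs[i : i + BATCH_SIZE]
        # Each batch runs concurrently; batches run sequentially to limit load.
        await asyncio.gather(*(apply_to_job(job, user_email) for job in batch))
```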
- Framework: Next.js 15.4.6 with App Router
- Language: TypeScript + React 19
- Styling: Tailwind CSS 4
- State Management: Redux Toolkit
- HTTP: Axios for API calls
- Auth: Supabase Auth + JWT
- Database ORM: Prisma Client
- Animations: Framer Motion
- UI Feedback: React Hot Toast (notifications)
- PDF Handling: PDF.js for rendering
- Framework: FastAPI (async web server)
- Async Runtime: Uvicorn (ASGI)
- Browser Automation: Playwright (async headless browser)
- Database: PostgreSQL + SQLAlchemy ORM
- AI Models:
- Google Gemini 2.5 Flash (resume parsing, tailoring) with fallback chain
- Groq GPT-oss-120b (portfolio generation)
- PDF Processing: PyMuPDF (fitz), pdf2image, pytesseract (OCR)
- Resume Generation: LaTeX rendering (4 templates), FPDF, python-docx
- Portfolio Generation: HTML/CSS (5 templates)
- HTTP: aiohttp for async requests
- Auth: browser-cookie3 for session cookies
- Containerization: Docker support
```prisma
model User {
  id                 String    @id @default(uuid()) @db.Uuid
  email              String    @unique
  name               String?
  resume_url         String?
  applied_jobs       String[]  // Array of LinkedIn job URLs applied to
  linkedin_context   Json?     // 🆕 Playwright browser session state
  context_updated_at DateTime? // 🆕 LinkedIn context last update timestamp
}
```

- Manifest: V3 (latest standard)
- Service Worker: background.js (cookie sync every 2 minutes)
- Content Script: content.js (data injection + fingerprint collection)
- Capabilities:
  - Cookie capture (`.linkedin.com` and `www.linkedin.com` domains)
  - localStorage/sessionStorage access
  - 🆕 Browser fingerprint collection (userAgent, platform, language, timezone, hardware info, screen dimensions, viewport size, pixel ratio, color depth)
  - 🆕 SPA navigation detection for re-injection
- Node.js ≥ 20
- Python 3.9+
- PostgreSQL database (local or cloud)
- Google API Key (Gemini access)
- Supabase Project (optional, for auth)
- Chrome Browser (for extension & Playwright)
```bash
cd my-fe
npm install
npx prisma generate   # Generate Prisma Client
```

Create `.env.local`:

```env
NEXT_PUBLIC_SUPABASE_URL=your_supabase_url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_key
NEXT_PUBLIC_API_URL=http://localhost:8000
DATABASE_URL=postgresql://user:password@localhost:5432/devhire
```

```bash
cd backend_python
python -m venv venv
source venv/bin/activate   # On Windows: venv\Scripts\activate
pip install -r requirements.txt
playwright install   # Download browser binaries
```

Create `config.py`:

```python
GOOGLE_API = "your_gemini_api_key"
GROQ_API = "your_groq_api_key"   # For portfolio generation
LINKEDIN_ID = "your_linkedin_email"
LINKEDIN_PASSWORD = "your_linkedin_password"
```

Run the database migrations:

```bash
cd my-fe
npx prisma migrate dev --name init
```

Start the backend:

```bash
cd backend_python
python -m uvicorn main.main:app --reload --host 0.0.0.0 --port 8000
```

Start the frontend:

```bash
cd my-fe
npm run dev
```

Open http://localhost:3000 in your browser.

To load the Chrome extension:

- Open `chrome://extensions/`
- Enable "Developer mode"
- Click "Load unpacked"
- Select the `DevHire/extension/` folder
Input: Resume PDF
Process:
- Extract text with OCR
- Send to Gemini: "Extract job titles and technical skills"
- Parse response
Output:
["Full Stack Developer", "Backend Engineer"] + ["Python", "React", "PostgreSQL", ...]
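A minimal sketch of the Gemini call behind this agent, assuming the `google-generativeai` SDK; the prompt wording and JSON shape are illustrative, not the actual `parse_agent.py`:

```python
# Illustrative Gemini parsing call using the google-generativeai SDK.
import json
import google.generativeai as genai

genai.configure(api_key="your_gemini_api_key")
model = genai.GenerativeModel("gemini-2.5-flash")

def parse_resume(resume_text: str) -> dict:
    prompt = (
        "Extract job titles and technical skills from this resume. "
        'Reply with JSON: {"titles": [...], "skills": [...]}\n\n' + resume_text
    )
    response = model.generate_content(prompt)
    # NOTE: in practice the reply may need markdown-fence stripping first.
    return json.loads(response.text)
```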
Input: Original resume + job description
Templates Available:
- Classic Professional
- Modern Minimalist
- Technical Focus
- Creative Design
Process:
```python
prompt = f"""
Given this resume:
{resume_text}
And this job description:
{job_description}
Generate a tailored resume in LaTeX that:
1. Highlights relevant skills
2. Reorders experience by relevance
3. Emphasizes matching technologies
"""
response = gemini_model.generate_content(prompt)
# Response is LaTeX → rendered to PDF → Base64
```

Output: Customized PDF resume (Base64 encoded)
Input: Resume data + selected template
Templates Available:
- Modern Developer Portfolio
- Minimalist Professional
- Creative Designer
- Tech Startup Style
- Corporate Executive
Process:
```python
# Uses Groq GPT-oss-120b for HTML/CSS generation
prompt = f"""
Generate a complete, responsive portfolio website HTML/CSS based on:
- Resume data: {resume_data}
- Template style: {template_id}
Include sections for: About, Skills, Experience, Projects, Contact
"""
response = groq_client.generate(prompt)
# Returns production-ready HTML/CSS
```

Output: Complete HTML/CSS portfolio page with live preview
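The pseudocode above glosses over the client call itself. A minimal sketch using the `groq` Python SDK (model id, prompt, and function name are illustrative):

```python
# Illustrative Groq client call for portfolio HTML generation; the real
# logic lives in portfolio_agent.py and may differ.
from groq import Groq

client = Groq(api_key="your_groq_api_key")

def generate_portfolio_html(resume_data: str, template_id: str) -> str:
    completion = client.chat.completions.create(
        model="openai/gpt-oss-120b",  # assumed Groq model id
        messages=[{
            "role": "user",
            "content": (
                f"Generate a complete, responsive portfolio website (single HTML "
                f"file with inline CSS) in the '{template_id}' style from this "
                f"resume data:\n{resume_data}"
            ),
        }],
    )
    return completion.choices[0].message.content
```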
Example Flow:
```python
# 1. Playwright opens LinkedIn Easy Apply
await page.click('button:has-text("Easy Apply")')
# 2. Detect form fields
experience_field = await page.query_selector('input[name="experience"]')
skills_field = await page.query_selector('input[name="skills"]')
# 3. Fill with AI-suggested answers
await experience_field.fill("5 years in Full Stack Development")
await skills_field.fill("Python, React, PostgreSQL")
# 4. Upload tailored resume
resume_input = await page.query_selector('input[type="file"]')
await resume_input.set_input_files(tailored_resume_path)
# 5. Submit
await page.click('button:has-text("Submit application")')
```

Process:
```python
# Save browser context to database
await save_linkedin_context(
    email=user_email,
    context={
        "storage_state": await browser_context.storage_state(),
        "cookies": cookies,
        "localStorage": local_storage,
    },
)

# Restore session on next visit - no re-login needed!
context = await load_linkedin_context(user_email)
browser_context = await browser.new_context(storage_state=context)
```

Endpoints:
- `/debug/screenshots` - View all debug screenshots
- `/debug/gallery` - HTML gallery for visual inspection
- `/debug/images` - Get images from recent scraper runs
| Feature | Basic (Free) | Pro (₹199/month) |
|---|---|---|
| Job Applications | 10/month | Unlimited |
| Resume Tailoring | 5/month | Unlimited |
| Portfolio Generation | 1 template | All 5 templates |
| LinkedIn Context Persistence | ❌ | ✅ |
| Priority Support | ❌ | ✅ |
| Metric | Value |
|---|---|
| Jobs Applied Per Hour | 50-100 (vs. 5-10 manual) |
| Resume Customization Time | <5 seconds per job (Gemini) |
| Job Scraping Speed | 20-30 jobs/minute |
| End-to-End Application Time | 10-15 minutes for 50 jobs |
| Success Rate (Easy Apply) | ~85% (varies by form complexity) |
- LinkedIn credentials encrypted with AES-256-CBC (see the sketch after this list)
- Session-based authentication (JWT)
- Credentials stored in HTTP-only cookies (encrypted)
- LinkedIn context stored in PostgreSQL (session persistence)
- CORS middleware restricts to authorized origins (extension ID + Vercel production URL)
- Supabase Auth for user verification
- PostgreSQL for encrypted data at rest
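As a rough illustration of the AES-256-CBC scheme mentioned above: the real encryption lives in the Next.js `/api/store` route, but the same idea sketched in Python with the `cryptography` package (key handling simplified) looks like this:

```python
# Illustration only: AES-256-CBC encrypt/decrypt of a credential blob.
# The production code runs in the Next.js /api/store route, not here.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

def encrypt_credentials(plaintext: bytes, key: bytes) -> bytes:
    # key must be 32 bytes for AES-256; a random IV is prepended to the output.
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + encryptor.update(padded) + encryptor.finalize()

def decrypt_credentials(blob: bytes, key: bytes) -> bytes:
    iv, ciphertext = blob[:16], blob[16:]
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = decryptor.update(ciphertext) + decryptor.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()
```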
- Uses official LinkedIn API where available
- Respects robots.txt and rate limits
- Stealth mode Playwright (mimics human behavior)
Solution: Ensure `package.json` has `"type": "module"`
Solution: Run `playwright install` after `pip install`
Solution: Add delays between tailoring requests (5-10 second intervals). The system automatically falls back through `gemini-2.5-flash` → `gemini-2.5-flash-lite` → `gemini-2.0-flash` → `gemini-1.5-flash`
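A sketch of what such a fallback chain can look like with the `google-generativeai` SDK (the helper name is illustrative, not the project's actual code):

```python
# Illustrative model fallback chain for rate-limit / quota errors.
import google.generativeai as genai

genai.configure(api_key="your_gemini_api_key")

FALLBACK_MODELS = [
    "gemini-2.5-flash",
    "gemini-2.5-flash-lite",
    "gemini-2.0-flash",
    "gemini-1.5-flash",
]

def generate_with_fallback(prompt: str) -> str:
    last_error = None
    for model_name in FALLBACK_MODELS:
        try:
            model = genai.GenerativeModel(model_name)
            return model.generate_content(prompt).text
        except Exception as exc:   # e.g. rate limit or quota errors
            last_error = exc       # try the next model in the chain
    raise last_error
```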
Solution:
- Update `LINKEDIN_ID` and `LINKEDIN_PASSWORD` in `config.py`
- Check whether LinkedIn has changed its login flow (inspect with DevTools)
- Ensure 2FA is disabled or use app passwords
- Use the debug dashboard at `/debug/gallery` to see screenshots
Solution: Retry with a simpler job description or increase the context length in the Gemini prompt
Solution:
- Ensure the Groq API key is configured in `config.py`
- Check that the resume data is properly parsed
- Try a different template
Solution:
- Check the database connection in `db_engine.py`
- Verify that the `linkedin_context` column exists in the User table
- Run database migrations: `npx prisma migrate dev`
Backend (FastAPI) endpoints:

| Endpoint | Method | Purpose |
|---|---|---|
| `/get-jobs` | POST | Parse resume & fetch matching jobs from LinkedIn |
| `/apply-jobs` | POST | Apply to selected jobs with tailored resumes |
| `/tailor` | POST | Generate AI-tailored resume for a job |
| `/store-cookie` | POST | Receive LinkedIn cookies from extension |
| `/progress` | GET | Real-time application progress tracking |
| `/portfolio` | POST | 🆕 Generate AI-powered portfolio website |
| `/portfolio/templates` | GET | 🆕 Get available portfolio templates |
| `/templates` | GET | 🆕 Get available resume templates |
| `/logout` | DELETE | 🆕 Clear LinkedIn context from database |
| `/progress/scraping/{email}` | GET | 🆕 Job scraping progress by user |
| `/progress/applying/{email}` | GET | 🆕 Application progress by user |
Debug endpoints:

| Endpoint | Method | Purpose |
|---|---|---|
| `/debug/capture` | GET | 🆕 Capture screenshot with Playwright |
| `/debug/screenshot` | GET | 🆕 Get captured screenshot |
| `/debug/html` | GET | 🆕 Get captured HTML |
| `/debug/screenshots` | GET | 🆕 Get all debug screenshots as base64 |
| `/debug/images` | GET | 🆕 Get debug images from scraper runs |
| `/debug/gallery` | GET | 🆕 HTML gallery to view screenshots |
| `/debug/cleanup` | DELETE | 🆕 Clear old debug files |
Frontend (Next.js) API routes:

| Endpoint | Method | Purpose |
|---|---|---|
| `/api/store` | POST | 🆕 Encrypt LinkedIn credentials (AES-256-CBC) |
| `/api/get-data` | GET | 🆕 Retrieve encrypted credentials |
| `/api/User?action=insert` | POST | 🆕 Create new user |
| `/api/User?action=update` | POST | 🆕 Update user data |
| `/api/User?id={id}` | GET | 🆕 Fetch user by ID |
```bash
# Parse resume & get jobs
curl -X POST http://localhost:8000/get-jobs \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user123",
    "file_url": "https://..../resume.pdf",
    "password": "linkedin_password"
  }'

# Tailor resume for a job
curl -X POST http://localhost:8000/tailor \
  -H "Content-Type: application/json" \
  -d '{
    "job_desc": "We are looking for a Full Stack Developer...",
    "resume_url": "https://..../resume.pdf"
  }'

# Apply to jobs
curl -X POST http://localhost:8000/apply-jobs \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user123",
    "password": "linkedin_password",
    "resume_url": "https://..../resume.pdf",
    "jobs": [
      {
        "job_url": "https://linkedin.com/jobs/123456",
        "job_description": "Full Stack Developer..."
      }
    ]
  }'

# 🆕 Generate portfolio website
curl -X POST http://localhost:8000/portfolio \
  -H "Content-Type: application/json" \
  -d '{
    "resume_url": "https://..../resume.pdf",
    "template_id": "modern-developer"
  }'

# 🆕 Get available portfolio templates
curl -X GET http://localhost:8000/portfolio/templates

# 🆕 Get available resume templates
curl -X GET http://localhost:8000/templates

# 🆕 Clear LinkedIn context (logout)
curl -X DELETE http://localhost:8000/logout?email=user@example.com

# 🆕 Check scraping progress
curl -X GET http://localhost:8000/progress/scraping/user@example.com

# 🆕 Check application progress
curl -X GET http://localhost:8000/progress/applying/user@example.com
```
- **Async Python in Backend**
  - Read: `backend_python/agents/scraper_agent_optimized.py` (async Playwright usage)
  - Learn: How Playwright async contexts handle multiple browser sessions
- **Resume Tailoring Logic**
  - Read: `backend_python/agents/tailor.py` (Gemini integration)
  - Learn: PDF text extraction, LaTeX generation, batch processing, 4 template system
- **🆕 Portfolio Generation**
  - Read: `backend_python/agents/portfolio_agent.py` (Groq integration)
  - Learn: HTML/CSS generation from resume data, 5 template system
- **Frontend State Management**
  - Read: `my-fe/app/Components/Jobs.tsx` & `Apply.tsx`
  - Learn: Redux + sessionStorage for multi-step flows
- **🆕 Portfolio Builder UI**
  - Read: `my-fe/app/Components/PortfolioPage.tsx`
  - Learn: Template selection, live preview, code export
- **Database Models**
  - Read: `my-fe/prisma/schema.prisma` & `backend_python/database/SchemaModel.py`
  - Learn: How Prisma ORM mirrors SQLAlchemy models, LinkedIn context storage
- **🆕 LinkedIn Session Persistence**
  - Read: `backend_python/database/linkedin_context.py`
  - Learn: Browser context serialization and restoration
- Create a feature branch: `git checkout -b feature/your-feature`
- Make changes and test locally
- Commit with clear messages: `git commit -m "Add resume parsing optimization"`
- Push and create a Pull Request
This project is licensed under the MIT License; see the LICENSE file for details.
- Support for multiple job boards (Indeed, Glassdoor, Dice)
- Resume versioning & A/B testing
- Salary negotiation insights
- Interview prep with AI coaching
- Email notification tracking
- Mobile app (React Native)
- API marketplace for third-party integrations
- Cover letter generation
- Application analytics dashboard
- Multi-language resume support
For issues, questions, or feature requests:
- Open an issue on GitHub
- Email: support@devhire.dev
- Discord: Join our community
Made with ❤️ to help developers land their dream jobs faster.
- ✅ AI Portfolio Generator - Create professional portfolio websites from your resume
- ✅ 5 Portfolio Templates - Modern, Minimalist, Creative, Startup, Corporate styles
- ✅ 4 Resume Templates - Choose from multiple LaTeX resume designs
- ✅ LinkedIn Session Persistence - No need to log in again every session
- ✅ Visual Debugging Dashboard - See exactly what the scraper sees
- ✅ Pricing System - Basic (Free) and Pro tiers
- ✅ Password Reset Flow - Full Supabase-powered password recovery
- ✅ Browser Fingerprint Collection - Enhanced session authenticity
- ✅ Docker Support - Easy containerized deployment
- ✅ Windows Support - `run.bat` for Windows users
- ✅ Groq AI Integration - Additional AI model for portfolio generation
- ✅ Gemini Fallback Chain - Automatic failover between Gemini models