DevHire – AI-Powered LinkedIn Job Application Automation

🎬 Demo Video

Watch the full product demo to see DevHire in action:

👉 Watch Full Demo on Loom


🎯 The Problem It Solves

Applying to jobs on LinkedIn is tedious, time-consuming, and repetitive. Professionals spend hours:

  • Manually searching for relevant job postings
  • Reading job descriptions to find matches
  • Customizing resumes for each application
  • Filling out LinkedIn forms repeatedly
  • Tracking which jobs they've applied to

DevHire automates this entire workflow, enabling job seekers to apply to dozens of relevant positions in minutes instead of hours, with AI-powered resume customization tailored to each job.


🚀 What DevHire Does

DevHire is a full-stack AI automation platform that:

  1. Extracts Your Profile – Uses a Chrome extension to capture LinkedIn credentials, cookies, and browser fingerprint
  2. Parses Your Resume – Analyzes your resume using Google Gemini AI to extract skills, experience, and qualifications
  3. Searches for Jobs – Scrapes LinkedIn using Playwright (headless browser) based on parsed job titles and keywords from your resume
  4. Tailors Resumes – Uses Google Gemini to dynamically generate job-specific resumes for each application with LaTeX rendering to PDF (supports 4 professional templates)
  5. Auto-Applies – Programmatically fills out LinkedIn's Easy Apply forms and submits applications with tailored resumes
  6. Generates Portfolios – 🆕 AI-powered portfolio website generator creates professional HTML/CSS portfolio pages from your resume (supports 5 templates)
  7. Tracks Progress – Provides real-time progress tracking and application history in the web dashboard
  8. Persistent Sessions – 🆕 Stores LinkedIn browser context in the database for seamless session resumption

Result: Submit 50+ tailored applications in the time it used to take to apply to 5.


πŸ—οΈ Architecture Overview

┌──────────────────────────────────────────────────────┐
│        Frontend (Next.js 15 + React 19)              │
│  ✨ Dashboard, Login, Job Selection, Apply Flow      │
│  🆕 Portfolio Builder, Pricing Plans, Password Reset │
│         Supabase Auth + Prisma ORM                   │
└──────────────────────────────────────────────────────┘
                        ↕ (API calls)
┌──────────────────────────────────────────────────────┐
│    Backend (Python FastAPI + Uvicorn)                │
│  • /get-jobs – Parse resume & scrape LinkedIn        │
│  • /apply-jobs – Apply with tailored resumes         │
│  • /tailor – AI resume customization (Gemini)        │
│  • /store-cookie – Receive auth from extension       │
│  • 🆕 /portfolio – AI portfolio website generator    │
│  • 🆕 /logout – Clear LinkedIn context               │
│  • 🆕 /debug/* – Visual debugging dashboard          │
└──────────────────────────────────────────────────────┘
                        ↕ (Browser control)
┌──────────────────────────────────────────────────────┐
│   Chrome Extension (Manifest V3)                     │
│  • Captures LinkedIn cookies & localStorage          │
│  • Sends credentials to backend                      │
│  • Injects content scripts for data collection       │
│  • 🆕 Browser fingerprint collection                 │
└──────────────────────────────────────────────────────┘
                        ↕
┌──────────────────────────────────────────────────────┐
│   Agents (Async Python Tasks)                        │
│  • Scraper Agent – LinkedIn job search (Playwright)  │
│  • Parser Agent – Resume analysis (Gemini AI)        │
│  • Tailor Agent – Resume generation (Gemini)         │
│  • Apply Agent – Form filling & submission           │
│  • 🆕 Portfolio Agent – Website generation (Groq)    │
└──────────────────────────────────────────────────────┘
                        ↕
┌──────────────────────────────────────────────────────┐
│   Databases                                          │
│  • PostgreSQL (Prisma ORM) – Users, applied jobs     │
│  • 🆕 LinkedIn Context Storage – Session persistence │
│  • Supabase – Authentication & Auth helpers          │
└──────────────────────────────────────────────────────┘

Key Components

| Component | Technology | Purpose |
| --- | --- | --- |
| Frontend | Next.js 15, React 19, TypeScript | User dashboard, job browsing, application tracking, portfolio builder |
| Backend | Python FastAPI, Uvicorn | Job scraping, resume tailoring, form automation, portfolio generation |
| AI Engines | Google Gemini 2.5 Flash + Groq GPT | Resume parsing, job-resume matching, tailored resume generation, portfolio creation |
| Browser Automation | Playwright (async) | LinkedIn login, job scraping, form filling |
| Auth | Supabase + JWT + AES-256 | User registration, login, session management, credential encryption |
| Database | PostgreSQL + Prisma | User profiles, applied jobs, resume URLs, LinkedIn context |
| Extension | Chrome Manifest V3 | Cookie/fingerprint capture, credential sync, session persistence |

πŸ“ Project Structure

DevHire/
├── my-fe/                          # Next.js Frontend
│   ├── app/
│   │   ├── Components/
│   │   │   ├── Home.tsx           # Landing page hero section
│   │   │   ├── Jobs.tsx           # Job search & listing
│   │   │   ├── Apply.tsx          # Application progress tracker
│   │   │   ├── Tailor_resume.tsx  # Resume tailoring UI
│   │   │   ├── Login.tsx          # Authentication
│   │   │   ├── Register.tsx       # 🆕 User registration with validation
│   │   │   ├── ResetPass.tsx      # 🆕 Password reset flow
│   │   │   ├── JobCards.tsx       # Job card display
│   │   │   ├── PortfolioPage.tsx  # 🆕 AI portfolio builder UI
│   │   │   ├── Navbar.tsx         # Navigation with session management
│   │   │   ├── Features.tsx       # Feature showcase with animations
│   │   │   ├── Linkedin.tsx       # LinkedIn credentials input
│   │   │   ├── About.jsx          # About page
│   │   │   ├── Pricing/           # 🆕 Pricing plans (Basic/Pro)
│   │   │   └── Landing-Page/      # Landing page components
│   │   ├── api/                   # Backend API routes (Next.js)
│   │   │   ├── store/             # 🆕 Encrypted credential storage
│   │   │   ├── get-data/          # 🆕 Retrieve encrypted credentials
│   │   │   └── User/              # User CRUD operations
│   │   ├── Jobs/                  # Job pages
│   │   │   ├── LinkedinUserDetails/ # LinkedIn login page
│   │   │   ├── tailor/            # Resume tailoring page
│   │   │   └── portfolio/         # 🆕 Portfolio builder page
│   │   ├── apply/                 # Application flow pages
│   │   ├── login/                 # Login page
│   │   ├── register/              # 🆕 Registration page
│   │   ├── reset-password/        # 🆕 Password reset page
│   │   ├── pricing/               # 🆕 Pricing page
│   │   ├── about/                 # About page
│   │   └── utiles/
│   │       ├── agentsCall.ts      # API calls to Python backend
│   │       ├── supabaseClient.ts  # Supabase initialization
│   │       ├── getUserData.ts     # User profile management
│   │       ├── useUploadResume.ts # 🆕 Resume upload hook
│   │       ├── database.ts        # 🆕 Prisma client
│   │       └── api.ts             # 🆕 API URL configuration
│   ├── prisma/
│   │   └── schema.prisma          # Database schema (PostgreSQL)
│   ├── context/                   # React context providers
│   ├── store/                     # Redux store configuration
│   └── package.json
│
├── backend_python/                # Python FastAPI Backend
│   ├── main/
│   │   ├── main.py               # FastAPI app setup
│   │   ├── progress_dict.py      # 🆕 Shared progress tracking
│   │   └── routes/
│   │       ├── list_jobs.py      # GET /get-jobs endpoint
│   │       ├── apply_jobs.py     # POST /apply-jobs endpoint
│   │       ├── get_resume.py     # POST /tailor endpoint
│   │       ├── cookie_receiver.py # POST /store-cookie endpoint
│   │       ├── progress_route.py # Progress tracking
│   │       ├── portfolio_generator.py # 🆕 Portfolio generation endpoint
│   │       ├── logout.py         # 🆕 LinkedIn context cleanup
│   │       ├── debug_routes.py   # 🆕 Visual debugging dashboard
│   │       └── templates/        # 🆕 Resume template images
│   ├── agents/
│   │   ├── scraper_agent.py             # LinkedIn job scraper
│   │   ├── scraper_agent_optimized.py   # Optimized async scraper
│   │   ├── apply_agent.py               # Form filling & submission
│   │   ├── tailor.py                    # Resume AI customization (4 templates)
│   │   ├── parse_agent.py               # Resume parsing
│   │   └── portfolio_agent.py           # 🆕 AI portfolio generator (5 templates)
│   ├── database/
│   │   ├── db_engine.py          # SQLAlchemy connection
│   │   ├── SchemaModel.py        # User model (SQLAlchemy)
│   │   └── linkedin_context.py   # 🆕 LinkedIn session persistence
│   ├── lib/
│   │   └── session_manager.py    # 🆕 Session management utilities
│   ├── data_dump/                # 🆕 Playwright persistent storage
│   ├── config.py                 # API keys & environment variables
│   ├── requirements.txt          # Python dependencies
│   ├── Dockerfile                # 🆕 Docker containerization
│   ├── run.sh                    # Linux startup script
│   └── run.bat                   # 🆕 Windows startup script
│
├── extension/                     # Chrome Extension
│   ├── manifest.json             # Extension configuration
│   ├── background.js             # Service worker (cookie/storage sync)
│   └── content.js                # Content script (fingerprint collection)
│
├── next-app/                      # 🆕 Alternative Next.js app
│   ├── app/
│   │   ├── api/
│   │   ├── components/
│   │   ├── signin/
│   │   ├── signup/
│   │   └── UserDetails/
│   └── prisma/
│
├── schema.sql                     # 🆕 Database SQL schema
└── prisma/                        # Prisma schema (shared)
    └── schema.prisma             # Database models

🔄 User Journey & Data Flow

Step 1: User Authenticates

1. User logs in with LinkedIn on DevHire web app
2. Chrome extension captures LinkedIn cookies & localStorage
3. Extension sends data to backend (/store-cookie)
4. Backend stores encrypted credentials for session

Step 2: Resume Upload & Parsing

1. User uploads resume (PDF/DOCX)
2. Backend extracts text using PyMuPDF + OCR (pytesseract)
3. Gemini AI parses the resume → extracts job titles, skills, and experience
4. Output: ["Full Stack Developer", "Backend Engineer"] + ["Python", "React", "PostgreSQL"]
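
A minimal sketch of the extraction step, assuming PyMuPDF with a pytesseract OCR fallback (the helper name is illustrative, not the repo's actual function):

import fitz                      # PyMuPDF
import pytesseract
from pdf2image import convert_from_path

def extract_resume_text(pdf_path: str) -> str:
    # Try the embedded text layer first.
    doc = fitz.open(pdf_path)
    text = "\n".join(page.get_text() for page in doc)
    if text.strip():
        return text
    # Scanned PDF with no text layer: rasterize the pages and OCR them.
    images = convert_from_path(pdf_path)
    return "\n".join(pytesseract.image_to_string(img) for img in images)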

Step 3: Job Search

1. Frontend calls POST /get-jobs with:
   - user_id, password, resume_url
2. Backend uses Playwright to:
   - Login to LinkedIn with credentials
   - Search for each parsed job title
   - Scrape job listings (title, company, description, link)
3. Jobs returned to frontend, user selects 10-50 jobs
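
A minimal sketch of the scraping loop, assuming an already logged-in Playwright page; the selectors are illustrative, not LinkedIn's real markup:

from urllib.parse import quote_plus
from playwright.async_api import Page

async def search_jobs(page: Page, job_titles: list[str]) -> list[dict]:
    jobs = []
    for title in job_titles:
        await page.goto(f"https://www.linkedin.com/jobs/search/?keywords={quote_plus(title)}")
        await page.wait_for_selector(".job-card")          # illustrative selector
        for card in await page.query_selector_all(".job-card"):
            link = await card.query_selector("a")
            jobs.append({
                "title": (await card.inner_text()).strip(),
                "link": await link.get_attribute("href") if link else None,
            })
    return jobs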

Step 4: Resume Tailoring (Per Job)

1. For each selected job:
   - Frontend calls POST /tailor with:
     - original_resume_url, job_description
2. Backend:
   - Downloads original resume PDF
   - Sends to Gemini: "Tailor this resume for this JD"
   - Gemini generates LaTeX with highlighted relevant skills
   - LaTeX rendered to PDF
   - PDF converted to Base64
3. Base64 PDFs sent to frontend, stored in sessionStorage
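
A minimal sketch of the render step (LaTeX from Gemini to a Base64 PDF), assuming pdflatex is available on PATH; the repo's actual rendering toolchain may differ:

import base64
import subprocess
import tempfile
from pathlib import Path

def latex_to_base64_pdf(latex_source: str) -> str:
    with tempfile.TemporaryDirectory() as tmp:
        tex_path = Path(tmp) / "resume.tex"
        tex_path.write_text(latex_source)
        # Compile the generated LaTeX into resume.pdf inside the temp dir.
        subprocess.run(
            ["pdflatex", "-interaction=nonstopmode", "-output-directory", tmp, str(tex_path)],
            check=True, capture_output=True,
        )
        pdf_bytes = (Path(tmp) / "resume.pdf").read_bytes()
    return base64.b64encode(pdf_bytes).decode("ascii")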

Step 5: Batch Application

1. Frontend calls POST /apply-jobs with:
   - jobs: [{job_url, job_description}]
   - user_id, password, resume_url
2. Backend applies in parallel batches (15 jobs per batch):
   - Open LinkedIn Easy Apply modal via Playwright
   - Detect form fields (experience, skills, resume upload)
   - Fill fields with tailored answers + upload tailored PDF
   - Click Submit
3. Real-time progress updates via /progress endpoint
4. Applications tracked in PostgreSQL (User.applied_jobs)
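
A minimal sketch of the "15 jobs per batch" fan-out, assuming a hypothetical apply_to_job coroutine that drives one Easy Apply flow:

import asyncio

BATCH_SIZE = 15

async def apply_in_batches(jobs: list[dict], apply_to_job) -> list:
    results = []
    for i in range(0, len(jobs), BATCH_SIZE):
        batch = jobs[i:i + BATCH_SIZE]
        # Run one batch concurrently, then move on so LinkedIn isn't flooded.
        results += await asyncio.gather(
            *(apply_to_job(job) for job in batch), return_exceptions=True
        )
    return results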

πŸ› οΈ Tech Stack Deep Dive

Frontend (Next.js)

  • Framework: Next.js 15.4.6 with App Router
  • Language: TypeScript + React 19
  • Styling: Tailwind CSS 4
  • State Management: Redux Toolkit
  • HTTP: Axios for API calls
  • Auth: Supabase Auth + JWT
  • Database ORM: Prisma Client
  • Animations: Framer Motion
  • UI Feedback: React Hot Toast (notifications)
  • PDF Handling: PDF.js for rendering

Backend (Python)

  • Framework: FastAPI (async web server)
  • Async Runtime: Uvicorn (ASGI)
  • Browser Automation: Playwright (async headless browser)
  • Database: PostgreSQL + SQLAlchemy ORM
  • AI Models:
    • Google Gemini 2.5 Flash (resume parsing, tailoring) with fallback chain
    • Groq GPT-oss-120b (portfolio generation)
  • PDF Processing: PyMuPDF (fitz), pdf2image, pytesseract (OCR)
  • Resume Generation: LaTeX rendering (4 templates), FPDF, python-docx
  • Portfolio Generation: HTML/CSS (5 templates)
  • HTTP: aiohttp for async requests
  • Auth: browser-cookie3 for session cookies
  • Containerization: Docker support

Database (PostgreSQL + Prisma)

model User {
  id                 String    @id @default(uuid()) @db.Uuid
  email              String    @unique
  name               String?
  resume_url         String?
  applied_jobs       String[]  // Array of LinkedIn job URLs applied to
  linkedin_context   Json?     // 🆕 Playwright browser session state
  context_updated_at DateTime? // 🆕 LinkedIn context last update timestamp
}

Chrome Extension

  • Manifest: V3 (latest standard)
  • Service Worker: background.js (cookie sync every 2 minutes)
  • Content Script: content.js (data injection + fingerprint collection)
  • Capabilities:
    • Cookie capture (.linkedin.com and www.linkedin.com domains)
    • localStorage/sessionStorage access
    • 🆕 Browser fingerprint collection (userAgent, platform, language, timezone, hardware info, screen dimensions, viewport size, pixel ratio, color depth)
    • 🆕 SPA navigation detection for re-injection
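
For reference, a minimal sketch of the backend side of this hand-off (the /store-cookie receiver); the Pydantic field names are illustrative, not the repo's exact schema:

from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()

class CookiePayload(BaseModel):
    email: str
    cookies: list[dict]          # captured .linkedin.com / www.linkedin.com cookies
    local_storage: dict = {}
    fingerprint: dict = {}       # userAgent, platform, timezone, screen info, ...

@router.post("/store-cookie")
async def store_cookie(payload: CookiePayload):
    # In the real app this payload would be encrypted and persisted per user.
    return {"status": "ok", "cookies_received": len(payload.cookies)}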

🚀 Getting Started

Prerequisites

  • Node.js ≥ 20
  • Python 3.9+
  • PostgreSQL database (local or cloud)
  • Google API Key (Gemini access)
  • Supabase Project (optional, for auth)
  • Chrome Browser (for extension & Playwright)

Environment Setup

1. Clone & Install Frontend

cd my-fe
npm install
npx prisma generate  # Generate Prisma Client

Create .env.local:

NEXT_PUBLIC_SUPABASE_URL=your_supabase_url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_key
NEXT_PUBLIC_API_URL=http://localhost:8000
DATABASE_URL=postgresql://user:password@localhost:5432/devhire

2. Install Backend Dependencies

cd backend_python
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
playwright install  # Download browser binaries

Create config.py:

GOOGLE_API = "your_gemini_api_key"
GROQ_API = "your_groq_api_key"  # For portfolio generation
LINKEDIN_ID = "your_linkedin_email"
LINKEDIN_PASSWORD = "your_linkedin_password"

3. Setup Database

cd my-fe
npx prisma migrate dev --name init

4. Run Backend

cd backend_python
python -m uvicorn main.main:app --reload --host 0.0.0.0 --port 8000

5. Run Frontend

cd my-fe
npm run dev

Open http://localhost:3000 in browser.

6. Load Chrome Extension

  1. Open chrome://extensions/
  2. Enable "Developer mode"
  3. Click "Load unpacked"
  4. Select DevHire/extension/ folder

🔑 Key Features & Examples

Feature 1: Smart Job Parsing

Input: Resume PDF
Process:

  • Extract text with OCR
  • Send to Gemini: "Extract job titles and technical skills"
  • Parse the response

Output: ["Full Stack Developer", "Backend Engineer"] + ["Python", "React", "PostgreSQL", ...]
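
A minimal sketch of this parsing call using the google-generativeai SDK; the prompt wording and JSON shape are illustrative:

import json
import google.generativeai as genai

genai.configure(api_key="your_gemini_api_key")
model = genai.GenerativeModel("gemini-2.5-flash")

def parse_resume(resume_text: str) -> dict:
    prompt = (
        "Extract job titles and technical skills from this resume. "
        'Reply with JSON only: {"job_titles": [...], "skills": [...]}\n\n'
        + resume_text
    )
    response = model.generate_content(prompt)
    # In practice the reply may be wrapped in markdown fences that need stripping.
    return json.loads(response.text)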

Feature 2: Intelligent Resume Tailoring (4 Templates)

Input: Original resume + Job description
Templates Available:

  • Classic Professional
  • Modern Minimalist
  • Technical Focus
  • Creative Design

Process:

prompt = f"""
Given this resume:
{resume_text}

And this job description:
{job_description}

Generate a tailored resume in LaTeX that:
1. Highlights relevant skills
2. Reorders experience by relevance
3. Emphasizes matching technologies
"""
response = gemini_model.generate_content(prompt)
# Response is LaTeX → rendered to PDF → Base64

Output: Customized PDF resume (Base64 encoded)

Feature 3: 🆕 AI Portfolio Website Generator (5 Templates)

Input: Resume data + Selected template
Templates Available:

  1. Modern Developer Portfolio
  2. Minimalist Professional
  3. Creative Designer
  4. Tech Startup Style
  5. Corporate Executive

Process:

# Uses Groq GPT-oss-120b for HTML/CSS generation
prompt = f"""
Generate a complete, responsive portfolio website HTML/CSS based on:
- Resume data: {resume_data}
- Template style: {template_id}

Include sections for: About, Skills, Experience, Projects, Contact
"""
response = groq_client.generate(prompt)
# Returns production-ready HTML/CSS

Output: Complete HTML/CSS portfolio page with live preview

Feature 4: Automated Job Application

Example Flow:

# 1. Playwright opens LinkedIn Easy Apply
await page.click('button:has-text("Easy Apply")')

# 2. Detect form fields
experience_field = await page.query_selector('input[name="experience"]')
skills_field = await page.query_selector('input[name="skills"]')

# 3. Fill with AI-suggested answers
await experience_field.fill("5 years in Full Stack Development")
await skills_field.fill("Python, React, PostgreSQL")

# 4. Upload tailored resume
resume_input = await page.query_selector('input[type="file"]')
await resume_input.set_input_files(tailored_resume_path)

# 5. Submit
await page.click('button:has-text("Submit application")')

Feature 5: 🆕 LinkedIn Session Persistence

Process:

# Save browser context to database
await save_linkedin_context(
    email=user_email,
    context={
        "storage_state": browser_context.storage_state(),
        "cookies": cookies,
        "localStorage": local_storage
    }
)

# Restore session on next visit - no re-login needed!
context = await load_linkedin_context(user_email)
browser_context = await browser.new_context(storage_state=context)

Feature 6: 🆕 Visual Debugging Dashboard

Endpoints:

  • /debug/screenshots - View all debug screenshots
  • /debug/gallery - HTML gallery for visual inspection
  • /debug/images - Get images from recent scraper runs
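
A minimal sketch of what such an endpoint could look like; the screenshot directory is an assumption, not the repo's actual layout:

import base64
from pathlib import Path
from fastapi import APIRouter

router = APIRouter()
DEBUG_DIR = Path("data_dump/debug")   # assumed location of scraper screenshots

@router.get("/debug/screenshots")
async def list_screenshots():
    # Return every saved screenshot as a name -> base64 mapping.
    return {
        png.name: base64.b64encode(png.read_bytes()).decode("ascii")
        for png in sorted(DEBUG_DIR.glob("*.png"))
    }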

Pricing Plans

| Feature | Basic (Free) | Pro (₹199/month) |
| --- | --- | --- |
| Job Applications | 10/month | Unlimited |
| Resume Tailoring | 5/month | Unlimited |
| Portfolio Generation | 1 template | All 5 templates |
| LinkedIn Context Persistence | ❌ | ✅ |
| Priority Support | ❌ | ✅ |

📊 Performance & Metrics

| Metric | Value |
| --- | --- |
| Jobs Applied Per Hour | 50-100 (vs. 5-10 manual) |
| Resume Customization Time | <5 seconds per job (Gemini) |
| Job Scraping Speed | 20-30 jobs/minute |
| End-to-End Application Time | 10-15 minutes for 50 jobs |
| Success Rate (Easy Apply) | ~85% (varies by form complexity) |

🔒 Security & Privacy

Credential Handling

  • LinkedIn credentials encrypted with AES-256-CBC
  • Session-based authentication (JWT)
  • Credentials stored in HTTP-only cookies (encrypted)
  • LinkedIn context stored in PostgreSQL (session persistence)
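
For illustration only (the project's encryption lives in the Next.js /api/store route), an equivalent AES-256-CBC encryption step in Python using the cryptography package:

import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_credentials(plaintext: bytes, key: bytes) -> bytes:
    # key must be 32 bytes for AES-256; the random IV is prepended to the ciphertext.
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + encryptor.update(padded) + encryptor.finalize()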

Data Protection

  • CORS middleware restricts to authorized origins (extension ID + Vercel production URL)
  • Supabase Auth for user verification
  • PostgreSQL for encrypted data at rest
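
A minimal sketch of the CORS restriction in FastAPI; the origin values are placeholders, not the project's real extension ID or Vercel URL:

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=[
        "chrome-extension://<your-extension-id>",   # placeholder
        "https://your-app.vercel.app",              # placeholder
    ],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)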

LinkedIn Compliance

  • Uses official LinkedIn API where available
  • Respects robots.txt and rate limits
  • Playwright runs in stealth mode (mimics human-like behavior)

πŸ› Troubleshooting

"Cannot use import statement outside a module"

Solution: Ensure package.json has "type": "module"

"Playwright browser download failed"

Solution: Run playwright install after pip install

"Gemini API rate limit exceeded"

Solution: Add delays between tailoring requests (5-10 second intervals). The system has automatic fallback to: gemini-2.5-flash → gemini-2.5-flash-lite → gemini-2.0-flash → gemini-1.5-flash
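
A minimal sketch of such a fallback chain; the model IDs come from the list above, the retry logic itself is illustrative:

import google.generativeai as genai

FALLBACK_MODELS = [
    "gemini-2.5-flash",
    "gemini-2.5-flash-lite",
    "gemini-2.0-flash",
    "gemini-1.5-flash",
]

def generate_with_fallback(prompt: str) -> str:
    last_error = None
    for model_name in FALLBACK_MODELS:
        try:
            return genai.GenerativeModel(model_name).generate_content(prompt).text
        except Exception as exc:          # e.g. quota or rate-limit errors
            last_error = exc
    raise last_error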

"LinkedIn login failing"

Solution:

  • Update LINKEDIN_ID and LINKEDIN_PASSWORD in config.py
  • Check if LinkedIn has changed login flow (inspect with DevTools)
  • Ensure 2FA is disabled or use app passwords
  • Use the debug dashboard at /debug/gallery to see screenshots

"Resume tailoring returns LaTeX errors"

Solution: Retry with simpler job description or increase context length in Gemini prompt

"Portfolio generation not working"

Solution:

  • Ensure Groq API key is configured in config.py
  • Check that resume data is properly parsed
  • Try a different template

"LinkedIn context not persisting"

Solution:

  • Check database connection in db_engine.py
  • Verify linkedin_context column exists in User table
  • Run database migrations: npx prisma migrate dev

📚 API Endpoints

Frontend Calls Backend

| Endpoint | Method | Purpose |
| --- | --- | --- |
| /get-jobs | POST | Parse resume & fetch matching jobs from LinkedIn |
| /apply-jobs | POST | Apply to selected jobs with tailored resumes |
| /tailor | POST | Generate AI-tailored resume for a job |
| /store-cookie | POST | Receive LinkedIn cookies from extension |
| /progress | GET | Real-time application progress tracking |
| /portfolio | POST | 🆕 Generate AI-powered portfolio website |
| /portfolio/templates | GET | 🆕 Get available portfolio templates |
| /templates | GET | 🆕 Get available resume templates |
| /logout | DELETE | 🆕 Clear LinkedIn context from database |
| /progress/scraping/{email} | GET | 🆕 Job scraping progress by user |
| /progress/applying/{email} | GET | 🆕 Application progress by user |

Debug Endpoints (Development)

| Endpoint | Method | Purpose |
| --- | --- | --- |
| /debug/capture | GET | 🆕 Capture screenshot with Playwright |
| /debug/screenshot | GET | 🆕 Get captured screenshot |
| /debug/html | GET | 🆕 Get captured HTML |
| /debug/screenshots | GET | 🆕 Get all debug screenshots as base64 |
| /debug/images | GET | 🆕 Get debug images from scraper runs |
| /debug/gallery | GET | 🆕 HTML gallery to view screenshots |
| /debug/cleanup | DELETE | 🆕 Clear old debug files |

Next.js API Routes

| Endpoint | Method | Purpose |
| --- | --- | --- |
| /api/store | POST | 🆕 Encrypt LinkedIn credentials (AES-256-CBC) |
| /api/get-data | GET | 🆕 Retrieve encrypted credentials |
| /api/User?action=insert | POST | 🆕 Create new user |
| /api/User?action=update | POST | 🆕 Update user data |
| /api/User?id={id} | GET | 🆕 Fetch user by ID |

Example Requests

# Parse resume & get jobs
curl -X POST http://localhost:8000/get-jobs \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user123",
    "file_url": "https://..../resume.pdf",
    "password": "linkedin_password"
  }'

# Tailor resume for a job
curl -X POST http://localhost:8000/tailor \
  -H "Content-Type: application/json" \
  -d '{
    "job_desc": "We are looking for a Full Stack Developer...",
    "resume_url": "https://..../resume.pdf"
  }'

# Apply to jobs
curl -X POST http://localhost:8000/apply-jobs \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user123",
    "password": "linkedin_password",
    "resume_url": "https://..../resume.pdf",
    "jobs": [
      {
        "job_url": "https://linkedin.com/jobs/123456",
        "job_description": "Full Stack Developer..."
      }
    ]
  }'

# 🆕 Generate portfolio website
curl -X POST http://localhost:8000/portfolio \
  -H "Content-Type: application/json" \
  -d '{
    "resume_url": "https://..../resume.pdf",
    "template_id": "modern-developer"
  }'

# 🆕 Get available portfolio templates
curl -X GET http://localhost:8000/portfolio/templates

# 🆕 Get available resume templates
curl -X GET http://localhost:8000/templates

# 🆕 Clear LinkedIn context (logout)
curl -X DELETE "http://localhost:8000/logout?email=user@example.com"

# 🆕 Check scraping progress
curl -X GET http://localhost:8000/progress/scraping/user@example.com

# 🆕 Check application progress
curl -X GET http://localhost:8000/progress/applying/user@example.com

🎓 Learning Resources

Understanding Key Workflows

  1. Async Python in Backend

    • Read: backend_python/agents/scraper_agent_optimized.py (async Playwright usage)
    • Learn: How Playwright async contexts handle multiple browser sessions
  2. Resume Tailoring Logic

    • Read: backend_python/agents/tailor.py (Gemini integration)
    • Learn: PDF text extraction, LaTeX generation, batch processing, 4 template system
  3. 🆕 Portfolio Generation

    • Read: backend_python/agents/portfolio_agent.py (Groq integration)
    • Learn: HTML/CSS generation from resume data, 5 template system
  4. Frontend State Management

    • Read: my-fe/app/Components/Jobs.tsx & Apply.tsx
    • Learn: Redux + sessionStorage for multi-step flows
  5. 🆕 Portfolio Builder UI

    • Read: my-fe/app/Components/PortfolioPage.tsx
    • Learn: Template selection, live preview, code export
  6. Database Models

    • Read: my-fe/prisma/schema.prisma & backend_python/database/SchemaModel.py
    • Learn: How Prisma ORM mirrors SQLAlchemy models, LinkedIn context storage
  7. 🆕 LinkedIn Session Persistence

    • Read: backend_python/database/linkedin_context.py
    • Learn: Browser context serialization and restoration

🤝 Contributing

  1. Create a feature branch: git checkout -b feature/your-feature
  2. Make changes and test locally
  3. Commit with clear messages: git commit -m "Add resume parsing optimization"
  4. Push and create a Pull Request

πŸ“ License

This project is licensed under the MIT License – see LICENSE file for details.


🎉 Future Enhancements

  • Support for multiple job boards (Indeed, Glassdoor, Dice)
  • Resume versioning & A/B testing
  • Salary negotiation insights
  • Interview prep with AI coaching
  • Email notification tracking
  • Mobile app (React Native)
  • API marketplace for third-party integrations
  • Cover letter generation
  • Application analytics dashboard
  • Multi-language resume support

📧 Contact & Support

For issues, questions, or feature requests, please open an issue on the repository.


Made with ❀️ to help developers land their dream jobs faster.


📋 Changelog

Latest Updates (v2.0)

  • ✅ AI Portfolio Generator - Create professional portfolio websites from your resume
  • ✅ 5 Portfolio Templates - Modern, Minimalist, Creative, Startup, Corporate styles
  • ✅ 4 Resume Templates - Choose from multiple LaTeX resume designs
  • ✅ LinkedIn Session Persistence - No more re-logging in every session
  • ✅ Visual Debugging Dashboard - See exactly what the scraper sees
  • ✅ Pricing System - Basic (Free) and Pro tiers
  • ✅ Password Reset Flow - Full Supabase-powered password recovery
  • ✅ Browser Fingerprint Collection - Enhanced session authenticity
  • ✅ Docker Support - Easy containerized deployment
  • ✅ Windows Support - run.bat for Windows users
  • ✅ Groq AI Integration - Additional AI model for portfolio generation
  • ✅ Gemini Fallback Chain - Automatic failover between Gemini models