Mini-Cursor: AI Website Builder Agent

An intelligent AI agent that creates complete frontend websites through natural language commands using Google's Gemini AI.

✨ Features

  • 🤖 AI-Powered: Uses Google Gemini 2.5 Flash for intelligent website generation
  • 🛠️ Dual Tools: Terminal commands + Direct file writing capabilities
  • 💻 Cross-Platform: Works on Windows, macOS, and Linux
  • 📁 Smart Structure: Automatically creates folders and organized file structures
  • 🎨 Complete Websites: Generates HTML, CSS, and JavaScript files
  • 🔄 Interactive: Continuous conversation loop for iterative development

The agent's core job, sketched in code just after this list, is to:

  1. Understand your request (natural language or partial code)
  2. Collect project context (files, structure, errors, dependencies, cursor location)
  3. Ask the LLM for step-by-step instructions (the LLM issues tool calls, and the editor's tools apply them to your local files)
  4. Run those instructions through editor-integrated tools
  5. Reflect the changes instantly in your project
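
A minimal sketch of one pass through those five steps in Node.js. The helper names (gatherContext, askLLM, runToolCall) are illustrative, not the actual functions in index.js, and the LLM call is stubbed with a canned plan so the sketch runs standalone:

// One pass through the five steps; helper names are illustrative, not the real index.js API.
import fs from "node:fs";
import path from "node:path";
import { execSync } from "node:child_process";

function gatherContext() {
  // Steps 1-2: the user's request travels together with a snapshot of the project.
  return { cwd: process.cwd(), files: fs.readdirSync(".") };
}

async function askLLM(contextAndQuery) {
  // Step 3: the real agent asks Gemini here; a canned plan keeps this sketch self-contained.
  return [{ name: "writeFile", args: { path: "demo/index.html", content: "<!DOCTYPE html>" } }];
}

function runToolCall({ name, args }) {
  // Step 4: editor-integrated tools apply each instruction to the local workspace.
  if (name === "executeCommand") execSync(args.command);
  if (name === "writeFile") {
    fs.mkdirSync(path.dirname(args.path), { recursive: true });
    fs.writeFileSync(args.path, args.content);
  }
}

askLLM({ context: gatherContext(), query: "create calculator app" })
  .then((plan) => plan.forEach(runToolCall)); // Step 5: changes appear in the project right away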

🎯 Live Example

User Request:

Ask me anything--> create calculator app

AI Agent Response & Actions:

// Step 1: AI analyzes request
"User wants calculator app, I need to create folder and files"

// Step 2: Execute folder creation
{ name: "executeCommand", args: { command: "mkdir calculator" } }Success: Folder created

// Step 3: Write HTML file
{ name: "writeFile", args: { 
  path: "calculator/index.html", 
  content: "<!DOCTYPE html>..." 
}}
 File written successfully: calculator/index.html

// Step 4: Write CSS file  
{ name: "writeFile", args: { 
  path: "calculator/style.css", 
  content: "* { box-sizing: border-box; }..." 
}}
 File written successfully: calculator/style.css

// Step 5: Write JavaScript file
{ name: "writeFile", args: { 
  path: "calculator/script.js", 
  content: "const calculator = { displayValue: '0'... }" 
}}
 File written successfully: calculator/script.js

// Final Response
"Calculator app created successfully! ✅
Files generated:
- calculator/index.html (Calculator UI)  
- calculator/style.css (Modern styling)
- calculator/script.js (Full functionality)"

Result:

calculator/
├── index.html    # Complete calculator interface
├── style.css     # Grid layout + hover effects  
└── script.js     # Working calculator logic

A real implementation example is in the index.js file.
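
The continuous "Ask me anything-->" prompt itself can be a plain readline loop. A minimal sketch, with runAgent standing in (hypothetically) for the agent pipeline in index.js:

// Sketch of the interactive conversation loop behind "Ask me anything-->".
import readline from "node:readline/promises";

const rl = readline.createInterface({ input: process.stdin, output: process.stdout });

async function runAgent(request) {
  // In the real agent this is where context is gathered, Gemini is called,
  // and the returned tool calls are executed.
  console.log(`(would build: ${request})`);
}

async function main() {
  while (true) {
    const request = await rl.question("Ask me anything--> ");
    if (request.trim().toLowerCase() === "exit") break; // simple escape hatch
    await runAgent(request); // e.g. "create calculator app"
  }
  rl.close();
}

main();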


⚙️ Workflow Breakdown

1. User Input

You either:

  • Type a natural language request ("Create a login form")
  • Start writing code and trigger suggestions
  • Ask for a refactor or bug fix

2. Context Gathering

The agent collects relevant information:

  • Current file content
  • Other files in the project (folder tree, configs, related modules)
  • Cursor position
  • Error logs / terminal output (if relevant)
  • Git history or recent edits

This context ensures the LLM understands your full development state.
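A sketch of what that gathering step might look like with Node's built-in fs and child_process modules. Exactly which signals to include is a design choice, and the git call simply degrades to an empty list outside a repository:

// Collect a snapshot of the workspace to send along with the user's request.
import fs from "node:fs";
import { execSync } from "node:child_process";

function gatherContext(activeFile) {
  const context = {
    activeFile,
    activeFileContent: fs.existsSync(activeFile) ? fs.readFileSync(activeFile, "utf8") : null,
    // Recursive listing needs Node 20+; cap it so huge projects don't blow up the prompt.
    folderStructure: fs.readdirSync(".", { recursive: true }).slice(0, 200),
  };
  try {
    context.recentCommits = execSync("git log --oneline -5").toString().trim().split("\n");
  } catch {
    context.recentCommits = []; // not a git repository, or git is unavailable
  }
  return context;
}

console.log(gatherContext("src/App.js"));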

3. LLM Request

The gathered context and your request are sent to the LLM (e.g., GPT, Gemini, Claude). The prompt might look like:

{
  "activeFile": "src/App.js",
  "cursorPosition": 128,
  "openFiles": ["src/App.js", "src/components/Header.js"],
  "folderStructure": ["src/", "public/", "package.json"],
  "userQuery": "Add a signup form with email validation"
}
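
A hedged sketch of sending that prompt to Gemini through the public generateContent REST endpoint. The endpoint, header, and response shape are assumptions about the hosted API, and a real implementation may go through an official SDK instead:

// Send context + request to Gemini and return the model's text reply.
const MODEL = "gemini-2.5-flash";
const ENDPOINT = `https://generativelanguage.googleapis.com/v1beta/models/${MODEL}:generateContent`;

async function askLLM(contextAndQuery) {
  const prompt =
    "You are a website-builder agent. Reply ONLY with a JSON array of commands.\n" +
    JSON.stringify(contextAndQuery, null, 2);

  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-goog-api-key": process.env.GEMINI_API_KEY, // keep the key in an env var, not in code
    },
    body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
  });
  const data = await res.json();
  // The generated text sits in the first candidate's first content part.
  return data.candidates[0].content.parts[0].text;
}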

4. LLM Response with Commands

Instead of only returning plain text, the LLM can return structured commands like:

[
  { "action": "create_file", "path": "src/components/SignupForm.js", "content": "<JSX code here>" },
  { "action": "update_file", "path": "src/App.js", "changes": "import SignupForm and render it" }
]
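
Because the reply arrives as text, the agent should parse and sanity-check it before acting on it. A defensive sketch (the allowed-action list here is illustrative, not the repo's full schema):

// Parse the model's reply into a command list, tolerating markdown code fences
// and dropping anything that isn't a recognized action.
function parseCommands(replyText) {
  const cleaned = replyText.replace(/```(json)?/gi, "").trim(); // strip ```json fences if present
  let commands;
  try {
    commands = JSON.parse(cleaned);
  } catch {
    throw new Error("Model reply was not valid JSON:\n" + replyText);
  }
  if (!Array.isArray(commands)) commands = [commands]; // tolerate a single command object
  const allowed = new Set(["create_file", "update_file"]); // actions this sketch recognizes
  return commands.filter((c) => c && allowed.has(c.action));
}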

5. Tool Execution

The agent has access to special editor tools:

  • File writer → create/edit/delete files
  • Search/replace → modify specific lines
  • Run commands → execute build/test scripts

It takes the LLM’s commands and executes them locally.
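
One way to back those three tools with plain Node.js APIs. This is a sketch, the run_command action is an illustrative addition, and a production agent would add path sanitization and ask for confirmation before writing files or running shell commands:

// Minimal tool implementations backed by Node's fs and child_process, plus a dispatcher.
import fs from "node:fs";
import path from "node:path";
import { execSync } from "node:child_process";

const tools = {
  // File writer: create or overwrite a file, creating parent folders as needed.
  writeFile(filePath, content) {
    fs.mkdirSync(path.dirname(filePath), { recursive: true });
    fs.writeFileSync(filePath, content, "utf8");
    return `File written successfully: ${filePath}`;
  },

  // Search/replace: swap a specific snippet inside an existing file.
  searchReplace(filePath, search, replacement) {
    const updated = fs.readFileSync(filePath, "utf8").replace(search, replacement);
    fs.writeFileSync(filePath, updated, "utf8");
    return `File updated: ${filePath}`;
  },

  // Run commands: execute a shell command (build, test, mkdir, ...) and capture its output.
  executeCommand(command) {
    return execSync(command, { encoding: "utf8" });
  },
};

// Dispatch one structured command from the LLM onto the matching tool.
function execute(cmd) {
  if (cmd.action === "create_file") return tools.writeFile(cmd.path, cmd.content);
  if (cmd.action === "run_command") return tools.executeCommand(cmd.command);
  // update_file would route through searchReplace once the LLM supplies concrete old/new text.
  throw new Error(`Unhandled action: ${cmd.action}`);
}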

6. Changes in Editor

You instantly see:

  • New files created
  • Existing files updated
  • Errors resolved (if applicable)
  • Code inserted at the right place

🔄 Why This Feels "Magic"

The magic comes from the tight loop between:

  1. LLM reasoning → Plans changes
  2. Tool execution → Applies them instantly
  3. Real-time feedback → Updates context for the next LLM call

This loop continues until your task is complete — without you manually creating files or copy-pasting code.
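
In code, that loop is just plan, act, observe, repeat. A self-contained sketch in which askLLM and executeTool are stubs standing in for the Gemini call and the editor tools shown earlier:

// Sketch of the reason -> act -> observe loop. askLLM and executeTool are stubs here;
// in the real agent they call Gemini and the editor-integrated tools.
async function askLLM(history) {
  // Pretend the model plans one command, then reports completion on the next turn.
  return history.length === 0
    ? { type: "tool_call", name: "executeCommand", args: { command: "mkdir -p calculator" } }
    : { type: "done", message: "Calculator app created successfully!" };
}

async function executeTool(call) {
  return `Success: ran ${call.name}(${JSON.stringify(call.args)})`;
}

async function agentLoop(userQuery) {
  const history = []; // grows each turn: the "real-time feedback"
  while (true) {
    const step = await askLLM(history);     // 1. LLM reasoning: plan the next change
    if (step.type === "done") return step.message;
    const result = await executeTool(step); // 2. Tool execution: apply it instantly
    history.push({ step, result });         // 3. Feed the outcome into the next LLM call
  }
}

agentLoop("create calculator app").then(console.log);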


📚 Summary

  • Agent = LLM + Context Gathering + Tool Execution
  • GitHub Copilot & Cursor can read your entire project state
  • They return not just text but commands that the editor executes
  • All changes happen locally, inside your current workspace
