
Conversation


@naheel0 naheel0 commented Feb 10, 2026

🚀 BΞYTΞFLʘW | Pull Request Protocol

PR Type: (Choose one: feat | fix | refactor | docs | perf)
Issue Link: Fixes #


📝 System Summary

Provide a concise brief of the changes introduced to the stream.

🛠️ Technical Changes

  • Logic change in ...
  • New UI component added: ...
  • Database schema updated: ...

🧪 Quality Assurance (QA)

  • Linting: Code style matches the BeyteFlow grid.
  • Build: npm run build executed without errors.
  • Testing: New logic has been verified and tested.
  • Dark Mode: UI is high-contrast and neon-optimized.

🖼️ Visual Evidence

If this PR affects the UI, drop a screenshot or GIF below:


📡 Developer Authorization

  • I have performed a self-review of my code.
  • My changes generate no new warnings in the console.
  • I have updated the documentation (if applicable).

Authorized by: @naheel0
Timestamp: {{ 10/2/2026 }}


@naheel0 naheel0 requested a review from adithyanmkd as a code owner February 10, 2026 10:21
@vercel

vercel bot commented Feb 10, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: readme-gen-ai
Deployment: Ready
Actions: Preview, Comment
Updated (UTC): Feb 10, 2026 10:39am

@coderabbitai

coderabbitai bot commented Feb 10, 2026

📝 Walkthrough

Summary by CodeRabbit

  • New Features
    • AI README generation now detects project tech stacks (Node.js, Python, Docker) and tailors content.
    • Generated READMEs follow an enhanced structured format with sections for overview, features, architecture, setup, and governance.
    • Installation and environment hints are customized based on detected dependencies and environment.
    • README output incorporates detected license and richer project context for clearer guidance.

Walkthrough

Adds richer README generation logic to the existing POST endpoint: discovers repository root files, detects tech stack (Node/Python/Docker) and license, constructs a detailed, role-based prompt embedding verified project context, and restructures the README blueprint and setup guidance accordingly.
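For readers who want a concrete picture of that flow, the sketch below uses the public GitHub REST API to gather the same kind of context. It is only an illustration under assumed names and endpoints; the actual route.ts relies on its own helpers (such as getRepoContents) rather than the calls shown here.

type RepoMeta = {
  description: string | null;
  language: string | null;
  license: { spdx_id: string | null; name: string } | null;
};

// Illustrative only: fetch repository metadata and root-level files, then derive
// the tech-stack flags the walkthrough mentions.
async function discoverRepoContext(owner: string, repo: string) {
  const headers = { Accept: "application/vnd.github+json" };

  // Metadata: description, primary language, license.
  const metaRes = await fetch(`https://api.github.com/repos/${owner}/${repo}`, { headers });
  const meta = (await metaRes.json()) as RepoMeta;

  // Root-level contents only; this call does not descend into subdirectories.
  const rootRes = await fetch(`https://api.github.com/repos/${owner}/${repo}/contents/`, { headers });
  const rootEntries = (await rootRes.json()) as Array<{ name: string }>;
  const files = rootEntries.map((entry) => entry.name);

  return {
    description: meta.description,
    language: meta.language,
    licenseName: meta.license?.spdx_id ?? meta.license?.name ?? null,
    fileListString: files.join(", "),
    hasNode: files.includes("package.json"),
    hasPython: files.includes("requirements.txt") || files.includes("setup.py"),
    hasDocker: files.includes("Dockerfile") || files.includes("docker-compose.yml"),
  };
}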

Changes

Cohort: AI README Generation Logic
File(s): src/app/api/generate/route.ts
Summary: Replaced simple file-list with dynamic repository root discovery and fileListString; added tech stack detection (Node, Python, Docker) and licenseName; refactored prompt to a structured, role-based "Master Expert Prompt" containing PROJECT CONTEXT, strict README sections, and conditional setup hints.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested labels

area: ai-logic, area: backend

Suggested reviewers

  • adithyanmkd

Poem

🐰 I hopped through files both large and small,
Found Node, Python, Docker — I saw them all.
A Master Prompt I stitched with care,
So README flowers blossom everywhere.
✨📚

🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)

  • Description check (⚠️ Warning): The description is a template with unchecked checkboxes and placeholder fields rather than actual details about the changeset. Resolution: Replace the template with concrete information about the changes made, such as the tech stack detection enhancements and prompt structure improvements.
  • Title check (❓ Inconclusive): The title 'update prompts' is vague and generic, not clearly summarizing the actual changes to the README generation logic and tech stack detection. Resolution: Use a more descriptive title that reflects the main changes, such as 'Enhance README generation with dynamic tech stack detection and structured prompts'.

✅ Passed checks (1 passed)

  • Docstring Coverage (✅ Passed): Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
  • 📝 Generate docstrings
  • 🧪 Generate unit tests (beta)
    • Create PR with unit tests
    • Post copyable unit tests in a comment
    • Commit unit tests in branch frontend


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/app/api/generate/route.ts (1)

119-124: ⚠️ Potential issue | 🟡 Minor

No validation of the AI response; truncation risk with 4096 max tokens.

The prompt now requests significantly more content (tables, directory trees, badges, multiple detailed sections), but maxOutputTokens in gemini.ts is set to 4096. This may truncate the README mid-section. Additionally, if the response is blocked by safety filters, response.text() could return empty or throw.

Consider:

  1. Increasing maxOutputTokens in gemini.ts to accommodate the richer prompt.
  2. Adding a guard for empty/blocked responses before returning to the client.
const markdown = response.text();
if (!markdown) {
  return NextResponse.json(
    { error: "AI generation returned empty content. Please try again." },
    { status: 502 }
  );
}
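For point 1, the change would live in gemini.ts, which is not part of this diff; the snippet below is only a guess at its shape using the @google/generative-ai SDK, and the model name and environment variable are assumptions.

import { GoogleGenerativeAI } from "@google/generative-ai";

// Hypothetical gemini.ts: raise maxOutputTokens so the richer, multi-section
// README is not truncated mid-way.
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? "");

export const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash", // assumed model name
  generationConfig: {
    maxOutputTokens: 8192, // up from the 4096 noted in this comment
    temperature: 0.7,
  },
});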
🤖 Fix all issues with AI agents
In `@src/app/api/generate/route.ts`:
- Around lines 97-99: The prompt asks for a tree-style directory structure even though getRepoContents only returns root-level entries. In the prompt assembly (where getRepoContents is called and the "Technical Architecture" section is composed), either remove or soften the tree instruction: request a flat file list instead, or state explicitly that only root-level files are available, so the model does not hallucinate nested folders.
- Around lines 77-78: The prompt currently falls back to fabricated strings for repoInfo?.description and repoInfo?.language ("A high-performance software solution." and "Multilingual/Polyglot"), which injects false claims. Replace them with neutral, explicit fallbacks such as "No description provided." and "Language unknown" so the prompt does not imply unverified facts.
- Around lines 108-110: The 'Detailed "License" section (MIT)' line hardcodes "MIT". Insert repoInfo.license (e.g. repoInfo.license.name or repoInfo.license.spdx_id) instead of the literal "MIT", and when repoInfo.license is missing, fall back to a neutral instruction asking the model to infer the license from the repository metadata.
- Line 80: The "**Tech Stack Context**" line interpolates the boolean flags directly, which yields extra spaces or blank output. Build an array of labels from hasNode, hasPython, and hasDocker (e.g. [hasNode && "Node.js Environment", hasPython && "Python Environment", hasDocker && "Containerized"].filter(Boolean).join(", ")), use that joined string in the interpolation, and show a default such as "None" when the array is empty. A consolidated sketch covering all four changes follows this list.
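A consolidated sketch of the four changes above is shown below. The variable and field names mirror the review comments (repoInfo, hasNode, fileListString, and so on), but the surrounding code in route.ts is assumed rather than quoted, so treat this as an illustration, not the exact fix.

type RepoInfo = {
  description?: string | null;
  language?: string | null;
  license?: { name?: string; spdx_id?: string } | null;
};

// Builds the PROJECT CONTEXT block with neutral fallbacks, a metadata-driven
// license line, a cleanly joined tech-stack line, and an explicitly flat file list.
function buildProjectContext(repoInfo: RepoInfo | null, files: string[]): string {
  const hasNode = files.includes("package.json");
  const hasPython = files.includes("requirements.txt") || files.includes("setup.py");
  const hasDocker = files.includes("Dockerfile") || files.includes("docker-compose.yml");

  // 1. Neutral fallbacks instead of fabricated claims.
  const description = repoInfo?.description ?? "No description provided.";
  const language = repoInfo?.language ?? "Language unknown";

  // 2. License taken from repository metadata, with an explicit fallback instruction.
  const licenseName =
    repoInfo?.license?.spdx_id ??
    repoInfo?.license?.name ??
    "Unknown (state that the license could not be verified)";

  // 3. Tech-stack line assembled from flags so it never contains stray spaces.
  const techStack =
    [
      hasNode && "Node.js Environment",
      hasPython && "Python Environment",
      hasDocker && "Containerized",
    ]
      .filter(Boolean)
      .join(", ") || "None detected";

  // 4. Flat file list, with the depth limitation stated so the model does not
  //    hallucinate nested folders.
  const fileListString = files.join(", ");

  return [
    "PROJECT CONTEXT",
    `- Description: ${description}`,
    `- Primary Language: ${language}`,
    `- License: ${licenseName}`,
    `- Tech Stack Context: ${techStack}`,
    `- Root-level files (only the repository root was scanned): ${fileListString}`,
  ].join("\n");
}

The returned string could then be embedded in the Master Expert Prompt in place of the four separate interpolations.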
🧹 Nitpick comments (1)
src/app/api/generate/route.ts (1)

62-64: Stack detection misses common project markers.

pyproject.toml is the modern standard for Python projects (PEP 621) and is more common than setup.py in newer codebases. Similarly, docker-compose.yaml (the .yaml extension) is just as valid as .yml.

Proposed fix
 const hasNode = files.includes("package.json");
-const hasPython = files.includes("requirements.txt") || files.includes("setup.py");
-const hasDocker = files.includes("Dockerfile") || files.includes("docker-compose.yml");
+const hasPython = files.includes("requirements.txt") || files.includes("setup.py") || files.includes("pyproject.toml");
+const hasDocker = files.includes("Dockerfile") || files.includes("docker-compose.yml") || files.includes("docker-compose.yaml");

Refactor README generation logic for improved accuracy and clarity. Enhance tech stack detection and update prompt structure with dynamic license information.
@coderabbitai coderabbitai bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/app/api/generate/route.ts (1)

120-124: ⚠️ Potential issue | 🟡 Minor

No guard against empty or safety-blocked model output.

If the Gemini safety filters block the response, response.text() can return an empty string (or throw, depending on the SDK version). The client would receive { markdown: "" } with a 200 status, which is silently broken.

Proposed fix
     const result = await model.generateContent(prompt);
     const response = await result.response;
     const markdown = response.text();
 
+    if (!markdown) {
+      return NextResponse.json(
+        { error: "The model returned an empty response. The content may have been blocked by safety filters." },
+        { status: 502 }
+      );
+    }
+
     return NextResponse.json({ markdown });
🧹 Nitpick comments (1)
src/app/api/generate/route.ts (1)

95-96: "Minimum 5 features" may cause hallucinated content for small/trivial repos.

For repositories with limited functionality, forcing a minimum of 5 features will push the model to fabricate. Consider phrasing it as "List key features (aim for 3–5 where supported by the project context)" to give the model room to be accurate.
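Concretely, the instruction inside the prompt template could be softened along these lines (the surrounding prompt text is assumed, not quoted from route.ts):

// Hypothetical replacement for the "Minimum 5 features" instruction in the prompt.
const featureInstruction =
  "List the key features of the project (aim for 3-5), including only features " +
  "that are supported by the PROJECT CONTEXT; do not invent functionality.";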

@naheel0 naheel0 added the area: ai-logic Related to Gemini prompts, tokens, or model responses. label Feb 10, 2026
@naheel0 naheel0 merged commit 93fc257 into main Feb 10, 2026
4 checks passed
@naheel0 naheel0 deleted the frontend branch February 10, 2026 10:44