# ContentForge — Deterministic Headline Scorer and Content Quality Gate

Start with the headline doctor · 50 endpoints behind it · Deterministic scoring in <50ms

Score content before you post. ContentForge is a deterministic headline scorer and before-publish quality gate that grades headlines, tweets, LinkedIn posts, and ad copy with an explainable A–F score, actionable suggestions, and a PASSED | REVIEW | FAILED verdict in under 50ms.

Think of it as a digital ruler for content quality. A ruler doesn't need a dataset to tell you something is 12 inches long — it just needs to be correctly calibrated. ContentForge's heuristic engine is that ruler: zero variance on the same input, zero hallucinations, fully auditable open-source logic. AI (Ollama or Gemini) kicks in only for generation endpoints like rewrites, hooks, and subject lines.

```python
import requests

HEADERS = {
    "X-RapidAPI-Key": "YOUR_KEY",
    "X-RapidAPI-Host": "contentforge1.p.rapidapi.com",
}

# A vague draft fails the gate...
r = requests.post(
    "https://contentforge1.p.rapidapi.com/v1/score_tweet",
    headers=HEADERS,
    json={"text": "I'm working on a new project."},
)
# → {"score": 32, "grade": "C", "quality_gate": "FAILED", "suggestions": [...]}

# ...while a concrete, numbers-first draft passes.
r = requests.post(
    "https://contentforge1.p.rapidapi.com/v1/score_tweet",
    headers=HEADERS,
    json={"text": "Got 100 signups in 24 hours 🚀 Here's the copy that converted: #buildinpublic"},
)
# → {"score": 91, "grade": "A", "quality_gate": "PASSED", "suggestions": []}
```

→ Start free on RapidAPI — no credit card required, 300 requests/month on BASIC.


## Current Status (v1.8.0)

| Component | Status | Notes |
| --- | --- | --- |
| ContentForge API | ✅ Live | https://contentforge-api-lpp9.onrender.com |
| RapidAPI listing | ✅ Public | 50 endpoints, 4-tier pricing |
| Keep-warm cron | ✅ Active | cron-job.org pings /v1/status every 10 min (no LLM call) |
| Gemini backend | ✅ Configured | gemini-2.0-flash on Render (1,500 RPD free tier) |
| Ollama local | ✅ Running | Scoring uses zero AI calls — pure heuristics |
| Twitter bots | ✅ Active | Multi-account state machine, health scoring |
| Legal docs | ✅ Done | docs/TERMS_OF_USE.md, docs/TERMS_AND_CONDITIONS.md |

## Calibration Status

ContentForge is deterministic today: the same input always produces the same score and audit trail.

Determinism, however, is not validation. The engine has not yet been fully validated against outcome data; calibration is still in progress, and the public log lives in docs/validation.md.

Current practical framing:

- Use it as an explainable pre-flight quality gate.
- Treat scores as heuristic guidance, not guaranteed performance prediction.
- Expect weights to keep improving as blind test data comes in.
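In practice, the pre-flight framing looks like a small client helper. This is an illustrative sketch: the endpoint URL and headers follow the quick-start example above, the `score`/`quality_gate` response fields follow the sample responses shown earlier, and `preflight`/`should_publish` are hypothetical helper names, not part of the API.

```python
import requests

API = "https://contentforge1.p.rapidapi.com"
HEADERS = {
    "X-RapidAPI-Key": "YOUR_KEY",
    "X-RapidAPI-Host": "contentforge1.p.rapidapi.com",
}

def preflight(text: str) -> dict:
    """Score a draft tweet and return the full response for auditing."""
    r = requests.post(f"{API}/v1/score_tweet", headers=HEADERS, json={"text": text})
    r.raise_for_status()
    return r.json()

def should_publish(result: dict) -> bool:
    """Treat only an explicit PASSED verdict as publishable; REVIEW and
    FAILED both route the draft back to a human editor."""
    return result.get("quality_gate") == "PASSED"
```

Because scoring is deterministic, a draft that passes this gate in CI will pass identically at publish time.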

Calibration tooling lives in scripts/calibrate_content.py. Quick start:

```bash
python scripts/calibrate_content.py \
  --input docs/calibration_dataset_template.csv \
  --report-json docs/calibration_report.json \
  --report-md docs/calibration_report.md \
  --examples-json docs/calibration_examples.json
```

## Launch Feedback So Far

Early launch feedback on Reddit pushed the positioning in a clearer direction:

- The calibration challenge got more useful engagement than the broad feature pitch.
- People understood "same input, same score" quickly, but trusted it more once the proof story was explicit.
- A before/after comparison is easier to believe than a giant endpoint list.
- The Chrome extension and headline workflow are easier to grasp on first contact than the full API surface.

The current landing page and docs now reflect that narrower first impression. Full notes live in docs/reddit-launch-notes.md.


## Why Deterministic Scoring?

Every LLM-based scorer has the same flaw: ask it to score the same tweet twice and you'll get two different answers. For a professional content workflow, that's not a tool — that's a vibe check.

ContentForge's scoring layer is pure Python heuristics. Same input → same output, every time. The logic is open source; you can read exactly why a post scored 74 and not 83. This is the Deterministic Advantage:

| | ContentForge | LLM-based scoring |
| --- | --- | --- |
| Response time | <50ms | 1–5 seconds |
| Variance on same input | 0% | ~15–30% |
| Explainability | Full — every deduction itemised | Black box |
| Cost per call | Free (heuristics) | $0.001–0.01 per call |
| Self-hostable | ✅ (python scripts/api_prototype.py) | Depends on provider |
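To make the contrast concrete, here is a toy deterministic scorer. It is not ContentForge's actual rules or weights (those live in the open-source heuristics); it only illustrates the property the table describes: a pure function of the text, so the same input can never produce two different scores.

```python
# Illustrative power-word list; ContentForge's real lexicon is larger.
POWER_WORDS = {"free", "proven", "secret", "results", "signups"}

def toy_score(text: str) -> int:
    """Toy heuristic scorer: a pure function of the input string."""
    score = 50
    words = [w.strip("#:.,!?") for w in text.lower().split()]
    score += 10 * sum(1 for w in words if w in POWER_WORDS)
    if any(ch.isdigit() for ch in text):   # concrete numbers tend to lift CTR
        score += 15
    if len(text) > 240:                    # drafts near the platform limit read cramped
        score -= 10
    return max(0, min(100, score))
```

Every bonus and deduction is a line of code you can point at: that is what "full explainability" means in practice.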

## All 50 Endpoints

### Instant Scorers (no AI, <50ms)

| Endpoint | What It Does |
| --- | --- |
| POST /v1/score_tweet | Score a tweet 0–100 with grade + quality gate |
| POST /v1/score_linkedin_post | Score a LinkedIn post for professional engagement |
| POST /v1/score_instagram | Score an Instagram caption for saves and reach |
| POST /v1/score_youtube_title | Score a YouTube title for CTR and SEO |
| POST /v1/score_youtube_description | Score a YouTube description for watch time |
| POST /v1/score_email_subject | Score an email subject line for open rate |
| POST /v1/score_readability | Flesch–Kincaid + grade level + suggestions |
| POST /v1/score_threads | Score a Threads post |
| POST /v1/score_facebook | Score a Facebook post |
| POST /v1/score_tiktok | Score a TikTok caption |
| POST /v1/score_pinterest | Score a Pinterest pin description |
| POST /v1/score_reddit | Score a Reddit post/title |
| POST /v1/analyze_headline | Headline power word detection + CTR scoring |
| POST /v1/analyze_hashtags | Hashtag strategy audit across platforms |
| POST /v1/score_content | Single unified endpoint — pass platform param |
| GET /v1/analyze_headline | GET variant for quick headline scoring |
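Because /v1/score_content takes a platform parameter, one loop can score a draft for every channel. The `text`/`platform` field names below are assumptions based on the endpoint description (confirm them against the RapidAPI schema), and `best_platform` is a hypothetical helper; /v1/score_multi does the same job in a single call.

```python
import requests

API = "https://contentforge1.p.rapidapi.com"
HEADERS = {
    "X-RapidAPI-Key": "YOUR_KEY",
    "X-RapidAPI-Host": "contentforge1.p.rapidapi.com",
}

def score_everywhere(text: str, platforms: list) -> dict:
    """Score one draft per platform via the unified endpoint."""
    results = {}
    for platform in platforms:
        r = requests.post(f"{API}/v1/score_content", headers=HEADERS,
                          json={"text": text, "platform": platform})
        r.raise_for_status()
        results[platform] = r.json()
    return results

def best_platform(results: dict) -> str:
    """Pick the platform where the draft scored highest."""
    return max(results, key=lambda p: results[p].get("score", 0))
```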

### Multi-Content & Comparison

| Endpoint | What It Does |
| --- | --- |
| POST /v1/score_multi | Score one post across all platforms simultaneously |
| POST /v1/ab_test | Head-to-head score comparison of two drafts |

### AI Generation (Ollama → Gemini fallback)

| Endpoint | What It Does |
| --- | --- |
| POST /v1/auto_improve | Score → if not PASSED → AI rewrite → re-score loop (up to 5 iterations); returns best version + full iteration history |
| POST /v1/compose_assist | Generate 2–5 rewrite variants, score each, return ranked with quality gates |
| POST /v1/improve_headline | Rewrite a weak headline N times, sorted by score |
| POST /v1/generate_hooks | Scroll-stopping openers for any topic/style |
| POST /v1/rewrite | Rewrite for Twitter, LinkedIn, email, or blog |
| POST /v1/tweet_ideas | Tweet ideas for a niche with hashtags |
| POST /v1/content_calendar | 7-day content calendar with ready-to-post drafts |
| POST /v1/thread_outline | Full Twitter thread: hook + body + CTA close |
| POST /v1/generate_bio | Optimised social bio, auto-trimmed to platform limits |
| POST /v1/generate_ad_copy | Google/Meta ad copy with CTA and compliance signals |
| POST /v1/generate_subject_line | AI email subject line with open-rate optimisation |
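The auto_improve loop runs server-side, but its shape is worth seeing. The sketch below is a client-side illustration under stated assumptions: `score` and `rewrite` are injected callables standing in for the scorer and the LLM, and the stop-on-PASSED condition and history shape are inferred from the endpoint description, not a documented schema.

```python
def auto_improve_loop(score, rewrite, text: str, max_iters: int = 5):
    """Sketch of the score → rewrite → re-score loop: stop early once the
    quality gate is PASSED, and keep the best-scoring version seen."""
    history, best = [], None
    for _ in range(max_iters):
        result = score(text)
        history.append({"text": text, "score": result["score"]})
        if best is None or result["score"] > best["score"]:
            best = history[-1]
        if result.get("quality_gate") == "PASSED":
            break
        text = rewrite(text)
    return best, history
```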

### Quality Operations (QOps)

| Endpoint | What It Does |
| --- | --- |
| POST /v1/quality_gate | Batch PASSED/REVIEW/FAILED verdict for up to 10 posts |
| GET /v1/platform_friction | Real-time platform state (rate limits, algo signals) |
| POST /v1/proof_export | Export scored posts + engagement delta as proof report |
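Since /v1/quality_gate caps a batch at 10 posts, larger queues need client-side chunking. A minimal helper:

```python
def batch_posts(posts: list, cap: int = 10) -> list:
    """Split a queue of drafts into batches of at most `cap` posts,
    matching the endpoint's 10-post-per-call limit."""
    return [posts[i:i + cap] for i in range(0, len(posts), cap)]
```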

### Utility

| Endpoint | What It Does |
| --- | --- |
| GET /health | Service health: LLM backend, usage stats |
| GET /v1/status | Lightweight ping — version, endpoint count |

(Full 50-endpoint list with request/response schemas: RapidAPI docs)


## Self-Hosting

ContentForge runs fully locally with Ollama. No external AI calls are needed for scoring.

```bash
git clone https://github.com/CaptainFredric/ContentForge.git
cd ContentForge
pip install -r requirements.txt
python scripts/api_prototype.py
# → Listening on http://localhost:5000
```

What runs locally with zero external calls:

- All 12 platform scorers (deterministic, <50ms)
- Quality gate evaluation (PASSED / REVIEW / FAILED)
- Rate limiting and proof dashboard

What uses Ollama locally or falls back to Gemini:

- Hook generation, rewrites, bio generation, subject lines, ad copy

LLM chain: Ollama first → Gemini 2.5 Flash if Ollama unavailable → model rotation. If Ollama is running locally, nothing leaves your machine for AI calls.
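The chain is a first-success-wins loop. The sketch below is an illustrative pattern, not the repository's actual implementation: each backend is a callable that returns generated text or raises if unavailable.

```python
def generate_with_fallback(prompt: str, backends: list) -> str:
    """Try each backend in order (e.g. Ollama first, then Gemini) and
    return the first successful response."""
    errors = []
    for backend in backends:
        try:
            return backend(prompt)
        except Exception as exc:  # a down backend just means "try the next one"
            errors.append(exc)
    raise RuntimeError(f"all {len(backends)} LLM backends failed: {errors}")
```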

License: AGPL-3.0


## Pricing (via RapidAPI)

| Plan | Price | AI calls/mo | Requests/mo |
| --- | --- | --- | --- |
| BASIC | Free | 50 | 300 |
| PRO | $9.99/mo | 750 | 1,000 |
| ULTRA | $29.99/mo | 3,000 | 4,000 |
| MEGA | $99/mo | 18,000 | 20,000 |

All plans include every endpoint. Heuristic scoring calls don't count against your AI quota.

→ Get your free API key


## Architecture

```
scripts/
└── api_prototype.py         # ContentForge Flask API — all 50 endpoints (incl. /v1/auto_improve)
extension/
├── manifest.json            # Chrome extension (Manifest V3)
├── popup.html / popup.js    # Score, compare, rewrite from the toolbar
├── content.js / content.css # Real-time scoring badge on X, LinkedIn, etc.
└── background.js            # Service worker — API calls + offline fallback
deploy/
├── render.yaml              # Render Blueprint
├── openapi.json             # OpenAPI 3.0.3 spec (50 paths)
└── Procfile                 # Gunicorn start command
docs/
├── ContentForge_API_Documentation.md
└── RapidAPI_GettingStarted.md
```

## Contributing

PRs against main. One feature/fix per PR. Open an issue first. See CONTRIBUTING.md.

## License

GNU Affero General Public License v3.0. See LICENSE.

## Acknowledgements

Early development scaffolding adapted from MoneyPrinterV2 by @DevBySami.
