
⚡ text-as-code

An IDE for Persuasion

Treat psychological influence as executable code.
Write text that compiles into specific human reactions.

100% offline. Runs entirely in your browser. No server. No API keys. Your data never leaves your device.

Problem · Concept · How It Works · Modules · Tech Stack · Getting Started · Roadmap


text-as-code demo screenshot

The Problem

Every day, billions of messages are sent — cold emails, job applications, sales pitches, landing pages. Most of them fail. Not because the information is wrong, but because the psychology is wrong.

| What you wrote | Why it fails | What works |
|---|---|---|
| "I hope to hear from you" | Hedging → signals low status | "Let's align on next steps Thursday" |
| "Was responsible for managing a team" | Passive voice → weak authority | "Led a 12-person engineering team" |
| "Buy now!" | High-friction phrasing → triggers resistance | "Unlock instant access" |
| "Our product is the best" | Unquantified claim → zero credibility | "Trusted by 4,200+ teams since 2021" |

These aren't style preferences. They're predictable psychological patterns backed by decades of behavioral science research.

text-as-code treats these patterns as rules — like a linter for code, but for human persuasion.


Concept

What if text had a compiler?

INPUT:  "I think our product might be able to help you."
           ^^^^                ^^^^^
           WARN: Hedging       WARN: Uncertainty marker

OUTPUT: "Our platform eliminates 6 hours of manual work per week."
         ✅ Authority          ✅ Quantified impact

text-as-code is a real-time analysis engine that:

  1. Parses your text against psychological rule sets
  2. Highlights weak spots (🔴), missed opportunities (🟡), and strong patterns (🟢)
  3. Explains the cognitive science behind each flag
  4. Suggests 1-click rewrites — powered by an AI running entirely inside your browser

Think ESLint for persuasion. Or Grammarly, but for influence instead of grammar.


How It Works

Two-Pass Analysis

Every piece of text runs through two complementary engines — both running 100% client-side:

Pass 1 — Static Rules (Instant, Deterministic)

A curated library of 37 regex/NLP patterns (15 common, 7 sales, 7 resume, 8 cold email) that catch common psychological missteps in milliseconds:

```json
{
  "id": "WEAK_HEDGE",
  "pattern": "\\b(I think|maybe|perhaps|I hope)\\b",
  "severity": "warning",
  "trigger": "cognitive_weakness",
  "psychology": "Hedging activates uncertainty bias in the reader, reducing perceived competence by ~40%.",
  "suggestion": "Remove the hedge. State with authority."
}
```

Rules are tagged by module — a resume gets different checks than a sales email.
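The matching step can be sketched in a few lines of JavaScript. This is illustrative only: the `modules` field, the `analyzeStatic` name, and the issue shape are assumptions, not the project's actual `static-analyzer.js` API.

```javascript
// Illustrative sketch of Pass 1: apply module-tagged regex rules to text
// and collect flagged spans. The WEAK_HEDGE rule mirrors the JSON above;
// the `modules` field and function/issue shapes are assumptions.

const rules = [
  {
    id: "WEAK_HEDGE",
    pattern: "\\b(I think|maybe|perhaps|I hope)\\b",
    severity: "warning",
    modules: ["common"], // "common" rules apply to every module
    suggestion: "Remove the hedge. State with authority.",
  },
];

function analyzeStatic(text, module, ruleSet = rules) {
  const issues = [];
  for (const rule of ruleSet) {
    // Skip rules that are neither cross-module nor tagged for this module
    if (!rule.modules.includes("common") && !rule.modules.includes(module)) continue;
    const re = new RegExp(rule.pattern, "gi");
    for (const m of text.matchAll(re)) {
      issues.push({
        id: rule.id,
        match: m[0],
        start: m.index,
        end: m.index + m[0].length,
        severity: rule.severity,
        suggestion: rule.suggestion,
      });
    }
  }
  return issues;
}
```

Running `analyzeStatic("I think our product might help.", "sales")` flags the leading "I think" as a `WEAK_HEDGE` span with its character offsets, ready for highlighting.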

Pass 2 — In-Browser LLM (Deep, Contextual)

For nuances that patterns can't catch — tone, framing, narrative arc — an LLM runs directly in your browser via WebGPU:

"Your cold email uses reciprocity correctly in paragraph 1,
 but the CTA in paragraph 3 creates cognitive overload by
 presenting 3 choices. Reduce to a single, binary CTA."

No API calls. No server. The model runs on your GPU locally.

| Runtime | Technology | When it's used |
|---|---|---|
| WebLLM | WebGPU-accelerated (30-70 tok/s) | Modern browsers with GPU |
| Transformers.js | WASM fallback (4x faster with WebGPU in v4) | Older browsers, no GPU |

Models are downloaded once and cached in IndexedDB for fully offline use.
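The fallback decision might look like the sketch below. `navigator.gpu` and `requestAdapter()` are the standard WebGPU detection hooks; the `pickRuntime` helper and its return values are illustrative assumptions, not the project's `model-manager.js`.

```javascript
// Illustrative sketch of the WebLLM → Transformers.js fallback: probe for
// a WebGPU adapter and drop to the WASM path when none is available.

async function pickRuntime(gpu) {
  if (gpu) {
    try {
      const adapter = await gpu.requestAdapter();
      if (adapter) return "webllm"; // GPU path: WebGPU-accelerated WebLLM
    } catch {
      // Probing failed; fall through to the WASM path
    }
  }
  return "transformers.js"; // CPU path: WASM fallback
}

// In the browser: const runtime = await pickRuntime(navigator.gpu);
```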


The Linter UI

The editor feels like VS Code meets Grammarly:

| Color | Meaning | Example |
|---|---|---|
| 🔴 Red | High cognitive friction — reader will bounce | Sentence over 40 words |
| 🟡 Yellow | Weak framing — missed persuasion opportunity | Passive voice, no CTA |
| 🟢 Green | Strong — psychologically effective | Quantified claim, pattern interrupt |

Hover over any highlight → see the psychological principle, the research behind it, and a 1-click rewrite.
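Mechanically, a 1-click rewrite is a span replacement over the flagged character range. The `applyFix` helper and issue shape below are illustrative assumptions, not the editor's real API.

```javascript
// Illustrative sketch of a 1-click rewrite: splice the suggested text
// into the flagged character range reported by the analyzer.

function applyFix(text, issue, replacement) {
  return text.slice(0, issue.start) + replacement + text.slice(issue.end);
}

applyFix("I hope to hear from you", { start: 0, end: 6 }, "Expect");
// → "Expect to hear from you"
```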

A Persuasion Score (0-100) gives an at-a-glance quality metric, with a radar chart showing coverage across psychological triggers.
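One plausible way to derive such a score is to start at 100 and subtract severity-weighted penalties per flagged issue. The weights below are assumptions for illustration, not what the project's `scorer.js` actually does.

```javascript
// Illustrative sketch of a 0-100 Persuasion Score: subtract a penalty
// per flagged issue, weighted by severity, and clamp at zero.

const PENALTY = { error: 10, warning: 5, info: 2 };

function persuasionScore(issues) {
  const deducted = issues.reduce((sum, i) => sum + (PENALTY[i.severity] ?? 0), 0);
  return Math.max(0, 100 - deducted);
}

persuasionScore([{ severity: "error" }, { severity: "warning" }]);
// → 85
```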


Modules

The system routes text through discipline-specific psychological rule sets:

🎯 Sales & Closing

| Primary Trigger | What the engine does |
|---|---|
| Loss Aversion | Flags missing scarcity/urgency framing |
| Risk Reversal | Detects absence of guarantees or social proof |
| Assumed Close | Rewrites weak asks into confident next-step language |

📄 Job Applications & Resumes

| Primary Trigger | What the engine does |
|---|---|
| Authority & Competence | Strips passive voice, injects power verbs |
| Halo Effect | Ensures a strong opening creates a positive bias cascade |
| Quantified Impact | Forces "increased sales" → "drove 20% revenue increase" |

📧 Cold Email & Outreach

| Primary Trigger | What the engine does |
|---|---|
| Curiosity / Info Gap | Optimizes subject lines to force opens |
| Pattern Interrupt | Flags generic openings that get deleted |
| Cognitive Ease | Measures reading time, flags complexity |
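The Cognitive Ease check above could be sketched as a reading-time estimate plus a long-sentence detector. The ~200 words-per-minute rate is a common rule of thumb, and the 40-word cutoff matches the red-flag threshold in the Linter UI table; both are assumptions here, not the project's exact implementation.

```javascript
// Illustrative sketch of a Cognitive Ease check: estimate reading time
// and flag sentences over 40 words.

function cognitiveEase(text) {
  const words = text.trim().split(/\s+/).filter(Boolean);
  const sentences = text
    .split(/[.!?]+/)
    .map((s) => s.trim())
    .filter(Boolean);
  return {
    readingSeconds: Math.round((words.length / 200) * 60),
    longSentences: sentences.filter((s) => s.split(/\s+/).length > 40),
  };
}
```

A run-on 41-word sentence lands in `longSentences`, which the UI could then highlight in red.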

Tech Stack

Zero backend. Everything runs in the browser.

| Layer | Technology | Why |
|---|---|---|
| Build | Vite | Fast HMR, small bundles |
| UI | React 18 | Component model, hooks, concurrent features |
| Editor | TipTap (ProseMirror) | Custom decorations for highlights |
| AI (Primary) | WebLLM (@mlc-ai/web-llm) | OpenAI-compatible API, WebGPU, 30-70 tok/s |
| AI (Fallback) | Transformers.js v4 (@huggingface/transformers) | WASM + WebGPU, 1200+ models, offline caching |
| State | Zustand | Lightweight, no boilerplate |
| Storage | IndexedDB | Persist docs & cached models offline |
| Styling | Vanilla CSS | Full control, no framework overhead |

Recommended Models

| Model | Size | Best for |
|---|---|---|
| Phi-3.5-mini-instruct (q4) | ~2 GB | Deep analysis, best quality |
| SmolLM2-1.7B-Instruct (q4) | ~1 GB | Fast analysis, lower-end GPUs |
| Qwen2.5-1.5B-Instruct (q4) | ~1 GB | Transformers.js fallback |

Project Structure

text-as-code/
├── src/
│   ├── main.jsx
│   ├── App.jsx
│   ├── index.css                  # Design system
│   │
│   ├── engine/                    # 🧠 Psychological analysis core
│   │   ├── rules/                 # Static rule definitions (JSON)
│   │   │   ├── common.json        # 15 cross-module rules
│   │   │   ├── sales.json         # 7 sales-specific rules
│   │   │   ├── resume.json        # 7 resume-specific rules
│   │   │   └── cold-email.json    # 8 cold email rules
│   │   ├── modules.js             # Module registry + trigger weights
│   │   ├── static-analyzer.js     # Pass 1: regex rule matcher
│   │   ├── llm-analyzer.js        # Pass 2: in-browser LLM (Phase 4)
│   │   ├── prompts.js             # Module-specific system prompts
│   │   └── scorer.js              # Persuasion score (0-100)
│   │
│   ├── ai/                        # 🤖 In-browser AI runtime (Phase 4)
│   │   ├── provider.jsx           # React context for AI
│   │   ├── webllm.js              # WebLLM adapter
│   │   ├── transformers.js        # Transformers.js adapter
│   │   └── model-manager.js       # Download, cache, fallback
│   │
│   ├── components/                # 🎨 UI
│   │   ├── Editor/
│   │   │   └── Editor.jsx         # TipTap editor + highlight plugin
│   │   ├── Suggestions/
│   │   │   └── SuggestionList.jsx # Issue cards with psychology + fixes
│   │   ├── Score/
│   │   │   └── ScorePanel.jsx     # Ring gauge + trigger bars
│   │   └── ModuleSwitcher.jsx     # Sales / Resume / Cold Email
│   │
│   └── store/
│       └── editor-store.js        # Zustand state management
│
├── index.html
├── package.json
├── vite.config.js
└── README.md

Getting Started

Prerequisites

  • A modern browser with WebGPU support (Chrome 113+, Edge 113+, Safari 18+)
  • Node.js ≥ 18 (for development only)
  • No API keys needed. No server needed.

Install & Run

```bash
# Clone
git clone https://github.com/AdrianBesleaga/text-as-code.git
cd text-as-code

# Install
npm install

# Start dev server
npm run dev
```

Open http://localhost:5173 — paste your text, select a module, and watch the psychological analysis in real time.

First Launch

  1. The app loads instantly with static rules — no download needed
  2. Click "Download AI Model" to enable deep LLM analysis (~1-2 GB, one-time)
  3. Once cached, the app works fully offline — disconnect your network and try it

Privacy

Your text never leaves your browser. There is no server. There are no API calls. The AI model runs on your local GPU via WebGPU. All data is stored in IndexedDB on your device. You can verify this by opening DevTools → Network tab while using the app.

This makes text-as-code safe to use with:

  • Confidential business proposals
  • Salary negotiation emails
  • Legal correspondence
  • Anything you wouldn't paste into ChatGPT

Roadmap

Phase 1 — Static Linter ✅

  • 37 psychological rules across 3 modules
  • TipTap editor with color-coded highlights (red/yellow/green)
  • Persuasion Score (0-100) with trigger coverage bars
  • Inline issue cards with psychology explanation + fix suggestion
  • Module switching (Sales, Resume, Cold Email)

Phase 2 — In-Browser AI

  • WebLLM integration (WebGPU)
  • Transformers.js fallback (WASM)
  • Model download manager with progress UI + model selection
  • Deep contextual analysis via local LLM

Phase 3 — Power Features

  • Document persistence (IndexedDB)
  • Export to PDF / Markdown / Clipboard
  • Custom rule editor (create your own psychological rules)
  • Side-by-side before/after comparison view

Phase 4 — Distribution

  • Chrome Extension (inject into Gmail, LinkedIn, Google Docs)
  • PWA / installable offline app
  • Community rule packs (share & import modules)

Philosophy

Text is the most underrated technology. It is the interface behind every human interaction: hiring, selling, persuading, connecting. Yet we treat writing as an art, not a science.

text-as-code treats writing as engineering:

  • Anti-patterns are bugs. They can be detected and fixed.
  • Principles are functions. They can be composed and applied.
  • Outcomes are tests. They can be measured and optimized.

Write text that compiles into the reaction you want.


License

MIT


text-as-code — Because every word is an instruction to the human mind.
