ClawIntel

Competitive Intelligence Battle Cards for Sales Teams — an OpenClaw plugin + interactive 3D UI that gives sales reps instant, grounded competitive analysis before every call.

Built for The Agent Toolkit - OpenClaw Hack Day 2026.

Live Demo

[Screenshot: ClawIntel UI]

  • UI: http://localhost:5174 (3D interactive dashboard with 5 AI sales characters)
  • API: http://localhost:3001 (7 MCP tools exposed as REST endpoints)
  • Backend: Full 4-sponsor pipeline running with live API keys

What It Does

Ask competitive questions through AI sales characters and get structured, grounded battle cards:

User: How does Acme compare on enterprise pricing?

# Battle Card: Acme (Confidence: 90%)

## Comparison
- Our Enterprise: $49/user/mo (all-inclusive, no overage fees)
- Acme Enterprise: $39/user/mo + 30-40% API overage = $5,265/mo at 100 users
- TCO at scale: We're 22% cheaper

## Talking Points
- Transparent pricing: no hidden fees, unlimited API calls
- Real-time sync (sub-second) vs their 15-minute batch delay
- Full HIPAA compliance (certified) vs their beta status

## Objection Handlers
- "Acme is cheaper per seat" -> Factor in API overages, we win on TCO at 50+ users
- "They have more integrations" -> We cover Salesforce, HubSpot, Slack natively

Sources: ClawIntel Product Documentation (72% relevance)

Architecture (Deep Dive)

End-to-End Request Flow

                                    ClawIntel Pipeline
                                    ==================

  User Query ──> [1. Civic Auth] ──> [2. Redis Cache Check] ──> Cache Hit? ──> Return Card
                   |                        |
                   | JWKS JWT verify         | Semantic similarity search
                   | Role + permissions      | (vector-based, not exact match)
                   |                        |
                   v                  Cache Miss
              Identity +                    |
              Audit Log                     v
                                  [3. Redis Intel Search]
                                     |
                                     | Existing competitor data?
                                     v
                           [4. Apify 5-Actor Scrape] ──> [5. Redis Store]
                              |  |  |  |  |                    |
                              |  |  |  |  |              Content-hash dedup
                              |  |  |  |  |              Delta detection
                              v  v  v  v  v                    |
                           website-crawler                     v
                           docs-crawler              [6. Civic Hub Enrichment]
                           google-search                  (MCP proxy for
                           linkedin-co                   external tools)
                           g2-reviews                          |
                                                               v
                                                  [7. Contextual AI 3-Hop Chain]
                                                       |
                                                       | Hop 2: Product docs
                                                       | Hop 3: Cross-reference
                                                       | Hop 4: Synthesize + cite
                                                       v
                                                  [8. Battle Card]
                                                       |
                                                       v
                                                  [9. Redis Cache Write]
                                                       |
                                                       v
                                                  Return to User
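The flow above can be condensed into a single orchestration function. This is a hypothetical sketch with stubbed stage functions (authenticate, cacheLookup, searchIntel, scrapeAndStore, buildCard, and cacheWrite are stand-ins, not the plugin's real API):

```typescript
// Stand-ins for the real integrations; each maps to a numbered stage above.
const authenticate = async (_user: string) => true;                          // 1. Civic Auth
const cacheLookup = async (_q: string): Promise<string | null> => null;      // 2. semantic cache
const searchIntel = async (_q: string): Promise<string | null> => "intel";   // 3. Redis intel
const scrapeAndStore = async (q: string) => `fresh intel for ${q}`;          // 4-5. Apify + store
const buildCard = async (intel: string) => `Battle Card based on ${intel}`;  // 6-8. enrich + hops
const cacheWrite = async (_q: string, _card: string) => {};                  // 9. cache write

async function handleQuery(user: string, query: string): Promise<string> {
  if (!(await authenticate(user))) throw new Error("unauthorized");
  const cached = await cacheLookup(query);          // semantic, not exact, match
  if (cached) return cached;                        // cache hit: return card
  let intel = await searchIntel(query);             // existing competitor data?
  if (!intel) intel = await scrapeAndStore(query);  // miss: full 5-actor scrape
  const card = await buildCard(intel);              // grounded synthesis
  await cacheWrite(query, card);                    // serve similar queries fast
  return card;
}
```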

Data Flow Per Sponsor

 APIFY                    REDIS                   CONTEXTUAL AI           CIVIC
 =====                    =====                   =============           =====
 5 actors ──scrape──>  Store intel ──retrieve──> Hop 1: competitor    Auth every
 in parallel           (deduped)                 data from Redis      request via
     |                     |                         |                JWKS/OIDC
     |                 Semantic cache             Hop 2: product          |
     |                 (vector search)            docs query          Role-based
     |                     |                         |                output filter
     |                 Event bus                  Hop 3: cross-ref        |
     |                 (scrape events)            competitor vs us    Audit trail
     |                     |                         |                (every action)
     |                 Per-user state             Hop 4: synthesize       |
     |                 (session scoped)           battle card +       Hub enrichment
     |                     |                     citations            (MCP proxy)
     v                     v                         v                    v
 Raw data ──────>  Stored + cached ──────>  Grounded output ──>  Secured + logged

4 Sponsor Integrations (All Load-Bearing)

1. Apify — Live Multi-Actor Scraping Pipeline

File: src/integrations/apify.ts

5 specialized actors run in parallel for comprehensive sales intelligence:

| Actor | Purpose | Output |
| --- | --- | --- |
| apify/website-content-crawler | Competitor pages (pricing, changelog, careers) | Structured page data |
| apify/website-content-crawler | Deep docs/features crawl | Product gap analysis |
| apify/google-search-scraper | External mentions, reviews, news | Market perception |
| dev_fusion/Linkedin-Company-Scraper | Company headcount, industry (no cookies) | Hiring signals |
| powerai/g2-product-reviews-scraper | G2 reviews with pros/cons | Objection fuel |

Key features:

  • Delta detection: Compares current vs previous scrapes across pricing, features, hiring, headcount, and sentiment. Returns changes[] array so the battle card highlights what's new.
  • Parallel execution: All 5 actors start simultaneously. Results are collected as they finish.
  • Smart field mapping: Each actor has custom output parsing — LinkedIn uses profileUrls/companyName/specialities, G2 uses review_question_answers format.
  • Graceful degradation: If any actor fails, the pipeline continues with data from the remaining actors.
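The parallel-execution and graceful-degradation behavior can be sketched with Promise.allSettled; runActor here is a placeholder for the real Apify client call, not the plugin's actual code:

```typescript
type ActorResult = { actor: string; items: unknown[] };

// Placeholder for the real Apify API call; one actor fails to show degradation.
async function runActor(actor: string): Promise<ActorResult> {
  if (actor === "g2-reviews") throw new Error("actor failed");
  return { actor, items: [] };
}

async function scrapeAll(actors: string[]): Promise<ActorResult[]> {
  // Start all actors simultaneously; keep whatever succeeds, drop failures.
  const settled = await Promise.allSettled(actors.map(runActor));
  return settled
    .filter((r): r is PromiseFulfilledResult<ActorResult> => r.status === "fulfilled")
    .map((r) => r.value);
}
```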

How it's used: When intel_battle_card is called and competitor data is stale (or missing), the full 5-actor pipeline fires. Fresh data is stored in Redis and ingested into Contextual AI's knowledge base.

2. Redis — Semantic Cache + Event Bus + State Store

File: src/integrations/redis-cache.ts

Uses agent-memory-server (Redis Cloud) — not raw Redis commands. This is the state backbone:

| Capability | How It's Used | Namespace |
| --- | --- | --- |
| Semantic search | Vector-based retrieval over stored competitor intel; queries like "pricing" match entries about "cost", "plans", "tiers" | {ns} |
| Intelligent caching | Battle cards cached with similarity matching; repeat/similar queries return in <100ms instead of ~30s | {ns}:cache |
| Content deduplication | SHA-256 content-hash dedup prevents redundant storage; the same page scraped twice is stored once | {ns} |
| Event bus | Real-time scrape completion events via working memory; the UI can poll for updates | {ns}:events |
| Per-user state | Session-scoped working context for multi-turn conversations; each Civic-authenticated user has isolated memory | {ns}:user:{id} |
| Summary views | Aggregated competitor intelligence dashboards: freshness, entry counts, topic distribution | {ns} |
| memoryPrompt() | Redis-native contextual retrieval combining working memory + long-term memory for richer context | {ns}:sessions |

"This only works because of Redis" moments:

  1. Two reps ask about the same competitor within 5 minutes → second rep gets cached result in <100ms
  2. Apify scrapes 50 pages but 45 are identical to last scrape → only 5 stored (dedup)
  3. Rep asks "how's their pricing?" then "what about support?" → working memory carries competitor context across turns
  4. Manager asks "what's changed recently?" → event bus surfaces last scrape deltas

3. Contextual AI — 3-Hop Grounded Retrieval

File: src/integrations/contextual.ts

Every claim traces back to a source document. This is a true multi-hop retrieval chain: hop 1 pulls competitor data from Redis, and hops 2-4 are each separate Contextual AI API calls:

| Hop | What Happens | API Call | Output |
| --- | --- | --- | --- |
| Hop 1 | Competitor data retrieved from Redis | redis.searchIntel() | Raw competitor intel (pricing, features, etc.) |
| Hop 2 | Product documentation queried | POST /applications/{id}/query | Our product's capabilities + positioning |
| Hop 3 | Cross-reference competitor vs product | POST /applications/{id}/query | Point-by-point comparison with citations |
| Hop 4 | Synthesize battle card with citations | POST /applications/{id}/query | Structured sections: comparison, talking points, objections |

Why this matters for sales:

  • "Their API has rate limits" → Citation: competitor docs page, scraped 2 hours ago
  • "We're HIPAA certified" → Citation: our compliance docs, section 4.2
  • "TCO is 22% cheaper" → Citation: cross-reference of both pricing pages

Contextual AI setup:

  • Datastore: Holds uploaded product documentation (ingested via /datastores/{id}/documents)
  • Application: Connected to datastore, handles all query hops
  • Response format: {message: {content, role}, retrieval_contents: [...], attributions: [...]}
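A minimal sketch of the hop chain, assuming a hypothetical queryApp helper that wraps POST /applications/{id}/query (the canned response here is a placeholder, not the real API's shape):

```typescript
// Placeholder for the Contextual AI query endpoint.
async function queryApp(prompt: string): Promise<string> {
  return `answer to: ${prompt}`;
}

// Each hop feeds its output into the next hop's prompt.
async function buildBattleCard(competitorIntel: string): Promise<string> {
  // Hop 2: ground in our own product documentation
  const product = await queryApp("Summarize our pricing and features");
  // Hop 3: cross-reference competitor intel against our product
  const comparison = await queryApp(
    `Compare point by point:\nUS: ${product}\nTHEM: ${competitorIntel}`
  );
  // Hop 4: synthesize the cited battle card
  return queryApp(`Write a battle card with citations from:\n${comparison}`);
}
```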

4. Civic — Identity, RBAC, Audit Trail

File: src/integrations/civic.ts

3 Civic capabilities used:

| Capability | Implementation | Purpose |
| --- | --- | --- |
| @civic/auth-mcp | McpServerAuth JWKS-based JWT verification via Civic's OIDC well-known config | Every API call is authenticated |
| Civic Hub | MCP proxy for external tool enrichment (tools/list, tools/call) | Access 85+ MCP servers for additional competitive data |
| Session + RBAC | Per-user sessions, role permissions, complete audit trail | Granular access control for sensitive intel |

Role-based access control:

| Role | Permissions | Output Filter |
| --- | --- | --- |
| rep | Battle cards, search, status | Standard battle card |
| manager | + Analytics, trends, audit | + Trend analysis, team insights |
| admin | + Scrape, config, user management | Full unfiltered access |
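The inheriting role hierarchy ("+" means everything the previous role has, plus more) can be sketched as a plain permission table; the permission names here are illustrative, not the plugin's actual identifiers:

```typescript
// Each role extends the previous one's permission set.
const rep = ["battle_card", "search", "status"];
const manager = [...rep, "analytics", "trends", "audit"];
const admin = [...manager, "scrape", "configure", "users"];

const rolePermissions: Record<string, readonly string[]> = { rep, manager, admin };

function canAccess(role: string, action: string): boolean {
  return rolePermissions[role]?.includes(action) ?? false;
}
```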

Why Civic is load-bearing:

  • Remove auth → anyone can access your competitive intelligence (security disaster)
  • Remove identity → no per-user query isolation in Redis
  • Remove RBAC → raw scraped data exposed to all roles (reps see admin-only intel)
  • Remove audit → no compliance trail for sensitive competitive data
  • Remove Hub → lose external data enrichment capabilities

Interactive 3D UI

5 AI Sales Characters, each specialized:

| Character | Role | Specialty |
| --- | --- | --- |
| Alex Rivera "The Closer" | Rep | Battle cards, objection handling |
| Sarah Chen "The Strategist" | Manager | Trends, team insights |
| Marcus Webb "The Intel Chief" | Admin | System health, scrape ops |
| Priya Patel "The Scout" | Rep | Research, hiring signals, G2 reviews |
| Jordan Blake "The Negotiator" | Manager | Pricing analysis, deal strategy |

Tech stack:

  • React 19 + TypeScript
  • Three.js via React Three Fiber (3D character scene)
  • Framer Motion (animations)
  • Tailwind CSS v4 (styling)

UI features:

  • Real-time backend health monitoring (10s polling)
  • Live battle card rendering with confidence scores, section icons, citation badges
  • Dashboard with live Redis metrics, competitor freshness, event timeline
  • Automatic fallback to demo mode when backend is offline

7 MCP Tools

| Tool | Access | Description | Sponsors Used |
| --- | --- | --- | --- |
| intel_authenticate | All | Verify identity via Civic Auth (JWKS or demo tokens) | Civic |
| intel_battle_card | rep+ | Generate competitive battle cards (full 4-sponsor pipeline) | All 4 |
| intel_search | rep+ | Semantic search over stored competitor intel | Redis |
| intel_scrape | admin | Trigger 5-actor Apify scrape with delta detection | Apify, Redis, Contextual |
| intel_status | rep+ | View Redis metrics, events, competitor freshness, sessions | Redis, Civic |
| intel_configure | admin | Manage competitors and user roles | Civic, Redis |
| intel_audit | admin | View Civic-powered audit trail of all actions | Civic |

Project Structure

clawintel/
  src/
    index.ts                    # Plugin entry — 7 MCP tools, battle card pipeline
    server.ts                   # Express API server wrapping tools as REST endpoints
    types.ts                    # Shared types, config parsing, env var fallbacks
    demo.ts                     # Demo harness for local testing
    integrations/
      apify.ts                  # 5-actor parallel scraping + delta detection
      redis-cache.ts            # agent-memory-server client (semantic cache, events, state)
      contextual.ts             # 3-hop grounded retrieval chain
      civic.ts                  # JWKS auth, RBAC, sessions, Hub enrichment, audit
  ui/
    src/
      App.tsx                   # Main app — character selection, chat, live data
      api.ts                    # Backend API client
      components/
        Scene3D.tsx             # Three.js 3D character scene
        CharacterPanel.tsx      # Character selection sidebar
        ChatWindow.tsx          # Chat interface with typing indicators
        BattleCardView.tsx      # Battle card renderer (live + demo mode)
        Dashboard.tsx           # Live metrics dashboard
      data.ts                   # Demo data for offline mode
      types.ts                  # UI type definitions
  openclaw.plugin.json          # OpenClaw plugin manifest
  .env.example                  # Required environment variables

Quick Start

# 1. Clone and install
git clone https://github.com/Kush614/ClawIntel.git && cd ClawIntel
npm install

# 2. Set up environment
cp .env.example .env
# Add your keys:
#   APIFY_API_KEY=apify_api_...
#   CONTEXTUAL_API_KEY=key-...
#   CONTEXTUAL_APP_ID=...           (create at contextual.ai)
#   CONTEXTUAL_DATASTORE_ID=...     (create at contextual.ai)
#   CIVIC_API_KEY=...
#   AGENT_MEMORY_SERVER_URL=http://localhost:8000
#   REDIS_URL=redis://...           (Redis Cloud connection string)

# 3. Start Redis agent-memory-server (separate terminal)
cd ../agent-memory-server
uv sync
PYTHONIOENCODING=utf-8 uv run agent-memory api --port 8000

# 4. Build and start backend API
cd ../ClawIntel
npm run build
node --env-file=.env dist/server.js

# 5. Start UI (separate terminal)
cd ui && npm install && npm run dev

Environment Variables

| Variable | Required | Description |
| --- | --- | --- |
| APIFY_API_KEY | Yes | Apify API key for 5-actor scraping pipeline |
| CONTEXTUAL_API_KEY | Yes | Contextual AI API key for grounded retrieval |
| CONTEXTUAL_APP_ID | Yes | Contextual AI application ID (create via their dashboard) |
| CONTEXTUAL_DATASTORE_ID | Yes | Contextual AI datastore ID for document ingestion |
| CIVIC_API_KEY | Yes | Civic Auth API key for JWKS verification |
| AGENT_MEMORY_SERVER_URL | Yes | URL of agent-memory-server (default: http://localhost:8000) |
| REDIS_URL | Yes | Redis Cloud connection string (used by agent-memory-server) |
| OPENAI_API_KEY | Yes* | Required by agent-memory-server for embeddings (text-embedding-3-small) |

*Set in the agent-memory-server's .env, not in ClawIntel's .env.


Tech Stack

| Layer | Technology |
| --- | --- |
| Backend | TypeScript, OpenClaw Plugin SDK, Express 5 |
| Frontend | React 19, Three.js (React Three Fiber), Framer Motion, Tailwind CSS v4 |
| Scraping | Apify Cloud (5 actors) |
| Storage | Redis Cloud (30MB free tier) via agent-memory-server |
| Retrieval | Contextual AI (3-hop grounded retrieval with citations) |
| Auth | Civic (@civic/auth-mcp, JWKS/OIDC, Hub MCP proxy) |
| Embeddings | OpenAI text-embedding-3-small (via agent-memory-server) |

License

MIT
