Signal-based engagement analytics for elizaOS agents.
Turn every conversation into actionable insights. This plugin captures, analyzes, and reacts to user engagement signals in real-time—helping you understand why users stay, leave, or convert.
Traditional analytics count events: "User sent 5 messages."
This plugin captures meaning: "User expressed frustration after not getting their question answered."
Understanding user behavior at this level enables:
| Capability | What It Does |
|---|---|
| Retention optimization | Identify at-risk users before they ghost |
| Product improvement | Understand what works and what doesn't |
| Monetization | Find users ready for premium features |
| Response adjustment | Automatically guide agents to respond better |
```typescript
import { engagementPlugin } from '@elizaos/plugin-engagement';

// Add to your agent's plugins
const agent = {
  plugins: [engagementPlugin],
  // ...
};
```

That's it. The plugin automatically:
- Captures signals from every message
- Tracks user sessions
- Emits alerts for friction/monetization
- Injects guidance when users struggle
| Events (Old Way) | Signals (This Plugin) |
|---|---|
| User sent message | User asked specific question |
| User returned | User expressed frustration |
| Session ended | User gave up after confusion |
Signals capture intent and emotion, not just actions.
First message type is the #1 predictor of retention. Users who start with "hi" typically have much higher ghost rates than those asking specific questions (e.g., ~70% vs ~30%).
| Type | Example | Ghost Risk |
|---|---|---|
| `greeting_only` | "hi", "hello" | 🔴 High |
| `confused` | "what is this?" | 🔴 High |
| `specific_question` | "how do I reset my password?" | 🟢 Low |
| `command` | "show me the weather" | 🟢 Low |
| `feature_probe` | "can you do X?" | 🟡 Medium |
Value signals fire when users express satisfaction:
- `thanks` - Gratitude expressions
- `helpful` - "that was helpful"
- `perfect` - Strong positive reactions
- `solved` - "it works!", "that fixed it"
Friction signals give early warning of churn:
- `expressed_confusion` - "I don't understand"
- `frustration` - "this is so frustrating"
- `gave_up` - "nevermind", "forget it"
- `capability_doubt` - "can you actually do that?"
Monetization signals flag users ready for more:
- `feature_request` - "can you also do X?"
- `limit_hit` - "is there a limit?"
- `willingness_to_pay` - "how much does this cost?"
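Taken together, the detector produces a compact signal record per message. The shape below is illustrative only; the field names mirror the SignalDetectorService API shown later, and likely correspond to the exported `QuickSignals` type.

```typescript
// Illustrative signal record for one message (field names mirror the
// SignalDetectorService output shown later; exact shape may differ)
const signals = {
  firstMessageType: 'specific_question',    // how the conversation started
  hasFriction: true,                        // any friction signal present
  frictionSignals: ['expressed_confusion'],
  hasValue: false,                          // no value signal yet
  hasMonetization: false,                   // no monetization signal yet
};
```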
Sessions are the natural unit of engagement. A session ends when:
- 30 minutes pass between messages (configurable)
- User explicitly leaves
- Room closes
Session outcomes:
| Outcome | Meaning |
|---|---|
| `success` | User received value |
| `friction` | User left frustrated |
| `quick_exit` | User ghosted quickly |
| `inconclusive` | Can't determine |
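For intuition, the timeout and outcome rules amount to something like the sketch below. This is an assumed heuristic; the SessionTrackerService's actual classification may weigh signals differently.

```typescript
// Assumed heuristic for session boundaries and outcomes; the plugin's
// real logic may differ.
type Outcome = 'success' | 'friction' | 'quick_exit' | 'inconclusive';

const SESSION_TIMEOUT_MS = 30 * 60 * 1000; // 30 minutes, configurable

function startsNewSession(lastMessageAt: number, now: number): boolean {
  return now - lastMessageAt > SESSION_TIMEOUT_MS;
}

function classifySession(
  messageCount: number,
  valueSignals: number,
  frictionSignals: number
): Outcome {
  if (messageCount < 2) return 'quick_exit';             // below minMessagesForSession
  if (frictionSignals > valueSignals) return 'friction'; // left frustrated
  if (valueSignals > 0) return 'success';                // received value
  return 'inconclusive';
}
```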
```
┌──────────────────────────────────────────────────────────┐
│                    PRESENTATION LAYER                    │
│  Actions   (GENERATE_ENGAGEMENT_REPORT)                  │
│  Providers (engagementProvider, signalContextProvider)   │
└──────────────────────────────────────────────────────────┘
                             │
┌────────────────────────────▼─────────────────────────────┐
│                      ANALYSIS LAYER                      │
│  EngagementService       - Metric calculation            │
│  SignalDetectorService   - Rules-based pattern matching  │
│  SignalEnricherService   - LLM-based deep analysis       │
│  SessionTrackerService   - Session boundary detection    │
└──────────────────────────────────────────────────────────┘
                             │
┌────────────────────────────▼─────────────────────────────┐
│                      REACTION LAYER                      │
│  Event Handlers  - React to signals                      │
│  Signal Context  - Inject guidance into responses        │
└──────────────────────────────────────────────────────────┘
                             │
┌────────────────────────────▼─────────────────────────────┐
│                      CAPTURE LAYER                       │
│  engagementSignalEvaluator - Real-time signal capture    │
│  Message metadata storage                                │
└──────────────────────────────────────────────────────────┘
```
Every user message is analyzed for signals:
```typescript
// Automatic - runs on every message via evaluator
// Signals stored in message metadata
```

Session statistics are available from the `SessionTrackerService`:

```typescript
const sessionTracker = runtime.getService('session-tracker') as SessionTrackerService;
const stats = await sessionTracker.getEntitySessionStats(userId);

console.log(stats.successRate);   // % of sessions with value
console.log(stats.frictionRate);  // % of sessions with friction
console.log(stats.quickExitRate); // % of ghost sessions
```

The plugin emits events you can listen to:
```typescript
// Events emitted automatically
runtime.on('engagement:friction', (alert) => {
  console.log(`Friction detected: ${alert.signals.join(', ')}`);
  // Trigger Slack notification, update CRM, etc.
});

runtime.on('engagement:monetization', (alert) => {
  console.log(`Hot lead: ${alert.entityId}`);
  // Add to sales queue
});

runtime.on('engagement:session_end', (summary) => {
  console.log(`Session ended: ${summary.outcome}`);
});
```

Available events:
| Event | When |
|---|---|
| `engagement:friction` | User is struggling |
| `engagement:churn_risk` | User may leave |
| `engagement:monetization` | User wants more |
| `engagement:value_moment` | User received value |
| `engagement:first_message` | New user's first message |
| `engagement:session_end` | Session concluded |
When friction is detected, the signalContextProvider injects guidance into the agent's prompt:
```
⚠️ **ENGAGEMENT SIGNAL: User is experiencing friction**
Signals detected: expressed_confusion

**Recommendation**: Simplify your explanation, use concrete examples,
and check if the new explanation helps.
```
This helps the agent respond appropriately without manual intervention.
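For reference, the injected guidance can be thought of as a small template over the detected signals. The function below is a sketch of the idea, not the provider's actual template.

```typescript
// Sketch of assembling friction guidance from detected signals
// (illustrative; the signalContextProvider's real template may differ)
function buildFrictionGuidance(frictionSignals: string[]): string {
  return [
    '⚠️ **ENGAGEMENT SIGNAL: User is experiencing friction**',
    `Signals detected: ${frictionSignals.join(', ')}`,
    '**Recommendation**: Simplify your explanation, use concrete examples,',
    'and check if the new explanation helps.',
  ].join('\n');
}
```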
For high-value moments, the plugin runs LLM analysis:
```typescript
// Automatic for:
// - First messages (understand intent)
// - Friction signals (understand what went wrong)
// - Monetization signals (understand needs)

// Returns enriched insights like:
// "User is trying to automate their email workflow but confused about API limits"
```

Generate comprehensive reports via action:
```typescript
// User can ask: "Generate an engagement report for the last 7 days"

// Or programmatically:
const engagementService = runtime.getService('engagement') as EngagementService;
const metrics = await engagementService.calculateCoreMetrics();

console.log(`Ghost rate: ${metrics.ghostRate}%`);
console.log(`Return rate: ${metrics.returnRate}%`);
console.log(`Deep conversation rate: ${metrics.deepConversationRate}%`);
```

EngagementService: Core analytics engine.
```typescript
const service = runtime.getService('engagement') as EngagementService;

// Core metrics
const metrics = await service.calculateCoreMetrics(startTime);

// Signal-based analysis
const firstMsgDist = await service.getFirstMessageDistribution(startTime);
const frictionAnalysis = await service.getFrictionAnalysis(startTime);
const monetizationAnalysis = await service.getMonetizationAnalysis(startTime);
const activationGap = await service.getActivationGapAnalysis(startTime);

// Power users
const powerUsers = await service.analyzePowerUsers(20, startTime);

// Full report
const report = await service.generateEngagementReport(startTime);
```

SignalDetectorService: Rules-based signal extraction.
```typescript
const detector = runtime.getService('signal-detector') as SignalDetectorService;

// Extract signals from a message
const signals = detector.extractSignals(
  userMessage,
  agentResponse,
  { isFirstMessage: true }
);

console.log(signals.firstMessageType); // 'greeting_only' | 'specific_question' | ...
console.log(signals.hasFriction);      // boolean
console.log(signals.frictionSignals);  // ['expressed_confusion', ...]
console.log(signals.hasValue);         // boolean
console.log(signals.hasMonetization);  // boolean
```

SessionTrackerService: Session boundary detection.
```typescript
const tracker = runtime.getService('session-tracker') as SessionTrackerService;

// Track a message (called automatically by evaluator)
const { session, isNewSession, endedSession } = await tracker.trackMessage(
  entityId,
  roomId,
  signals,
  firstMessageType
);

// Get session history
const history = await tracker.getSessionHistory(entityId);

// Get stats
const stats = await tracker.getEntitySessionStats(entityId);
```

SignalEnricherService: LLM-based signal enrichment.
```typescript
const enricher = runtime.getService('signal-enricher') as SignalEnricherService;

// Check if enrichment should run
const trigger = enricher.shouldEnrich(quickSignals, { isFirstMessage: true });

if (trigger) {
  const result = await enricher.enrich({
    messageText,
    responseText,
    quickSignals,
    triggerReason: trigger,
    entityId,
    roomId,
  });

  console.log(result.intent);     // 'get_help' | 'explore' | ...
  console.log(result.insight);    // LLM-generated insight
  console.log(result.confidence); // 0-1
}
```

Key types exported from the plugin:
```typescript
import type {
  // Core metrics
  CoreEngagementMetrics,
  EngagementReport,
  PowerUserAnalysis,

  // Signals
  QuickSignals,
  FirstMessageType,
  FrictionSignal,
  ValueSignal,
  MonetizationSignal,

  // Sessions
  ActiveSession,
  SessionSummary,
  SessionOutcome,

  // Alerts
  EngagementAlert,
  SessionEndAlert,

  // Context
  SignalContext,
} from '@elizaos/plugin-engagement';
```

Choose how signals are extracted from messages:
| Mode | Speed | Accuracy | Cost | Best For |
|---|---|---|---|---|
| `rules` | ⚡ <1ms | Good | Free | High-volume, cost-sensitive (default) |
| `llm` | 🐢 ~200-500ms | Excellent | Token cost | Maximum accuracy |
| `hybrid` | ⚡/🐢 | Great | Moderate | Balance of speed and depth |
Set via environment variable:
```bash
ENGAGEMENT_SIGNAL_MODE=rules   # or 'llm' or 'hybrid' (default: rules)
```

Or via plugin config:
```typescript
// In character file or agent config
plugins: [
  {
    name: '@elizaos/plugin-engagement',
    config: {
      signalExtractionMode: 'rules', // 'rules' | 'llm' | 'hybrid'
      verbose: false,
      showBanner: true,
    }
  }
]
```

Mode Details:
- `rules` (Fast, Default): Uses regex pattern matching. Best for high-volume deployments where speed matters.
- `llm` (Accurate): Uses LLM for ALL signal extraction. More nuanced understanding but adds latency.
- `hybrid` (Balanced): Rules-based first, LLM only for high-value moments (first messages, friction).
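For intuition, hybrid mode's escalation decision boils down to something like this. The sketch below is an assumption drawn from the description above, not the plugin's internal code.

```typescript
// Assumed hybrid-mode escalation rule: rules always run, and the LLM is
// only invoked for high-value moments (not the plugin's actual implementation).
interface RuleSignals {
  hasFriction: boolean;
  hasMonetization: boolean;
}

function shouldEscalateToLlm(signals: RuleSignals, isFirstMessage: boolean): boolean {
  return isFirstMessage || signals.hasFriction || signals.hasMonetization;
}
```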
When using `llm` mode, you can batch multiple messages together for more efficient processing:
| Setting | Default | Description |
|---|---|---|
| `llmBatchSize` | 1 | Messages to batch before LLM processing |
| `llmBatchTimeoutMs` | 5000 | Max wait time (ms) before processing partial batch |
| `llmPriorityBypass` | true | Process friction/monetization signals immediately |
Set via environment variables:
```bash
ENGAGEMENT_SIGNAL_MODE=llm
ENGAGEMENT_LLM_BATCH_SIZE=5          # Batch 5 messages together
ENGAGEMENT_LLM_BATCH_TIMEOUT=3000    # Max 3 second wait
ENGAGEMENT_LLM_PRIORITY_BYPASS=true  # Immediate processing for high-priority signals
```

Or via plugin config:
```typescript
plugins: [
  {
    name: '@elizaos/plugin-engagement',
    config: {
      signalExtractionMode: 'llm',
      llmBatchSize: 5,
      llmBatchTimeoutMs: 3000,
      llmPriorityBypass: true,
    }
  }
]
```

Batch Size Guidelines:
| Batch Size | Use Case |
|---|---|
| 1 | Most responsive, highest cost (default) |
| 3-5 | Good balance of context and responsiveness |
| 10+ | Best for high-volume, cost-sensitive deployments |
Priority Bypass: When enabled, friction and monetization signals are detected first using fast rules-based checking. If found, they bypass the batch queue and are processed immediately—ensuring real-time intervention for critical moments.
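A rough sketch of that flow, assuming a simple in-memory queue (the plugin's batching internals may differ):

```typescript
// Assumed priority-bypass flow; illustrative only.
const LLM_BATCH_SIZE = 5;
const queue: string[] = [];

// Stand-in for the fast rules pass
function quickCheck(text: string): { hasFriction: boolean; hasMonetization: boolean } {
  return {
    hasFriction: /\b(confused|frustrat\w*|nevermind)\b/i.test(text),
    hasMonetization: /\b(price|cost|upgrade|limit)\b/i.test(text),
  };
}

// Stand-in for the batched LLM call
async function processWithLlm(messages: string[]): Promise<void> {
  console.log(`LLM processing ${messages.length} message(s)`);
}

async function handleIncoming(text: string): Promise<void> {
  const quick = quickCheck(text);
  if (quick.hasFriction || quick.hasMonetization) {
    await processWithLlm([text]); // bypass the queue for critical signals
    return;
  }
  queue.push(text);
  if (queue.length >= LLM_BATCH_SIZE) {
    await processWithLlm(queue.splice(0)); // flush a full batch
  }
}
```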
Customize how the plugin reacts to signals:
```typescript
import { registerEngagementHandlers } from '@elizaos/plugin-engagement';

// In your plugin init or setup
registerEngagementHandlers(runtime, {
  handleFriction: true,       // React to friction signals
  handleChurnRisk: true,      // React to churn risk
  handleMonetization: true,   // React to monetization signals
  handleValue: true,          // React to value moments
  handleFirstMessage: true,   // React to risky first messages
  contextTtl: 5 * 60 * 1000,  // Context expires after 5 min
  verbose: false,             // Enable for debugging
});
```

Customize session detection:
```typescript
// SessionTrackerService accepts config in constructor
// Default: 30 minute timeout, 2 message minimum for non-quick-exit

// These are the defaults:
{
  sessionTimeoutMs: 30 * 60 * 1000, // 30 minutes
  minMessagesForSession: 2,         // Fewer = quick_exit
  emitSessionEnd: true,             // Emit events
  debug: false,                     // Verbose logging
}
```

Customize LLM enrichment:
```typescript
// SignalEnricherService config
{
  enabled: true,                 // Enable LLM enrichment
  model: ModelType.OBJECT_SMALL, // Import from @elizaos/core
  temperature: 0.3,              // Lower = more consistent
  maxTokens: 500,                // Token limit
  enrichFirstMessage: true,      // Enrich first messages
  enrichFriction: true,          // Enrich friction signals
  enrichMonetization: true,      // Enrich monetization signals
}
```

Run the test suite:
```bash
cd packages/plugin-engagement
bun test
```

Test coverage:
- Signal detection (28 tests): First message, value, friction, monetization, sentiment
- Session tracking (17 tests): Lifecycle, signal aggregation, outcome classification
- Event handlers (17 tests): Context structure, TTL, recommendations
React to churn-risk alerts, for example by notifying your team and logging to analytics:

```typescript
runtime.on('engagement:churn_risk', async (alert) => {
  // Send to Slack
  await slack.send({
    channel: '#churn-alerts',
    text: `🚨 Churn risk: User ${alert.entityId} - ${alert.signals.join(', ')}`,
  });

  // Log for analysis
  analytics.track('churn_risk_detected', {
    userId: alert.entityId,
    signals: alert.signals,
    severity: alert.severity,
  });
});
```

Route monetization signals to your sales pipeline:

```typescript
runtime.on('engagement:monetization', async (alert) => {
  if (alert.signals.includes('willingness_to_pay')) {
    // Hot lead - add to sales queue
    await crm.addLead({
      userId: alert.entityId,
      source: 'engagement_signal',
      intent: alert.insight || 'Expressed WTP',
      priority: 'high',
    });
  }
});
```

Generate a weekly engagement report:

```typescript
const service = runtime.getService('engagement') as EngagementService;
const weekAgo = Date.now() - 7 * 24 * 60 * 60 * 1000;
const report = await service.generateEngagementReport(weekAgo);
console.log(`
Weekly Engagement Report
========================
Total Users: ${report.metrics.totalInteractingUsers}
Ghost Rate: ${report.metrics.ghostRate.toFixed(1)}%
Return Rate: ${report.metrics.returnRate.toFixed(1)}%
Deep Conversations: ${report.metrics.deepConversationRate.toFixed(1)}%
Top Issues:
${report.diagnosis.issues.map(i => `- ${i}`).join('\n')}
Top Strengths:
${report.diagnosis.strengths.map(s => `- ${s}`).join('\n')}
`);
```

Add custom signal patterns alongside the built-in detectors:

```typescript
import { matchesAnyPattern } from '@elizaos/plugin-engagement';

// Add your own patterns
const CUSTOM_PATTERNS = [
  /\b(upgrade|premium)\b/i,
  /\b(enterprise|team plan)\b/i,
];

function detectCustomSignal(text: string): boolean {
  return matchesAnyPattern(text, CUSTOM_PATTERNS);
}
```

Register your own handlers for engagement events:

```typescript
import { ENGAGEMENT_EVENTS } from '@elizaos/plugin-engagement';

// Register custom handler
runtime.registerEvent(ENGAGEMENT_EVENTS.VALUE_MOMENT, async (alert) => {
  // Your custom logic
  if (alert.signals.includes('solved')) {
    // User had problem solved - good time to ask for feedback
    await triggerFeedbackRequest(alert.entityId);
  }
});
```

Roadmap:

- Phase 1: Rules-based signal detection
- Phase 2: LLM enrichment for high-value moments
- Phase 3: Session tracking and outcome classification
- Phase 4: Event handlers and response adjustment
- Phase 5: HTTP API endpoints
- Phase 6: Dashboard UI component
- Phase 7: Predictive churn scoring
- Phase 8: A/B testing framework
Predictive churn scoring rates each user's churn risk in real time.
Reactive analytics tell you WHAT happened: "User X ghosted." Predictive scoring tells you WHAT WILL happen: "User Y has 78% churn probability."
| Factor | Weight | Description |
|---|---|---|
| First Message Type | 25% | Greeting-only = 90 risk, specific question = 30 risk |
| Friction Ratio | 20% | High friction-to-value ratio increases risk |
| Value Ratio | 15% | More value signals = less risk |
| Session Pattern | 15% | Quick exits and friction sessions increase risk |
| Recency | 10% | Days since last activity |
| Conversation Depth | 10% | Avg messages per session |
| Return Behavior | 5% | Has user returned? |
| Tier | Score | Action |
|---|---|---|
| Low | <30 | Normal engagement |
| Medium | 30-60 | Monitor closely |
| High | 60-80 | Proactive intervention |
| Critical | >80 | Immediate recovery |
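To make the weighting concrete, a simplified version of the score looks like this, where each factor is expressed as a 0-100 risk contribution. This is illustrative; the plugin's per-factor scoring may differ.

```typescript
// Illustrative weighted churn score using the documented weights.
// Each factor is a 0-100 risk contribution (higher = riskier); the
// plugin's actual factor scoring may differ.
interface ChurnFactors {
  firstMessageType: number;  // e.g. greeting-only ≈ 90, specific question ≈ 30
  frictionRatio: number;
  valueRatio: number;        // inverted: few value signals => higher risk
  sessionPattern: number;
  recency: number;
  conversationDepth: number;
  returnBehavior: number;
}

function churnScore(f: ChurnFactors): number {
  return (
    0.25 * f.firstMessageType +
    0.20 * f.frictionRatio +
    0.15 * f.valueRatio +
    0.15 * f.sessionPattern +
    0.10 * f.recency +
    0.10 * f.conversationDepth +
    0.05 * f.returnBehavior
  );
}

function churnTier(score: number): 'low' | 'medium' | 'high' | 'critical' {
  if (score > 80) return 'critical';
  if (score > 60) return 'high';
  if (score >= 30) return 'medium';
  return 'low';
}
```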
```typescript
const churnService = runtime.getService('churn-scoring') as ChurnScoringService;

// Score a user
const score = await churnService.getScore(entityId);

console.log(score.score);           // 0-100
console.log(score.tier);            // 'low' | 'medium' | 'high' | 'critical'
console.log(score.topReasons);      // ["Started with greeting only", ...]
console.log(score.recommendations); // ["Include clear CTA in responses", ...]
console.log(score.confidence);      // 0-1 based on data availability

// Get intervention recommendation
const intervention = churnService.getIntervention(score);
console.log(intervention.type);     // 'none' | 'monitor' | 'enhance_response' | 'recovery'
console.log(intervention.priority); // 'low' | 'medium' | 'high' | 'urgent'
```

A/B experiments let you test response strategies and measure their impact on retention.
"Does adding a CTA reduce ghost rate?" Without A/B testing, this is a guess. With it, it's data.
```
1. Define Experiment
        │
        ▼
2. Assign Users to Variants (deterministic hash)
        │
        ▼
3. Apply Variant Config
        │
        ▼
4. Track Outcomes (signals, sessions)
        │
        ▼
5. Analyze Results (p-value, confidence intervals)
        │
        ▼
6. Ship Winner
```
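Deterministic assignment (step 2) means the same user always lands in the same variant. A minimal sketch of the idea, assuming a hash over the experiment and entity IDs (the plugin's actual scheme may differ):

```typescript
// Sketch of deterministic variant assignment (illustrative only).
import { createHash } from 'node:crypto';

function assignVariant(experimentId: string, entityId: string, variants: string[]): string {
  const digest = createHash('sha256').update(`${experimentId}:${entityId}`).digest();
  const bucket = digest.readUInt32BE(0) % variants.length;
  return variants[bucket]; // stable for a given (experiment, user) pair
}

// e.g. assignVariant('greeting_cta', userId, ['control', 'treatment'])
```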
| Template | Hypothesis |
|---|---|
| `greeting_cta` | Adding CTA to greeting responses reduces ghost rate |
| `friction_recovery` | Acknowledging confusion explicitly improves outcomes |
| `response_length` | Shorter responses improve engagement |
| `followup_question` | Ending with questions increases depth |
```typescript
const experiments = runtime.getService('experiments') as ExperimentService;

// Create from template
const exp = experiments.createFromTemplate('greeting_cta');
experiments.startExperiment(exp.id);

// Get variant for user
const variant = experiments.getVariant(exp.id, entityId);
if (variant?.config.addCta) {
  response += '\n\nWhat would you like help with?';
}

// Track outcomes (automatic with session tracking)
experiments.recordSessionOutcomes(entityId, {
  returned: true,
  sessionSuccess: true,
  frictionRate: 0.1,
  messagesPerSession: 5,
  valueSignals: 3,
});

// Analyze results
const results = experiments.analyzeExperiment(exp.id);
console.log(results.overallRecommendation);
// "Ship treatment: 23% improvement (p=0.02)"
```

Statistical analysis:

- Binary metrics: Two-proportion z-test
- Continuous metrics: Welch's t-test
- Confidence level: 95%
- Power analysis: Included in results
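For reference, the two-proportion z-test used for binary metrics (such as ghost rate) reduces to the standard pooled formula. The function below is shown for illustration and is not the plugin's code.

```typescript
// Standard two-proportion z-test for a binary metric (e.g. ghost rate).
// Illustrative; the plugin's analysis code may differ in details.
function twoProportionZ(
  successesA: number, totalA: number,
  successesB: number, totalB: number,
): number {
  const pA = successesA / totalA;
  const pB = successesB / totalB;
  const pooled = (successesA + successesB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  return (pA - pB) / se; // compare |z| to 1.96 for 95% confidence
}
```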
MIT