diff --git a/.gitignore b/.gitignore index 54e65dd..ee7aee9 100644 --- a/.gitignore +++ b/.gitignore @@ -38,6 +38,8 @@ out/ # Logs logs/ *.log +verif_log.txt +verif_analytics_log.txt # Temporary files tmp/ @@ -49,3 +51,24 @@ backups/ docs/.vitepress/cache/ docs/.vitepress/dist/ + +# Windows +Thumbs.db +ehthumbs.db +Desktop.ini + +# Executables +*.exe +*.dll + +# Local/Secret files +test_output.txt +secrets.json +*.local + +# Hackathon runtime files +hackathon/watch/processed/ +hackathon/watch/failed/ +hackathon/sample-data/demo-call.mp3 +__pycache__/ +*.pyc diff --git a/docs/jsonld-ex-evaluation.md b/docs/jsonld-ex-evaluation.md new file mode 100644 index 0000000..5f803f1 --- /dev/null +++ b/docs/jsonld-ex-evaluation.md @@ -0,0 +1,40 @@ +# Evaluation of @jsonld-ex/core Integration for vCon MCP Server + +## Executive Summary + +The `@jsonld-ex/core` library extends JSON-LD with features specifically targeted at AI/ML data modeling, security, and validation. Given the vCon standard's focus on conversation data, analysis, and integrity, this library offers significant potential benefits, particularly for **Analysis** and **Integrity** layers. However, integration should be approached as an **enhancement layer** rather than a core replacement to maintain strict compliance with the IETF vCon draft. + +## Feature Mapping + +| vCon Feature | @jsonld-ex Feature | Potential Benefit | +| :--- | :--- | :--- | +| **Analysis Confidence** | `@confidence` | **High**. Standardizes confidence scoring (0.0-1.0) in analysis outputs (transcripts, sentiment), replacing ad-hoc fields. | +| **Content Integrity** | `@integrity` | **High**. Provides a standard mechanism for cryptographic content verification, superseding manual `content_hash` checks. | +| **Embeddings** | `@vector` | **Medium**. Standardizes vector representation. Useful if vCons are exchanged between systems using different vector stores. | +| **Provenance** | `@source` | **Medium**. 
Enhances tracking of which model/vendor generated an analysis, linking directly to model cards or endpoints. | +| **Validation** | `@shape` | **High**. Offers native JSON-LD validation, potentially more robust than JSON Schema for graph-based data. | + +## Pros & Cons + +### Pros +1. **Standardization**: Moves ad-hoc metadata (like confidence scores) into a standardized, interoperable format. +2. **Security**: Native support for integrity checks and signing (`@integrity`) is critical for trusted AI pipelines. +3. **Interoperability**: Makes vCon data more consumable by other JSON-LD aware AI agents and tools. +4. **Future-Proofing**: Aligns with the trend of using Knowledge Graphs for AI memory. + +### Cons +1. **Complexity**: JSON-LD processing (expansion/compaction) introduces overhead compared to raw JSON handling. +2. **Compliance Risk**: The IETF vCon draft defines a strict JSON schema. Adding `@` properties directly might require using the `extensions` mechanism to remain compliant. +3. **Dependency**: Adds a core dependency. If the library is experimental or lacks broad adoption, it introduces maintenance risk. + +## Recommendation + +**Proceed with integration as a Plugin/Extension.** + +We should **NOT** replace the core `VCon` type or storage model immediately. Instead, we should integrate `@jsonld-ex/core` to enhance specific capabilities: + +1. **Enhanced Analysis Plugin**: Create a plugin that outputs Analysis objects enriched with `@confidence` and `@source`. +2. **Integrity Verification Tool**: Use `@jsonld-ex` to implement a robust verification tool that checks `@integrity` of vCons. +3. **Export as JSON-LD**: Add an API endpoint `GET /vcons/:uuid/jsonld` that returns the vCon expanded with JSON-LD context, allowing external tools to leverage the semantic data. + +This approach provides the benefits of semantic AI data without breaking existing IETF compliance or performance for standard operations. 
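The recommended enhancement-layer approach can be sketched in a few lines. This is a minimal illustration only: the `enrich_with_extension` helper, the `"jsonld-ex"` extension label, and the example model URL are assumptions (not defined by the vCon draft or this repository), and a real plugin would live in TypeScript alongside the server code.

```python
import copy

# Hypothetical sketch of the "enhancement layer" recommended above.
# The helper name, the "jsonld-ex" extension label, and the model URL
# are illustrative assumptions.
def enrich_with_extension(vcon: dict, confidence: float, source: str) -> dict:
    enriched = copy.deepcopy(vcon)  # never mutate the stored vCon
    # Declare the extra "@" properties via the draft's extensions list
    # so strict IETF consumers can detect (or reject) them.
    extensions = enriched.setdefault("extensions", [])
    if "jsonld-ex" not in extensions:
        extensions.append("jsonld-ex")
    for analysis in enriched.get("analysis", []):
        analysis["@confidence"] = confidence   # 0.0 - 1.0 certainty
        analysis["@source"] = source           # provenance URI
    return enriched

vcon = {"vcon": "0.0.1", "analysis": [{"type": "sentiment", "vendor": "acme", "body": {}}]}
enriched = enrich_with_extension(vcon, 0.92, "https://example.com/models/sentiment-v2")
```

Because the original vCon is deep-copied, the stored record stays strictly draft-compliant while the enriched copy carries the semantic metadata.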
diff --git a/docs/jsonld-integration.md b/docs/jsonld-integration.md new file mode 100644 index 0000000..ff6b7c4 --- /dev/null +++ b/docs/jsonld-integration.md @@ -0,0 +1,107 @@ +# JSON-LD Integration & Integrity Guide + +This guide details the `jsonld-ex` integration in the vCon MCP Server, which adds semantic web capabilities, AI metadata enrichment, and cryptographic integrity to vCons. + +## Overview + +The integration provides three key features: +1. **JSON-LD Context**: Maps vCon terms to standard URIs. +2. **Enrichment**: Adds `@confidence` scores and `@source` provenance to Analysis objects. +3. **Integrity**: Provides tamper-evident signing using SHA-256 hashing. + +## 1. JSON-LD Context + +The vCon server now supports converting standard vCons to JSON-LD format. This allows vCons to be linked with other semantic data. + +### Usage + +```typescript +import { toJsonLd } from '../src/jsonld/context.js'; +import { VCon } from '../src/types/vcon.js'; + +const vcon: VCon = { ... }; // Your standard vCon +const jsonLdVcon = toJsonLd(vcon); + +console.log(jsonLdVcon['@context']); +// Outputs: ["https://validator.vcon.dev/vcon.jsonld", ...] +``` + +## 2. Analysis Enrichment + +AI extensions allow you to attach metadata to `analysis` blocks, such as confidence scores and model sources. + +### `@confidence` +A normalized float (0.0 - 1.0) indicating the certainty of the analysis. + +### `@source` +A URI indicating the origin of the analysis (e.g., a specific model endpoint). + +### Example + +```typescript +import { enrichAnalysis } from '../src/jsonld/enrichment.js'; + +const analysis = { + type: "transcript", + vendor: "openai", + body: "Hello world" +}; + +// Add confidence (0.98) and source +const enriched = enrichAnalysis( + analysis, + 0.98, + "https://api.openai.com/v1/chat/completions" +); + +// Result: +// { +// ...analysis, +// "@confidence": 0.98, +// "@source": "https://api.openai.com/v1/chat/completions" +// } +``` + +## 3. 
Integrity & Signing + +Ensure vCons have not been tampered with by adding a cryptographic signature. The server uses a deterministic SHA-256 hash of the vCon content (excluding the `@integrity` field itself). + +### Signing a vCon + +```typescript +import { signVCon } from '../src/jsonld/integrity.js'; + +const vcon = { ... }; +const signedVCon = signVCon(vcon); + +console.log(signedVCon['@integrity']); +// Outputs: "sha256-a1b2c3d4..." +``` + +### Verifying Integrity + +Verification recalculates the hash and compares it to the `@integrity` field. + +```typescript +import { verifyIntegrity } from '../src/jsonld/integrity.js'; + +const isValid = verifyIntegrity(signedVCon); + +if (isValid) { + console.log("vCon is authentic and untampered."); +} else { + console.error("Integrity check failed! Data may be corrupted or tampered with."); +} +``` + +### How Verification Works +1. Removes the existing `@integrity` field. +2. Serializes the JSON using `fast-json-stable-stringify` (deterministic ordering). +3. Computes the SHA-256 hash. +4. Compares the computed hash with the provided hash. + +## Best Practices + +* **Sign Last**: Always sign the vCon *after* all modifications (including enrichment) are complete. +* **Enrich First**: Add confidence scores and sources before signing so they are protected by the integrity hash. +* **Transport**: JSON-LD vCons are valid JSON and can be stored/transmitted exactly like standard vCons. diff --git a/docs/mongodb/architecture.md b/docs/mongodb/architecture.md new file mode 100644 index 0000000..01e585a --- /dev/null +++ b/docs/mongodb/architecture.md @@ -0,0 +1,62 @@ +# MongoDB Architecture in vCon MCP Server + +This document outlines the architectural design of the MongoDB integration within the vCon MCP Server. + +## Overview + +The server supports a dual-database architecture, allowing it to run with either Supabase (PostgreSQL) or MongoDB as the backend. 
This is achieved through strict interface abstraction and dynamic dependency injection. + +## Core Interfaces + +All database interactions are governed by the following interfaces defined in `src/db/interfaces.ts` and `src/db/types.ts`: + +1. **`IVConQueries`**: + - Defines CRUD operations for vCons (Create, Read, Update, Delete). + - Defines Search operations (Keyword, Semantic, Hybrid). +2. **`IDatabaseInspector`**: + - Provides methods to inspect database structure (collections/tables, indexes, schema). + - Provides database statistics. +3. **`IDatabaseAnalytics`**: + - Provides business logic analytics (growth trends, tagging stats, attachment breakdowns). +4. **`IDatabaseSizeAnalyzer`**: + - Analyzes storage usage and provides smart recommendations for query limits. + +## MongoDB Implementation + +The MongoDB implementation resides in `src/db/`: + +| Component | Class | File | Description | +| :--- | :--- | :--- | :--- | +| **Client** | `MongoDatabaseClient` | `mongo-client.ts` | Manages connection pool. | +| **Queries** | `MongoVConQueries` | `mongo-queries.ts` | Implements `IVConQueries`. | +| **Inspector** | `MongoDatabaseInspector` | `mongo-inspector.ts` | Implements `IDatabaseInspector`. | +| **Analytics** | `MongoDatabaseAnalytics` | `mongo-analytics.ts` | Implements `IDatabaseAnalytics`. | +| **Size** | `MongoDatabaseSizeAnalyzer` | `mongo-size-analyzer.ts` | Implements `IDatabaseSizeAnalyzer`. | + +### Data Model + +- **Collection**: `vcons` + - Stores the full vCon object as a single document. + - Uses a MongoDB Text Index for keyword search. +- **Collection**: `vcon_embeddings` + - Stores vector embeddings separately to allow for optimized vector search. + - Fields: `vcon_id`, `embedding` (array of floats), `created_at`. + - Uses an Atlas Vector Search Index (`vector_index`) for semantic search. + +### Aggregation Pipelines + +Complex analytics are implemented using the MongoDB Aggregation Framework. 
+- **Growth Trends**: Uses `$group` by date parts of `created_at`. +- **Tag Analytics**: Uses `$project` and `$unwind` to normalize tag arrays/objects before grouping. +- **Vector Search**: Uses the `$vectorSearch` stage (available in Atlas) for similarity search. + +## Dependency Injection + +The server determines which backend to use at runtime in `src/server/setup.ts`: + +1. Checks `process.env.DB_TYPE`. +2. If `'mongodb'`, dynamically imports MongoDB classes and initializes `MongoClient`. +3. If `'supabase'` (default), initializes `SupabaseClient`. +4. Injects the selected implementation into `ServerContext`. + +This allows the core server logic and MCP tools to remain agnostic of the underlying database. diff --git a/docs/mongodb/indexes.md b/docs/mongodb/indexes.md new file mode 100644 index 0000000..59788fe --- /dev/null +++ b/docs/mongodb/indexes.md @@ -0,0 +1,39 @@ +# MongoDB Atlas Vector Search Index Definition + +To enable Semantic Search and Hybrid Search, you must create a Vector Search Index on the `vcon_embeddings` collection in MongoDB Atlas. + +## 1. Create the Index + +1. Go to your MongoDB Atlas Cluster. +2. Navigate to **Atlas Search** -> **Vector Search**. +3. Click **Create Index**. +4. Select your Database and the **`vcon_embeddings`** collection. +5. Enter the **Index Name**: `vector_index` (This name is hardcoded in `MongoVConQueries`). +6. Choose **JSON Editor** and paste the following configuration: + +```json +{ + "fields": [ + { + "type": "vector", + "path": "embedding", + "numDimensions": 384, + "similarity": "cosine" + }, + { + "type": "filter", + "path": "vcon_id" + }, + { + "type": "filter", + "path": "content_type" + } + ] +} +``` + +## 2. Verify + +Once the index status changes to **Active**, the `semanticSearch` functionality in the vCon MCP server will automatically start returning results. + +> **Note**: The `numDimensions` value (**384** above) must match the output dimensions of the embedding model used in `embed-vcons.ts`; note that OpenAI's `text-embedding-3-small` produces **1536**-dimensional vectors by default. 
If you change the embedding model, update this value accordingly. diff --git a/docs/mongodb/setup.md b/docs/mongodb/setup.md new file mode 100644 index 0000000..2705aea --- /dev/null +++ b/docs/mongodb/setup.md @@ -0,0 +1,84 @@ +# MongoDB Setup Guide for vCon MCP Server + +This guide provides instructions on how to set up the vCon MCP Server with a MongoDB backend. + +## Prerequisites +- **Node.js**: v18 or higher +- **MongoDB**: v6.0 or higher (Atlas recommended for Vector Search) +- **OpenAI API Key**: Required for generating embeddings + +## 1. Environment Configuration + +Create or update your `.env` file with the following variables: + +```env +# Database Selection (mongodb or supabase) +DB_TYPE=mongodb + +# MongoDB Connection String +# Format: mongodb+srv://<user>:<password>@<cluster>.mongodb.net/?appName=<appName> +MONGO_URL=mongodb+srv://user:pass@cluster.mongodb.net/?appName=vcon-app + +# Optional: Specific Database Name (default: vcon) +MONGO_DB_NAME=vcon + +# Embedding Configuration (Required for Vector Search) +OPENAI_API_KEY=sk-proj-... +``` + +## 2. Atlas Vector Search Setup + +To enable Semantic and Hybrid search, you must create a Vector Search Index on your MongoDB Atlas cluster. + +1. **Create Collection**: Ensure the `vcon_embeddings` collection exists in your database. +2. **Create Index**: + - Go to **Atlas UI** -> **Database** -> **Search**. + - Click **Create Search Index**. + - Select **JSON Editor**. + - Select your database and collection: `vcon.vcon_embeddings`. + - Name the index: `vector_index`. + - Input the following definition: + +```json +{ + "fields": [ + { + "numDimensions": 1536, + "path": "embedding", + "similarity": "cosine", + "type": "vector" + }, + { + "path": "vcon_id", + "type": "filter" + }, + { + "path": "created_at", + "type": "filter" + } + ] +} +``` + +> [!NOTE] > If you are using a different embedding model (e.g., Azure OpenAI), ensure `numDimensions` matches your model's output (e.g., 1536 for text-embedding-3-small). + +## 3. 
Text Search Index + +For standard keyword search functionality, a text index is required on the `vcons` collection. The server will attempt to create this automatically on startup, but you can also create it manually: + +```javascript +db.vcons.createIndex({ "$**": "text" }, { name: "TextIndex" }) +``` + +## 4. Verification + +Run the verification scripts to ensure everything is configured correctly: + +```bash +# Verify Core CRUD and Search +npx tsx scripts/verify-mongo.ts + +# Verify Analytics and Inspector +npx tsx scripts/verify-mongo-analytics.ts +``` diff --git a/docs/mongodb/usage.md b/docs/mongodb/usage.md new file mode 100644 index 0000000..09b3dc4 --- /dev/null +++ b/docs/mongodb/usage.md @@ -0,0 +1,63 @@ +# Using vCon MCP Server with MongoDB + +## Running the Server + +To start the server with MongoDB support: + +1. Ensure prerequisites are met (see [setup.md](./setup.md)). +2. Run the development server: + +```bash +# Using cross-env for cross-platform compatibility +cross-env DB_TYPE=mongodb npm run dev +``` + +Or set the environment variable in your shell/IDE configuration. + +## Verification Scripts + +We provide dedicated scripts to verify the MongoDB integration. + +### 1. Core Verification (`scripts/verify-mongo.ts`) +Tests basic CRUD operations and Search. + +```bash +npx tsx scripts/verify-mongo.ts +``` + +**What it tests:** +- Connecting to MongoDB. +- Creating a sample vCon. +- Reading the vCon by UUID. +- Performing a keyword search. +- Performing a semantic search (requires embeddings). +- Deleting the sample vCon. + +### 2. Analytics Verification (`scripts/verify-mongo-analytics.ts`) +Tests Inspector, Analytics, and Size Analyzer components. + +```bash +npx tsx scripts/verify-mongo-analytics.ts +``` + +**What it tests:** +- `getDatabaseShape`: Lists collections and counts. +- `getDatabaseStats`: Checks index usage and storage stats. +- `getDatabaseAnalytics`: Aggregates dummy data for trends. 
+- `getSmartSearchLimits`: Recommends limits based on DB size. + +## Troubleshooting + +### Connection Errors +- **Error**: `MongoServerError: bad auth : Authentication failed.` + - **Fix**: Check `MONGO_URL`. Ensure user/password are correct and IP is whitelisted in Atlas. + +### Search Errors +- **Error**: `MongoServerError: PlanExecutor error during aggregation :: caused by :: Index 'vector_index' not found.` + - **Fix**: You must create the Vector Search Index in Atlas. See [setup.md](./setup.md). +- **Error**: `text index required for $text query` + - **Fix**: The server attempts to create a text index on startup. Check logs for index creation errors, or run `db.vcons.createIndex({ "$**": "text" })` manually. + +### Analytics Errors +- **Error**: `Stage not supported` + - **Fix**: Ensure you are using a compatible MongoDB version (v6.0+ recommended). diff --git a/docs/mongodb/walkthrough.md b/docs/mongodb/walkthrough.md new file mode 100644 index 0000000..ee91617 --- /dev/null +++ b/docs/mongodb/walkthrough.md @@ -0,0 +1,76 @@ +# MongoDB Support Implementation Walkthrough + +I have successfully implemented MongoDB support for the vCon MCP Server, allowing it to use either Supabase (PostgreSQL) or MongoDB as the backend database. + +## Changes + +### 1. Database Abstraction +Refactored the codebase to introduce an interface-based database layer: +- **`src/db/interfaces.ts`**: Defined `IVConQueries` interface for standardization. +- **`src/db/queries.ts`**: Renamed `VConQueries` to `SupabaseVConQueries` and implemented the interface. +- **`src/db/mongo-queries.ts`**: Created `MongoVConQueries` implementing the interface for MongoDB. + +### 2. MongoDB Client +Implemented a robust MongoDB client with connection pooling and configuration: +- **`src/db/mongo-client.ts`**: Handles connection management. + - *Note*: Disabled `strict` mode in `serverApi` to support text indexes. + +### 3. 
Server Configuration +Updated server setup to dynamically choose the backend: +- **`src/server/setup.ts`**: Initializes `MongoVConQueries` if `DB_TYPE=mongodb` is set, otherwise defaults to Supabase. +- **`src/services/vcon-service.ts`**: Updated to use `IVConQueries` interface. + +### 4. Advanced Search (Phase 2) +Implemented support for MongoDB Atlas Vector Search: +- **`src/db/mongo-queries.ts`**: + - `semanticSearch`: Uses `$vectorSearch` aggregation stage. + - `hybridSearch`: Combines keyword search (Text Index) and semantic search (Vector Index) results. +- **`vcon_embeddings`**: Embeddings are stored in a separate collection, mirroring the Postgres structure. + +## Verification Results + +### Automated Verification +I created a standalone verification script `scripts/verify-mongo.ts` to test the full lifecycle: + +1. **Connection**: Successfully connected to the provided MongoDB instance. +2. **Initialization**: Created necessary indexes (text search, unique constraints). +3. **Creation**: Inserted a test vCon. +4. **Retrieval**: Fetched the vCon and verified data integrity. +5. **Search**: Successfully found the vCon using keyword search (text index). +6. **Update**: Added a dialog to the vCon. +7. **Deletion**: Deleted the vCon and verified it was removed. +8. **Vector Search**: Tested `semanticSearch` and `hybridSearch`. `semanticSearch` correctly reports empty results if the Atlas Index is missing (expected until user takes action), while `hybridSearch` falls back to keyword matching. + +### Log Output +``` +Starting MongoDB verification... +Initializing collections... +Creating vCon... +Created vCon: {"uuid":"fc762175-2650-4546-88ac-4552576a8e59","id":"fc762175-2650-4546-88ac-4552576a8e59"} +Retrieving vCon... +Retrieved vCon subject: "Test MongoDB VCon Verification" +Searching vCon... +Search results count: 1 +Found vCon in search results +Adding dialog... +Dialog added +Deleting vCon... 
+vCon deleted +Verified vCon is deleted (got expected error) +------------------------------------------------ +Starting Phase 2: Vector Search Verification +Created vCon for vector search +Inserted dummy embedding +Running semantic search... +Semantic search results: 0 +NOTE: Semantic search returned 0 results. This is EXPECTED if the Atlas Vector Search Index is not yet created. +To fix: Create a Vector Search Index on `vcon_embeddings` collection with definition provided in documentation. +Running hybrid search... +Hybrid search results: 1 +Cleaning up vector test data... +Verification SUCCESS +``` + +## Next Steps + +- **Phase 3**: Implement Analytics and Inspector specifically for MongoDB. diff --git a/examples/example1/README.md b/examples/example1/README.md new file mode 100644 index 0000000..60b68d7 --- /dev/null +++ b/examples/example1/README.md @@ -0,0 +1,34 @@ +# vCon MCP Features Example App + +This is a standalone Streamlit application built to demonstrate the MongoDB Analytics/Search and JSON-LD (`jsonld-ex`) features of the vCon MCP server. + +It uses mock data generated on-the-fly to simulate how the vCon server processes, enriches, signs, and searches conversation data. No actual MongoDB database or live vCon server is required to run this demo. + +## Features Showcased +- **JSON-LD Context**: Converting standard vCons to semantic web documents. +- **Analysis Enrichment**: Attaching `@confidence` and `@source` metadata to vendor analyses. +- **Cryptographic Integrity**: Tamper-evident signing of vCons using deterministic SHA-256 hashing. +- **MongoDB Analytics**: Simulated visualizations of database growth and tag usage. +- **Vector Search**: A simulated hybrid search experience finding vCons based on dialog content. + +## How to Run + +1. Ensure you have Python installed. +2. Navigate to this directory in your terminal: + ```bash + cd examples/example1 + ``` +3. Install the dependencies: + ```bash + pip install -r requirements.txt + ``` +4. 
Run the Streamlit app: + ```bash + streamlit run app.py + ``` + +A browser window should automatically open pointing to `http://localhost:8501`. + +## Using the Demo +- Explore the **JSON-LD & Integrity** tab to step through a vCon's lifecycle or load a custom `.vcon` file (a `sample.vcon` file is provided). You can use the "Reset to Random" button to start fresh or modify loaded data to see integrity verification fail. +- Explore the **Database & Analytics** tab to view the dashboard and run searches against the simulated database. diff --git a/examples/example1/app.py b/examples/example1/app.py new file mode 100644 index 0000000..a26dca1 --- /dev/null +++ b/examples/example1/app.py @@ -0,0 +1,145 @@ +import streamlit as st +import pandas as pd +import altair as alt +import json +import mock_data + +st.set_page_config(page_title="vCon Example App", page_icon="📞", layout="wide") + +# Initialize session state for the vCon lifecycle demo +if "current_vcon" not in st.session_state: + st.session_state.current_vcon = mock_data.generate_mock_vcon() + +if "jsonld_step" not in st.session_state: + st.session_state.jsonld_step = "raw" + +st.title("📞 vCon Features Example App") +st.markdown("This application demonstrates the MongoDB and JSON-LD (`jsonld-ex`) features built for the vCon MCP server.") + +tab1, tab2 = st.tabs(["Semantic Web & Integrity (`jsonld-ex`)", "Database & Analytics (MongoDB)"]) + +with tab1: + st.header("JSON-LD Lifecycle") + st.markdown("Step through the process of converting a standard vCon into a cryptographically sound semantic document.") + + # Action buttons + col1, col2, col3, col4, col5 = st.columns(5) + with col1: + if st.button("🔄 Reset to Random"): + st.session_state.current_vcon = mock_data.generate_mock_vcon() + st.session_state.jsonld_step = "raw" + st.rerun() + with col2: + if st.button("➕ 1. 
Add Context") and st.session_state.jsonld_step == "raw": + st.session_state.current_vcon = mock_data.to_jsonld(st.session_state.current_vcon) + st.session_state.jsonld_step = "context" + st.rerun() + with col3: + if st.button("✨ 2. Enrich Analysis") and st.session_state.jsonld_step == "context": + st.session_state.current_vcon = mock_data.enrich_analysis( + st.session_state.current_vcon, + confidence=0.98, + source="https://api.openai.com/v1/chat/completions" + ) + st.session_state.jsonld_step = "enrich" + st.rerun() + with col4: + if st.button("🔐 3. Sign vCon") and st.session_state.jsonld_step == "enrich": + st.session_state.current_vcon = mock_data.sign_vcon(st.session_state.current_vcon) + st.session_state.jsonld_step = "signed" + st.rerun() + with col5: + if st.button("✅ 4. Verify Integrity") and st.session_state.jsonld_step == "signed": + is_valid, msg = mock_data.verify_integrity(st.session_state.current_vcon) + if is_valid: + st.success(msg) + else: + st.error(msg) + + st.divider() + + st.subheader("Load Custom vCon") + uploaded_file = st.file_uploader("Upload a .vcon file", type=["vcon", "json"]) + if uploaded_file is not None: + try: + content = json.load(uploaded_file) + st.session_state.current_vcon = content + st.session_state.jsonld_step = "raw" + if "@integrity" in content: + st.session_state.jsonld_step = "signed" + st.info("Loaded a signed vCon.") + elif "@context" in content: + st.session_state.jsonld_step = "context" + st.success("vCon loaded successfully!") + except Exception as e: + st.error(f"Failed to load vCon: {e}") + + st.divider() + + col_view1, col_view2 = st.columns([1,1]) + with col_view1: + st.subheader("Current State") + st.write(f"**Step:** {st.session_state.jsonld_step.upper()}") + st.json(st.session_state.current_vcon) + + with col_view2: + if st.session_state.jsonld_step == "signed": + st.subheader("Simulate Tampering") + st.warning("Modify a value in the vCon to see integrity verification fail.") + tamper_text = 
st.text_input("Change Data to:", value="Malicious Actor") + if st.button("Inject Change"): + # Try to modify a common field, otherwise just inject a root property + if "parties" in st.session_state.current_vcon and len(st.session_state.current_vcon["parties"]) > 0: + st.session_state.current_vcon["parties"][0]["name"] = tamper_text + else: + st.session_state.current_vcon["tampered_data"] = tamper_text + # We must rerun so the JSON view on the left updates with the tampered data + st.rerun() + +with tab2: + st.header("MongoDB Analytics & Search") + st.markdown("Simulated output mirroring the `IDatabaseAnalytics` and `IVConQueries` implementations.") + + st.subheader("System Stats") + stats = mock_data.get_db_stats() + c1, c2, c3 = st.columns(3) + c1.metric("Total vCons", f"{stats['document_counts']['vcons']:,}") + c2.metric("Total Embeddings", f"{stats['document_counts']['vcon_embeddings']:,}") + c3.metric("Storage Size", f"{stats['storage_size_kb']/1024:.2f} MB") + + st.divider() + + scol1, scol2 = st.columns(2) + with scol1: + st.subheader("📈 Growth Trends") + growth_data = mock_data.get_growth_analytics() + df_growth = pd.DataFrame(growth_data) + chart = alt.Chart(df_growth).mark_line(point=True).encode( + x='date:T', + y='count:Q', + tooltip=['date', 'count'] + ).properties(height=300) + st.altair_chart(chart, use_container_width=True) + + with scol2: + st.subheader("🏷️ Top Tags") + tag_data = mock_data.get_tag_analytics() + df_tags = pd.DataFrame(tag_data) + bar_chart = alt.Chart(df_tags).mark_bar().encode( + x='count:Q', + y=alt.Y('tag:N', sort='-x'), + color='tag:N', + tooltip=['tag', 'count'] + ).properties(height=300) + st.altair_chart(bar_chart, use_container_width=True) + + st.divider() + st.subheader("🔍 Vector Hybrid Search") + query = st.text_input("Search vCons (Keyword or Semantic)", value="login issues") + if st.button("Search"): + with st.spinner("Executing simulated $vectorSearch..."): + results = mock_data.mock_hybrid_search(query) + 
st.success(f"Found {len(results)} matches.") + for i, res in enumerate(results): + with st.expander(f"Match {i+1} - Score: {res['score']:.4f}"): + st.json(res['vcon']) diff --git a/examples/example1/mock_data.py b/examples/example1/mock_data.py new file mode 100644 index 0000000..9d97f37 --- /dev/null +++ b/examples/example1/mock_data.py @@ -0,0 +1,177 @@ +import json +import uuid +import datetime +import random +import hashlib + +def generate_mock_vcon(vcon_id=None): + if not vcon_id: + vcon_id = str(uuid.uuid4()) + now = datetime.datetime.now(datetime.timezone.utc) + + return { + "vcon": "0.0.1", + "uuid": vcon_id, + "created_at": now.isoformat(), + "parties": [ + { + "tel": "+1234567890", + "name": "Alice Agent", + "role": "agent" + }, + { + "tel": "+0987654321", + "name": "Bob Customer", + "role": "customer" + } + ], + "dialog": [ + { + "type": "text", + "start": (now - datetime.timedelta(minutes=5)).isoformat(), + "parties": [0, 1], + "body": "Hello, how can I help you today?", + "mimetype": "text/plain" + }, + { + "type": "text", + "start": (now - datetime.timedelta(minutes=4)).isoformat(), + "parties": [1, 0], + "body": "I'm having trouble logging into my account.", + "mimetype": "text/plain" + } + ], + "analysis": [ + { + "type": "sentiment", + "vendor": "mock_analyzer", + "body": { + "overall_sentiment": "neutral", + "score": random.uniform(0.1, 0.9) + } + }, + { + "type": "summary", + "vendor": "mock_analyzer", + "body": "Customer is experiencing login issues." + } + ] + } + +def get_db_stats(): + # Mocking IDatabaseInspector stats + return { + "collections": ["vcons", "vcon_embeddings"], + "document_counts": { + "vcons": random.randint(1000, 5000), + "vcon_embeddings": random.randint(1000, 5000), + }, + "storage_size_kb": random.randint(50000, 200000) + } + +def get_growth_analytics(): + # Mocking IDatabaseAnalytics growth trends (e.g. 
last 7 days) + today = datetime.datetime.now(datetime.timezone.utc).date() + data = [] + for i in range(7): + date = today - datetime.timedelta(days=6-i) + data.append({ + "date": date.strftime("%Y-%m-%d"), + "count": random.randint(50, 200) + }) + return data + +def get_tag_analytics(): + # Mock tag usage breakdown + tags = ["support", "sales", "billing", "escalation", "feedback"] + return [{"tag": t, "count": random.randint(10, 500)} for t in tags] + +def mock_hybrid_search(query: str): + # Simulating hybrid keyword/vector search results + results = [] + for _ in range(3): + v = generate_mock_vcon() + # Ensure the mock dialog contains the query word to look like a match + v["dialog"][0]["body"] = f"Yes, regarding {query}, we can help." + results.append({ + "vcon": v, + "score": random.uniform(0.7, 0.99) + }) + return results + +def to_jsonld(vcon): + vcon_copy = vcon.copy() + vcon_copy["@context"] = [ + "https://schema.org/docs/jsonldcontext.json", + { + "vcon": "https://vcon.dev/ns/", + "xsd": "http://www.w3.org/2001/XMLSchema#", + "parties": "vcon:parties", + "dialog": "vcon:dialog", + "analysis": "vcon:analysis", + "type": "@type", + "vendor": "vcon:vendor", + "body": "vcon:body", + "@confidence": { + "@id": "https://w3id.org/jsonld-ex/confidence", + "@type": "xsd:float" + }, + "@source": { + "@id": "https://w3id.org/jsonld-ex/source", + "@type": "@id" + }, + "@integrity": { + "@id": "https://w3id.org/jsonld-ex/integrity", + "@type": "xsd:string" + } + } + ] + return vcon_copy + +def enrich_analysis(vcon, confidence: float, source: str): + vcon_copy = json.loads(json.dumps(vcon)) # deep copy + if "analysis" not in vcon_copy or not isinstance(vcon_copy["analysis"], list) or len(vcon_copy["analysis"]) == 0: + # Inject a dummy analysis block so the demo has something to enrich + vcon_copy["analysis"] = [{ + "type": "demo_enrichment", + "vendor": "custom_upload_handler", + "body": "No existing analysis blocks found, so this one was generated for the demo." 
+ }] + + for a in vcon_copy["analysis"]: + if isinstance(a, dict): + a["@confidence"] = confidence + a["@source"] = source + return vcon_copy + +def sign_vcon(vcon): + vcon_copy = json.loads(json.dumps(vcon)) + if "@integrity" in vcon_copy: + del vcon_copy["@integrity"] + + # Deterministic stringify (simplified for Python, sorting keys) + serialized = json.dumps(vcon_copy, sort_keys=True, separators=(',', ':')) + hash_obj = hashlib.sha256(serialized.encode('utf-8')) + hash_hex = hash_obj.hexdigest() + + vcon_copy["@integrity"] = f"sha256-{hash_hex}" + return vcon_copy + +def verify_integrity(vcon): + if "@integrity" not in vcon: + return False, "Missing @integrity field" + + provided_hash = vcon["@integrity"] + + # Recompute + vcon_copy = json.loads(json.dumps(vcon)) + del vcon_copy["@integrity"] + serialized = json.dumps(vcon_copy, sort_keys=True, separators=(',', ':')) + hash_obj = hashlib.sha256(serialized.encode('utf-8')) + computed_hash = f"sha256-{hash_obj.hexdigest()}" + + if provided_hash == computed_hash: + return True, "vCon is authentic and untampered." + else: + return False, f"Integrity check failed! 
Expected {computed_hash}, got {provided_hash}" + diff --git a/examples/example1/requirements.txt b/examples/example1/requirements.txt new file mode 100644 index 0000000..21e914f --- /dev/null +++ b/examples/example1/requirements.txt @@ -0,0 +1,4 @@ +streamlit>=1.30.0 +pandas>=2.0.0 +altair>=5.0.0 +plotly>=5.18.0 diff --git a/examples/example1/sample.vcon b/examples/example1/sample.vcon new file mode 100644 index 0000000..1e41d3a --- /dev/null +++ b/examples/example1/sample.vcon @@ -0,0 +1,40 @@ +{ + "vcon": "0.0.1", + "uuid": "SAMPLE-1234-abcd-5678", + "created_at": "2026-03-09T10:00:00Z", + "parties": [ + { + "tel": "+9999999999", + "name": "Jane User", + "role": "customer" + }, + { + "tel": "+1111111111", + "name": "Help Desk", + "role": "agent" + } + ], + "dialog": [ + { + "type": "text", + "start": "2026-03-09T10:01:00Z", + "parties": [0, 1], + "body": "Hi, I need to cancel my subscription.", + "mimetype": "text/plain" + }, + { + "type": "text", + "start": "2026-03-09T10:02:00Z", + "parties": [1, 0], + "body": "I can help with that. 
Could you please provide your account number?", + "mimetype": "text/plain" + } + ], + "analysis": [ + { + "type": "intent", + "vendor": "sample_analyzer", + "body": "subscription_cancellation" + } + ] +} diff --git a/examples/example1/verify_mock_data.py b/examples/example1/verify_mock_data.py new file mode 100644 index 0000000..f0279e5 --- /dev/null +++ b/examples/example1/verify_mock_data.py @@ -0,0 +1,14 @@ +import mock_data + +vcon = mock_data.generate_mock_vcon() +jsonld_vcon = mock_data.to_jsonld(vcon) +enriched = mock_data.enrich_analysis(jsonld_vcon, 0.95, "src1") +signed = mock_data.sign_vcon(enriched) + +valid, msg = mock_data.verify_integrity(signed) +print(f"Original signature valid: {valid} - {msg}") + +# Tamper +signed["parties"][0]["name"] = "Malicious Actor" +valid2, msg2 = mock_data.verify_integrity(signed) +print(f"Tampered signature valid: {valid2} - {msg2}") diff --git a/hackathon/.env.hackathon b/hackathon/.env.hackathon new file mode 100644 index 0000000..af9bda7 --- /dev/null +++ b/hackathon/.env.hackathon @@ -0,0 +1,66 @@ +# ============================================================================ +# vCon Intelligence Platform — Hackathon Environment +# ============================================================================ +# Copy to .env in the project root and adjust as needed. +# These extend the existing .env.example settings.
+ +# ============================================================================ +# Database Backend (switch to MongoDB for hackathon) +# ============================================================================ +DB_TYPE=mongodb +MONGO_URL=mongodb://admin:vcon2026@localhost:27017/vcon?authSource=admin +MONGO_DB_NAME=vcon + +# ============================================================================ +# MCP Transport (HTTP for REST API + dashboard access) +# ============================================================================ +MCP_TRANSPORT=http +MCP_HTTP_HOST=0.0.0.0 +MCP_HTTP_PORT=3000 +REST_API_BASE_PATH=/api/v1 +API_AUTH_REQUIRED=false +CORS_ORIGIN=* + +# ============================================================================ +# MQTT / UNS Bridge (BASE 2) +# ============================================================================ +MQTT_BROKER_URL=mqtt://localhost:1883 +MQTT_ORG_ID=hackathon +MQTT_VERBOSE=true + +# ============================================================================ +# Neo4j Graph Database (BASE 5) +# ============================================================================ +NEO4J_URI=bolt://localhost:7687 +NEO4J_USER=neo4j +NEO4J_PASSWORD=vcon2026 + +# ============================================================================ +# ChromaDB Vector Store (WOW 3 - RAG) +# ============================================================================ +CHROMA_URL=http://localhost:8000 +CHROMA_COLLECTION=vcon_embeddings + +# ============================================================================ +# Local GPU Inference (WOW 3 - RAG/CRAG) +# ============================================================================ +# Python sidecar endpoints (FastAPI) +WHISPER_SIDECAR_URL=http://localhost:8100 +LLAMA_SIDECAR_URL=http://localhost:8200 + +# ============================================================================ +# JSON-LD-ex Enrichment (WOW 1) +# 
============================================================================ +JSONLD_VERBOSE=true +JSONLD_HASH_ALGORITHM=sha256 +MCP_SERVER_URL=http://localhost:3000 + +# ============================================================================ +# Plugin Loading +# ============================================================================ +VCON_PLUGINS_PATH=./hackathon/plugins/mqtt-bridge/index.ts,./hackathon/plugins/neo4j-consumer/index.ts,./hackathon/plugins/siprec-adapter/index.ts,./hackathon/plugins/jsonld-enrichment/index.ts,./hackathon/plugins/ai-analyzer/index.ts,./hackathon/plugins/teams-adapter/index.ts,./hackathon/plugins/whatsapp-adapter/index.ts + +# ============================================================================ +# Observability +# ============================================================================ +MCP_DEBUG=true diff --git a/hackathon/README.md b/hackathon/README.md new file mode 100644 index 0000000..8f24cd4 --- /dev/null +++ b/hackathon/README.md @@ -0,0 +1,118 @@ +# vCon Intelligence Platform — Hackathon Setup + +## Prerequisites + +- Docker Desktop (for MongoDB, Neo4j, Mosquitto, ChromaDB) +- Node.js 20+ (for vCon MCP server) +- Python 3.11+ (for sidecar services — later) +- NVIDIA GPU drivers + CUDA (for Whisper/LLaMA — later) + +## Step 1: Start Infrastructure + +```powershell +cd E:\data\code\claudecode\vcon-mcp\hackathon +docker compose up -d +``` + +Verify all services are running: +```powershell +docker compose ps +``` + +Expected: +| Service | Port(s) | Status | +|------------|-----------------|---------| +| mongodb | 27017 | Running | +| neo4j | 7474, 7687 | Running | +| mosquitto | 1883, 9001 | Running | +| chromadb | 8000 | Running | + +Access points: +- Neo4j Browser: http://localhost:7474 (neo4j / vcon2026) +- ChromaDB API: http://localhost:8000/api/v1 +- MQTT Broker: mqtt://localhost:1883 +- MQTT WebSocket: ws://localhost:9001 + +## Step 2: Configure Environment + +```powershell +cd
E:\data\code\claudecode\vcon-mcp +Copy-Item hackathon\.env.hackathon .env +``` + +## Step 3: Install MQTT Dependency + +```powershell +npm install mqtt +npm install -D @types/mqtt +``` + +## Step 4: Start the MCP Server (HTTP mode) + +```powershell +npm run dev +``` + +Server starts at http://localhost:3000 with: +- REST API: http://localhost:3000/api/v1/vcons +- Health: http://localhost:3000/api/v1/health +- MCP transport: http://localhost:3000/mcp + +## Step 5: Test MQTT Bridge + +In a separate terminal, subscribe to all vCon events: +```powershell +docker exec vcon-mosquitto mosquitto_sub -t "vcon/enterprise/#" -v +``` + +Then create a vCon via REST API: +```powershell +$body = @{ + subject = "Test call from hackathon" + parties = @( + @{ name = "Alice"; tel = "+15551234567" } + @{ name = "Bob"; mailto = "bob@example.com" } + ) +} | ConvertTo-Json -Depth 5 + +Invoke-RestMethod -Uri "http://localhost:3000/api/v1/vcons" ` + -Method POST ` + -ContentType "application/json" ` + -Body $body +``` + +You should see the MQTT event appear in the subscriber terminal. + +## Directory Structure + +``` +hackathon/ +├── docker-compose.yml # Infrastructure services +├── mosquitto/ +│ └── mosquitto.conf # MQTT broker config +├── .env.hackathon # Environment variables +├── plugins/ +│ ├── mqtt-bridge/ # BASE 2: UNS event bridge +│ ├── neo4j-consumer/ # BASE 5: Graph mapping +│ ├── siprec-adapter/ # BASE 1: Folder drop ingestion +│ ├── teams-adapter/ # BASE 4: MS Teams extractor +│ ├── whatsapp-adapter/ # WOW 2: WhatsApp parser +│ ├── jsonld-enrichment/ # WOW 1: JSON-LD-ex transformer +│ └── pii-redactor/ # PII detection/masking +├── sidecar/ # Python FastAPI services (Whisper, LLaMA) +├── dashboard/ # React real-time dashboard +└── sample-data/ # Demo vCons, SIPREC files, WhatsApp exports +``` + +## Build Order + +1. ✅ Infrastructure (Docker) + MQTT Bridge Plugin +2.
⬜ Neo4j Consumer Plugin +3. ⬜ SIPREC Adapter Plugin (folder watcher) +4. ⬜ JSON-LD-ex Enrichment Plugin (deep semantic interop) +5. ⬜ WhatsApp Adapter Plugin +6. ⬜ Teams Adapter Plugin +7. ⬜ Python Sidecar (Whisper + LLaMA) +8. ⬜ RAG/CRAG Engine +9. ⬜ Real-Time Dashboard +10. ⬜ Integration Testing + Demo Flow diff --git a/hackathon/dashboard/index.html b/hackathon/dashboard/index.html new file mode 100644 index 0000000..41c4897 --- /dev/null +++ b/hackathon/dashboard/index.html @@ -0,0 +1,1427 @@ + + + + + + vCon Intelligence Platform — Mission Control + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
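The Step 5 REST call above can also be driven from Python using only the standard library. A minimal sketch (the endpoint and payload are taken from the README's Step 5; the server must be running for the POST to succeed, and nothing is assumed about the response beyond it being JSON):

```python
# Stdlib-only equivalent of the Step 5 Invoke-RestMethod example.
# Endpoint and payload mirror the README; run the MCP server first.
import json
import urllib.request


def build_test_vcon() -> dict:
    """Same payload the PowerShell example sends."""
    return {
        "subject": "Test call from hackathon",
        "parties": [
            {"name": "Alice", "tel": "+15551234567"},
            {"name": "Bob", "mailto": "bob@example.com"},
        ],
    }


def post_vcon(payload: dict, base: str = "http://localhost:3000/api/v1") -> dict:
    """POST a vCon to the REST API and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base}/vcons",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Usage (with the server from Step 4 running):
#   created = post_vcon(build_test_vcon())
```

As with the PowerShell version, the MQTT event should appear in the `mosquitto_sub` terminal after the POST.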
+ + + + diff --git a/hackathon/dashboard/ingest.html b/hackathon/dashboard/ingest.html new file mode 100644 index 0000000..6f15503 --- /dev/null +++ b/hackathon/dashboard/ingest.html @@ -0,0 +1,1433 @@ + + + + + + vCon Ingestion — SIPREC Drop + + + + + + + + + + + + + + + + +
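The SIPREC adapter's folder-drop ingestion (build-order step 3, with the `watch/processed/` and `watch/failed/` directories ignored in `.gitignore`) can be sketched as a simple polling watcher. The directory convention follows the repo layout; `handle` is a hypothetical callback standing in for the real SIPREC parsing and vCon POST:

```python
# Polling folder-watcher sketch for drop-style ingestion.
# Directory names follow the repo's watch/processed/failed convention;
# handle() is a placeholder for parsing + POSTing a vCon.
import shutil
from pathlib import Path


def scan_once(watch: Path, processed: Path, failed: Path, handle) -> list[str]:
    """Process every file in `watch`, moving each to processed/ or failed/."""
    results = []
    processed.mkdir(parents=True, exist_ok=True)
    failed.mkdir(parents=True, exist_ok=True)
    for f in sorted(watch.iterdir()):
        if not f.is_file():
            continue
        try:
            handle(f)  # e.g. parse SIPREC XML, POST the resulting vCon
            shutil.move(str(f), str(processed / f.name))
            results.append(f"ok:{f.name}")
        except Exception:
            shutil.move(str(f), str(failed / f.name))
            results.append(f"failed:{f.name}")
    return results
```

A real adapter would call `scan_once` in a loop (or use a filesystem-event library); the move-on-success/move-on-failure split keeps the watch folder idempotent across restarts.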
+ + + + diff --git a/hackathon/dashboard/samples/01-siprec-billing-dispute.json b/hackathon/dashboard/samples/01-siprec-billing-dispute.json new file mode 100644 index 0000000..8da7726 --- /dev/null +++ b/hackathon/dashboard/samples/01-siprec-billing-dispute.json @@ -0,0 +1,45 @@ +{ + "vcon": "0.3.0", + "created_at": "2026-03-06T09:15:00Z", + "subject": "Sarah Johnson โ†” Agent Mike Rivera โ€” Billing Dispute", + "parties": [ + { "tel": "+15551234567", "name": "Sarah Johnson", "meta": { "role": "customer" } }, + { "tel": "+15559876001", "name": "Agent Mike Rivera", "meta": { "role": "agent" } } + ], + "dialog": [ + { + "type": "text", + "start": "2026-03-06T09:15:00Z", + "duration": 342, + "parties": [0, 1], + "mediatype": "text/plain" + } + ], + "analysis": [ + { + "type": "transcript", + "vendor": "siprec-adapter", + "body": "Agent Mike Rivera: Thank you for calling TechCorp support, this is Mike. How can I help you today?\n\nSarah Johnson: Hi Mike, I'm calling because I was charged twice for my subscription last month. I noticed two charges of $49.99 on my credit card statement.\n\nAgent Mike Rivera: I'm sorry to hear that, Sarah. Let me pull up your account right away. Can you confirm the email address on your account?\n\nSarah Johnson: It's sarah.johnson@email.com.\n\nAgent Mike Rivera: Thank you. I can see the issue here. It looks like there was a system error during our billing cycle update on February 15th that caused duplicate charges for several customers. I sincerely apologize for the inconvenience.\n\nSarah Johnson: This is really frustrating. I've been a customer for three years and this is the second billing issue I've had.\n\nAgent Mike Rivera: I completely understand your frustration, and I want to make this right. I'm going to process a refund for the duplicate charge of $49.99 right now. You should see it back on your card within 3 to 5 business days. 
I'm also going to add a $10 credit to your account for the inconvenience.\n\nSarah Johnson: Okay, that sounds fair. Thank you for taking care of it quickly.\n\nAgent Mike Rivera: Absolutely. Is there anything else I can help you with today?\n\nSarah Johnson: No, that's all. Thanks Mike.\n\nAgent Mike Rivera: You're welcome, Sarah. Have a great day!", + "encoding": "none", + "dialog": 0 + }, + { + "type": "sentiment", + "vendor": "keyword-heuristic", + "body": "{\"overall\": 0.62, \"positive\": 4, \"negative\": 2}", + "encoding": "json" + }, + { + "type": "topics", + "vendor": "keyword-heuristic", + "body": "[\"Billing\"]", + "encoding": "json" + }, + { + "type": "summary", + "vendor": "siprec-adapter", + "body": "Customer called about duplicate $49.99 charge caused by billing system error. Agent processed refund (3-5 business days) and added $10 account credit.", + "encoding": "none" + } + ] +} diff --git a/hackathon/dashboard/samples/02-siprec-vpn-support.json b/hackathon/dashboard/samples/02-siprec-vpn-support.json new file mode 100644 index 0000000..a072513 --- /dev/null +++ b/hackathon/dashboard/samples/02-siprec-vpn-support.json @@ -0,0 +1,21 @@ +{ + "vcon": "0.3.0", + "created_at": "2026-03-06T11:20:00Z", + "subject": "David Chen โ†” Agent Lisa Park โ€” VPN Connectivity Issue", + "parties": [ + { "tel": "+15552345678", "name": "David Chen", "meta": { "role": "customer" } }, + { "tel": "+15559876002", "name": "Agent Lisa Park", "meta": { "role": "agent" } } + ], + "dialog": [ + { "type": "text", "start": "2026-03-06T11:20:00Z", "duration": 487, "parties": [0, 1], "mediatype": "text/plain" } + ], + "analysis": [ + { + "type": "transcript", "vendor": "siprec-adapter", "encoding": "none", "dialog": 0, + "body": "Agent Lisa Park: TechCorp support, this is Lisa. How can I help?\n\nDavid Chen: Hi, my VPN keeps disconnecting every 10 minutes. I'm working remotely and it's killing my productivity.\n\nAgent Lisa Park: I understand how frustrating that is. 
Let me check your client version. What version are you running?\n\nDavid Chen: It says 4.2.0.\n\nAgent Lisa Park: That's the issue. Version 4.2.0 has a known keep-alive packet bug. We released a fix in 4.2.1 last week. Let me walk you through the update.\n\nDavid Chen: Okay, updating now... done. Version 4.2.1. Should I test it?\n\nAgent Lisa Park: Yes, connect and let's wait a few minutes.\n\nDavid Chen: It's been 5 minutes and holding steady. Before it would have dropped by now.\n\nAgent Lisa Park: Excellent! The update should resolve it permanently. I'll create a tracking ticket as well.\n\nDavid Chen: Great, thank you Lisa. Much easier than I expected.\n\nAgent Lisa Park: Happy to help! Have a great day." + }, + { "type": "sentiment", "vendor": "keyword-heuristic", "body": "{\"overall\": 0.74, \"positive\": 4, \"negative\": 1}", "encoding": "json" }, + { "type": "topics", "vendor": "keyword-heuristic", "body": "[\"Technical Support\", \"Remote Work\"]", "encoding": "json" }, + { "type": "summary", "vendor": "siprec-adapter", "body": "VPN disconnection issue resolved by updating client from v4.2.0 to v4.2.1. 
Known keep-alive packet bug fixed.", "encoding": "none" } + ] +} diff --git a/hackathon/dashboard/samples/03-siprec-followup-positive.json b/hackathon/dashboard/samples/03-siprec-followup-positive.json new file mode 100644 index 0000000..94bf896 --- /dev/null +++ b/hackathon/dashboard/samples/03-siprec-followup-positive.json @@ -0,0 +1,21 @@ +{ + "vcon": "0.3.0", + "created_at": "2026-03-06T14:45:00Z", + "subject": "Sarah Johnson ↔ Agent Mike Rivera — Refund Follow-up", + "parties": [ + { "tel": "+15551234567", "name": "Sarah Johnson", "meta": { "role": "customer" } }, + { "tel": "+15559876001", "name": "Agent Mike Rivera", "meta": { "role": "agent" } } + ], + "dialog": [ + { "type": "text", "start": "2026-03-06T14:45:00Z", "duration": 198, "parties": [0, 1], "mediatype": "text/plain" } + ], + "analysis": [ + { + "type": "transcript", "vendor": "siprec-adapter", "encoding": "none", "dialog": 0, + "body": "Agent Mike Rivera: TechCorp support, Mike speaking.\n\nSarah Johnson: Hi Mike, it's Sarah Johnson again. I called earlier about the duplicate charge? I just wanted to confirm — I got an email saying the refund is being processed.\n\nAgent Mike Rivera: Hi Sarah! Yes, I can confirm the refund of $49.99 was initiated this morning. It typically takes 3 to 5 business days to appear on your statement. The $10 credit has already been applied to your account.\n\nSarah Johnson: Perfect. And I noticed you also fixed something with my billing cycle date?\n\nAgent Mike Rivera: Yes, I moved your billing date from the 15th to the 1st of each month to avoid the batch processing window that caused the original error. You won't be charged until April 1st.\n\nSarah Johnson: That's great. I really appreciate the proactive fix. Thank you, Mike.\n\nAgent Mike Rivera: My pleasure, Sarah. We value your loyalty. Have a wonderful afternoon!"
+ }, + { "type": "sentiment", "vendor": "keyword-heuristic", "body": "{\"overall\": 0.86, \"positive\": 5, \"negative\": 0}", "encoding": "json" }, + { "type": "topics", "vendor": "keyword-heuristic", "body": "[\"Billing\"]", "encoding": "json" }, + { "type": "summary", "vendor": "siprec-adapter", "body": "Follow-up call confirming refund processing and billing cycle adjustment. Customer satisfied with resolution.", "encoding": "none" } + ] +} diff --git a/hackathon/dashboard/samples/04-teams-escalation-review.json b/hackathon/dashboard/samples/04-teams-escalation-review.json new file mode 100644 index 0000000..ca01d97 --- /dev/null +++ b/hackathon/dashboard/samples/04-teams-escalation-review.json @@ -0,0 +1,27 @@ +{ + "vcon": "0.3.0", + "created_at": "2026-03-06T15:00:00Z", + "subject": "Teams Meeting: Q1 Customer Escalation Review — Sarah Johnson Account", + "parties": [ + { "mailto": "rachel.kim@techcorp.com", "name": "Rachel Kim", "uuid": "usr-teams-001" }, + { "mailto": "tom.bradley@techcorp.com", "name": "Tom Bradley", "uuid": "usr-teams-002" }, + { "mailto": "sarah.johnson@email.com", "name": "Sarah Johnson", "uuid": "usr-teams-003" } + ], + "dialog": [ + { + "type": "recording", "start": "2026-03-06T14:00:00Z", "duration": 1710, "parties": [0, 1, 2], + "mediatype": "audio/wav", + "url": "https://teams-recordings.blob.core.windows.net/recordings/call-e3f3a7d1.wav" + } + ], + "analysis": [ + { + "type": "transcript", "vendor": "teams-adapter", "encoding": "none", "dialog": 0, + "body": "Rachel Kim: Let's start with the Sarah Johnson escalation. Tom, can you walk us through the timeline?\n\nTom Bradley: Sure. Sarah first contacted us on March 3rd about a duplicate billing charge. Agent Mike Rivera handled the initial call and processed a refund, but the refund didn't go through because of a system flag on her account.\n\nSarah Johnson: Right, and then I called back two days later and was told there was no record of the refund being initiated.
That's when I asked to speak with a manager.\n\nRachel Kim: I see. Tom, what happened with the system flag?\n\nTom Bradley: It turns out our fraud detection system flagged the refund because it exceeded the auto-approval threshold. It needed manual approval from finance, but that step was never communicated to the front-line agents.\n\nRachel Kim: That's a process gap we need to fix. Sarah, I want to sincerely apologize for the runaround. We're going to process your refund today with priority handling, and I'm adding a $25 credit to your account.\n\nSarah Johnson: I appreciate that, Rachel. I've been a customer for three years and honestly this experience made me consider switching providers.\n\nRachel Kim: I completely understand. Tom, I want you to draft a process update memo — any refund flagged by fraud detection should generate an automatic notification to the original agent and the customer within 24 hours.\n\nTom Bradley: Got it. I'll have that drafted by end of day.\n\nRachel Kim: Sarah, is there anything else we can do for you?\n\nSarah Johnson: No, I think that covers it. Thank you for taking this seriously.\n\nRachel Kim: Absolutely. We'll follow up via email once the refund is confirmed. Thank you everyone." + }, + { "type": "summary", "vendor": "teams-adapter", "body": "Escalation review: Sarah Johnson's unprocessed refund caused by fraud detection flag blocking auto-approval. Process gap identified.
Resolution: priority refund + $25 credit, process update memo to be drafted.", "encoding": "none" }, + { "type": "sentiment", "vendor": "keyword-heuristic", "body": "{\"overall\": 0.52, \"positive\": 4, \"negative\": 4}", "encoding": "json" }, + { "type": "topics", "vendor": "keyword-heuristic", "body": "[\"Billing\", \"Escalation\", \"Security\"]", "encoding": "json" }, + { "type": "source-metadata", "vendor": "teams-adapter", "body": "{\"source\": \"microsoft-teams\", \"callType\": \"groupCall\", \"modalities\": [\"audio\"], \"organizerName\": \"Rachel Kim\"}", "encoding": "json" } + ] +} diff --git a/hackathon/dashboard/samples/05-teams-vpn-followup.json b/hackathon/dashboard/samples/05-teams-vpn-followup.json new file mode 100644 index 0000000..dd1c2c2 --- /dev/null +++ b/hackathon/dashboard/samples/05-teams-vpn-followup.json @@ -0,0 +1,22 @@ +{ + "vcon": "0.3.0", + "created_at": "2026-03-06T16:30:00Z", + "subject": "Teams Call: Lisa Park ↔ David Chen — VPN Follow-up", + "parties": [ + { "mailto": "lisa.park@techcorp.com", "name": "Agent Lisa Park", "uuid": "usr-teams-004" }, + { "mailto": "david.chen@gmail.com", "name": "David Chen", "uuid": "usr-teams-005" } + ], + "dialog": [ + { "type": "recording", "start": "2026-03-06T16:15:00Z", "duration": 1638, "parties": [0, 1], "mediatype": "audio/wav", "url": "https://teams-recordings.blob.core.windows.net/recordings/call-b8c2d4e6.wav" } + ], + "analysis": [ + { + "type": "transcript", "vendor": "teams-adapter", "encoding": "none", "dialog": 0, + "body": "Agent Lisa Park: Hi David, thanks for joining the Teams call. I wanted to follow up on the VPN fix from yesterday. How has connectivity been?\n\nDavid Chen: Hi Lisa! It's been rock solid since the update. No disconnections at all.\n\nAgent Lisa Park: That's great to hear. I also wanted to let you know that our network team pushed an additional server-side optimization overnight.
You might notice slightly faster connection times.\n\nDavid Chen: Actually, I did notice it connected faster this morning. Good to know that was intentional.\n\nAgent Lisa Park: One more thing — I noticed your account is still on the standard VPN tier. Given that you're fully remote, you might benefit from our priority routing tier. It gives you dedicated bandwidth during peak hours.\n\nDavid Chen: What does that cost?\n\nAgent Lisa Park: It's included in the enterprise plan your company is on. I just need to flip a switch on your profile. Want me to enable it?\n\nDavid Chen: Absolutely, yes please.\n\nAgent Lisa Park: Done. You'll need to reconnect to pick up the new routing. Is there anything else I can help with?\n\nDavid Chen: No, this has been excellent service. Really appreciate the proactive follow-up.\n\nAgent Lisa Park: Happy to help, David. Have a great rest of your day!" + }, + { "type": "summary", "vendor": "teams-adapter", "body": "Proactive follow-up on VPN fix. Connectivity confirmed stable. Upgraded customer to priority routing tier (included in enterprise plan).
Customer very satisfied.", "encoding": "none" }, + { "type": "sentiment", "vendor": "keyword-heuristic", "body": "{\"overall\": 0.88, \"positive\": 5, \"negative\": 0}", "encoding": "json" }, + { "type": "topics", "vendor": "keyword-heuristic", "body": "[\"Technical Support\", \"Remote Work\", \"Upgrade\"]", "encoding": "json" }, + { "type": "source-metadata", "vendor": "teams-adapter", "body": "{\"source\": \"microsoft-teams\", \"callType\": \"peerToPeer\", \"modalities\": [\"audio\"], \"organizerName\": \"Agent Lisa Park\"}", "encoding": "json" } + ] +} diff --git a/hackathon/dashboard/samples/06-siprec-cancellation-save.json b/hackathon/dashboard/samples/06-siprec-cancellation-save.json new file mode 100644 index 0000000..ed854e1 --- /dev/null +++ b/hackathon/dashboard/samples/06-siprec-cancellation-save.json @@ -0,0 +1,21 @@ +{ + "vcon": "0.3.0", + "created_at": "2026-03-07T08:30:00Z", + "subject": "Maria Gonzalez ↔ Agent Mike Rivera — Account Cancellation Request", + "parties": [ + { "tel": "+15553456789", "name": "Maria Gonzalez", "meta": { "role": "customer" } }, + { "tel": "+15559876001", "name": "Agent Mike Rivera", "meta": { "role": "agent" } } + ], + "dialog": [ + { "type": "text", "start": "2026-03-07T08:30:00Z", "duration": 425, "parties": [0, 1], "mediatype": "text/plain" } + ], + "analysis": [ + { + "type": "transcript", "vendor": "siprec-adapter", "encoding": "none", "dialog": 0, + "body": "Agent Mike Rivera: TechCorp support, this is Mike. How can I help you?\n\nMaria Gonzalez: Hi, I'd like to cancel my account. I'm switching to a competitor.\n\nAgent Mike Rivera: I'm sorry to hear that, Maria. May I ask what's prompting the switch?\n\nMaria Gonzalez: Honestly, the pricing. Your competitor is offering the same features for $30 a month instead of $50.\n\nAgent Mike Rivera: I understand. Price is an important factor.
Before you go, I want to make sure you're aware that we recently launched a loyalty discount for customers who've been with us over a year. You'd qualify for 25% off, bringing your monthly cost to $37.50.\n\nMaria Gonzalez: I didn't know about that. That's closer, but still more expensive.\n\nAgent Mike Rivera: I can also offer you a one-time additional discount of $5 per month for the next 6 months, bringing it to $32.50. Plus, our premium support and 99.9% uptime guarantee are included at no extra cost.\n\nMaria Gonzalez: $32.50 with the premium support... let me think about that. Actually, that's competitive enough. I'll stay.\n\nAgent Mike Rivera: Wonderful! I've applied both discounts to your account. They'll take effect on your next billing cycle. Is there anything else I can help with?\n\nMaria Gonzalez: No, that's great. Thank you for working with me on this.\n\nAgent Mike Rivera: My pleasure, Maria. We're glad to keep you as a customer!" + }, + { "type": "sentiment", "vendor": "keyword-heuristic", "body": "{\"overall\": 0.68, \"positive\": 4, \"negative\": 1}", "encoding": "json" }, + { "type": "topics", "vendor": "keyword-heuristic", "body": "[\"Cancellation\", \"Billing\", \"Upgrade\"]", "encoding": "json" }, + { "type": "summary", "vendor": "siprec-adapter", "body": "Customer requested cancellation due to competitor pricing ($30 vs $50). Retention offer: 25% loyalty discount + $5/mo for 6 months = $32.50/mo. 
Customer retained.", "encoding": "none" } + ] +} diff --git a/hackathon/dashboard/samples/07-teams-weekly-standup.json b/hackathon/dashboard/samples/07-teams-weekly-standup.json new file mode 100644 index 0000000..eea63db --- /dev/null +++ b/hackathon/dashboard/samples/07-teams-weekly-standup.json @@ -0,0 +1,24 @@ +{ + "vcon": "0.3.0", + "created_at": "2026-03-07T09:00:00Z", + "subject": "Teams Meeting: Support Team Weekly Standup", + "parties": [ + { "mailto": "rachel.kim@techcorp.com", "name": "Rachel Kim", "uuid": "usr-teams-001" }, + { "mailto": "mike.rivera@techcorp.com", "name": "Agent Mike Rivera", "uuid": "usr-teams-006" }, + { "mailto": "lisa.park@techcorp.com", "name": "Agent Lisa Park", "uuid": "usr-teams-004" }, + { "mailto": "tom.bradley@techcorp.com", "name": "Tom Bradley", "uuid": "usr-teams-002" } + ], + "dialog": [ + { "type": "recording", "start": "2026-03-07T09:00:00Z", "duration": 1800, "parties": [0, 1, 2, 3], "mediatype": "audio/wav", "url": "https://teams-recordings.blob.core.windows.net/recordings/standup-20260307.wav" } + ], + "analysis": [ + { + "type": "transcript", "vendor": "teams-adapter", "encoding": "none", "dialog": 0, + "body": "Rachel Kim: Good morning everyone. Let's do our weekly standup. Mike, you're up first.\n\nAgent Mike Rivera: Morning. This week I handled 47 calls. The biggest issue was the billing batch error from February — I had six customers call about duplicate charges. All refunds are processed now. I also saved a cancellation yesterday with the loyalty discount.\n\nRachel Kim: Great work on the retention. Lisa?\n\nAgent Lisa Park: I had 38 calls, mostly technical support. The VPN 4.2.0 bug drove a lot of volume — about 15 calls were related to that. Since the 4.2.1 patch went out, those have dropped to zero. I also did some proactive follow-ups which got very positive feedback.\n\nRachel Kim: Love the proactive approach. Tom, what's the data telling us?\n\nTom Bradley: Overall metrics look good.
Average handle time is down 12% week over week. Customer satisfaction score is at 4.3 out of 5. The fraud detection process memo is drafted and going to legal review today. One concern — we're seeing an uptick in cancellation inquiries. Competitor X launched a price cut last week.\n\nRachel Kim: That's important. Mike, you mentioned the loyalty discount worked. Let's formalize that as a standard retention offer. Tom, can you pull the data on how many customers would qualify?\n\nTom Bradley: Sure, I'll have that by end of day.\n\nRachel Kim: Perfect. Any blockers anyone?\n\nAgent Lisa Park: We could use better documentation on the new API integration. I'm getting questions I can't answer.\n\nRachel Kim: Noted. I'll follow up with engineering. Alright, great standup everyone. Let's have a strong week." + }, + { "type": "summary", "vendor": "teams-adapter", "body": "Weekly support standup. Mike: 47 calls, billing batch error resolved, cancellation saved. Lisa: 38 calls, VPN bug resolved with 4.2.1 patch, proactive follow-ups. Tom: AHT down 12%, CSAT 4.3/5, fraud detection memo in legal review, competitor price cut driving cancellation uptick.
Action items: formalize loyalty discount as retention offer, pull qualification data, improve API docs.", "encoding": "none" }, + { "type": "sentiment", "vendor": "keyword-heuristic", "body": "{\"overall\": 0.72, \"positive\": 5, \"negative\": 1}", "encoding": "json" }, + { "type": "topics", "vendor": "keyword-heuristic", "body": "[\"Billing\", \"Technical Support\", \"Cancellation\"]", "encoding": "json" }, + { "type": "source-metadata", "vendor": "teams-adapter", "body": "{\"source\": \"microsoft-teams\", \"callType\": \"groupCall\", \"modalities\": [\"audio\"], \"organizerName\": \"Rachel Kim\"}", "encoding": "json" } + ] +} diff --git a/hackathon/docker-compose.yml b/hackathon/docker-compose.yml new file mode 100644 index 0000000..051156c --- /dev/null +++ b/hackathon/docker-compose.yml @@ -0,0 +1,76 @@ +# vCon Intelligence Platform — Hackathon Infrastructure +# Run: docker compose -f hackathon/docker-compose.yml up -d + +version: '3.8' + +services: + # ============================================================================ + # MongoDB — Primary vCon document store + # ============================================================================ + mongodb: + image: mongo:7.0 + container_name: vcon-mongodb + ports: + - "27017:27017" + environment: + MONGO_INITDB_ROOT_USERNAME: admin + MONGO_INITDB_ROOT_PASSWORD: vcon2026 + MONGO_INITDB_DATABASE: vcon + volumes: + - mongodb_data:/data/db + restart: unless-stopped + + # ============================================================================ + # Neo4j — Graph database for relationship mapping (BASE 5) + # ============================================================================ + neo4j: + image: neo4j:5-community + container_name: vcon-neo4j + ports: + - "7474:7474" # HTTP browser + - "7687:7687" # Bolt protocol + environment: + NEO4J_AUTH: neo4j/vcon2026 + NEO4J_PLUGINS: '["apoc"]' + NEO4J_dbms_memory_heap_initial__size: 256m + NEO4J_dbms_memory_heap_max__size: 512m + volumes: + -
neo4j_data:/data + restart: unless-stopped + + # ============================================================================ + # Mosquitto — MQTT broker for UNS event bridge (BASE 2) + # ============================================================================ + mosquitto: + image: eclipse-mosquitto:2 + container_name: vcon-mosquitto + ports: + - "1883:1883" # MQTT + - "9001:9001" # WebSocket (for dashboard) + volumes: + - ./mosquitto/mosquitto.conf:/mosquitto/config/mosquitto.conf + - mosquitto_data:/mosquitto/data + - mosquitto_log:/mosquitto/log + restart: unless-stopped + + # ============================================================================ + # ChromaDB — Vector store for RAG embeddings (WOW 3) + # ============================================================================ + chromadb: + image: chromadb/chroma:latest + container_name: vcon-chromadb + ports: + - "8000:8000" + environment: + ANONYMIZED_TELEMETRY: "false" + ALLOW_RESET: "true" + volumes: + - chromadb_data:/chroma/chroma + restart: unless-stopped + +volumes: + mongodb_data: + neo4j_data: + mosquitto_data: + mosquitto_log: + chromadb_data: diff --git a/hackathon/docs/diagram-1-architecture.mermaid b/hackathon/docs/diagram-1-architecture.mermaid new file mode 100644 index 0000000..4645737 --- /dev/null +++ b/hackathon/docs/diagram-1-architecture.mermaid @@ -0,0 +1,77 @@ +%% ============================================================================ +%% DIAGRAM 1: Five-Layer Architecture +%% Render at https://mermaid.live +%% ============================================================================ + +graph TB + subgraph INGESTION["LAYER 1: INGESTION"] + direction LR + SIPREC["SIPREC Adapter\nFolder Drop + XML Parser"] + TEAMS["Teams Adapter\nMS Graph callRecord"] + WHATSAPP["WhatsApp Adapter\nChat Export Parser"] + AUDIO["Audio Upload\nIngest UI + Whisper"] + end + + subgraph ENRICHMENT["LAYER 2: SEMANTIC ENRICHMENT"] + direction LR + JSONLD["JSON-LD-ex Transformer\n@context,
Provenance,\nConfidence Algebra"] + AIANALYZE["AI Analyzer\nSentiment, Summary,\nTopics via Groq LLaMA"] + WHISPER["Whisper GPU\nRTX 4090\nAudio to Transcript"] + end + + subgraph PERSISTENCE["LAYER 3: PERSISTENCE"] + direction LR + MONGO[("MongoDB 7.0\nvCon Documents +\nJSON-LD Enrichment")] + NEO4J[("Neo4j 5\nParticipant Graphs\nTopic Networks")] + CHROMA[("ChromaDB\nVector Embeddings\nfor RAG")] + end + + subgraph INTELLIGENCE["LAYER 4: INTELLIGENCE"] + direction LR + RAG["RAG Engine\nContext Retrieval +\nLLM Q&A with Citations"] + MQTT["MQTT / UNS Bridge\nReal-time Event\nStreaming"] + MCP["MCP Extensions\nPlugin Tools for\nAgent Interaction"] + end + + subgraph PRESENTATION["LAYER 5: PRESENTATION"] + direction LR + DASH["Analytics Dashboard\nKPIs, Sentiment, Live Feed"] + GRAPH["Graph Visualization\nNeo4j d3-force Network"] + CHAT["RAG Chat Panel\nNatural Language Q&A"] + INSPECTOR["JSON-LD Inspector\nSemantic Enrichment Tree"] + end + + SIPREC --> ENRICHMENT + TEAMS --> ENRICHMENT + WHATSAPP --> ENRICHMENT + AUDIO --> ENRICHMENT + + ENRICHMENT --> PERSISTENCE + + PERSISTENCE --> INTELLIGENCE + + INTELLIGENCE --> PRESENTATION + + style INGESTION fill:#0d2137,stroke:#00b4d8,stroke-width:2px,color:#fff + style ENRICHMENT fill:#0d2137,stroke:#8e44ad,stroke-width:2px,color:#fff + style PERSISTENCE fill:#0d2137,stroke:#2e75b6,stroke-width:2px,color:#fff + style INTELLIGENCE fill:#0d2137,stroke:#27ae60,stroke-width:2px,color:#fff + style PRESENTATION fill:#0d2137,stroke:#f39c12,stroke-width:2px,color:#fff + + style SIPREC fill:#1a3a5c,stroke:#00b4d8,color:#fff + style TEAMS fill:#1a3a5c,stroke:#00b4d8,color:#fff + style WHATSAPP fill:#1a3a5c,stroke:#00b4d8,color:#fff + style AUDIO fill:#1a3a5c,stroke:#00b4d8,color:#fff + style JSONLD fill:#2a1a3c,stroke:#8e44ad,color:#fff + style AIANALYZE fill:#2a1a3c,stroke:#8e44ad,color:#fff + style WHISPER fill:#2a1a3c,stroke:#8e44ad,color:#fff + style MONGO fill:#1a2a3c,stroke:#2e75b6,color:#fff + style NEO4J 
fill:#1a2a3c,stroke:#2e75b6,color:#fff + style CHROMA fill:#1a2a3c,stroke:#2e75b6,color:#fff + style RAG fill:#1a3a2c,stroke:#27ae60,color:#fff + style MQTT fill:#1a3a2c,stroke:#27ae60,color:#fff + style MCP fill:#1a3a2c,stroke:#27ae60,color:#fff + style DASH fill:#2a2a1c,stroke:#f39c12,color:#fff + style GRAPH fill:#2a2a1c,stroke:#f39c12,color:#fff + style CHAT fill:#2a2a1c,stroke:#f39c12,color:#fff + style INSPECTOR fill:#2a2a1c,stroke:#f39c12,color:#fff diff --git a/hackathon/docs/diagram-2-pipeline-flow.mermaid b/hackathon/docs/diagram-2-pipeline-flow.mermaid new file mode 100644 index 0000000..4cace73 --- /dev/null +++ b/hackathon/docs/diagram-2-pipeline-flow.mermaid @@ -0,0 +1,55 @@ +%% ============================================================================ +%% DIAGRAM 2: Plugin Pipeline Flow (afterCreate Hook Chain) +%% Shows what happens when a vCon is created +%% Render at https://mermaid.live +%% ============================================================================ + +sequenceDiagram + participant U as Source
<br/>SIPREC / Teams /<br/>
WhatsApp / UI + participant API as REST API<br/>
POST /api/v1/vcons + participant VS as VConService<br/>
create() + participant PM as PluginManager<br/>
executeHook + participant MQTT as MQTT Bridge<br/>
Plugin + participant NEO as Neo4j Consumer<br/>
Plugin + participant JLD as JSON-LD-ex<br/>
Plugin + participant AIA as AI Analyzer<br/>
Plugin + participant MQ as Mosquitto<br/>
Broker + participant N4 as Neo4j<br/>
Database + participant MDB as MongoDB + participant GRQ as Groq API<br/>
LLaMA 3.1-8B + U->>API: POST vCon JSON + API->>VS: create(vconData) + VS->>VS: normalizeVCon() + VS->>PM: beforeCreate hooks + VS->>MDB: insertOne(vcon) + MDB-->>VS: { uuid, id } + + Note over PM: afterCreate hooks fire<br/>
in plugin loading order + VS->>PM: afterCreate(vcon) + + PM->>MQTT: afterCreate() + MQTT->>MQ: publish vcon.ingested + MQ-->>U: WebSocket event + + PM->>NEO: afterCreate() + NEO->>N4: MERGE Person nodes + NEO->>N4: MERGE Conversation + NEO->>N4: MERGE CONTACTED edges + NEO->>N4: MERGE Topics + + PM->>JLD: afterCreate() + JLD->>JLD: enrichVCon() + Note over JLD: @context + provenance<br/>
+ confidence algebra
+ integrity signing + JLD->>MDB: $set jsonld_enrichment + JLD->>MQ: publish vcon.enriched + + PM->>AIA: afterCreate() + AIA->>AIA: extractTranscript() + AIA->>GRQ: POST /analyze + GRQ-->>AIA: { sentiment, summary, topics } + AIA->>MDB: $push analysis[] + + VS-->>API: { uuid, success } + API-->>U: 200 OK diff --git a/hackathon/docs/diagram-3-neo4j-schema.mermaid b/hackathon/docs/diagram-3-neo4j-schema.mermaid new file mode 100644 index 0000000..5604df8 --- /dev/null +++ b/hackathon/docs/diagram-3-neo4j-schema.mermaid @@ -0,0 +1,35 @@ +%% ============================================================================ +%% DIAGRAM 3: Neo4j Graph Schema +%% Render at https://mermaid.live +%% ============================================================================ + +graph LR + subgraph Legend + direction TB + L1["๐ŸŸฆ Node"] + L2["โžก๏ธ Relationship"] + end + + P1(["๐Ÿ‘ค Person\n---\nname, email,\nphone, identifier,\nrole"]) + P2(["๐Ÿ‘ค Person\n---\nname, email,\nphone, identifier,\nrole"]) + + C(["๐Ÿ’ฌ Conversation\n---\nuuid, subject,\nstart, duration,\nsentiment,\nparty_count"]) + + T1(["๐Ÿท๏ธ Topic\n---\nname"]) + T2(["๐Ÿท๏ธ Topic\n---\nname"]) + + A(["๐Ÿ“Š Analysis\n---\ntype, vendor, id"]) + + P1 -- "PARTICIPATED_IN" --> C + P2 -- "PARTICIPATED_IN" --> C + P1 -- "CONTACTED\n(canonical order)" --> P2 + C -- "HAS_TOPIC" --> T1 + C -- "HAS_TOPIC" --> T2 + C -- "ANALYZED_BY" --> A + + style P1 fill:#00b4d8,stroke:#0077b6,color:#fff,stroke-width:2px + style P2 fill:#8e44ad,stroke:#6c3483,color:#fff,stroke-width:2px + style C fill:#2e75b6,stroke:#1a5276,color:#fff,stroke-width:2px + style T1 fill:#f39c12,stroke:#d68910,color:#fff,stroke-width:2px + style T2 fill:#f39c12,stroke:#d68910,color:#fff,stroke-width:2px + style A fill:#27ae60,stroke:#1e8449,color:#fff,stroke-width:2px diff --git a/hackathon/docs/diagram-4-infrastructure.mermaid b/hackathon/docs/diagram-4-infrastructure.mermaid new file mode 100644 index 0000000..dffec47 --- /dev/null +++ 
b/hackathon/docs/diagram-4-infrastructure.mermaid @@ -0,0 +1,51 @@ +%% ============================================================================ +%% DIAGRAM 4: Infrastructure / Deployment View +%% Render at https://mermaid.live +%% ============================================================================ + +graph TB + subgraph CLIENT["Browser"] + DASH["Dashboard\nindex.html\nReact + Tailwind"] + INGEST["Ingest Page\ningest.html\n5 Ingestion Modes"] + end + + subgraph NODE["Node.js MCP Server :3000"] + REST["REST API\nKoa /api/v1/"] + VCONS["VConService"] + PLUGINS["Plugin Manager\n7 Plugins"] + end + + subgraph PYTHON["Python Sidecars"] + WHISP["Whisper :8100\nmedium model\nRTX 4090 GPU"] + LLAMA["LLaMA :8200\nGroq API\nllama-3.1-8b"] + end + + subgraph DOCKER["Docker Containers"] + MONGO[("MongoDB :27017")] + NEO[("Neo4j :7474 :7687")] + MOSQ["Mosquitto\n:1883 TCP\n:9001 WebSocket"] + CHROMADB[("ChromaDB :8000")] + end + + DASH <-->|"fetch + WebSocket"| REST + DASH <-->|"MQTT.js ws://9001"| MOSQ + DASH <-->|"HTTP API"| NEO + INGEST -->|"POST /api/v1/vcons"| REST + INGEST <-->|"POST /transcribe"| WHISP + + REST --> VCONS + VCONS --> PLUGINS + + PLUGINS -->|"afterCreate"| MONGO + PLUGINS -->|"afterCreate"| NEO + PLUGINS -->|"afterCreate"| MOSQ + PLUGINS -->|"POST /analyze"| LLAMA + PLUGINS -->|"POST /transcribe"| WHISP + + LLAMA -.->|"Groq Cloud"| GROQ["Groq API\nLLaMA 3.1-8B-Instant"] + + style CLIENT fill:#2a2a1c,stroke:#f39c12,stroke-width:2px,color:#fff + style NODE fill:#1a3a2c,stroke:#27ae60,stroke-width:2px,color:#fff + style PYTHON fill:#2a1a3c,stroke:#8e44ad,stroke-width:2px,color:#fff + style DOCKER fill:#0d2137,stroke:#2e75b6,stroke-width:2px,color:#fff + style GROQ fill:#1a1a1a,stroke:#e74c3c,stroke-width:2px,color:#fff diff --git a/hackathon/mosquitto/mosquitto.conf b/hackathon/mosquitto/mosquitto.conf new file mode 100644 index 0000000..153d28f --- /dev/null +++ b/hackathon/mosquitto/mosquitto.conf @@ -0,0 +1,21 @@ +# Mosquitto MQTT Broker 
Configuration +# vCon Intelligence Platform — Hackathon + +# MQTT listener (standard) +listener 1883 +protocol mqtt + +# WebSocket listener (for browser dashboard) +listener 9001 +protocol websockets + +# Allow anonymous for hackathon (lock this down in production) +allow_anonymous true + +# Persistence +persistence true +persistence_location /mosquitto/data/ + +# Logging +log_dest file /mosquitto/log/mosquitto.log +log_type all diff --git a/hackathon/plugins/ai-analyzer/index.ts b/hackathon/plugins/ai-analyzer/index.ts new file mode 100644 index 0000000..a293161 --- /dev/null +++ b/hackathon/plugins/ai-analyzer/index.ts @@ -0,0 +1,451 @@ +/** + * AI Analyzer Plugin + * + * Calls the LLaMA sidecar (Groq) to add AI-powered sentiment analysis, + * summaries, and topic extraction to every vCon on creation. + * + * Flow: + * afterCreate hook → extract transcript text → POST /analyze to sidecar + * → store sentiment/summary/topics as analysis[] entries in MongoDB + * + * The sidecar URL defaults to http://localhost:8200 (LLAMA_SIDECAR_URL env). + * If the sidecar is unavailable, the plugin degrades gracefully (logs a warning).
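 *
 * The "extract transcript text" step of this flow can be sketched as follows.
 * This is a minimal sketch using hypothetical lightweight shapes; the real
 * plugin uses the project's `VCon`/`Analysis` types with the same precedence
 * (transcript analysis entries first, text dialog bodies as fallback):

```typescript
// Hypothetical lightweight shapes standing in for the project's vCon types.
interface AnalysisEntry { type: string; body?: unknown }
interface DialogEntry { type: string; body?: unknown }
interface VConLike { analysis?: AnalysisEntry[]; dialog?: DialogEntry[] }

function extractTranscript(vcon: VConLike): string | null {
  const asText = (b: unknown) => (typeof b === "string" ? b : JSON.stringify(b));

  // Prefer analysis entries of type 'transcript'...
  const transcripts = (vcon.analysis ?? [])
    .filter((a) => a.type === "transcript" && a.body != null)
    .map((a) => asText(a.body));
  if (transcripts.length > 0) return transcripts.join("\n\n");

  // ...falling back to text dialog bodies.
  const texts = (vcon.dialog ?? [])
    .filter((d) => d.type === "text" && d.body != null)
    .map((d) => asText(d.body));
  return texts.length > 0 ? texts.join("\n\n") : null;
}
```

 * Returning null (rather than throwing) lets the afterCreate hook skip vCons
 * with no analyzable content without failing the create pipeline.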
+ */ + +import { VConPlugin, RequestContext } from '../../src/hooks/plugin-interface.js'; +import { VCon, Analysis } from '../../src/types/vcon.js'; +import { Tool } from '@modelcontextprotocol/sdk/types.js'; + +// ============================================================================ +// Types +// ============================================================================ + +interface AnalyzeResponse { + sentiment: number; + sentiment_label: string; + summary: string; + topics: string[]; + key_phrases: string[]; + processing_time_seconds: number; +} + +// ============================================================================ +// AI Analyzer Plugin +// ============================================================================ + +export class AiAnalyzerPlugin implements VConPlugin { + name = 'ai-analyzer'; + version = '1.0.0'; + + private llamaUrl: string = ''; + private verbose: boolean = false; + private queries: any = null; + private analysisCount: number = 0; + private failCount: number = 0; + private online: boolean = false; + + // ========== Lifecycle ========== + + async initialize(config?: any): Promise { + this.llamaUrl = config?.llamaUrl + || process.env.LLAMA_SIDECAR_URL + || 'http://localhost:8200'; + this.verbose = config?.verbose + || process.env.AI_ANALYZER_VERBOSE === 'true' + || false; + this.queries = config?.queries || null; + + // Check sidecar health on startup + await this.checkHealth(); + + this.log('info', `LLaMA sidecar: ${this.llamaUrl} (${this.online ? 
'ONLINE' : 'OFFLINE'})`); + } + + async shutdown(): Promise { + this.log('info', `AI Analyzer shut down (${this.analysisCount} analyzed, ${this.failCount} failed)`); + } + + // ========== Hooks ========== + + async afterCreate(vcon: VCon, context: RequestContext): Promise { + // Extract transcript text from the vCon + const transcript = this.extractTranscript(vcon); + + if (!transcript) { + this.log('debug', `No transcript found for ${vcon.uuid} โ€” skipping analysis`); + return; + } + + // Call LLaMA sidecar + try { + const analysis = await this.analyze(transcript, vcon.uuid); + if (!analysis) return; + + // Store analysis results back into the vCon in MongoDB + await this.storeAnalysis(vcon.uuid, analysis); + this.analysisCount++; + + this.log('info', `Analyzed ${vcon.uuid}: sentiment=${analysis.sentiment.toFixed(2)} (${analysis.sentiment_label}), ${analysis.topics.length} topics, ${analysis.processing_time_seconds}s`); + } catch (err: any) { + this.failCount++; + this.log('warn', `Analysis failed for ${vcon.uuid}: ${err.message}`); + } + } + + // ========== MCP Tools ========== + + registerTools(): Tool[] { + return [ + { + name: 'ai_analyze_vcon', + description: 'Run AI analysis (sentiment, summary, topics) on a specific vCon by UUID. Useful for re-analyzing or analyzing vCons that were created before the AI sidecar was available.', + inputSchema: { + type: 'object' as const, + properties: { + uuid: { + type: 'string', + description: 'vCon UUID to analyze', + }, + }, + required: ['uuid'], + }, + }, + { + name: 'ai_analyzer_status', + description: 'Get AI analyzer status: sidecar connection, analysis count, model info.', + inputSchema: { + type: 'object' as const, + properties: {}, + }, + }, + { + name: 'ai_query', + description: 'Ask a natural language question about conversations. 
Provide context chunks from vCon transcripts for RAG-style Q&A.', + inputSchema: { + type: 'object' as const, + properties: { + question: { + type: 'string', + description: 'Natural language question about conversations', + }, + context_chunks: { + type: 'array', + description: 'Context chunks with text, vcon_uuid, and source fields', + items: { + type: 'object', + properties: { + text: { type: 'string' }, + vcon_uuid: { type: 'string' }, + source: { type: 'string' }, + }, + }, + }, + }, + required: ['question'], + }, + }, + ]; + } + + async handleToolCall(toolName: string, args: any, context: RequestContext): Promise { + if (toolName === 'ai_analyzer_status') { + return this.getStatus(); + } + + if (toolName === 'ai_analyze_vcon') { + return this.analyzeByUuid(args.uuid); + } + + if (toolName === 'ai_query') { + return this.queryRag(args.question, args.context_chunks); + } + + return null; + } + + // ========== Core Logic ========== + + /** + * Extract transcript text from vCon analysis entries. + * Looks for analysis entries of type 'transcript'. + * Falls back to dialog text content if no transcript analysis exists. + */ + private extractTranscript(vcon: VCon): string | null { + // Check analysis[] for transcripts + if (vcon.analysis && vcon.analysis.length > 0) { + const transcripts = vcon.analysis + .filter(a => a.type === 'transcript' && a.body) + .map(a => typeof a.body === 'string' ? a.body : JSON.stringify(a.body)); + + if (transcripts.length > 0) { + return transcripts.join('\n\n'); + } + } + + // Check dialog[] for text content + if (vcon.dialog && vcon.dialog.length > 0) { + const textDialogs = vcon.dialog + .filter(d => d.type === 'text' && d.body) + .map(d => typeof d.body === 'string' ? d.body : JSON.stringify(d.body)); + + if (textDialogs.length > 0) { + return textDialogs.join('\n\n'); + } + } + + return null; + } + + /** + * Call the LLaMA sidecar /analyze endpoint. 
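   *
   * Before trusting the sidecar's JSON, a runtime guard can be applied so a
   * malformed payload degrades to null instead of throwing downstream. A
   * sketch (field names follow the `AnalyzeResponse` interface above; it
   * checks only the subset of fields the plugin actually relies on):

```typescript
// Subset of the sidecar's AnalyzeResponse that the plugin depends on.
interface AnalyzeResponseCore {
  sentiment: number;
  sentiment_label: string;
  summary: string;
  topics: string[];
}

function isAnalyzeResponse(x: unknown): x is AnalyzeResponseCore {
  // Reject non-objects up front so property access below is safe.
  if (typeof x !== "object" || x === null) return false;
  const r = x as Record<string, unknown>;
  return (
    typeof r.sentiment === "number" &&
    typeof r.sentiment_label === "string" &&
    typeof r.summary === "string" &&
    Array.isArray(r.topics) &&
    r.topics.every((t) => typeof t === "string")
  );
}
```

   * A caller could wrap `await response.json()` with this guard and return
   * null on failure, matching the plugin's graceful-degradation policy.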
+ */ + private async analyze(text: string, vconUuid?: string): Promise { + try { + const response = await fetch(`${this.llamaUrl}/analyze`, { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify({ + text, + vcon_uuid: vconUuid, + }), + signal: AbortSignal.timeout(30000), // 30s timeout + }); + + if (!response.ok) { + this.log('warn', `LLaMA sidecar returned ${response.status}`); + return null; + } + + this.online = true; + return await response.json() as AnalyzeResponse; + } catch (err: any) { + this.online = false; + this.log('debug', `LLaMA sidecar unavailable: ${err.message}`); + return null; + } + } + + /** + * Store AI analysis results into the vCon document in MongoDB. + * Adds three analysis entries: sentiment, summary, topics. + */ + private async storeAnalysis(uuid: string, result: AnalyzeResponse): Promise { + if (!this.queries) return; + + const now = new Date().toISOString(); + + // Build analysis entries to append + const newAnalyses: Analysis[] = [ + { + type: 'sentiment', + vendor: 'groq', + product: 'llama-3.1-8b-instant', + body: { + score: result.sentiment, + label: result.sentiment_label, + }, + encoding: 'none', + }, + { + type: 'summary', + vendor: 'groq', + product: 'llama-3.1-8b-instant', + body: result.summary, + encoding: 'none', + }, + { + type: 'topic-extraction', + vendor: 'groq', + product: 'llama-3.1-8b-instant', + body: { + topics: result.topics, + key_phrases: result.key_phrases, + }, + encoding: 'none', + }, + ]; + + try { + const db = (this.queries as any).db; + if (db) { + // Append new analysis entries to existing analysis array + await db.collection('vcons').updateOne( + { uuid }, + { + $push: { + analysis: { $each: newAnalyses }, + }, + $set: { + ai_analyzed_at: now, + updated_at: now, + }, + } + ); + } + } catch (err: any) { + this.log('warn', `Could not store analysis for ${uuid}: ${err.message}`); + } + } + + // ========== Tool Handlers ========== + + private async analyzeByUuid(uuid: 
string): Promise { + if (!this.queries) return { error: 'Database not available' }; + + try { + const vcon = await this.queries.getVCon(uuid); + const transcript = this.extractTranscript(vcon); + + if (!transcript) { + return { error: 'No transcript found in this vCon' }; + } + + const result = await this.analyze(transcript, uuid); + if (!result) { + return { error: 'LLaMA sidecar unavailable' }; + } + + await this.storeAnalysis(uuid, result); + this.analysisCount++; + + return { + success: true, + uuid, + sentiment: result.sentiment, + sentiment_label: result.sentiment_label, + summary: result.summary, + topics: result.topics, + key_phrases: result.key_phrases, + processing_time_seconds: result.processing_time_seconds, + }; + } catch (err: any) { + return { error: err.message }; + } + } + + private async queryRag(question: string, contextChunks?: any[]): Promise { + // If no context chunks provided, gather from all vCons + let chunks = contextChunks; + + if (!chunks || chunks.length === 0) { + chunks = await this.gatherContextFromAllVcons(question); + } + + if (!chunks || chunks.length === 0) { + return { error: 'No conversation context available for answering' }; + } + + try { + const response = await fetch(`${this.llamaUrl}/query`, { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify({ + question, + context_chunks: chunks, + }), + signal: AbortSignal.timeout(30000), + }); + + if (!response.ok) { + return { error: `LLaMA sidecar returned ${response.status}` }; + } + + return await response.json(); + } catch (err: any) { + return { error: `Query failed: ${err.message}` }; + } + } + + /** + * Gather transcript chunks from all vCons in MongoDB for RAG context. + * Simple approach: pull all vCons, extract transcripts, return as chunks. + * (A proper RAG system would use ChromaDB vector search โ€” that's the next step.) 
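   *
   * The chunk-assembly part of this step can be sketched in isolation
   * (hypothetical minimal shapes mirroring the fields the query below
   * projects: uuid, subject, parties, analysis):

```typescript
// Hypothetical shapes for vCon documents as projected from MongoDB.
interface PartyLike { name?: string }
interface VConDoc {
  uuid: string;
  subject?: string;
  parties?: PartyLike[];
  analysis?: { type: string; body?: unknown }[];
}
interface ContextChunk { text: string; vcon_uuid: string; source: string }

function buildChunks(vcons: VConDoc[]): ContextChunk[] {
  const chunks: ContextChunk[] = [];
  for (const v of vcons) {
    const transcripts = (v.analysis ?? [])
      .filter((a) => a.type === "transcript" && a.body != null)
      .map((a) => (typeof a.body === "string" ? a.body : JSON.stringify(a.body)));
    if (transcripts.length === 0) continue; // nothing citable in this vCon

    // Label each chunk with subject + participants so answers can cite sources.
    const names = (v.parties ?? []).map((p) => p.name).filter(Boolean).join(", ");
    chunks.push({
      text: `Subject: ${v.subject ?? "Unknown"}\nParticipants: ${names}\n\n${transcripts.join("\n")}`,
      vcon_uuid: v.uuid,
      source: v.subject ?? "conversation",
    });
  }
  return chunks;
}
```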
+ */ + private async gatherContextFromAllVcons(question: string): Promise { + if (!this.queries) return []; + + try { + const db = (this.queries as any).db; + if (!db) return []; + + // Get all vCons that have transcripts + const vcons = await db.collection('vcons') + .find({ 'analysis.type': 'transcript' }) + .project({ uuid: 1, subject: 1, analysis: 1, parties: 1 }) + .limit(20) + .toArray(); + + const chunks: any[] = []; + for (const vcon of vcons) { + const transcripts = (vcon.analysis || []) + .filter((a: any) => a.type === 'transcript' && a.body) + .map((a: any) => typeof a.body === 'string' ? a.body : JSON.stringify(a.body)); + + if (transcripts.length > 0) { + const partyNames = (vcon.parties || []) + .map((p: any) => p.name) + .filter(Boolean) + .join(', '); + + chunks.push({ + text: `Subject: ${vcon.subject || 'Unknown'}\nParticipants: ${partyNames}\n\n${transcripts.join('\n')}`, + vcon_uuid: vcon.uuid, + source: vcon.subject || 'conversation', + }); + } + } + + return chunks; + } catch (err: any) { + this.log('warn', `Failed to gather context: ${err.message}`); + return []; + } + } + + // ========== Health & Status ========== + + private async checkHealth(): Promise { + try { + const response = await fetch(`${this.llamaUrl}/health`, { + signal: AbortSignal.timeout(5000), + }); + if (response.ok) { + this.online = true; + } + } catch { + this.online = false; + } + } + + private async getStatus(): Promise { + await this.checkHealth(); + + let healthInfo = null; + if (this.online) { + try { + const resp = await fetch(`${this.llamaUrl}/health`); + healthInfo = await resp.json(); + } catch { /* ignore */ } + } + + return { + sidecar_url: this.llamaUrl, + online: this.online, + analysis_count: this.analysisCount, + fail_count: this.failCount, + sidecar_info: healthInfo, + }; + } + + // ========== Helpers ========== + + private log(level: string, message: string): void { + const prefix = '[ai-analyzer]'; + if (level === 'error') console.error(`${prefix} 
${message}`); + else if (level === 'warn') console.warn(`${prefix} ${message}`); + else if (level === 'debug' && this.verbose) console.log(`${prefix} ${message}`); + else if (level === 'info') console.log(`${prefix} ${message}`); + } +} + +export default AiAnalyzerPlugin; diff --git a/hackathon/plugins/jsonld-enrichment/index.ts b/hackathon/plugins/jsonld-enrichment/index.ts new file mode 100644 index 0000000..02d1761 --- /dev/null +++ b/hackathon/plugins/jsonld-enrichment/index.ts @@ -0,0 +1,907 @@ +/** + * JSON-LD-ex Enrichment Plugin + * + * WOW FACTOR 1: Deep Semantic vCon Interop + * + * Transforms every vCon into semantically rich JSON-LD using the full + * @jsonld-ex/core library. This is the killer differentiator โ€” no other + * hackathon team will have: + * + * 1. Full @context with schema.org + vCon namespace + jsonld-ex AI/ML terms + * 2. Per-value provenance annotations (@confidence, @source, @method, @extractedAt) + * 3. Cryptographic integrity signing (SHA-256/384/512) with verification + * 4. Subjective Logic confidence algebra (Opinion fusion, trust discount, decay) + * 5. Temporal validity windows (@validFrom / @validUntil) + * 6. Agentic AI capability advertisement (tool discovery via @context) + * 7. Shape-based validation for vCon structure + * + * The enriched JSON-LD document is stored alongside the original vCon in MongoDB + * as a `jsonld_enrichment` field, and an MQTT event is published. 
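 *
 * The integrity-signing idea (item 3 above) can be illustrated with a minimal
 * sketch: canonicalize the document with a key-sorted stringify (a stand-in
 * for `fast-json-stable-stringify`, which this plugin imports), hash it with
 * SHA-256, and verify by recomputing. The `sha256-<hex>` prefix format is an
 * assumption of this sketch, not a claim about @jsonld-ex/core's output:

```typescript
import { createHash } from "node:crypto";

// Deterministic serialization: object keys sorted so hashes are stable
// regardless of insertion order (stand-in for fast-json-stable-stringify).
function stableStringify(value: unknown): string {
  if (Array.isArray(value)) return `[${value.map(stableStringify).join(",")}]`;
  if (value !== null && typeof value === "object") {
    const entries = Object.keys(value as object)
      .sort()
      .map((k) => `${JSON.stringify(k)}:${stableStringify((value as Record<string, unknown>)[k])}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value);
}

function signDocument(doc: Record<string, unknown>): string {
  // Exclude any previous @integrity value so signing is idempotent.
  const { ["@integrity"]: _omit, ...rest } = doc;
  return `sha256-${createHash("sha256").update(stableStringify(rest)).digest("hex")}`;
}

function verifyDocument(doc: Record<string, unknown>): boolean {
  return doc["@integrity"] === signDocument(doc);
}
```

 * Signing must run after every other enrichment step, since any later
 * mutation of the document would invalidate the recorded hash.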
+ */ + +import { + JsonLdEx, + annotate, + getConfidence, + getProvenance, + aggregateConfidence, + computeIntegrity, + verifyIntegrity as verifySecurityIntegrity, + AI_ML_CONTEXT, + KEYWORD_CONFIDENCE, + KEYWORD_SOURCE, + KEYWORD_EXTRACTED_AT, + KEYWORD_METHOD, + KEYWORD_INTEGRITY, + KEYWORD_VALID_FROM, + KEYWORD_VALID_UNTIL, + KEYWORD_HUMAN_VERIFIED, + KEYWORD_ACTED_ON_BEHALF_OF, + KEYWORD_WAS_DERIVED_FROM, + type ProvenanceMetadata, + type AnnotatedValue, +} from '@jsonld-ex/core'; + +import { + Opinion, + cumulativeFuse, + trustDiscount, + robustFuse, +} from '@jsonld-ex/core'; + +import { decayOpinion } from '@jsonld-ex/core'; + +import { + combineOpinionsFromScalars, + propagateOpinionsFromScalars, +} from '@jsonld-ex/core'; + +import { createHash } from 'crypto'; +import stringify from 'fast-json-stable-stringify'; +import mqtt from 'mqtt'; +import { VConPlugin, RequestContext } from '../../src/hooks/plugin-interface.js'; +import { VCon, Party, Analysis } from '../../src/types/vcon.js'; +import { Tool } from '@modelcontextprotocol/sdk/types.js'; + +// ============================================================================ +// Constants +// ============================================================================ + +/** vCon JSON-LD-ex enrichment context โ€” combines schema.org, vCon ns, and jsonld-ex AI/ML */ +const VCON_JSONLD_EX_CONTEXT = { + "@context": [ + "https://schema.org", + { + // vCon namespace + "vcon": "https://vcon.dev/ns#", + "xsd": "http://www.w3.org/2001/XMLSchema#", + "prov": "http://www.w3.org/ns/prov#", + + // vCon core terms + "parties": "vcon:parties", + "dialog": "vcon:dialog", + "analysis": "vcon:analysis", + "attachments": "vcon:attachments", + "subject": "vcon:subject", + + // vCon analysis terms + "vendor": "vcon:vendor", + "product": "vcon:product", + + // jsonld-ex AI/ML extensions (from @jsonld-ex/core) + "@confidence": { "@id": "https://w3id.org/jsonld-ex/confidence", "@type": "xsd:float" }, + "@source": { "@id": 
"https://w3id.org/jsonld-ex/source", "@type": "@id" }, + "@extractedAt": { "@id": "https://w3id.org/jsonld-ex/extractedAt", "@type": "xsd:dateTime" }, + "@method": { "@id": "https://w3id.org/jsonld-ex/method", "@type": "xsd:string" }, + "@humanVerified": { "@id": "https://w3id.org/jsonld-ex/humanVerified", "@type": "xsd:boolean" }, + "@integrity": { "@id": "https://w3id.org/jsonld-ex/integrity", "@type": "xsd:string" }, + "@validFrom": { "@id": "https://w3id.org/jsonld-ex/validFrom", "@type": "xsd:dateTime" }, + "@validUntil": { "@id": "https://w3id.org/jsonld-ex/validUntil", "@type": "xsd:dateTime" }, + "@actedOnBehalfOf": { "@id": "https://w3id.org/jsonld-ex/actedOnBehalfOf", "@type": "@id" }, + "@wasDerivedFrom": { "@id": "https://w3id.org/jsonld-ex/wasDerivedFrom", "@type": "@id" }, + + // Subjective Logic opinion representation + "opinion": "https://w3id.org/jsonld-ex/opinion", + "belief": { "@id": "https://w3id.org/jsonld-ex/opinion/belief", "@type": "xsd:float" }, + "disbelief": { "@id": "https://w3id.org/jsonld-ex/opinion/disbelief", "@type": "xsd:float" }, + "uncertainty": { "@id": "https://w3id.org/jsonld-ex/opinion/uncertainty", "@type": "xsd:float" }, + "baseRate": { "@id": "https://w3id.org/jsonld-ex/opinion/baseRate", "@type": "xsd:float" }, + + // Agentic AI capability advertisement + "agentCapabilities": "https://w3id.org/jsonld-ex/agentCapabilities", + "toolDiscovery": "https://w3id.org/jsonld-ex/toolDiscovery", + "queryableVia": { "@id": "https://w3id.org/jsonld-ex/queryableVia", "@type": "@id" }, + "supportedOperations": "https://w3id.org/jsonld-ex/supportedOperations", + } + ] +}; + +// ============================================================================ +// Types +// ============================================================================ + +export interface JsonLdEnrichmentConfig { + /** Hash algorithm for integrity signing (default: sha256) */ + hashAlgorithm?: 'sha256' | 'sha384' | 'sha512'; + /** Default confidence for party 
identification (default: 0.85) */ + defaultPartyConfidence?: number; + /** Confidence half-life in hours for temporal decay (default: 720 = 30 days) */ + confidenceHalfLifeHours?: number; + /** MCP server URL for agent tool discovery */ + mcpServerUrl?: string; + /** Enable verbose logging */ + verbose?: boolean; +} + +export interface EnrichedVConJsonLd { + "@context": any; + "@type": string; + "@id": string; + "@integrity": string; + "@validFrom": string; + subject?: string; + parties: any[]; + dialog?: any[]; + analysis?: any[]; + agentCapabilities: any; + _enrichment: { + version: string; + enrichedAt: string; + enrichmentPipeline: string[]; + integrityAlgorithm: string; + confidenceModel: string; + }; + [key: string]: any; +} + +// ============================================================================ +// JSON-LD-ex Enrichment Plugin +// ============================================================================ + +export class JsonLdEnrichmentPlugin implements VConPlugin { + name = 'jsonld-enrichment'; + version = '1.0.0'; + + private processor: JsonLdEx; + private hashAlgorithm: 'sha256' | 'sha384' | 'sha512' = 'sha256'; + private defaultPartyConfidence: number = 0.85; + private confidenceHalfLifeHours: number = 720; + private mcpServerUrl: string = 'http://localhost:3000'; + private verbose: boolean = false; + private enrichedCount: number = 0; + private queries: any = null; // IVConQueries โ€” injected at init + private mqttClient: mqtt.MqttClient | null = null; + private mqttOrgId: string = 'hackathon'; + + constructor(private config?: JsonLdEnrichmentConfig) { + this.processor = new JsonLdEx({ + processExtensions: true, + validateVectors: true, + resourceLimits: { + maxContextDepth: 10, + maxGraphDepth: 100, + maxDocumentSize: 10 * 1024 * 1024, // 10MB + maxExpansionTime: 30000, + }, + }); + } + + // ========== Lifecycle ========== + + async initialize(config?: any): Promise { + const merged = { ...this.config, ...config }; + + this.hashAlgorithm 
= merged?.hashAlgorithm || 'sha256'; + this.defaultPartyConfidence = merged?.defaultPartyConfidence || 0.85; + this.confidenceHalfLifeHours = merged?.confidenceHalfLifeHours || 720; + this.mcpServerUrl = merged?.mcpServerUrl || process.env.MCP_SERVER_URL || 'http://localhost:3000'; + this.verbose = merged?.verbose || process.env.JSONLD_VERBOSE === 'true' || false; + this.queries = merged?.queries || null; + + // Connect to MQTT for publishing enrichment events + const mqttUrl = process.env.MQTT_BROKER_URL || 'mqtt://localhost:1883'; + this.mqttOrgId = process.env.MQTT_ORG_ID || 'hackathon'; + try { + this.mqttClient = mqtt.connect(mqttUrl, { clientId: `jsonld-enrichment-${Date.now()}` }); + this.mqttClient.on('error', (err) => this.log('debug', `MQTT error: ${err.message}`)); + } catch (err: any) { + this.log('debug', `MQTT connection failed: ${err.message}`); + } + + this.log('info', `JSON-LD-ex enrichment initialized (${this.hashAlgorithm}, confidence model: subjective-logic)`); + } + + async shutdown(): Promise { + this.log('info', `JSON-LD-ex enrichment shut down (${this.enrichedCount} vCons enriched)`); + } + + // ========== Lifecycle Hooks ========== + + /** + * After a vCon is created, enrich it with JSON-LD-ex semantics. + * + * Pipeline: + * 1. Map vCon โ†’ JSON-LD document with @context + * 2. Annotate parties with confidence + provenance + * 3. Annotate analysis with confidence + provenance + opinions + * 4. Add temporal validity windows + * 5. Add agentic AI capability advertisement + * 6. Cryptographically sign with @integrity + * 7. Store enrichment alongside original vCon + * 8. 
Publish MQTT enrichment event + */ + async afterCreate(vcon: VCon, context: RequestContext): Promise { + try { + const startTime = Date.now(); + const enriched = this.enrichVCon(vcon, context); + const duration = Date.now() - startTime; + + // Store enrichment in MongoDB alongside the vCon + if (this.queries) { + try { + await this.storeEnrichment(vcon.uuid, enriched); + } catch (err: any) { + this.log('warn', `Failed to store enrichment for ${vcon.uuid}: ${err.message}`); + } + } + + // Publish vcon.enriched MQTT event + if (this.mqttClient?.connected) { + const event = { + event: 'vcon.enriched', + vcon_uuid: vcon.uuid, + timestamp: new Date().toISOString(), + analysis_type: 'jsonld-ex', + integrity: enriched['@integrity'], + duration_ms: duration, + }; + const topic = `vcon/enterprise/${this.mqttOrgId}/enriched/${vcon.uuid}`; + this.mqttClient.publish(topic, JSON.stringify(event), { qos: 1 }); + } + + this.enrichedCount++; + this.log('debug', `Enriched vCon ${vcon.uuid} in ${duration}ms (integrity: ${enriched['@integrity']})`); + } catch (err: any) { + this.log('error', `Failed to enrich vCon ${vcon.uuid}: ${err.message}`); + } + } + + /** + * After reading a vCon, attach the enrichment if available + */ + async afterRead(vcon: VCon, context: RequestContext): Promise { + // If the vCon already has enrichment data, return as-is + if ((vcon as any).jsonld_enrichment) return vcon; + + // Otherwise, try to enrich on-the-fly (lightweight, no storage) + try { + const enriched = this.enrichVCon(vcon, context); + return { ...vcon, jsonld_enrichment: enriched } as any; + } catch { + return vcon; + } + } + + // ========== Core Enrichment ========== + + /** + * Transform a vCon into a fully enriched JSON-LD-ex document + */ + enrichVCon(vcon: VCon, context?: RequestContext): EnrichedVConJsonLd { + const now = new Date().toISOString(); + const source = context?.purpose || 'vcon-mcp'; + const pipeline: string[] = []; + + // Step 1: Base JSON-LD structure + const doc: any = { 
+ "@context": VCON_JSONLD_EX_CONTEXT["@context"], + "@type": "vcon:Conversation", + "@id": `urn:uuid:${vcon.uuid}`, + "subject": vcon.subject, + "vcon:version": vcon.vcon, + "vcon:created_at": vcon.created_at, + }; + pipeline.push('base-mapping'); + + // Step 2: Annotate parties with confidence + provenance + doc.parties = this.enrichParties(vcon.parties || [], source); + pipeline.push('party-annotation'); + + // Step 3: Annotate dialog with metadata + if (vcon.dialog?.length) { + doc.dialog = this.enrichDialog(vcon.dialog); + pipeline.push('dialog-annotation'); + } + + // Step 4: Annotate analysis with confidence, provenance, and opinions + if (vcon.analysis?.length) { + doc.analysis = this.enrichAnalysis(vcon.analysis, vcon.uuid); + pipeline.push('analysis-annotation'); + + // Step 4b: Compute aggregate confidence across all analyses + const confidences = doc.analysis + .map((a: any) => a['@confidence']) + .filter((c: any) => typeof c === 'number'); + if (confidences.length > 1) { + doc['vcon:aggregateConfidence'] = this.computeAggregateConfidence(confidences); + pipeline.push('confidence-fusion'); + } + } + + // Step 5: Temporal validity + doc['@validFrom'] = vcon.created_at || now; + pipeline.push('temporal-validity'); + + // Step 6: Agentic AI capability advertisement + doc.agentCapabilities = this.buildAgentCapabilities(vcon); + pipeline.push('agent-capabilities'); + + // Step 7: Enrichment metadata + doc._enrichment = { + version: '1.0.0', + enrichedAt: now, + enrichmentPipeline: pipeline, + integrityAlgorithm: this.hashAlgorithm, + confidenceModel: 'subjective-logic-josang', + }; + + // Step 8: Cryptographic integrity signing (MUST be last) + doc['@integrity'] = this.signDocument(doc); + pipeline.push('integrity-signing'); + + return doc as EnrichedVConJsonLd; + } + + // ========== Party Enrichment ========== + + private enrichParties(parties: Party[], source: string): any[] { + return parties.map((party, index) => { + const identifier = party.tel || 
party.mailto || party.name; + const identMethod = party.tel ? 'caller_id' : party.mailto ? 'email_header' : 'name_extraction'; + + // Confidence varies by identification method + const confidenceMap: Record = { + 'caller_id': 0.92, // STIR/SHAKEN verified phone + 'email_header': 0.88, // Email address from headers + 'name_extraction': 0.75, // NER or manual entry + }; + const confidence = confidenceMap[identMethod] || this.defaultPartyConfidence; + + const enriched: any = { + "@type": "schema:Person", + "@id": identifier ? `urn:party:${encodeURIComponent(identifier)}` : `urn:party:index:${index}`, + "@confidence": confidence, + "@source": `urn:adapter:${source}`, + "@extractedAt": new Date().toISOString(), + "@method": identMethod, + }; + + if (party.name) enriched["schema:name"] = party.name; + if (party.tel) enriched["schema:telephone"] = party.tel; + if (party.mailto) enriched["schema:email"] = party.mailto; + + // Subjective Logic opinion for this party identification + enriched.opinion = this.scalarToOpinionJson(confidence, 0.05); + + return enriched; + }); + } + + // ========== Dialog Enrichment ========== + + private enrichDialog(dialogs: VCon['dialog']): any[] { + return (dialogs || []).map((dialog, index) => { + const enriched: any = { + "@type": `vcon:Dialog:${dialog.type}`, + "vcon:dialogIndex": index, + "vcon:type": dialog.type, + }; + + if (dialog.start) enriched["vcon:start"] = dialog.start; + if (dialog.duration) enriched["vcon:duration"] = dialog.duration; + if (dialog.mediatype) enriched["vcon:mediatype"] = dialog.mediatype; + + // Content integrity hash (without including the body itself in the enrichment) + if (dialog.body) { + enriched["@contentHash"] = `sha256-${createHash('sha256').update(dialog.body).digest('hex')}`; + enriched["vcon:hasContent"] = true; + enriched["vcon:encoding"] = dialog.encoding || 'none'; + } else if (dialog.url) { + enriched["vcon:contentUrl"] = dialog.url; + if (dialog.content_hash) enriched["@contentHash"] = 
dialog.content_hash; + } + + return enriched; + }); + } + + // ========== Analysis Enrichment ========== + + private enrichAnalysis(analyses: Analysis[], vconUuid: string): any[] { + return analyses.map((analysis, index) => { + // Map analysis type to confidence heuristic + const typeConfidenceMap: Record = { + 'transcript': 0.90, // Whisper transcription + 'sentiment': 0.82, // LLM sentiment analysis + 'summary': 0.78, // LLM summarization + 'topics': 0.85, // Topic extraction + 'translation': 0.88, // Machine translation + 'intent': 0.76, // Intent classification + 'entities': 0.84, // Named entity recognition + 'pii_detection': 0.91, // PII detection + }; + const confidence = typeConfidenceMap[analysis.type] || 0.75; + + // Determine provenance source + const isLocalGpu = analysis.vendor?.includes('local') || analysis.vendor?.includes('whisper'); + const sourceUri = isLocalGpu + ? `urn:device:rtx_4090:${analysis.vendor}` + : `urn:vendor:${encodeURIComponent(analysis.vendor)}`; + + const enriched: any = { + "@type": `vcon:Analysis:${analysis.type}`, + "@id": `urn:uuid:${vconUuid}:analysis:${index}`, + "@confidence": confidence, + "@source": sourceUri, + "@extractedAt": new Date().toISOString(), + "@method": analysis.type, + "@wasDerivedFrom": `urn:uuid:${vconUuid}`, + "vendor": analysis.vendor, + }; + + if (analysis.product) enriched["product"] = analysis.product; + if (analysis.schema) enriched["vcon:schema"] = analysis.schema; + + // Provenance chain: who acted on behalf of whom + if (isLocalGpu) { + enriched["@actedOnBehalfOf"] = "urn:operator:vcon-intelligence-platform"; + enriched["provenance"] = { + "inferenceDevice": "NVIDIA RTX 4090", + "inferenceLocation": "on-premise", + "dataResidency": "local", + "cloudDependency": false, + }; + } + + // Subjective Logic opinion + enriched.opinion = this.scalarToOpinionJson(confidence, 0.08); + + // Content hash of analysis body for verification + if (analysis.body) { + enriched["@contentHash"] = 
`sha256-${createHash('sha256').update(analysis.body).digest('hex')}`;
+      }
+
+      return enriched;
+    });
+  }
+
+  // ========== Confidence Algebra ==========
+
+  /**
+   * Compute aggregate confidence using Subjective Logic cumulative fusion.
+   * This is far more principled than simple averaging: it accounts for
+   * uncertainty and evidence strength from each source.
+   */
+  private computeAggregateConfidence(scores: number[]): any {
+    // Use the bridge to convert scalars to opinions and fuse
+    const fused = combineOpinionsFromScalars(scores, 0.05, 'cumulative');
+
+    return {
+      "@type": "jsonld-ex:AggregateConfidence",
+      "projectedProbability": fused.projectedProbability(),
+      "opinion": {
+        "belief": round(fused.belief),
+        "disbelief": round(fused.disbelief),
+        "uncertainty": round(fused.uncertainty),
+        "baseRate": round(fused.baseRate),
+      },
+      "inputScores": scores,
+      "fusionMethod": "cumulative_subjective_logic",
+      "sourceCount": scores.length,
+    };
+  }
+
+  /**
+   * Propagate confidence through a trust chain.
+   * Used when analysis B depends on analysis A (e.g., sentiment depends on transcript).
+   */
+  propagateTrustChain(chain: number[]): any {
+    const result = propagateOpinionsFromScalars(chain, 0.05);
+    return {
+      "@type": "jsonld-ex:PropagatedConfidence",
+      "projectedProbability": result.projectedProbability(),
+      "opinion": {
+        "belief": round(result.belief),
+        "disbelief": round(result.disbelief),
+        "uncertainty": round(result.uncertainty),
+        "baseRate": round(result.baseRate),
+      },
+      "chainLength": chain.length,
+      "inputChain": chain,
+    };
+  }
+
+  /**
+   * Apply temporal decay to a confidence score.
+   * Older analysis results become less certain over time.
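The cumulative fusion and temporal decay used in this section are delegated to the project's subjective-logic bridge (`combineOpinionsFromScalars`, `decayOpinion`, `Opinion.fromConfidence`). The standalone sketch below illustrates the underlying math; the exact formulas, the fixed 0.5 base rate, and the half-life decay rule are assumptions for illustration, not the bridge's actual code:

```typescript
// Standalone sketch of binomial Subjective Logic opinions (Josang-style).
// All names and formulas here are illustrative assumptions.

interface SLOpinion {
  belief: number;      // b
  disbelief: number;   // d
  uncertainty: number; // u, with b + d + u = 1
  baseRate: number;    // a
}

// Scalar confidence -> opinion with a fixed uncertainty mass.
function fromConfidence(confidence: number, uncertainty = 0.05): SLOpinion {
  const certain = 1 - uncertainty;
  return {
    belief: confidence * certain,
    disbelief: (1 - confidence) * certain,
    uncertainty,
    baseRate: 0.5, // assumed neutral prior
  };
}

// Projected probability P = b + a * u.
function projected(o: SLOpinion): number {
  return o.belief + o.baseRate * o.uncertainty;
}

// Cumulative fusion of two opinions (valid while uA + uB > uA * uB).
function fuse(a: SLOpinion, b: SLOpinion): SLOpinion {
  const k = a.uncertainty + b.uncertainty - a.uncertainty * b.uncertainty;
  return {
    belief: (a.belief * b.uncertainty + b.belief * a.uncertainty) / k,
    disbelief: (a.disbelief * b.uncertainty + b.disbelief * a.uncertainty) / k,
    uncertainty: (a.uncertainty * b.uncertainty) / k,
    baseRate: a.baseRate,
  };
}

// Half-life decay: certainty mass (b + d) halves every halfLifeHours,
// and the freed mass flows back into uncertainty.
function decay(o: SLOpinion, hours: number, halfLifeHours: number): SLOpinion {
  const f = Math.pow(0.5, hours / halfLifeHours);
  return {
    belief: o.belief * f,
    disbelief: o.disbelief * f,
    uncertainty: 1 - (o.belief + o.disbelief) * f,
    baseRate: o.baseRate,
  };
}

const transcript = fromConfidence(0.9);
const sentiment = fromConfidence(0.8);
const fused = fuse(transcript, sentiment);
const stale = decay(transcript, 168, 168); // one half-life later

console.log(projected(fused));                            // a valid probability
console.log(fused.uncertainty < transcript.uncertainty);  // fusion adds evidence
console.log(stale.uncertainty > transcript.uncertainty);  // decay removes it
```

Note the key property that motivates fusion over averaging: combining two independent sources reduces uncertainty below either source alone, while decay moves mass back into uncertainty without inventing disbelief.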
+   */
+  decayConfidence(confidence: number, hoursElapsed: number): any {
+    const opinion = Opinion.fromConfidence(confidence, 0.05);
+    const decayed = decayOpinion(opinion, hoursElapsed, this.confidenceHalfLifeHours);
+    return {
+      "@type": "jsonld-ex:DecayedConfidence",
+      "originalConfidence": confidence,
+      "decayedConfidence": round(decayed.toConfidence()),
+      "hoursElapsed": hoursElapsed,
+      "halfLifeHours": this.confidenceHalfLifeHours,
+      "opinion": {
+        "belief": round(decayed.belief),
+        "disbelief": round(decayed.disbelief),
+        "uncertainty": round(decayed.uncertainty),
+      },
+    };
+  }
+
+  // ========== Cryptographic Integrity ==========
+
+  /**
+   * Sign a document using the configured hash algorithm.
+   * Excludes the @integrity field from the hash calculation so the hash
+   * can be recomputed during verification.
+   */
+  private signDocument(doc: any): string {
+    const copy = { ...doc };
+    delete copy['@integrity'];
+    // Note: the 'integrity-signing' pipeline entry is pushed after this
+    // hash is computed, so it is not covered by the hash.
+    const serialized = stringify(copy);
+    return computeIntegrity(serialized, this.hashAlgorithm);
+  }
+
+  /**
+   * Verify a previously signed document
+   */
+  verifyDocument(doc: any): { valid: boolean; algorithm: string; hash: string } {
+    const declaredIntegrity = doc['@integrity'];
+    if (!declaredIntegrity) {
+      return { valid: false, algorithm: 'none', hash: '' };
+    }
+
+    const copy = { ...doc };
+    delete copy['@integrity'];
+    const serialized = stringify(copy);
+    const valid = verifySecurityIntegrity(serialized, declaredIntegrity);
+
+    return {
+      valid,
+      algorithm: declaredIntegrity.split('-')[0],
+      hash: declaredIntegrity,
+    };
+  }
+
+  // ========== Agentic AI Capabilities ==========
+
+  /**
+   * Build agent capability advertisement.
+   * This tells AI agents what operations they can perform on this vCon
+   * through the MCP server, enabling semantic tool discovery.
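The `signDocument`/`verifyDocument` pair above can be exercised as a standalone round-trip. This sketch substitutes a deterministic key-sorted serializer for the plugin's canonical `stringify` import, and plain Node `crypto` for `computeIntegrity`; both substitutions are assumptions made for illustration only:

```typescript
import { createHash } from 'crypto';

// deterministicStringify stands in for the plugin's canonical `stringify`:
// keys are sorted so the same document always hashes to the same bytes.
function deterministicStringify(value: any): string {
  if (value === null || typeof value !== 'object') return JSON.stringify(value);
  if (Array.isArray(value)) return `[${value.map(deterministicStringify).join(',')}]`;
  const keys = Object.keys(value).sort();
  return `{${keys.map((k) => `${JSON.stringify(k)}:${deterministicStringify(value[k])}`).join(',')}}`;
}

function sign(doc: Record<string, any>): string {
  const rest: Record<string, any> = { ...doc };
  delete rest['@integrity']; // the hash must not cover itself
  return `sha256-${createHash('sha256').update(deterministicStringify(rest)).digest('hex')}`;
}

function verify(doc: Record<string, any>): boolean {
  return doc['@integrity'] === sign(doc);
}

const doc: Record<string, any> = { '@id': 'urn:uuid:demo', subject: 'Support call' };
doc['@integrity'] = sign(doc);
console.log(verify(doc)); // true
doc.subject = 'tampered';
console.log(verify(doc)); // false
```

The essential design choice mirrored here is that the `@integrity` field is deleted before hashing, which is what allows a verifier to recompute the exact same hash later.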
+ */ + private buildAgentCapabilities(vcon: VCon): any { + const capabilities: string[] = [ + 'search_semantic', + 'search_keyword', + 'search_hybrid', + 'get_vcon', + 'add_analysis', + 'add_tag', + 'verify_integrity', + ]; + + // Add conditional capabilities + if (vcon.dialog?.length) capabilities.push('transcribe', 'translate'); + if (vcon.analysis?.length) capabilities.push('query_analysis', 'aggregate_confidence'); + if (vcon.parties?.length) capabilities.push('query_participants', 'graph_traversal'); + + return { + "@type": "jsonld-ex:AgentCapabilityManifest", + "queryableVia": `${this.mcpServerUrl}/mcp`, + "protocol": "mcp", + "supportedOperations": capabilities, + "toolDiscovery": { + "@type": "jsonld-ex:ToolRegistry", + "tools": [ + { + "name": "jsonld_verify_integrity", + "description": "Verify cryptographic integrity of this enriched vCon", + "inputRequired": ["vcon_uuid"], + }, + { + "name": "jsonld_get_enrichment", + "description": "Get the full JSON-LD-ex enrichment for a vCon", + "inputRequired": ["vcon_uuid"], + }, + { + "name": "jsonld_confidence_query", + "description": "Query confidence scores and opinions for vCon data points", + "inputRequired": ["vcon_uuid"], + }, + { + "name": "jsonld_provenance_chain", + "description": "Trace the provenance chain for any enriched data point", + "inputRequired": ["vcon_uuid", "data_path"], + }, + { + "name": "neo4j_query", + "description": "Query the relationship graph for participants, topics, and patterns", + "inputRequired": ["query"], + }, + ], + }, + "semanticContext": { + "interoperableWith": [ + "https://schema.org", + "https://www.w3.org/ns/prov#", + "https://hl7.org/fhir", + "http://purl.org/dc/terms/", + ], + "confidenceModel": "subjective-logic-josang", + "integrityAlgorithm": this.hashAlgorithm, + }, + }; + } + + // ========== MCP Tools ========== + + registerTools(): Tool[] { + return [ + { + name: 'jsonld_get_enrichment', + description: 'Get the full JSON-LD-ex semantic enrichment for a vCon, 
including provenance annotations, confidence scores, integrity hash, and agent capabilities.', + inputSchema: { + type: 'object' as const, + properties: { + vcon_uuid: { type: 'string', description: 'UUID of the vCon to get enrichment for' }, + }, + required: ['vcon_uuid'], + }, + }, + { + name: 'jsonld_verify_integrity', + description: 'Verify the cryptographic integrity of a vCon enrichment. Returns whether the document has been tampered with since signing.', + inputSchema: { + type: 'object' as const, + properties: { + vcon_uuid: { type: 'string', description: 'UUID of the vCon to verify' }, + }, + required: ['vcon_uuid'], + }, + }, + { + name: 'jsonld_confidence_query', + description: 'Query confidence scores for a vCon. Returns per-party and per-analysis confidence with Subjective Logic opinions. Supports temporal decay calculation.', + inputSchema: { + type: 'object' as const, + properties: { + vcon_uuid: { type: 'string', description: 'UUID of the vCon' }, + apply_decay: { type: 'boolean', description: 'Apply temporal decay to confidence scores based on age (default: false)' }, + }, + required: ['vcon_uuid'], + }, + }, + { + name: 'jsonld_provenance_chain', + description: 'Trace the full provenance chain for a vCon: who created it, what models analyzed it, confidence scores, integrity verification, and data lineage.', + inputSchema: { + type: 'object' as const, + properties: { + vcon_uuid: { type: 'string', description: 'UUID of the vCon' }, + }, + required: ['vcon_uuid'], + }, + }, + { + name: 'jsonld_trust_propagation', + description: 'Compute trust propagation through an analysis chain using Subjective Logic. 
For example: transcript confidence -> sentiment confidence -> summary confidence.',
+        inputSchema: {
+          type: 'object' as const,
+          properties: {
+            confidence_chain: {
+              type: 'array',
+              items: { type: 'number' },
+              description: 'Array of confidence scores along the inference chain',
+            },
+          },
+          required: ['confidence_chain'],
+        },
+      },
+    ];
+  }
+
+  async handleToolCall(toolName: string, args: any, context: RequestContext): Promise<any> {
+    switch (toolName) {
+      case 'jsonld_get_enrichment':
+        return this.handleGetEnrichment(args.vcon_uuid);
+      case 'jsonld_verify_integrity':
+        return this.handleVerifyIntegrity(args.vcon_uuid);
+      case 'jsonld_confidence_query':
+        return this.handleConfidenceQuery(args.vcon_uuid, args.apply_decay);
+      case 'jsonld_provenance_chain':
+        return this.handleProvenanceChain(args.vcon_uuid);
+      case 'jsonld_trust_propagation':
+        return this.propagateTrustChain(args.confidence_chain);
+      default:
+        return null;
+    }
+  }
+
+  // ========== Tool Handlers ==========
+
+  private async handleGetEnrichment(uuid: string): Promise<any> {
+    if (!this.queries) return { error: 'Database not available' };
+
+    try {
+      const vcon = await this.queries.getVCon(uuid);
+      if ((vcon as any).jsonld_enrichment) {
+        return (vcon as any).jsonld_enrichment;
+      }
+      // Enrich on-the-fly
+      return this.enrichVCon(vcon);
+    } catch (err: any) {
+      return { error: err.message };
+    }
+  }
+
+  private async handleVerifyIntegrity(uuid: string): Promise<any> {
+    if (!this.queries) return { error: 'Database not available' };
+
+    try {
+      const vcon = await this.queries.getVCon(uuid);
+      const enrichment = (vcon as any).jsonld_enrichment || this.enrichVCon(vcon);
+      const result = this.verifyDocument(enrichment);
+
+      return {
+        vcon_uuid: uuid,
+        integrity: result,
+        verified_at: new Date().toISOString(),
+        message: result.valid
+          ? 'Document integrity verified: no tampering detected.'
+          : 'INTEGRITY VIOLATION: document may have been modified since signing.',
+      };
+    } catch (err: any) {
+      return { error: err.message };
+    }
+  }
+
+  private async handleConfidenceQuery(uuid: string, applyDecay?: boolean): Promise<any> {
+    if (!this.queries) return { error: 'Database not available' };
+
+    try {
+      const vcon = await this.queries.getVCon(uuid);
+      const enrichment = (vcon as any).jsonld_enrichment || this.enrichVCon(vcon);
+
+      const result: any = {
+        vcon_uuid: uuid,
+        parties: enrichment.parties?.map((p: any) => ({
+          name: p['schema:name'],
+          identifier: p['@id'],
+          confidence: p['@confidence'],
+          method: p['@method'],
+          opinion: p.opinion,
+        })),
+        analyses: enrichment.analysis?.map((a: any) => ({
+          type: a['@type'],
+          confidence: a['@confidence'],
+          source: a['@source'],
+          opinion: a.opinion,
+        })),
+        aggregate: enrichment['vcon:aggregateConfidence'],
+      };
+
+      // Apply temporal decay if requested
+      if (applyDecay && vcon.created_at) {
+        const hoursElapsed = (Date.now() - new Date(vcon.created_at).getTime()) / (1000 * 60 * 60);
+        result.temporalDecay = {
+          hoursElapsed: round(hoursElapsed),
+          analyses: enrichment.analysis?.map((a: any) => ({
+            type: a['@type'],
+            original: a['@confidence'],
+            decayed: this.decayConfidence(a['@confidence'], hoursElapsed),
+          })),
+        };
+      }
+
+      return result;
+    } catch (err: any) {
+      return { error: err.message };
+    }
+  }
+
+  private async handleProvenanceChain(uuid: string): Promise<any> {
+    if (!this.queries) return { error: 'Database not available' };
+
+    try {
+      const vcon = await this.queries.getVCon(uuid);
+      const enrichment = (vcon as any).jsonld_enrichment || this.enrichVCon(vcon);
+      const integrity = this.verifyDocument(enrichment);
+
+      return {
+        vcon_uuid: uuid,
+        "@id": enrichment['@id'],
+        created_at: vcon.created_at,
+        enriched_at: enrichment._enrichment?.enrichedAt,
+        pipeline: enrichment._enrichment?.enrichmentPipeline,
+        integrity,
+        parties: enrichment.parties?.map((p: any) => ({
+          who: p['schema:name'],
identifier: p['@id'], + identifiedBy: p['@source'], + method: p['@method'], + confidence: p['@confidence'], + when: p['@extractedAt'], + })), + analyses: enrichment.analysis?.map((a: any) => ({ + type: a['@type'], + performedBy: a['@source'], + method: a['@method'], + confidence: a['@confidence'], + derivedFrom: a['@wasDerivedFrom'], + actedOnBehalfOf: a['@actedOnBehalfOf'], + provenance: a.provenance, + contentHash: a['@contentHash'], + when: a['@extractedAt'], + })), + agentCapabilities: enrichment.agentCapabilities?.supportedOperations, + interoperableWith: enrichment.agentCapabilities?.semanticContext?.interoperableWith, + }; + } catch (err: any) { + return { error: err.message }; + } + } + + // ========== Storage ========== + + private async storeEnrichment(uuid: string, enrichment: any): Promise { + if (!this.queries) return; + + // Store as a field on the vCon document in MongoDB + // Uses the raw MongoDB collection to add the enrichment field + try { + const db = (this.queries as any).db; + if (db) { + await db.collection('vcons').updateOne( + { uuid }, + { $set: { jsonld_enrichment: enrichment } } + ); + } + } catch (err: any) { + this.log('debug', `Could not store enrichment directly: ${err.message}`); + } + } + + // ========== Helpers ========== + + private scalarToOpinionJson(confidence: number, uncertainty: number = 0.05): any { + const opinion = Opinion.fromConfidence(confidence, uncertainty); + return { + belief: round(opinion.belief), + disbelief: round(opinion.disbelief), + uncertainty: round(opinion.uncertainty), + baseRate: round(opinion.baseRate), + projectedProbability: round(opinion.projectedProbability()), + }; + } + + private log(level: string, message: string): void { + const prefix = '[jsonld-enrichment]'; + if (level === 'error') console.error(`${prefix} ${message}`); + else if (level === 'warn') console.warn(`${prefix} ${message}`); + else if (level === 'debug' && this.verbose) console.log(`${prefix} ${message}`); + else if (level === 
'info') console.log(`${prefix} ${message}`); + } +} + +// ========== Utility ========== + +function round(n: number, decimals: number = 4): number { + return Math.round(n * 10 ** decimals) / 10 ** decimals; +} + +export default JsonLdEnrichmentPlugin; diff --git a/hackathon/plugins/mqtt-bridge/index.ts b/hackathon/plugins/mqtt-bridge/index.ts new file mode 100644 index 0000000..0eb5fda --- /dev/null +++ b/hackathon/plugins/mqtt-bridge/index.ts @@ -0,0 +1,304 @@ +/** + * MQTT / UNS Bridge Plugin + * + * BASE CHALLENGE 2: vCon to UNS Bridge + * + * Publishes vCon lifecycle events to MQTT broker using + * Unified Namespace (UNS) topic hierarchy. + * + * Topic structure: + * vcon/enterprise/{org_id}/ingested/{uuid} + * vcon/enterprise/{org_id}/enriched/{uuid} + * vcon/enterprise/{org_id}/tagged/{uuid}/{tag} + * vcon/enterprise/{org_id}/alert/{severity}/{uuid} + * + * Events are lightweight JSON payloads referencing the full vCon in MongoDB. + * The WebSocket listener on port 9001 enables the dashboard to subscribe + * to real-time events via MQTT over WebSocket. 
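The UNS topic grammar laid out in the header comment above can be made concrete with a small helper. The matcher below implements standard MQTT wildcard semantics (`+` matches exactly one level, `#` matches the remainder); it is an illustrative sketch for reasoning about subscriptions, not part of the plugin:

```typescript
// Build UNS topics exactly as the bridge does: vcon/enterprise/{org}/{suffix}
function unsTopic(orgId: string, suffix: string): string {
  return `vcon/enterprise/${orgId}/${suffix}`;
}

// Minimal MQTT topic matcher: '+' matches one level, '#' matches the rest.
function topicMatches(pattern: string, topic: string): boolean {
  const p = pattern.split('/');
  const t = topic.split('/');
  for (let i = 0; i < p.length; i++) {
    if (p[i] === '#') return true;      // multi-level wildcard: match remainder
    if (i >= t.length) return false;    // pattern longer than topic
    if (p[i] !== '+' && p[i] !== t[i]) return false;
  }
  return p.length === t.length;
}

const alertTopic = unsTopic('acme', 'alert/critical/123e4567');
console.log(topicMatches('vcon/enterprise/+/alert/#', alertTopic));      // true
console.log(topicMatches('vcon/enterprise/acme/enriched/+', alertTopic)); // false
```

A dashboard subscribing over the WebSocket listener would typically use patterns like `vcon/enterprise/+/alert/#` to receive every alert across organizations, or `vcon/enterprise/acme/ingested/+` to follow a single tenant's ingest stream.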
+ */ + +import mqtt, { MqttClient, IClientOptions } from 'mqtt'; +import { VConPlugin, RequestContext } from '../../src/hooks/plugin-interface.js'; +import { VCon } from '../../src/types/vcon.js'; + +// ============================================================================ +// Types +// ============================================================================ + +export interface MqttBridgeConfig { + /** MQTT broker URL (default: mqtt://localhost:1883) */ + brokerUrl?: string; + /** Organization ID for UNS topic hierarchy */ + orgId?: string; + /** MQTT client options */ + clientOptions?: IClientOptions; + /** Enable verbose logging */ + verbose?: boolean; +} + +export interface VConEvent { + event: string; + vcon_uuid: string; + timestamp: string; + source_adapter?: string; + participant_count?: number; + duration_seconds?: number; + subject?: string; + sentiment?: number; + topics?: string[]; + mongodb_ref: string; + metadata?: Record; +} + +// ============================================================================ +// MQTT Bridge Plugin +// ============================================================================ + +export class MqttBridgePlugin implements VConPlugin { + name = 'mqtt-bridge'; + version = '1.0.0'; + + private client: MqttClient | null = null; + private orgId: string = 'default'; + private brokerUrl: string = 'mqtt://localhost:1883'; + private verbose: boolean = false; + private connected: boolean = false; + + // Track published event count for metrics + private eventCount: number = 0; + + constructor(private config?: MqttBridgeConfig) { } + + // ========== Lifecycle ========== + + async initialize(config?: any): Promise { + const mergedConfig = { ...this.config, ...config }; + + this.brokerUrl = mergedConfig?.brokerUrl + || process.env.MQTT_BROKER_URL + || 'mqtt://localhost:1883'; + this.orgId = mergedConfig?.orgId + || process.env.MQTT_ORG_ID + || 'default'; + this.verbose = mergedConfig?.verbose + || process.env.MQTT_VERBOSE === 
'true'
+      || false;
+
+    const clientOptions: IClientOptions = {
+      clientId: `vcon-mcp-bridge-${Date.now()}`,
+      clean: true,
+      connectTimeout: 5000,
+      reconnectPeriod: 3000,
+      ...mergedConfig?.clientOptions,
+    };
+
+    return new Promise<void>((resolve, reject) => {
+      this.client = mqtt.connect(this.brokerUrl, clientOptions);
+
+      this.client.on('connect', () => {
+        this.connected = true;
+        this.log('info', `Connected to MQTT broker at ${this.brokerUrl}`);
+        resolve();
+      });
+
+      this.client.on('error', (err) => {
+        this.log('error', `MQTT error: ${err.message}`);
+        if (!this.connected) {
+          reject(err);
+        }
+      });
+
+      this.client.on('reconnect', () => {
+        this.log('info', 'Reconnecting to MQTT broker...');
+      });
+
+      this.client.on('close', () => {
+        this.connected = false;
+        this.log('info', 'MQTT connection closed');
+      });
+
+      // Timeout fallback
+      setTimeout(() => {
+        if (!this.connected) {
+          this.log('warn', 'MQTT connection timeout; continuing without broker');
+          resolve(); // Don't block server startup
+        }
+      }, 5000);
+    });
+  }
+
+  async shutdown(): Promise<void> {
+    if (this.client) {
+      this.log('info', `Shutting down MQTT bridge (${this.eventCount} events published)`);
+      await new Promise<void>((resolve) => {
+        this.client!.end(false, {}, () => resolve());
+      });
+      this.client = null;
+      this.connected = false;
+    }
+  }
+
+  // ========== Lifecycle Hooks ==========
+
+  /**
+   * After a vCon is created, publish an "ingested" event
+   */
+  async afterCreate(vcon: VCon, context: RequestContext): Promise<void> {
+    const source = context.purpose || context.metadata?.source || 'unknown';
+
+    // Calculate total duration from dialog segments
+    const totalDuration = vcon.dialog?.reduce(
+      (sum, d) => sum + (d.duration || 0), 0
+    ) || 0;
+
+    const event: VConEvent = {
+      event: 'vcon.ingested',
+      vcon_uuid: vcon.uuid,
+      timestamp: new Date().toISOString(),
+      source_adapter: source,
+      participant_count: vcon.parties?.length || 0,
+      duration_seconds: totalDuration,
+      subject: vcon.subject,
+      mongodb_ref:
`vcons/${vcon.uuid}`, + }; + + await this.publish(`ingested/${vcon.uuid}`, event); + } + + /** + * After a vCon is read (used for tracking access patterns) + */ + async afterRead(vcon: VCon, context: RequestContext): Promise { + // Don't publish read events in non-verbose mode (too noisy) + if (this.verbose) { + await this.publish(`accessed/${vcon.uuid}`, { + event: 'vcon.accessed', + vcon_uuid: vcon.uuid, + timestamp: new Date().toISOString(), + mongodb_ref: `vcons/${vcon.uuid}`, + metadata: { purpose: context.purpose }, + }); + } + return vcon; // Pass through unmodified + } + + /** + * After a vCon is deleted, publish a "deleted" event + */ + async afterDelete(uuid: string, context: RequestContext): Promise { + await this.publish(`deleted/${uuid}`, { + event: 'vcon.deleted', + vcon_uuid: uuid, + timestamp: new Date().toISOString(), + mongodb_ref: `vcons/${uuid}`, + }); + } + + // ========== Public API ========== + + /** + * Publish an enrichment event (called by other plugins after analysis) + */ + async publishEnriched(vcon: VCon, analysisType: string, metadata?: Record): Promise { + const event: VConEvent = { + event: 'vcon.enriched', + vcon_uuid: vcon.uuid, + timestamp: new Date().toISOString(), + subject: vcon.subject, + participant_count: vcon.parties?.length || 0, + mongodb_ref: `vcons/${vcon.uuid}`, + metadata: { + analysis_type: analysisType, + ...metadata, + }, + }; + + await this.publish(`enriched/${vcon.uuid}`, event); + } + + /** + * Publish a tag event + */ + async publishTagged(vconUuid: string, tag: string, value: string): Promise { + await this.publish(`tagged/${vconUuid}/${tag}`, { + event: 'vcon.tagged', + vcon_uuid: vconUuid, + timestamp: new Date().toISOString(), + mongodb_ref: `vcons/${vconUuid}`, + metadata: { tag, value }, + }); + } + + /** + * Publish an alert (compliance, sentiment threshold, etc.) 
+   */
+  async publishAlert(
+    vconUuid: string,
+    severity: 'info' | 'warning' | 'critical',
+    message: string,
+    metadata?: Record<string, any>
+  ): Promise<void> {
+    await this.publish(`alert/${severity}/${vconUuid}`, {
+      event: `vcon.alert.${severity}`,
+      vcon_uuid: vconUuid,
+      timestamp: new Date().toISOString(),
+      mongodb_ref: `vcons/${vconUuid}`,
+      metadata: { message, ...metadata },
+    });
+  }
+
+  /**
+   * Get connection status and stats
+   */
+  getStatus(): { connected: boolean; eventCount: number; brokerUrl: string } {
+    return {
+      connected: this.connected,
+      eventCount: this.eventCount,
+      brokerUrl: this.brokerUrl,
+    };
+  }
+
+  // ========== Private ==========
+
+  /**
+   * Publish a message to the UNS topic hierarchy
+   */
+  private async publish(topicSuffix: string, payload: any): Promise<void> {
+    if (!this.client || !this.connected) {
+      this.log('warn', `MQTT not connected; dropping event: ${topicSuffix}`);
+      return;
+    }
+
+    const topic = `vcon/enterprise/${this.orgId}/${topicSuffix}`;
+    const message = JSON.stringify(payload);
+
+    return new Promise<void>((resolve, reject) => {
+      this.client!.publish(topic, message, { qos: 1, retain: false }, (err) => {
+        if (err) {
+          this.log('error', `Failed to publish to ${topic}: ${err.message}`);
+          reject(err);
+        } else {
+          this.eventCount++;
+          this.log('debug', `Published to ${topic} (${message.length} bytes)`);
+          resolve();
+        }
+      });
+    });
+  }
+
+  private log(level: string, message: string): void {
+    const prefix = `[mqtt-bridge]`;
+    if (level === 'error') {
+      console.error(`${prefix} ${message}`);
+    } else if (level === 'warn') {
+      console.warn(`${prefix} ${message}`);
+    } else if (level === 'debug' && this.verbose) {
+      console.log(`${prefix} ${message}`);
+    } else if (level === 'info') {
+      console.log(`${prefix} ${message}`);
+    }
+  }
+}
+
+// Default export for plugin loader
+export default MqttBridgePlugin;
diff --git a/hackathon/plugins/neo4j-consumer/index.ts b/hackathon/plugins/neo4j-consumer/index.ts
new file mode 100644
index
0000000..70fd9ca --- /dev/null +++ b/hackathon/plugins/neo4j-consumer/index.ts @@ -0,0 +1,568 @@ +/** + * Neo4j Consumer Plugin + * + * BASE CHALLENGE 5: Relational Graph Mapping + * + * Consumes vCon objects and builds a knowledge graph in Neo4j. + * + * Graph Schema: + * (:Person {name, email, phone, role}) + * -[:PARTICIPATED_IN]-> (:Conversation {uuid, subject, start, duration, sentiment}) + * (:Conversation)-[:HAS_TOPIC]->(:Topic {name, confidence}) + * (:Conversation)-[:ANALYZED_BY]->(:Analysis {type, vendor}) + * (:Person)-[:CONTACTED]->(:Person) // derived from shared conversations + * (:Person)-[:WORKS_AT]->(:Organization {name}) + * + * Derived Cypher Insights: + * - Repeat callers (persons with >N conversations) + * - Topic clusters (community detection on co-occurrence) + * - Escalation paths (conversation chains with declining sentiment) + * - Contact networks (who talks to whom) + */ + +import neo4j, { Driver, Session, ManagedTransaction } from 'neo4j-driver'; +import { VConPlugin, RequestContext } from '../../src/hooks/plugin-interface.js'; +import { VCon, Party, Analysis } from '../../src/types/vcon.js'; +import { Tool } from '@modelcontextprotocol/sdk/types.js'; + +// ============================================================================ +// Types +// ============================================================================ + +export interface Neo4jConsumerConfig { + uri?: string; + user?: string; + password?: string; + database?: string; + verbose?: boolean; +} + +interface PersonNode { + name: string; + email?: string; + phone?: string; + identifier: string; // Unique key: tel, mailto, or name +} + +// ============================================================================ +// Neo4j Consumer Plugin +// ============================================================================ + +export class Neo4jConsumerPlugin implements VConPlugin { + name = 'neo4j-consumer'; + version = '1.0.0'; + + private driver: Driver | null = null; + 
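The "Derived Cypher Insights" listed in the file header can be expressed as parameterized queries over the graph schema. This sketch shows the "repeat callers" insight as a Cypher string, paired with a pure in-memory equivalent for unit-testing the threshold logic; the query text and parameter name are illustrative assumptions, not queries the plugin currently ships:

```typescript
// Cypher for the "repeat callers" insight: persons with more than
// $minConversations PARTICIPATED_IN edges (illustrative query).
const REPEAT_CALLERS_CYPHER = `
MATCH (p:Person)-[:PARTICIPATED_IN]->(c:Conversation)
WITH p, count(c) AS conversations
WHERE conversations > $minConversations
RETURN p.identifier AS identifier, conversations
ORDER BY conversations DESC`;

// Pure in-memory equivalent of the same aggregation, for testing.
function repeatCallers(
  edges: Array<{ person: string; conversation: string }>,
  min: number
): string[] {
  const counts = new Map<string, number>();
  for (const e of edges) counts.set(e.person, (counts.get(e.person) ?? 0) + 1);
  return Array.from(counts.entries())
    .filter(([, n]) => n > min)
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

const edges = [
  { person: 'tel:+15551234', conversation: 'c1' },
  { person: 'tel:+15551234', conversation: 'c2' },
  { person: 'tel:+15551234', conversation: 'c3' },
  { person: 'mailto:bob@example.com', conversation: 'c1' },
];
console.log(repeatCallers(edges, 2)); // only the caller with 3 conversations
```

Against a live instance the Cypher string would be run through the driver session already used by this plugin, passing `{ minConversations: 2 }` as parameters.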
+  private database: string = 'neo4j';
+  private verbose: boolean = false;
+  private nodeCount: number = 0;
+  private relCount: number = 0;
+
+  constructor(private config?: Neo4jConsumerConfig) {}
+
+  // ========== Lifecycle ==========
+
+  async initialize(config?: any): Promise<void> {
+    const merged = { ...this.config, ...config };
+
+    const uri = merged?.uri || process.env.NEO4J_URI || 'bolt://localhost:7687';
+    const user = merged?.user || process.env.NEO4J_USER || 'neo4j';
+    const password = merged?.password || process.env.NEO4J_PASSWORD || 'vcon2026';
+    this.database = merged?.database || process.env.NEO4J_DATABASE || 'neo4j';
+    this.verbose = merged?.verbose || process.env.NEO4J_VERBOSE === 'true' || false;
+
+    try {
+      this.driver = neo4j.driver(uri, neo4j.auth.basic(user, password));
+
+      // Verify connectivity
+      await this.driver.verifyConnectivity();
+      this.log('info', `Connected to Neo4j at ${uri}`);
+
+      // Create constraints and indexes
+      await this.setupSchema();
+    } catch (err: any) {
+      this.log('warn', `Neo4j connection failed: ${err.message}; continuing without graph`);
+      this.driver = null;
+    }
+  }
+
+  async shutdown(): Promise<void> {
+    if (this.driver) {
+      this.log('info', `Shutting down Neo4j consumer (${this.nodeCount} nodes, ${this.relCount} rels created)`);
+      await this.driver.close();
+      this.driver = null;
+    }
+  }
+
+  // ========== Schema Setup ==========
+
+  private async setupSchema(): Promise<void> {
+    const session = this.getSession();
+    if (!session) return;
+
+    try {
+      // Constraints (uniqueness)
+      const constraints = [
+        'CREATE CONSTRAINT person_identifier IF NOT EXISTS FOR (p:Person) REQUIRE p.identifier IS UNIQUE',
+        'CREATE CONSTRAINT conversation_uuid IF NOT EXISTS FOR (c:Conversation) REQUIRE c.uuid IS UNIQUE',
+        'CREATE CONSTRAINT topic_name IF NOT EXISTS FOR (t:Topic) REQUIRE t.name IS UNIQUE',
+        'CREATE CONSTRAINT organization_name IF NOT EXISTS FOR (o:Organization) REQUIRE o.name IS UNIQUE',
+      ];
+
+      for (const constraint of constraints) {
try { + await session.run(constraint); + } catch (e: any) { + // Ignore if constraint already exists or syntax not supported + if (!e.message.includes('already exists')) { + this.log('debug', `Constraint note: ${e.message}`); + } + } + } + + // Indexes for fast lookups + const indexes = [ + 'CREATE INDEX person_name IF NOT EXISTS FOR (p:Person) ON (p.name)', + 'CREATE INDEX conversation_start IF NOT EXISTS FOR (c:Conversation) ON (c.start)', + 'CREATE INDEX conversation_sentiment IF NOT EXISTS FOR (c:Conversation) ON (c.sentiment)', + ]; + + for (const index of indexes) { + try { + await session.run(index); + } catch (e: any) { + if (!e.message.includes('already exists')) { + this.log('debug', `Index note: ${e.message}`); + } + } + } + + this.log('info', 'Neo4j schema initialized (constraints + indexes)'); + } finally { + await session.close(); + } + } + + // ========== Lifecycle Hooks ========== + + /** + * After a vCon is created, map it into the Neo4j graph + */ + async afterCreate(vcon: VCon, context: RequestContext): Promise { + if (!this.driver) return; + + const session = this.getSession(); + if (!session) return; + + try { + await session.executeWrite(async (tx: ManagedTransaction) => { + // 1. Create Conversation node + await this.createConversationNode(tx, vcon); + + // 2. Create Person nodes + PARTICIPATED_IN relationships + const personIds = await this.createPartyNodes(tx, vcon); + + // 3. Create CONTACTED relationships between all participants + await this.createContactedRelationships(tx, personIds, vcon.uuid); + + // 4. Extract and create Topic nodes from analysis + await this.createTopicNodes(tx, vcon); + + // 5. 
Create Analysis nodes + await this.createAnalysisNodes(tx, vcon); + }); + + this.log('debug', `Mapped vCon ${vcon.uuid} to graph`); + } catch (err: any) { + this.log('error', `Failed to map vCon ${vcon.uuid} to graph: ${err.message}`); + } finally { + await session.close(); + } + } + + /** + * After a vCon is deleted, remove it from the graph + */ + async afterDelete(uuid: string, context: RequestContext): Promise { + if (!this.driver) return; + + const session = this.getSession(); + if (!session) return; + + try { + await session.executeWrite(async (tx: ManagedTransaction) => { + // Detach delete the conversation and its exclusive relationships + await tx.run( + `MATCH (c:Conversation {uuid: $uuid}) + DETACH DELETE c`, + { uuid } + ); + }); + this.log('debug', `Removed vCon ${uuid} from graph`); + } finally { + await session.close(); + } + } + + // ========== Graph Construction ========== + + private async createConversationNode(tx: ManagedTransaction, vcon: VCon): Promise { + // Calculate total duration + const totalDuration = vcon.dialog?.reduce((sum, d) => sum + (d.duration || 0), 0) || 0; + + // Extract sentiment from analysis if present + const sentimentAnalysis = vcon.analysis?.find(a => a.type === 'sentiment'); + let sentiment: number | null = null; + if (sentimentAnalysis?.body) { + try { + const parsed = JSON.parse(sentimentAnalysis.body); + sentiment = parsed.overall ?? parsed.score ?? 
null; + } catch { /* not JSON or no score */ } + } + + await tx.run( + `MERGE (c:Conversation {uuid: $uuid}) + SET c.subject = $subject, + c.start = $start, + c.duration = $duration, + c.sentiment = $sentiment, + c.party_count = $partyCount, + c.dialog_count = $dialogCount, + c.analysis_count = $analysisCount, + c.created_at = $createdAt, + c.source = $source`, + { + uuid: vcon.uuid, + subject: vcon.subject || null, + start: vcon.created_at, + duration: totalDuration, + sentiment, + partyCount: vcon.parties?.length || 0, + dialogCount: vcon.dialog?.length || 0, + analysisCount: vcon.analysis?.length || 0, + createdAt: vcon.created_at, + source: 'vcon-mcp', + } + ); + this.nodeCount++; + } + + private async createPartyNodes(tx: ManagedTransaction, vcon: VCon): Promise { + const personIds: string[] = []; + + for (const party of (vcon.parties || [])) { + const person = this.partyToPerson(party); + if (!person.identifier) continue; + + await tx.run( + `MERGE (p:Person {identifier: $identifier}) + SET p.name = COALESCE($name, p.name), + p.email = COALESCE($email, p.email), + p.phone = COALESCE($phone, p.phone) + WITH p + MATCH (c:Conversation {uuid: $uuid}) + MERGE (p)-[r:PARTICIPATED_IN]->(c) + SET r.timestamp = $timestamp`, + { + identifier: person.identifier, + name: person.name || null, + email: person.email || null, + phone: person.phone || null, + uuid: vcon.uuid, + timestamp: vcon.created_at, + } + ); + + personIds.push(person.identifier); + this.nodeCount++; + this.relCount++; + } + + return personIds; + } + + private async createContactedRelationships( + tx: ManagedTransaction, + personIds: string[], + conversationUuid: string + ): Promise { + // Create CONTACTED edges between all pairs of participants. + // Uses canonical ordering (smaller ID -> larger ID) with a DIRECTED + // relationship to avoid duplicate edges and self-loops from + // Neo4j's undirected MERGE behavior. 
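The dedupe-plus-canonical-ordering rule described in the comment above can be exercised in isolation. The sketch below is illustrative only; `uniquePairs` is a hypothetical free function, not part of the plugin:

```typescript
// Illustrative sketch (not part of the plugin): the same dedupe +
// canonical-ordering rule used for CONTACTED edges, as a pure function.
function uniquePairs(ids: string[]): Array<[string, string]> {
  const unique = [...new Set(ids)]; // drop duplicates to prevent self-loops
  const pairs: Array<[string, string]> = [];
  for (let i = 0; i < unique.length; i++) {
    for (let j = i + 1; j < unique.length; j++) {
      // Canonical ordering: smaller identifier always on the left
      pairs.push(
        unique[i] < unique[j] ? [unique[i], unique[j]] : [unique[j], unique[i]]
      );
    }
  }
  return pairs;
}
```

Because each pair is emitted in a fixed order, a later MERGE on the same two people always matches the same directed edge, so `contact_count` accumulates instead of creating duplicate relationships.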
+ if (personIds.length < 2) return; + + // Deduplicate identifiers to prevent self-loops + const uniqueIds = [...new Set(personIds)]; + if (uniqueIds.length < 2) return; + + for (let i = 0; i < uniqueIds.length; i++) { + for (let j = i + 1; j < uniqueIds.length; j++) { + // Canonical ordering: always smaller -> larger + const [idFrom, idTo] = uniqueIds[i] < uniqueIds[j] + ? [uniqueIds[i], uniqueIds[j]] + : [uniqueIds[j], uniqueIds[i]]; + + await tx.run( + `MATCH (a:Person {identifier: $idFrom}), (b:Person {identifier: $idTo}) + MERGE (a)-[r:CONTACTED]->(b) + SET r.last_conversation = $uuid, + r.contact_count = COALESCE(r.contact_count, 0) + 1`, + { + idFrom, + idTo, + uuid: conversationUuid, + } + ); + this.relCount++; + } + } + } + + private async createTopicNodes(tx: ManagedTransaction, vcon: VCon): Promise<void> { + // Extract topics from analysis summaries or subject + const topics: { name: string; confidence: number }[] = []; + + // Check analysis for topic extraction + for (const analysis of (vcon.analysis || [])) { + if (analysis.type === 'topics' && analysis.body) { + try { + const parsed = JSON.parse(analysis.body); + if (Array.isArray(parsed)) { + for (const t of parsed) { + // Guard: skip entries with no usable label instead of crashing on undefined + const topicName = typeof t === 'string' ? t : (t.name || t.topic); + if (!topicName) continue; + topics.push({ + name: topicName.toLowerCase(), + confidence: typeof t === 'object' ?
(t.confidence || 0.8) : 0.8, + }); + } + } + } catch { /* not parseable */ } + } + } + + // Fallback: use subject as a topic if no topics extracted + if (topics.length === 0 && vcon.subject) { + topics.push({ name: vcon.subject.toLowerCase(), confidence: 0.5 }); + } + + for (const topic of topics) { + await tx.run( + `MERGE (t:Topic {name: $name}) + WITH t + MATCH (c:Conversation {uuid: $uuid}) + MERGE (c)-[r:HAS_TOPIC]->(t) + SET r.confidence = $confidence`, + { + name: topic.name, + uuid: vcon.uuid, + confidence: topic.confidence, + } + ); + this.nodeCount++; + this.relCount++; + } + } + + private async createAnalysisNodes(tx: ManagedTransaction, vcon: VCon): Promise { + for (const analysis of (vcon.analysis || [])) { + const analysisId = `${vcon.uuid}:${analysis.type}:${analysis.vendor}`; + + await tx.run( + `MERGE (a:Analysis {id: $id}) + SET a.type = $type, + a.vendor = $vendor, + a.product = $product + WITH a + MATCH (c:Conversation {uuid: $uuid}) + MERGE (c)-[:ANALYZED_BY]->(a)`, + { + id: analysisId, + type: analysis.type, + vendor: analysis.vendor, + product: analysis.product || null, + uuid: vcon.uuid, + } + ); + this.nodeCount++; + this.relCount++; + } + } + + // ========== MCP Tools ========== + + registerTools(): Tool[] { + return [ + { + name: 'neo4j_query', + description: 'Run a read-only Cypher query against the vCon knowledge graph. Use for finding repeat callers, topic clusters, contact networks, and escalation patterns.', + inputSchema: { + type: 'object' as const, + properties: { + query: { + type: 'string', + description: 'Cypher query to execute (read-only). 
Example: MATCH (p:Person)-[r:PARTICIPATED_IN]->(c:Conversation) RETURN p.name, count(c) as calls ORDER BY calls DESC LIMIT 10', + }, + params: { + type: 'object', + description: 'Optional query parameters', + }, + }, + required: ['query'], + }, + }, + { + name: 'neo4j_insights', + description: 'Get pre-built graph insights: repeat_callers, topic_clusters, escalation_paths, contact_network, or graph_stats.', + inputSchema: { + type: 'object' as const, + properties: { + insight: { + type: 'string', + enum: ['repeat_callers', 'topic_clusters', 'escalation_paths', 'contact_network', 'graph_stats'], + description: 'Which insight to retrieve', + }, + limit: { + type: 'number', + description: 'Max results to return (default: 20)', + }, + }, + required: ['insight'], + }, + }, + ]; + } + + async handleToolCall(toolName: string, args: any, context: RequestContext): Promise<any> { + if (toolName === 'neo4j_query') { + return this.executeQuery(args.query, args.params); + } + if (toolName === 'neo4j_insights') { + return this.getInsight(args.insight, args.limit || 20); + } + return null; + } + + // ========== Query Execution ========== + + private async executeQuery(query: string, params?: Record<string, any>): Promise<any> { + if (!this.driver) { + return { error: 'Neo4j not connected' }; + } + + // Safety: reject write clauses anywhere in the query, not just at the start + // (executeRead below also enforces read-only access server-side) + if (/\b(CREATE|DELETE|DETACH|DROP|MERGE|SET|REMOVE|FOREACH|LOAD)\b/i.test(query)) { + return { error: 'Only read queries are allowed through this tool' }; + } + + const session = this.getSession(); + if (!session) return { error: 'Failed to create session' }; + + try { + const result = await session.executeRead(async (tx) => { + return tx.run(query, params || {}); + }); + + return { + records: result.records.map(r => r.toObject()), + summary: { + resultCount: result.records.length, + queryType:
result.summary.queryType, + }, + }; + } catch (err: any) { + return { error: err.message }; + } finally { + await session.close(); + } + } + + private async getInsight(insight: string, limit: number): Promise { + const queries: Record = { + repeat_callers: ` + MATCH (p:Person)-[r:PARTICIPATED_IN]->(c:Conversation) + WITH p, count(c) as calls, collect(c.subject) as subjects, + avg(c.sentiment) as avgSentiment + WHERE calls > 1 + RETURN p.name as name, p.identifier as id, calls, + subjects[0..5] as recentSubjects, avgSentiment + ORDER BY calls DESC LIMIT $limit`, + + topic_clusters: ` + MATCH (c:Conversation)-[:HAS_TOPIC]->(t:Topic) + WITH t, count(c) as conversations, avg(c.sentiment) as avgSentiment + RETURN t.name as topic, conversations, avgSentiment + ORDER BY conversations DESC LIMIT $limit`, + + escalation_paths: ` + MATCH (p:Person)-[:PARTICIPATED_IN]->(c:Conversation) + WHERE c.sentiment IS NOT NULL AND c.sentiment < 0.4 + WITH p, c ORDER BY c.start + RETURN p.name as person, p.identifier as id, + collect({subject: c.subject, sentiment: c.sentiment, date: c.start}) as negativeConversations, + count(c) as escalationCount + ORDER BY escalationCount DESC LIMIT $limit`, + + contact_network: ` + MATCH (a:Person)-[r:CONTACTED]->(b:Person) + RETURN a.name as person1, b.name as person2, + r.contact_count as interactions, r.last_conversation as lastConversation + ORDER BY r.contact_count DESC LIMIT $limit`, + + graph_stats: ` + CALL { + MATCH (p:Person) RETURN 'Person' as label, count(p) as count + UNION ALL + MATCH (c:Conversation) RETURN 'Conversation' as label, count(c) as count + UNION ALL + MATCH (t:Topic) RETURN 'Topic' as label, count(t) as count + UNION ALL + MATCH (a:Analysis) RETURN 'Analysis' as label, count(a) as count + UNION ALL + MATCH ()-[r:PARTICIPATED_IN]->() RETURN 'PARTICIPATED_IN' as label, count(r) as count + UNION ALL + MATCH ()-[r:CONTACTED]->() RETURN 'CONTACTED' as label, count(r) as count + UNION ALL + MATCH ()-[r:HAS_TOPIC]->() RETURN 
'HAS_TOPIC' as label, count(r) as count + } + RETURN label, count`, + }; + + const query = queries[insight]; + if (!query) { + return { error: `Unknown insight: ${insight}. Valid: ${Object.keys(queries).join(', ')}` }; + } + + return this.executeQuery(query, { limit: neo4j.int(limit) }); + } + + // ========== Helpers ========== + + private partyToPerson(party: Party): PersonNode { + // Priority for identifier: tel > mailto > name + const identifier = party.tel || party.mailto || party.name || ''; + return { + name: party.name || party.tel || party.mailto || 'Unknown', + email: party.mailto, + phone: party.tel, + identifier, + }; + } + + private getSession(): Session | null { + if (!this.driver) return null; + return this.driver.session({ database: this.database }); + } + + private log(level: string, message: string): void { + const prefix = '[neo4j-consumer]'; + if (level === 'error') console.error(`${prefix} ${message}`); + else if (level === 'warn') console.warn(`${prefix} ${message}`); + else if (level === 'debug' && this.verbose) console.log(`${prefix} ${message}`); + else if (level === 'info') console.log(`${prefix} ${message}`); + } +} + +export default Neo4jConsumerPlugin; diff --git a/hackathon/plugins/siprec-adapter/index.ts b/hackathon/plugins/siprec-adapter/index.ts new file mode 100644 index 0000000..b0ab063 --- /dev/null +++ b/hackathon/plugins/siprec-adapter/index.ts @@ -0,0 +1,568 @@ +/** + * SIPREC Adapter Plugin + * + * BASE CHALLENGE 1: Build the ingestion simulation folder drop + * + * Watches a folder for SIPREC-style recording files and metadata, + * parses them, and creates valid IETF vCon objects through VConService. 
+ * + * Supported file patterns: + * - {name}.xml → SIPREC metadata (participants, timestamps, session info) + * - {name}.wav / .mp3 / .ogg → Audio recording + * - {name}.txt → Pre-existing transcript (optional) + * - {name}.json → Direct vCon JSON (bypass SIPREC parsing) + * + * The adapter pairs .xml metadata with matching audio files by filename stem. + * When a Whisper sidecar is available, audio is sent for transcription. + * + * Drop folder default: ./hackathon/watch/siprec/ + */ + +import fs from 'fs'; +import path from 'path'; +import { createHash } from 'crypto'; +import { VConPlugin, RequestContext } from '../../src/hooks/plugin-interface.js'; +import { VCon, Party, Dialog, Analysis } from '../../src/types/vcon.js'; +import { Tool } from '@modelcontextprotocol/sdk/types.js'; + +// ============================================================================ +// Types +// ============================================================================ + +export interface SiprecAdapterConfig { + /** Folder to watch for incoming files */ + watchFolder?: string; + /** Poll interval in milliseconds (default: 3000) */ + pollInterval?: number; + /** Whisper sidecar URL for transcription */ + whisperUrl?: string; + /** Move processed files to this folder (default: ./hackathon/watch/processed/) */ + processedFolder?: string; + /** Move failed files here (default: ./hackathon/watch/failed/) */ + failedFolder?: string; + /** Enable verbose logging */ + verbose?: boolean; +} + +interface SiprecMetadata { + sessionId: string; + startTime: string; + endTime?: string; + callerNumber?: string; + callerName?: string; + calleeNumber?: string; + calleeName?: string; + direction?: string; + recordingFile?: string; +} + +interface PendingFile { + xmlPath?: string; + audioPath?: string; + transcriptPath?: string; + jsonPath?: string; + stem: string; +} + +// ============================================================================ +// SIPREC Adapter Plugin +// 
============================================================================ + +export class SiprecAdapterPlugin implements VConPlugin { + name = 'siprec-adapter'; + version = '1.0.0'; + + private watchFolder: string = ''; + private processedFolder: string = ''; + private failedFolder: string = ''; + private pollInterval: number = 3000; + private whisperUrl: string = ''; + private verbose: boolean = false; + private pollTimer: ReturnType | null = null; + private processing: boolean = false; + private processedCount: number = 0; + private vconService: any = null; // Set during initialize from config + + constructor(private config?: SiprecAdapterConfig) {} + + // ========== Lifecycle ========== + + async initialize(config?: any): Promise { + const merged = { ...this.config, ...config }; + + this.watchFolder = merged?.watchFolder + || process.env.SIPREC_WATCH_FOLDER + || './hackathon/watch/siprec'; + this.processedFolder = merged?.processedFolder + || process.env.SIPREC_PROCESSED_FOLDER + || './hackathon/watch/processed'; + this.failedFolder = merged?.failedFolder + || process.env.SIPREC_FAILED_FOLDER + || './hackathon/watch/failed'; + this.pollInterval = merged?.pollInterval + || parseInt(process.env.SIPREC_POLL_INTERVAL || '3000'); + this.whisperUrl = merged?.whisperUrl + || process.env.WHISPER_SIDECAR_URL + || 'http://localhost:8100'; + this.verbose = merged?.verbose + || process.env.SIPREC_VERBOSE === 'true' + || false; + + // Store vconService reference if passed via config + // (Will be injected by a loader script or setup extension) + this.vconService = merged?.vconService || null; + + // Ensure directories exist + this.ensureDir(this.watchFolder); + this.ensureDir(this.processedFolder); + this.ensureDir(this.failedFolder); + + this.log('info', `Watching folder: ${path.resolve(this.watchFolder)}`); + this.log('info', `Processed folder: ${path.resolve(this.processedFolder)}`); + this.log('info', `Whisper sidecar: ${this.whisperUrl}`); + + // Start polling + 
this.pollTimer = setInterval(() => this.poll(), this.pollInterval); + this.log('info', `Polling every ${this.pollInterval}ms`); + } + + async shutdown(): Promise { + if (this.pollTimer) { + clearInterval(this.pollTimer); + this.pollTimer = null; + } + this.log('info', `SIPREC adapter shut down (${this.processedCount} files processed)`); + } + + // ========== MCP Tools ========== + + registerTools(): Tool[] { + return [ + { + name: 'siprec_ingest_file', + description: 'Manually trigger ingestion of a SIPREC XML or JSON file from the watch folder. Useful for testing or re-processing.', + inputSchema: { + type: 'object' as const, + properties: { + filename: { + type: 'string', + description: 'Filename (not full path) in the watch folder to process', + }, + }, + required: ['filename'], + }, + }, + { + name: 'siprec_status', + description: 'Get SIPREC adapter status: watch folder path, pending files, processed count.', + inputSchema: { + type: 'object' as const, + properties: {}, + }, + }, + ]; + } + + async handleToolCall(toolName: string, args: any, context: RequestContext): Promise { + if (toolName === 'siprec_status') { + return this.getStatus(); + } + if (toolName === 'siprec_ingest_file') { + return this.manualIngest(args.filename); + } + return null; + } + + // ========== Polling ========== + + private async poll(): Promise { + if (this.processing) return; + this.processing = true; + + try { + const files = this.scanWatchFolder(); + if (files.length === 0) return; + + this.log('debug', `Found ${files.length} file group(s) to process`); + + for (const group of files) { + try { + await this.processFileGroup(group); + this.moveToProcessed(group); + this.processedCount++; + } catch (err: any) { + this.log('error', `Failed to process ${group.stem}: ${err.message}`); + this.moveToFailed(group); + } + } + } catch (err: any) { + this.log('error', `Poll error: ${err.message}`); + } finally { + this.processing = false; + } + } + + // ========== File Scanning ========== + + 
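The folder-scanning step that follows groups sibling files by filename stem. A standalone sketch of that pairing rule (assumption: `groupByStem` is a hypothetical helper that operates on bare filenames rather than the filesystem):

```typescript
// Sketch of the stem-based grouping rule used by scanWatchFolder
// (illustrative helper, not part of the plugin).
interface FileGroup {
  stem: string;
  xmlPath?: string;
  audioPath?: string;
  transcriptPath?: string;
  jsonPath?: string;
}

function groupByStem(filenames: string[]): FileGroup[] {
  const groups = new Map<string, FileGroup>();
  for (const name of filenames) {
    const dot = name.lastIndexOf('.');
    const ext = dot >= 0 ? name.slice(dot).toLowerCase() : '';
    const stem = dot >= 0 ? name.slice(0, dot) : name;
    const group = groups.get(stem) ?? { stem };
    groups.set(stem, group);
    if (ext === '.xml') group.xmlPath = name;
    else if (['.wav', '.mp3', '.ogg', '.webm', '.m4a'].includes(ext)) group.audioPath = name;
    else if (ext === '.txt') group.transcriptPath = name;
    else if (ext === '.json') group.jsonPath = name;
  }
  // Only groups with SIPREC metadata (.xml) or a direct vCon (.json) are processable
  return [...groups.values()].filter(g => g.xmlPath || g.jsonPath);
}
```

A stray `.wav` or `.txt` with no matching `.xml` or `.json` simply stays in the watch folder until its metadata arrives, which is what makes the drop-folder workflow order-independent.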
private scanWatchFolder(): PendingFile[] { + if (!fs.existsSync(this.watchFolder)) return []; + + const entries = fs.readdirSync(this.watchFolder); + const groups = new Map(); + + for (const entry of entries) { + const fullPath = path.join(this.watchFolder, entry); + if (!fs.statSync(fullPath).isFile()) continue; + + const ext = path.extname(entry).toLowerCase(); + const stem = path.basename(entry, ext); + + if (!groups.has(stem)) { + groups.set(stem, { stem }); + } + const group = groups.get(stem)!; + + if (ext === '.xml') group.xmlPath = fullPath; + else if (['.wav', '.mp3', '.ogg', '.webm', '.m4a'].includes(ext)) group.audioPath = fullPath; + else if (ext === '.txt') group.transcriptPath = fullPath; + else if (ext === '.json') group.jsonPath = fullPath; + } + + // Only return groups that have at least an XML or JSON file + return Array.from(groups.values()).filter(g => g.xmlPath || g.jsonPath); + } + + // ========== Processing ========== + + private async processFileGroup(group: PendingFile): Promise { + this.log('info', `Processing: ${group.stem}`); + + // Path 1: Direct JSON vCon + if (group.jsonPath) { + const raw = fs.readFileSync(group.jsonPath, 'utf-8'); + const vconData = JSON.parse(raw); + await this.createVCon(vconData, 'json-drop'); + return; + } + + // Path 2: SIPREC XML + optional audio + if (group.xmlPath) { + const xmlContent = fs.readFileSync(group.xmlPath, 'utf-8'); + const metadata = this.parseSiprecXml(xmlContent); + + // Build parties + const parties: Party[] = []; + if (metadata.callerNumber || metadata.callerName) { + parties.push({ + tel: metadata.callerNumber, + name: metadata.callerName || metadata.callerNumber, + }); + } + if (metadata.calleeNumber || metadata.calleeName) { + parties.push({ + tel: metadata.calleeNumber, + name: metadata.calleeName || metadata.calleeNumber, + }); + } + + // Build dialog + const dialogs: Dialog[] = []; + + if (group.audioPath) { + // Read audio file for hash and optional inline + const audioBuffer = 
fs.readFileSync(group.audioPath); + const hash = createHash('sha512').update(audioBuffer).digest('hex'); + const ext = path.extname(group.audioPath).toLowerCase(); + const mimeMap: Record = { + '.wav': 'audio/x-wav', + '.mp3': 'audio/mpeg', + '.ogg': 'audio/ogg', + '.webm': 'audio/webm', + '.m4a': 'audio/mp4', + }; + + // For hackathon: inline base64 for small files (<1MB), external ref otherwise + if (audioBuffer.length < 1_000_000) { + dialogs.push({ + type: 'recording', + start: metadata.startTime || new Date().toISOString(), + duration: this.estimateDuration(metadata), + parties: parties.length >= 2 ? [0, 1] : [0], + mediatype: mimeMap[ext] || 'audio/x-wav', + body: audioBuffer.toString('base64url'), + encoding: 'base64url', + }); + } else { + dialogs.push({ + type: 'recording', + start: metadata.startTime || new Date().toISOString(), + duration: this.estimateDuration(metadata), + parties: parties.length >= 2 ? [0, 1] : [0], + mediatype: mimeMap[ext] || 'audio/x-wav', + url: `file://${path.resolve(group.audioPath)}`, + content_hash: `sha512-${hash}`, + }); + } + } + + // Build analysis (transcript) + const analyses: Analysis[] = []; + + // Check for pre-existing transcript + if (group.transcriptPath) { + const transcript = fs.readFileSync(group.transcriptPath, 'utf-8'); + analyses.push({ + type: 'transcript', + vendor: 'pre-existing', + body: transcript, + encoding: 'none', + }); + } else if (group.audioPath) { + // Try Whisper sidecar for transcription + const transcript = await this.transcribeAudio(group.audioPath); + if (transcript) { + analyses.push({ + type: 'transcript', + vendor: 'openai-whisper', + product: 'whisper-large-v3', + body: transcript, + encoding: 'none', + }); + } + } + + // Create the vCon + const vconData: Partial = { + subject: metadata.sessionId ? `SIPREC Session ${metadata.sessionId}` : `SIPREC Call ${group.stem}`, + parties, + dialog: dialogs.length > 0 ? dialogs : undefined, + analysis: analyses.length > 0 ? 
analyses : undefined, + }; + + await this.createVCon(vconData, 'siprec'); + } + } + + // ========== SIPREC XML Parsing ========== + + /** + * Parse SIPREC metadata XML. + * Supports a simplified SIPREC-like format for hackathon demo. + * + * Expected XML structure: + * + * <session id="abc-123"> + * <start-time>2026-03-07T10:30:00Z</start-time> + * <end-time>2026-03-07T10:35:42Z</end-time> + * <caller> + * <number>+15551234567</number> + * <name>John Doe</name> + * </caller> + * <callee> + * <number>+15559876543</number> + * <name>Support Agent</name> + * </callee> + * <direction>inbound</direction> + * </session> + */ + private parseSiprecXml(xml: string): SiprecMetadata { + const result: SiprecMetadata = { + sessionId: '', + startTime: new Date().toISOString(), + }; + + // Simple regex-based XML extraction (no heavy XML lib needed for hackathon) + result.sessionId = this.extractXmlAttr(xml, 'session', 'id') + || this.extractXmlValue(xml, 'session-id') + || `siprec-${Date.now()}`; + result.startTime = this.extractXmlValue(xml, 'start-time') + || this.extractXmlValue(xml, 'startTime') + || new Date().toISOString(); + result.endTime = this.extractXmlValue(xml, 'end-time') + || this.extractXmlValue(xml, 'endTime'); + result.callerNumber = this.extractNestedXmlValue(xml, 'caller', 'number') + || this.extractXmlValue(xml, 'caller-number'); + result.callerName = this.extractNestedXmlValue(xml, 'caller', 'name') + || this.extractNestedXmlValue(xml, 'caller', 'n') + || this.extractXmlValue(xml, 'caller-name'); + result.calleeNumber = this.extractNestedXmlValue(xml, 'callee', 'number') + || this.extractXmlValue(xml, 'callee-number'); + result.calleeName = this.extractNestedXmlValue(xml, 'callee', 'name') + || this.extractNestedXmlValue(xml, 'callee', 'n') + || this.extractXmlValue(xml, 'callee-name'); + result.direction = this.extractXmlValue(xml, 'direction'); + + return result; + } + + private extractXmlValue(xml: string, tag: string): string | undefined { + const regex = new RegExp(`<${tag}[^>]*>([^<]+)</${tag}>`, 'i'); + const match = xml.match(regex); + return match?.[1]?.trim(); + } + + private extractXmlAttr(xml: string, tag: string, attr: string): string | undefined 
{ + const regex = new RegExp(`<${tag}[^>]*${attr}="([^"]+)"`, 'i'); + const match = xml.match(regex); + return match?.[1]?.trim(); + } + + private extractNestedXmlValue(xml: string, parent: string, child: string): string | undefined { + const parentRegex = new RegExp(`<${parent}[^>]*>([\\s\\S]*?)</${parent}>`, 'i'); + const parentMatch = xml.match(parentRegex); + if (!parentMatch) return undefined; + return this.extractXmlValue(parentMatch[1], child); + } + + // ========== Whisper Sidecar ========== + + private async transcribeAudio(audioPath: string): Promise<string | null> { + try { + const audioBuffer = fs.readFileSync(audioPath); + const ext = path.extname(audioPath).toLowerCase().replace('.', ''); + + const formData = new FormData(); + formData.append('file', new Blob([audioBuffer]), path.basename(audioPath)); + formData.append('model', 'large-v3'); + + const response = await fetch(`${this.whisperUrl}/transcribe`, { + method: 'POST', + body: formData, + signal: AbortSignal.timeout(60000), // 60s timeout + }); + + if (!response.ok) { + this.log('warn', `Whisper sidecar returned ${response.status} — skipping transcription`); + return null; + } + + const result = await response.json() as any; + return result.text || result.transcript || null; + } catch (err: any) { + this.log('debug', `Whisper sidecar unavailable: ${err.message} — skipping transcription`); + return null; + } + } + + // ========== vCon Creation ========== + + private async createVCon(vconData: Partial<VCon>, source: string): Promise<void> { + if (this.vconService) { + // Use VConService (triggers all hooks: MQTT, Neo4j, etc.) 
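The regex-based extraction above can be exercised standalone. This is a minimal sketch of the same technique as hypothetical free functions (not the plugin's private methods):

```typescript
// Illustrative free-function versions of regex-based XML field extraction.
function extractValue(xml: string, tag: string): string | undefined {
  const match = xml.match(new RegExp(`<${tag}[^>]*>([^<]+)</${tag}>`, 'i'));
  return match?.[1]?.trim();
}

function extractAttr(xml: string, tag: string, attr: string): string | undefined {
  const match = xml.match(new RegExp(`<${tag}[^>]*${attr}="([^"]+)"`, 'i'));
  return match?.[1]?.trim();
}

const sample =
  '<session id="abc-123"><caller><number>+15551234567</number></caller></session>';
```

This approach is deliberately naive (no entity decoding, no CDATA, no namespaces); it is adequate for controlled demo XML, but a real SIPREC feed should go through a proper XML parser.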
+ const result = await this.vconService.create(vconData, { + requestContext: { purpose: `siprec-adapter:${source}` }, + source: `siprec-${source}`, + }); + this.log('info', `Created vCon ${result.uuid} via VConService (source: ${source})`); + } else { + // Fallback: POST to REST API + const response = await fetch('http://localhost:3000/api/v1/vcons', { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify(vconData), + }); + const result = await response.json() as any; + if (result.success) { + this.log('info', `Created vCon ${result.uuid} via REST API (source: ${source})`); + } else { + throw new Error(`REST API error: ${JSON.stringify(result)}`); + } + } + } + + // ========== File Management ========== + + private moveToProcessed(group: PendingFile): void { + this.moveFiles(group, this.processedFolder); + } + + private moveToFailed(group: PendingFile): void { + this.moveFiles(group, this.failedFolder); + } + + private moveFiles(group: PendingFile, destFolder: string): void { + const paths = [group.xmlPath, group.audioPath, group.transcriptPath, group.jsonPath]; + for (const p of paths) { + if (p && fs.existsSync(p)) { + const dest = path.join(destFolder, path.basename(p)); + try { + fs.renameSync(p, dest); + } catch { + // Cross-device move fallback + fs.copyFileSync(p, dest); + fs.unlinkSync(p); + } + } + } + } + + // ========== Helpers ========== + + private estimateDuration(metadata: SiprecMetadata): number { + if (metadata.startTime && metadata.endTime) { + const start = new Date(metadata.startTime).getTime(); + const end = new Date(metadata.endTime).getTime(); + return Math.max(0, (end - start) / 1000); + } + return 0; + } + + private ensureDir(dir: string): void { + if (!fs.existsSync(dir)) { + fs.mkdirSync(dir, { recursive: true }); + } + } + + private getStatus(): any { + const pending = fs.existsSync(this.watchFolder) + ? 
fs.readdirSync(this.watchFolder).length + : 0; + return { + watchFolder: path.resolve(this.watchFolder), + processedFolder: path.resolve(this.processedFolder), + pendingFiles: pending, + processedCount: this.processedCount, + whisperUrl: this.whisperUrl, + polling: this.pollTimer !== null, + pollInterval: this.pollInterval, + }; + } + + private async manualIngest(filename: string): Promise { + const fullPath = path.join(this.watchFolder, filename); + if (!fs.existsSync(fullPath)) { + return { error: `File not found: ${fullPath}` }; + } + const ext = path.extname(filename).toLowerCase(); + const stem = path.basename(filename, ext); + const group: PendingFile = { stem }; + if (ext === '.xml') group.xmlPath = fullPath; + else if (ext === '.json') group.jsonPath = fullPath; + else return { error: 'Only .xml and .json files can be manually ingested' }; + + // Check for companion files + const audioExts = ['.wav', '.mp3', '.ogg', '.webm', '.m4a']; + for (const aExt of audioExts) { + const audioPath = path.join(this.watchFolder, stem + aExt); + if (fs.existsSync(audioPath)) { group.audioPath = audioPath; break; } + } + const txtPath = path.join(this.watchFolder, stem + '.txt'); + if (fs.existsSync(txtPath)) group.transcriptPath = txtPath; + + await this.processFileGroup(group); + this.moveToProcessed(group); + this.processedCount++; + return { success: true, stem, processed: true }; + } + + private log(level: string, message: string): void { + const prefix = '[siprec-adapter]'; + if (level === 'error') console.error(`${prefix} ${message}`); + else if (level === 'warn') console.warn(`${prefix} ${message}`); + else if (level === 'debug' && this.verbose) console.log(`${prefix} ${message}`); + else if (level === 'info') console.log(`${prefix} ${message}`); + } +} + +export default SiprecAdapterPlugin; diff --git a/hackathon/plugins/teams-adapter/index.ts b/hackathon/plugins/teams-adapter/index.ts new file mode 100644 index 0000000..849b56c --- /dev/null +++ 
b/hackathon/plugins/teams-adapter/index.ts @@ -0,0 +1,442 @@ +/** + * Teams Adapter Plugin + * + * BASE CHALLENGE 4: MS Teams Post-Call Extractor + * + * Production architecture: MSAL auth → poll MS Graph /communications/callRecords + * → extract participants, transcript, recording → map to vCon. + * + * Hackathon mode: Watches a folder for exported Teams call record JSON files + * (matching MS Graph API schema) and ingests them as vCons. The adapter code + * is production-shaped — swap the file watcher for Graph API polling and it works. + * + * Drop folder: ./hackathon/watch/teams/ + * Sample data: ./hackathon/sample-data/teams-*.json + * + * MS Graph callRecord schema: https://learn.microsoft.com/en-us/graph/api/resources/callrecords-callrecord + */ + +import fs from 'fs'; +import path from 'path'; +import { createHash } from 'crypto'; +import { VConPlugin, RequestContext } from '../../src/hooks/plugin-interface.js'; +import { VCon, Party, Dialog, Analysis } from '../../src/types/vcon.js'; +import { Tool } from '@modelcontextprotocol/sdk/types.js'; + +// ============================================================================ +// Types +// ============================================================================ + +export interface TeamsAdapterConfig { + /** Folder to watch for Teams export JSON files */ + watchFolder?: string; + /** Poll interval in milliseconds (default: 3000) */ + pollInterval?: number; + /** Processed files destination */ + processedFolder?: string; + /** Failed files destination */ + failedFolder?: string; + /** Enable verbose logging */ + verbose?: boolean; +} + +/** MS Graph callRecord participant */ +interface TeamsParticipant { + user?: { + id?: string; + displayName?: string; + userPrincipalName?: string; + }; + phone?: { + id?: string; + displayName?: string; + }; + role?: string; +} + +/** MS Graph callRecord (simplified) */ +interface TeamsCallRecord { + id: string; + version?: number; + type?: string; // 'peerToPeer' 
| 'groupCall' + modalities?: string[]; // ['audio'] | ['audio', 'video'] + lastModifiedDateTime?: string; + startDateTime: string; + endDateTime: string; + joinWebUrl?: string; + organizer?: TeamsParticipant; + participants: TeamsParticipant[]; + sessions?: Array<{ + id?: string; + startDateTime?: string; + endDateTime?: string; + modalities?: string[]; + }>; + /** Simulated data for hackathon โ€” not in real Graph API */ + _simulated?: { + subject?: string; + transcript?: string; + recording_url?: string; + summary?: string; + }; +} + +// ============================================================================ +// Teams Adapter Plugin +// ============================================================================ + +export class TeamsAdapterPlugin implements VConPlugin { + name = 'teams-adapter'; + version = '1.0.0'; + + private watchFolder: string = ''; + private processedFolder: string = ''; + private failedFolder: string = ''; + private pollInterval: number = 3000; + private verbose: boolean = false; + private timer: ReturnType | null = null; + private processing: boolean = false; + private apiBaseUrl: string = 'http://localhost:3000/api/v1'; + + // Metrics + private stats = { imported: 0, failed: 0, lastImport: null as string | null }; + + constructor(private config?: TeamsAdapterConfig) {} + + // ========== Lifecycle ========== + + async initialize(config?: any): Promise { + const merged = { ...this.config, ...config }; + + const projectRoot = process.cwd(); + this.watchFolder = merged?.watchFolder + || process.env.TEAMS_WATCH_FOLDER + || path.join(projectRoot, 'hackathon', 'watch', 'teams'); + this.processedFolder = merged?.processedFolder + || path.join(projectRoot, 'hackathon', 'watch', 'processed'); + this.failedFolder = merged?.failedFolder + || path.join(projectRoot, 'hackathon', 'watch', 'failed'); + this.pollInterval = merged?.pollInterval || 3000; + this.verbose = merged?.verbose || process.env.TEAMS_VERBOSE === 'true'; + this.apiBaseUrl = 
process.env.REST_API_URL || `http://localhost:${process.env.MCP_HTTP_PORT || 3000}/api/v1`; + + // Ensure directories + for (const dir of [this.watchFolder, this.processedFolder, this.failedFolder]) { + if (!fs.existsSync(dir)) fs.mkdirSync(dir, { recursive: true }); + } + + // Start polling + this.timer = setInterval(() => this.pollForFiles(), this.pollInterval); + this.log(`Teams Adapter initialized โ€” watching ${this.watchFolder}`); + this.log(`Production mode: Replace file watcher with MS Graph API polling`); + } + + async shutdown(): Promise { + if (this.timer) { clearInterval(this.timer); this.timer = null; } + this.log('Teams Adapter shut down'); + } + + // ========== File Polling ========== + + private async pollForFiles(): Promise { + if (this.processing) return; + this.processing = true; + + try { + const files = fs.readdirSync(this.watchFolder) + .filter(f => f.endsWith('.json')) + .map(f => path.join(this.watchFolder, f)); + + for (const filePath of files) { + try { + await this.importFile(filePath); + this.moveFile(filePath, this.processedFolder); + this.stats.imported++; + this.stats.lastImport = new Date().toISOString(); + } catch (err) { + this.log(`Failed to import ${path.basename(filePath)}: ${err}`, true); + this.moveFile(filePath, this.failedFolder); + this.stats.failed++; + } + } + } catch (err) { + // Watch folder not readable โ€” silently skip + } + + this.processing = false; + } + + // ========== Core Import Logic ========== + + async importFile(filePath: string): Promise { + const raw = fs.readFileSync(filePath, 'utf-8'); + const callRecord: TeamsCallRecord = JSON.parse(raw); + + this.log(`Importing Teams call: ${callRecord.id} (${callRecord.type || 'unknown'})`); + + const vcon = this.mapCallRecordToVcon(callRecord); + + // Submit via REST API to trigger all hooks + const res = await fetch(`${this.apiBaseUrl}/vcons`, { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify(vcon), + }); + + const data = 
await res.json() as any; + if (!data.success) { + throw new Error(`API error: ${data.message || JSON.stringify(data)}`); + } + + this.log(`Created vCon ${data.uuid} from Teams call ${callRecord.id} (${data.duration_ms}ms)`); + return data.uuid; + } + + // ========== Teams → vCon Mapping ========== + + private mapCallRecordToVcon(record: TeamsCallRecord): Partial<VCon> { + const startTime = record.startDateTime; + const endTime = record.endDateTime; + const durationSec = (new Date(endTime).getTime() - new Date(startTime).getTime()) / 1000; + + // Map participants → parties + const parties: Party[] = record.participants.map(p => { + const user = p.user || p.phone; + const party: Party = { + name: user?.displayName || 'Unknown', + }; + if (user?.userPrincipalName) party.mailto = user.userPrincipalName; + if (user?.id) party.uuid = user.id; + return party; + }); + + // Dialog — recording reference or text placeholder + const dialog: Dialog[] = []; + if (record._simulated?.recording_url) { + dialog.push({ + type: 'recording', + start: startTime, + duration: durationSec > 0 ? durationSec : 300, + parties: parties.map((_, i) => i), + mediatype: 'audio/wav', + url: record._simulated.recording_url, + }); + } else { + dialog.push({ + type: 'text', + start: startTime, + duration: durationSec > 0 ?
durationSec : 300, + parties: parties.map((_, i) => i), + mediatype: 'text/plain', + }); + } + + // Analysis entries + const analysis: Analysis[] = []; + + // Transcript + if (record._simulated?.transcript) { + analysis.push({ + type: 'transcript', + vendor: 'teams-adapter', + body: record._simulated.transcript, + encoding: 'none', + dialog: 0, + }); + } + + // Summary + if (record._simulated?.summary) { + analysis.push({ + type: 'summary', + vendor: 'teams-adapter', + body: record._simulated.summary, + encoding: 'none', + }); + } + + // Keyword-based sentiment + topics + if (record._simulated?.transcript) { + const lower = record._simulated.transcript.toLowerCase(); + + // Sentiment + const posWords = ['thank', 'great', 'appreciate', 'excellent', 'resolved', 'happy', 'easier', 'steady']; + const negWords = ['frustrat', 'angry', 'terrible', 'worst', 'upset', 'complain', 'disconnect', 'impact', 'runaround']; + const posCount = posWords.filter(w => lower.includes(w)).length; + const negCount = negWords.filter(w => lower.includes(w)).length; + const score = Math.max(0, Math.min(1, 0.5 + (posCount - negCount) * 0.1)); + analysis.push({ + type: 'sentiment', + vendor: 'keyword-heuristic', + body: JSON.stringify({ overall: parseFloat(score.toFixed(2)), positive: posCount, negative: negCount }), + encoding: 'json', + }); + + // Topic extraction + const topicPatterns = [ + { pattern: /bill|charg|refund|payment|invoice|subscript|credit/i, topic: 'Billing' }, + { pattern: /technical|vpn|error|bug|crash|slow|broken|update|version|config/i, topic: 'Technical Support' }, + { pattern: /escalat|manager|supervisor|complain|process/i, topic: 'Escalation' }, + { pattern: /cancel|close|terminat|switch/i, topic: 'Cancellation' }, + { pattern: /security|fraud|flag|detect|approval/i, topic: 'Security' }, + { pattern: /remote|work from home|connectivity|network/i, topic: 'Remote Work' }, + ]; + const topics = topicPatterns.filter(t => t.pattern.test(lower)).map(t => t.topic); + if 
(topics.length > 0) { + analysis.push({ + type: 'topics', + vendor: 'keyword-heuristic', + body: JSON.stringify(topics), + encoding: 'json', + }); + } + } + + // Build subject + const subject = record._simulated?.subject + || `Teams ${record.type === 'groupCall' ? 'Meeting' : 'Call'}: ${parties.map(p => p.name).join(', ')}`; + + // Teams-specific metadata as analysis + analysis.push({ + type: 'source-metadata', + vendor: 'teams-adapter', + body: JSON.stringify({ + source: 'microsoft-teams', + callRecordId: record.id, + callType: record.type, + modalities: record.modalities, + joinWebUrl: record.joinWebUrl, + organizerName: record.organizer?.user?.displayName, + organizerEmail: record.organizer?.user?.userPrincipalName, + sessionCount: record.sessions?.length || 0, + }), + encoding: 'json', + }); + + return { + vcon: '0.3.0', + created_at: new Date().toISOString(), + subject, + parties, + dialog, + analysis, + }; + } + + // ========== MCP Tools ========== + + registerTools(): Tool[] { + return [ + { + name: 'teams_import_file', + description: 'Import a Teams call record JSON file and create a vCon. File must follow MS Graph callRecord schema.', + inputSchema: { + type: 'object' as const, + properties: { + filePath: { + type: 'string', + description: 'Absolute path to Teams call record JSON file', + }, + }, + required: ['filePath'], + }, + }, + { + name: 'teams_import_folder', + description: 'Import all Teams call record JSON files from a folder. 
Defaults to the sample-data directory.', + inputSchema: { + type: 'object' as const, + properties: { + folderPath: { + type: 'string', + description: 'Path to folder containing Teams JSON files (default: sample-data)', + }, + }, + }, + }, + { + name: 'teams_status', + description: 'Get Teams adapter status: watch folder, stats, connection info.', + inputSchema: { + type: 'object' as const, + properties: {}, + }, + }, + ]; + } + + async handleToolCall(toolName: string, args: any): Promise<any> { + switch (toolName) { + case 'teams_import_file': { + const filePath = args.filePath; + if (!filePath || !fs.existsSync(filePath)) { + return { error: `File not found: ${filePath}` }; + } + try { + const uuid = await this.importFile(filePath); + return { success: true, uuid, source: path.basename(filePath) }; + } catch (err: any) { + return { error: err.message }; + } + } + + case 'teams_import_folder': { + const folderPath = args.folderPath + || path.join(process.cwd(), 'hackathon', 'sample-data'); + if (!fs.existsSync(folderPath)) { + return { error: `Folder not found: ${folderPath}` }; + } + const files = fs.readdirSync(folderPath).filter(f => f.startsWith('teams-') && f.endsWith('.json')); + const results: any[] = []; + for (const file of files) { + try { + const uuid = await this.importFile(path.join(folderPath, file)); + results.push({ file, success: true, uuid }); + } catch (err: any) { + results.push({ file, success: false, error: err.message }); + } + } + return { imported: results.filter(r => r.success).length, failed: results.filter(r => !r.success).length, results }; + } + + case 'teams_status': { + return { + adapter: this.name, + version: this.version, + watchFolder: this.watchFolder, + pollInterval: this.pollInterval, + stats: this.stats, + productionNotes: 'Replace file watcher with MS Graph API: GET /communications/callRecords + MSAL auth', + }; + } + + default: + return { error: `Unknown tool: ${toolName}` }; + } + } + + // ========== Utilities ========== + +
private moveFile(src: string, destDir: string): void { + try { + const dest = path.join(destDir, path.basename(src)); + fs.renameSync(src, dest); + } catch (err) { + this.log(`Failed to move ${src}: ${err}`, true); + } + } + + private log(msg: string, isError = false): void { + const prefix = `[TeamsAdapter]`; + if (isError) { + console.error(`${prefix} ${msg}`); + } else if (this.verbose || process.env.TEAMS_VERBOSE === 'true') { + console.log(`${prefix} ${msg}`); + } + } +} + +// ============================================================================ +// Default export for plugin loader +// ============================================================================ +export default TeamsAdapterPlugin; diff --git a/hackathon/plugins/whatsapp-adapter/index.ts b/hackathon/plugins/whatsapp-adapter/index.ts new file mode 100644 index 0000000..50b5b1f --- /dev/null +++ b/hackathon/plugins/whatsapp-adapter/index.ts @@ -0,0 +1,426 @@ +/** + * WhatsApp Adapter Plugin + * + * WOW FACTOR 2: WhatsApp Chat Ingestion + * + * Mode A (hackathon): Parse exported WhatsApp .txt chat files → vCon + * Mode B (production): WhatsApp Business Cloud API webhooks → real-time vCon + * + * WhatsApp export format (common patterns): + * [M/D/YY, H:MM AM] - Name: Message + * M/D/YY, H:MM AM - Name: Message + * [DD/MM/YYYY, HH:MM:SS] Name: Message + * + * Drop folder: ./hackathon/watch/whatsapp/ + * Sample data: ./hackathon/sample-data/whatsapp-*.txt + */ + +import fs from 'fs'; +import path from 'path'; +import { VConPlugin, RequestContext } from '../../src/hooks/plugin-interface.js'; +import { VCon, Party, Dialog, Analysis } from '../../src/types/vcon.js'; +import { Tool } from '@modelcontextprotocol/sdk/types.js'; + +// ============================================================================ +// Types +// ============================================================================ + +export interface WhatsAppAdapterConfig { + watchFolder?: string; + pollInterval?: number; +
processedFolder?: string; + failedFolder?: string; + verbose?: boolean; +} + +interface ChatMessage { + timestamp: string; + sender: string; + text: string; +} + +// ============================================================================ +// WhatsApp Chat Parser +// ============================================================================ + +function parseWhatsAppExport(content: string): { messages: ChatMessage[]; participants: string[] } { + const messages: ChatMessage[] = []; + const participants = new Set<string>(); + + // Common WhatsApp export patterns + const patterns = [ + // US format: M/D/YY, H:MM AM - Name: Message + /^(\d{1,2}\/\d{1,2}\/\d{2,4}),?\s+(\d{1,2}:\d{2}(?::\d{2})?\s*(?:AM|PM)?)\s*[-–]\s*(.+?):\s*([\s\S]*?)$/, + // Bracketed: [M/D/YY, H:MM:SS AM] Name: Message + /^\[(\d{1,2}\/\d{1,2}\/\d{2,4}),?\s+(\d{1,2}:\d{2}(?::\d{2})?\s*(?:AM|PM)?)\]\s*(.+?):\s*([\s\S]*?)$/, + // EU format: DD/MM/YYYY, HH:MM - Name: Message + /^(\d{1,2}[\/.]\d{1,2}[\/.]\d{2,4}),?\s+(\d{1,2}:\d{2}(?::\d{2})?)\s*[-–]\s*(.+?):\s*([\s\S]*?)$/, + ]; + + const lines = content.split('\n'); + let currentMessage: ChatMessage | null = null; + + for (const line of lines) { + const trimmed = line.trim(); + if (!trimmed) continue; + + // Skip system messages + if (trimmed.includes('Messages and calls are end-to-end encrypted') || + trimmed.includes('created group') || + trimmed.includes('added you') || + trimmed.includes('changed the subject') || + trimmed.includes('left the group')) { + continue; + } + + let matched = false; + for (const pattern of patterns) { + const match = trimmed.match(pattern); + if (match) { + // Save previous message + if (currentMessage) messages.push(currentMessage); + + const [, dateStr, timeStr, sender, text] = match; + const cleanSender = sender.trim(); + participants.add(cleanSender); + + // Parse date + let isoDate: string; + try { + const combined = `${dateStr} ${timeStr}`.replace(/\./g, '/'); + const d = new Date(combined); + isoDate =
isNaN(d.getTime()) ? new Date().toISOString() : d.toISOString(); + } catch { + isoDate = new Date().toISOString(); + } + + currentMessage = { timestamp: isoDate, sender: cleanSender, text: text.trim() }; + matched = true; + break; + } + } + + // Continuation of previous message (multi-line) + if (!matched && currentMessage) { + currentMessage.text += '\n' + trimmed; + } + } + + // Don't forget the last message + if (currentMessage) messages.push(currentMessage); + + return { messages, participants: Array.from(participants) }; +} + +function buildVconFromChat(messages: ChatMessage[], participants: string[], filename: string): Partial<VCon> { + if (messages.length === 0) { + throw new Error('No messages found in chat export'); + } + + const startTime = messages[0].timestamp; + const endTime = messages[messages.length - 1].timestamp; + const durationSec = Math.max(60, (new Date(endTime).getTime() - new Date(startTime).getTime()) / 1000); + + // Build parties + const parties: Party[] = participants.map(name => { + const isAgent = /agent/i.test(name) || /support/i.test(name); + return { + name, + meta: { role: isAgent ?
'agent' : 'customer' }, + } as any; + }); + + // Build transcript from messages + const transcript = messages.map(m => `${m.sender}: ${m.text}`).join('\n\n'); + + // Build analysis + const analysis: Analysis[] = []; + + analysis.push({ + type: 'transcript', + vendor: 'whatsapp-adapter', + body: transcript, + encoding: 'none', + dialog: 0, + }); + + // Keyword sentiment + const lower = transcript.toLowerCase(); + const posWords = ['thank', 'great', 'appreciate', 'excellent', 'resolved', 'happy', 'perfect', 'awesome', 'wonderful', 'pleasure']; + const negWords = ['frustrat', 'angry', 'terrible', 'worst', 'upset', 'complain', 'unacceptable', 'ridiculous', 'disappointed', 'problem']; + const posCount = posWords.filter(w => lower.includes(w)).length; + const negCount = negWords.filter(w => lower.includes(w)).length; + const score = Math.max(0, Math.min(1, 0.5 + (posCount - negCount) * 0.1)); + analysis.push({ + type: 'sentiment', + vendor: 'keyword-heuristic', + body: JSON.stringify({ overall: parseFloat(score.toFixed(2)), positive: posCount, negative: negCount }), + encoding: 'json', + }); + + // Topic extraction + const topicPatterns = [ + { pattern: /bill|charg|refund|payment|invoice|subscript|credit|pricing|plan|cost/i, topic: 'Billing' }, + { pattern: /technical|vpn|error|bug|crash|slow|broken|update|version|config|connect|disconnect/i, topic: 'Technical Support' }, + { pattern: /cancel|close|terminat|switch|competi/i, topic: 'Cancellation' }, + { pattern: /upgrade|premium|professional|enterprise|tier|feature/i, topic: 'Upgrade' }, + { pattern: /ship|deliver|track|order/i, topic: 'Shipping' }, + { pattern: /password|login|access|secur|auth|lock|unlock/i, topic: 'Account Security' }, + { pattern: /remote|work from home|connectivity|network/i, topic: 'Remote Work' }, + ]; + const topics = topicPatterns.filter(t => t.pattern.test(lower)).map(t => t.topic); + if (topics.length > 0) { + analysis.push({ + type: 'topics', + vendor: 'keyword-heuristic', + body: 
JSON.stringify(topics), + encoding: 'json', + }); + } + + // Source metadata + analysis.push({ + type: 'source-metadata', + vendor: 'whatsapp-adapter', + body: JSON.stringify({ + source: 'whatsapp-export', + filename, + messageCount: messages.length, + participantCount: participants.length, + firstMessage: startTime, + lastMessage: endTime, + }), + encoding: 'json', + }); + + // Auto-generate subject + const subject = `WhatsApp: ${participants.join(' ↔ ')}${topics.length > 0 ? ` — ${topics[0]}` : ''}`; + + return { + vcon: '0.3.0', + created_at: new Date().toISOString(), + subject, + parties, + dialog: [{ + type: 'text', + start: startTime, + duration: durationSec, + parties: parties.map((_, i) => i), + mediatype: 'text/plain', + }], + analysis, + }; +} + +// ============================================================================ +// WhatsApp Adapter Plugin +// ============================================================================ + +export class WhatsAppAdapterPlugin implements VConPlugin { + name = 'whatsapp-adapter'; + version = '1.0.0'; + + private watchFolder: string = ''; + private processedFolder: string = ''; + private failedFolder: string = ''; + private pollInterval: number = 3000; + private verbose: boolean = false; + private timer: ReturnType<typeof setInterval> | null = null; + private processing: boolean = false; + private apiBaseUrl: string = 'http://localhost:3000/api/v1'; + + private stats = { imported: 0, failed: 0, lastImport: null as string | null }; + + constructor(private config?: WhatsAppAdapterConfig) {} + + async initialize(config?: any): Promise<void> { + const merged = { ...this.config, ...config }; + const projectRoot = process.cwd(); + + this.watchFolder = merged?.watchFolder + || process.env.WHATSAPP_WATCH_FOLDER + || path.join(projectRoot, 'hackathon', 'watch', 'whatsapp'); + this.processedFolder = merged?.processedFolder + || path.join(projectRoot, 'hackathon', 'watch', 'processed'); + this.failedFolder = merged?.failedFolder + ||
path.join(projectRoot, 'hackathon', 'watch', 'failed'); + this.pollInterval = merged?.pollInterval || 3000; + this.verbose = merged?.verbose || process.env.WHATSAPP_VERBOSE === 'true'; + this.apiBaseUrl = process.env.REST_API_URL || `http://localhost:${process.env.MCP_HTTP_PORT || 3000}/api/v1`; + + for (const dir of [this.watchFolder, this.processedFolder, this.failedFolder]) { + if (!fs.existsSync(dir)) fs.mkdirSync(dir, { recursive: true }); + } + + this.timer = setInterval(() => this.pollForFiles(), this.pollInterval); + this.log(`WhatsApp Adapter initialized — watching ${this.watchFolder}`); + } + + async shutdown(): Promise<void> { + if (this.timer) { clearInterval(this.timer); this.timer = null; } + this.log('WhatsApp Adapter shut down'); + } + + private async pollForFiles(): Promise<void> { + if (this.processing) return; + this.processing = true; + + try { + const files = fs.readdirSync(this.watchFolder) + .filter(f => f.endsWith('.txt')) + .map(f => path.join(this.watchFolder, f)); + + for (const filePath of files) { + try { + await this.importFile(filePath); + this.moveFile(filePath, this.processedFolder); + this.stats.imported++; + this.stats.lastImport = new Date().toISOString(); + } catch (err) { + this.log(`Failed to import ${path.basename(filePath)}: ${err}`, true); + this.moveFile(filePath, this.failedFolder); + this.stats.failed++; + } + } + } catch {} + + this.processing = false; + } + + async importFile(filePath: string): Promise<string> { + const content = fs.readFileSync(filePath, 'utf-8'); + const filename = path.basename(filePath); + + this.log(`Parsing WhatsApp export: ${filename}`); + const { messages, participants } = parseWhatsAppExport(content); + + if (messages.length === 0) { + throw new Error('No messages parsed from file'); + } + + this.log(`Found ${messages.length} messages from ${participants.length} participants`); + const vcon = buildVconFromChat(messages, participants, filename); + + const res = await fetch(`${this.apiBaseUrl}/vcons`, { + method:
'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify(vcon), + }); + + const data = await res.json() as any; + if (!data.success) throw new Error(`API error: ${data.message || JSON.stringify(data)}`); + + this.log(`Created vCon ${data.uuid} from WhatsApp chat (${messages.length} msgs, ${data.duration_ms}ms)`); + return data.uuid; + } + + registerTools(): Tool[] { + return [ + { + name: 'whatsapp_import_file', + description: 'Import a WhatsApp exported chat .txt file and create a vCon.', + inputSchema: { + type: 'object' as const, + properties: { + filePath: { type: 'string', description: 'Path to WhatsApp exported .txt chat file' }, + }, + required: ['filePath'], + }, + }, + { + name: 'whatsapp_import_folder', + description: 'Import all WhatsApp .txt files from a folder. Defaults to sample-data directory.', + inputSchema: { + type: 'object' as const, + properties: { + folderPath: { type: 'string', description: 'Path to folder (default: sample-data)' }, + }, + }, + }, + { + name: 'whatsapp_parse_preview', + description: 'Parse a WhatsApp chat export and return a preview without creating a vCon.', + inputSchema: { + type: 'object' as const, + properties: { + content: { type: 'string', description: 'Raw WhatsApp chat export text' }, + }, + required: ['content'], + }, + }, + { + name: 'whatsapp_status', + description: 'Get WhatsApp adapter status.', + inputSchema: { type: 'object' as const, properties: {} }, + }, + ]; + } + + async handleToolCall(toolName: string, args: any): Promise<any> { + switch (toolName) { + case 'whatsapp_import_file': { + if (!args.filePath || !fs.existsSync(args.filePath)) { + return { error: `File not found: ${args.filePath}` }; + } + try { + const uuid = await this.importFile(args.filePath); + return { success: true, uuid, source: path.basename(args.filePath) }; + } catch (err: any) { + return { error: err.message }; + } + } + + case 'whatsapp_import_folder': { + const folderPath = args.folderPath ||
path.join(process.cwd(), 'hackathon', 'sample-data'); + if (!fs.existsSync(folderPath)) return { error: `Folder not found: ${folderPath}` }; + const files = fs.readdirSync(folderPath).filter(f => f.startsWith('whatsapp-') && f.endsWith('.txt')); + const results: any[] = []; + for (const file of files) { + try { + const uuid = await this.importFile(path.join(folderPath, file)); + results.push({ file, success: true, uuid }); + } catch (err: any) { + results.push({ file, success: false, error: err.message }); + } + } + return { imported: results.filter(r => r.success).length, failed: results.filter(r => !r.success).length, results }; + } + + case 'whatsapp_parse_preview': { + try { + const { messages, participants } = parseWhatsAppExport(args.content); + return { + success: true, + messageCount: messages.length, + participants, + firstMessage: messages[0]?.timestamp, + lastMessage: messages[messages.length - 1]?.timestamp, + preview: messages.slice(0, 5).map(m => ({ sender: m.sender, text: m.text.slice(0, 80) })), + }; + } catch (err: any) { + return { error: err.message }; + } + } + + case 'whatsapp_status': + return { adapter: this.name, version: this.version, watchFolder: this.watchFolder, stats: this.stats }; + + default: + return { error: `Unknown tool: ${toolName}` }; + } + } + + private moveFile(src: string, destDir: string): void { + try { fs.renameSync(src, path.join(destDir, path.basename(src))); } catch {} + } + + private log(msg: string, isError = false): void { + const prefix = '[WhatsAppAdapter]'; + if (isError) console.error(`${prefix} ${msg}`); + else if (this.verbose || process.env.WHATSAPP_VERBOSE === 'true') console.log(`${prefix} ${msg}`); + } +} + +// Exported parser for use from ingest page +export { parseWhatsAppExport, buildVconFromChat }; +export default WhatsAppAdapterPlugin; diff --git a/hackathon/sample-data/billing-complaint.txt b/hackathon/sample-data/billing-complaint.txt new file mode 100644 index 0000000..417d4ad --- /dev/null +++ 
b/hackathon/sample-data/billing-complaint.txt @@ -0,0 +1,21 @@ +Agent Mike Rivera: Thank you for calling TechCorp support, this is Mike. How can I help you today? + +Sarah Johnson: Hi Mike, I'm calling because I was charged twice for my subscription last month. I noticed two charges of $49.99 on my credit card statement. + +Agent Mike Rivera: I'm sorry to hear that, Sarah. Let me pull up your account right away. Can you confirm the email address on your account? + +Sarah Johnson: It's sarah.johnson@email.com. + +Agent Mike Rivera: Thank you. I can see the issue here. It looks like there was a system error during our billing cycle update on February 15th that caused duplicate charges for several customers. I sincerely apologize for the inconvenience. + +Sarah Johnson: This is really frustrating. I've been a customer for three years and this is the second billing issue I've had. + +Agent Mike Rivera: I completely understand your frustration, and I want to make this right. I'm going to process a refund for the duplicate charge of $49.99 right now. You should see it back on your card within 3 to 5 business days. I'm also going to add a $10 credit to your account for the inconvenience. + +Sarah Johnson: Okay, that sounds fair. Thank you for taking care of it quickly. + +Agent Mike Rivera: Absolutely. Is there anything else I can help you with today? + +Sarah Johnson: No, that's all. Thanks Mike. + +Agent Mike Rivera: You're welcome, Sarah. Have a great day! 
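The billing transcript above is the kind of input the adapters' keyword sentiment heuristic scores. A minimal standalone sketch of that scoring (same clamp formula as the plugin code; the word lists here are abbreviated and `keywordSentiment` is an illustrative name, not part of this diff):

```typescript
// Keyword sentiment sketch mirroring the adapters' heuristic:
// count substring hits from fixed word lists, then clamp
// 0.5 + 0.1 * (positive - negative) into the [0, 1] range.
const POS = ['thank', 'great', 'appreciate', 'excellent', 'resolved', 'happy'];
const NEG = ['frustrat', 'angry', 'terrible', 'worst', 'upset', 'complain'];

function keywordSentiment(text: string): { overall: number; positive: number; negative: number } {
  const lower = text.toLowerCase();
  const positive = POS.filter(w => lower.includes(w)).length;
  const negative = NEG.filter(w => lower.includes(w)).length;
  // toFixed(2) keeps the score stable against floating-point drift
  const overall = parseFloat(Math.max(0, Math.min(1, 0.5 + (positive - negative) * 0.1)).toFixed(2));
  return { overall, positive, negative };
}
```

Note the word stems ('frustrat') deliberately match multiple inflections ('frustrated', 'frustrating'), which is why substring matching is used instead of whole-word matching.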
diff --git a/hackathon/sample-data/billing-complaint.xml b/hackathon/sample-data/billing-complaint.xml new file mode 100644 index 0000000..a57af0c --- /dev/null +++ b/hackathon/sample-data/billing-complaint.xml @@ -0,0 +1,16 @@ + + + + 2026-03-07T10:30:00Z + 2026-03-07T10:35:42Z + + + +15551234567 + Sarah Johnson + + + +15559876001 + Agent Mike Rivera + + inbound + diff --git a/hackathon/sample-data/demo-call.xml b/hackathon/sample-data/demo-call.xml new file mode 100644 index 0000000..4d81046 --- /dev/null +++ b/hackathon/sample-data/demo-call.xml @@ -0,0 +1,16 @@ + + + + 2026-03-09T09:15:00Z + 2026-03-09T09:17:30Z + + + +15553847291 + Robert Martinez + + + +15551000200 + Agent Jennifer Walsh + + inbound + diff --git a/hackathon/sample-data/followup-positive.txt b/hackathon/sample-data/followup-positive.txt new file mode 100644 index 0000000..1dbb738 --- /dev/null +++ b/hackathon/sample-data/followup-positive.txt @@ -0,0 +1,17 @@ +Agent Mike Rivera: TechCorp support, this is Mike. How can I help you? + +Sarah Johnson: Hi Mike, it's Sarah Johnson again. I'm calling to let you know that the refund came through. I saw it on my statement this morning. + +Agent Mike Rivera: That's great to hear, Sarah! I'm glad it was processed quickly. + +Sarah Johnson: Yes, and I also wanted to say thank you for the account credit. I actually used it to upgrade to the premium plan. The new features are really nice. + +Agent Mike Rivera: That's wonderful! The premium plan has some great tools. If you need any help getting set up with the advanced analytics dashboard, we have some tutorial videos I can send you. + +Sarah Johnson: That would be helpful, yes please. + +Agent Mike Rivera: I'll send those to your email right now. Is there anything else I can help with? + +Sarah Johnson: No, that's it. Thanks again Mike, you've been really helpful through all of this. + +Agent Mike Rivera: My pleasure, Sarah. Enjoy the premium features and don't hesitate to reach out anytime! 
diff --git a/hackathon/sample-data/followup-positive.xml b/hackathon/sample-data/followup-positive.xml new file mode 100644 index 0000000..18974fc --- /dev/null +++ b/hackathon/sample-data/followup-positive.xml @@ -0,0 +1,16 @@ + + + + 2026-03-07T14:00:00Z + 2026-03-07T14:04:15Z + + + +15551234567 + Sarah Johnson + + + +15559876001 + Agent Mike Rivera + + inbound + diff --git a/hackathon/sample-data/teams-escalation-review.json b/hackathon/sample-data/teams-escalation-review.json new file mode 100644 index 0000000..aad7337 --- /dev/null +++ b/hackathon/sample-data/teams-escalation-review.json @@ -0,0 +1,60 @@ +{ + "id": "e3f3a7d1-2b4c-4e8f-9a1d-3c5e7f9b1d3e", + "version": 1, + "type": "groupCall", + "modalities": ["audio"], + "lastModifiedDateTime": "2026-03-06T14:32:00Z", + "startDateTime": "2026-03-06T14:00:00Z", + "endDateTime": "2026-03-06T14:28:30Z", + "joinWebUrl": "https://teams.microsoft.com/l/meetup-join/19%3ameeting_abc123", + "organizer": { + "user": { + "id": "usr-teams-001", + "displayName": "Rachel Kim", + "userPrincipalName": "rachel.kim@techcorp.com" + } + }, + "participants": [ + { + "user": { + "id": "usr-teams-001", + "displayName": "Rachel Kim", + "userPrincipalName": "rachel.kim@techcorp.com" + }, + "role": "organizer" + }, + { + "user": { + "id": "usr-teams-002", + "displayName": "Tom Bradley", + "userPrincipalName": "tom.bradley@techcorp.com" + }, + "role": "attendee" + }, + { + "user": { + "id": "usr-teams-003", + "displayName": "Sarah Johnson", + "userPrincipalName": "sarah.johnson@email.com" + }, + "role": "attendee" + } + ], + "sessions": [ + { + "id": "sess-001", + "startDateTime": "2026-03-06T14:00:00Z", + "endDateTime": "2026-03-06T14:28:30Z", + "modalities": ["audio"], + "caller": { + "userAgent": { "applicationVersion": "1416/1.0.0.2026032300", "platform": "Windows" } + } + } + ], + "_simulated": { + "subject": "Q1 Customer Escalation Review — Sarah Johnson Account", + "transcript": "Rachel Kim: Let's start with the Sarah Johnson
escalation. Tom, can you walk us through the timeline?\n\nTom Bradley: Sure. Sarah first contacted us on March 3rd about a duplicate billing charge. Agent Mike Rivera handled the initial call and processed a refund, but the refund didn't go through because of a system flag on her account.\n\nSarah Johnson: Right, and then I called back two days later and was told there was no record of the refund being initiated. That's when I asked to speak with a manager.\n\nRachel Kim: I see. Tom, what happened with the system flag?\n\nTom Bradley: It turns out our fraud detection system flagged the refund because it exceeded the auto-approval threshold. It needed manual approval from finance, but that step was never communicated to the front-line agents.\n\nRachel Kim: That's a process gap we need to fix. Sarah, I want to sincerely apologize for the runaround. We're going to process your refund today with priority handling, and I'm adding a $25 credit to your account.\n\nSarah Johnson: I appreciate that, Rachel. I've been a customer for three years and honestly this experience made me consider switching providers.\n\nRachel Kim: I completely understand. Tom, I want you to draft a process update memo — any refund flagged by fraud detection should generate an automatic notification to the original agent and the customer within 24 hours.\n\nTom Bradley: Got it. I'll have that drafted by end of day.\n\nRachel Kim: Sarah, is there anything else we can do for you?\n\nSarah Johnson: No, I think that covers it. Thank you for taking this seriously.\n\nRachel Kim: Absolutely. We'll follow up via email once the refund is confirmed. Thank you everyone.", + "recording_url": "https://teams-recordings.blob.core.windows.net/recordings/call-e3f3a7d1.wav", + "summary": "Escalation review meeting regarding Sarah Johnson's unprocessed refund due to fraud detection flag. Process gap identified in refund approval workflow.
Resolution: priority refund + $25 credit, process update memo to be drafted." + } +} diff --git a/hackathon/sample-data/teams-vpn-support.json b/hackathon/sample-data/teams-vpn-support.json new file mode 100644 index 0000000..e4d3f3e --- /dev/null +++ b/hackathon/sample-data/teams-vpn-support.json @@ -0,0 +1,48 @@ +{ + "id": "b8c2d4e6-1a3f-5c7d-9e0b-2d4f6a8c0e2a", + "version": 1, + "type": "peerToPeer", + "modalities": ["audio"], + "lastModifiedDateTime": "2026-03-05T16:45:00Z", + "startDateTime": "2026-03-05T16:15:00Z", + "endDateTime": "2026-03-05T16:42:18Z", + "organizer": { + "user": { + "id": "usr-teams-004", + "displayName": "Agent Lisa Park", + "userPrincipalName": "lisa.park@techcorp.com" + } + }, + "participants": [ + { + "user": { + "id": "usr-teams-004", + "displayName": "Agent Lisa Park", + "userPrincipalName": "lisa.park@techcorp.com" + }, + "role": "organizer" + }, + { + "user": { + "id": "usr-teams-005", + "displayName": "David Chen", + "userPrincipalName": "david.chen@gmail.com" + }, + "role": "attendee" + } + ], + "sessions": [ + { + "id": "sess-002", + "startDateTime": "2026-03-05T16:15:00Z", + "endDateTime": "2026-03-05T16:42:18Z", + "modalities": ["audio"] + } + ], + "_simulated": { + "subject": "Technical Support — David Chen VPN Configuration", + "transcript": "Agent Lisa Park: Hi David, thanks for reaching out via Teams. I see you're having issues with VPN connectivity?\n\nDavid Chen: Yes, ever since the last update my VPN client keeps disconnecting every 10 to 15 minutes. I'm working remotely and it's really impacting my productivity.\n\nAgent Lisa Park: I understand how frustrating that must be. Let me check a few things. Are you running the latest version of the VPN client, version 4.2.1?\n\nDavid Chen: I think so. Let me check... it says version 4.2.0.\n\nAgent Lisa Park: There's the issue. Version 4.2.0 has a known bug with keep-alive packets on certain network configurations. We pushed a patch in 4.2.1 last week.
Let me walk you through the update.\n\nDavid Chen: Okay, I'm ready.\n\nAgent Lisa Park: First, open your system tray and right-click the VPN icon. Select 'Check for Updates'. It should find version 4.2.1. Go ahead and install it — it won't interrupt your current connection.\n\nDavid Chen: Updating now... it's downloading. Okay, it's asking me to restart the client.\n\nAgent Lisa Park: Go ahead and restart it. I'll wait.\n\nDavid Chen: Done. It says version 4.2.1 now. Should I test the connection?\n\nAgent Lisa Park: Yes, connect to the corporate VPN and let's wait a few minutes to see if the disconnection issue persists.\n\nDavid Chen: Connected. It's been about 5 minutes now and it's holding steady. Before it would have dropped by now.\n\nAgent Lisa Park: Excellent! The update should resolve the keep-alive issue permanently. If you experience any more disconnections, please reach out again and we'll investigate further. I'm also going to create a ticket so we can track this.\n\nDavid Chen: Great, thank you Lisa. That was much easier than I expected.\n\nAgent Lisa Park: Happy to help! Have a great rest of your day.", + "recording_url": "https://teams-recordings.blob.core.windows.net/recordings/call-b8c2d4e6.wav", + "summary": "VPN disconnection issue resolved by updating client from v4.2.0 to v4.2.1 which fixed a known keep-alive packet bug. Customer confirmed stable connection after update." + } +} diff --git a/hackathon/sample-data/tech-support-escalation.txt b/hackathon/sample-data/tech-support-escalation.txt new file mode 100644 index 0000000..4c010c4 --- /dev/null +++ b/hackathon/sample-data/tech-support-escalation.txt @@ -0,0 +1,21 @@ +Agent Lisa Park: TechCorp support, this is Lisa. How can I assist you? + +David Chen: Hi Lisa, I've been having a serious issue with your cloud platform. My production database has been experiencing intermittent outages for the past three days and your tier one support hasn't been able to resolve it.
+ +Agent Lisa Park: I'm sorry to hear that, David. That sounds very frustrating, especially for a production system. Let me review the ticket history. Can you give me your account ID? + +David Chen: It's CORP-88421. And honestly, I'm running out of patience. We've lost revenue because of this downtime. My CTO is asking me why we're still on your platform. + +Agent Lisa Park: I understand the urgency and the business impact. I can see the previous tickets. It looks like the issue was initially diagnosed as a network configuration problem, but the suggested fixes haven't resolved it. I'm going to escalate this to our senior infrastructure team right now. + +David Chen: That's what the last agent said too. I need someone who can actually fix this, not just pass it along. + +Agent Lisa Park: You're absolutely right, and I want to handle this differently. I'm creating a priority one escalation with a direct line to our infrastructure lead, James. I'm also going to give you my direct extension so you have a single point of contact. You won't have to explain this again. + +David Chen: Okay. When can I expect to hear back? + +Agent Lisa Park: James will contact you within two hours. If you don't hear from him by 1:30 PM, call me directly at extension 4472. I'm going to personally follow up to make sure this gets resolved today. + +David Chen: Alright, I appreciate you taking this seriously. Thank you Lisa. + +Agent Lisa Park: Of course. We value your business and I'll make sure this gets the attention it deserves. 
diff --git a/hackathon/sample-data/tech-support-escalation.xml b/hackathon/sample-data/tech-support-escalation.xml new file mode 100644 index 0000000..da4cb1c --- /dev/null +++ b/hackathon/sample-data/tech-support-escalation.xml @@ -0,0 +1,16 @@ + + + + 2026-03-07T11:15:00Z + 2026-03-07T11:22:30Z + + + +15552223344 + David Chen + + + +15559876002 + Agent Lisa Park + + inbound + diff --git a/hackathon/sample-data/whatsapp-account-lockout.txt b/hackathon/sample-data/whatsapp-account-lockout.txt new file mode 100644 index 0000000..68dabcd --- /dev/null +++ b/hackathon/sample-data/whatsapp-account-lockout.txt @@ -0,0 +1,11 @@ +3/5/26, 10:15 AM - Sarah Johnson: Hi, is this TechCorp support? +3/5/26, 10:16 AM - Agent Mike Rivera: Hi Sarah! Yes, this is Mike from TechCorp support. How can I help you today? +3/5/26, 10:16 AM - Sarah Johnson: I'm having trouble logging into my account. It says my password is incorrect but I'm sure it's right. +3/5/26, 10:17 AM - Agent Mike Rivera: I'm sorry to hear that. Let me check your account. Can you confirm the email address you're using to log in? +3/5/26, 10:17 AM - Sarah Johnson: sarah.johnson@email.com +3/5/26, 10:18 AM - Agent Mike Rivera: Thanks Sarah. I can see your account was temporarily locked after 5 failed login attempts. This is a security measure. I'm going to unlock it now. +3/5/26, 10:19 AM - Agent Mike Rivera: Done! Your account is unlocked. You should be able to log in now with your existing password. If you'd like, I can also send you a password reset link. +3/5/26, 10:20 AM - Sarah Johnson: Let me try... yes! I'm in! Thank you so much Mike. +3/5/26, 10:20 AM - Agent Mike Rivera: Great! Just so you know, the lockout happens after 5 failed attempts within 30 minutes. If it happens again, you can wait 30 minutes or reach out to us. +3/5/26, 10:21 AM - Sarah Johnson: Good to know. Thanks for the quick help! +3/5/26, 10:21 AM - Agent Mike Rivera: You're welcome! 
Have a great day Sarah ๐Ÿ˜Š diff --git a/hackathon/sample-data/whatsapp-plan-upgrade.txt b/hackathon/sample-data/whatsapp-plan-upgrade.txt new file mode 100644 index 0000000..e7b0803 --- /dev/null +++ b/hackathon/sample-data/whatsapp-plan-upgrade.txt @@ -0,0 +1,16 @@ +3/6/26, 2:30 PM - David Chen: Hey, I have a question about upgrading my plan +3/6/26, 2:32 PM - Agent Lisa Park: Hi David! Of course, I'd be happy to help with that. What would you like to know? +3/6/26, 2:33 PM - David Chen: I'm currently on the Basic plan at $29/mo. What's the difference with Professional? +3/6/26, 2:34 PM - Agent Lisa Park: Great question! The Professional plan is $49/mo and includes: +3/6/26, 2:34 PM - Agent Lisa Park: - Priority VPN routing (dedicated bandwidth) +3/6/26, 2:34 PM - Agent Lisa Park: - 24/7 phone support (vs business hours only) +3/6/26, 2:35 PM - Agent Lisa Park: - Advanced analytics dashboard +3/6/26, 2:35 PM - Agent Lisa Park: - Up to 10 user seats (vs 3 on Basic) +3/6/26, 2:36 PM - David Chen: The priority VPN routing sounds great. I've been having some connectivity issues working remotely. +3/6/26, 2:37 PM - Agent Lisa Park: Actually, I helped you with a VPN issue recently! The 4.2.1 update should have fixed the disconnections. But priority routing would give you even better performance during peak hours. +3/6/26, 2:38 PM - David Chen: Right, that fix worked perfectly. Can I do a trial of Professional before committing? +3/6/26, 2:39 PM - Agent Lisa Park: Absolutely! I can set you up with a 14-day free trial. No credit card change needed โ€” if you like it, we'll switch your billing after the trial. If not, you'll stay on Basic automatically. +3/6/26, 2:40 PM - David Chen: Perfect, let's do that! +3/6/26, 2:41 PM - Agent Lisa Park: Done! Your Professional trial is active now. You'll see the new features in your dashboard within a few minutes. I'll also send you a getting-started guide via email. +3/6/26, 2:42 PM - David Chen: Awesome, thanks Lisa! 
Really appreciate the help. +3/6/26, 2:42 PM - Agent Lisa Park: My pleasure David! Feel free to message anytime if you have questions during the trial. ๐Ÿ˜Š diff --git a/hackathon/sidecar/README.md b/hackathon/sidecar/README.md new file mode 100644 index 0000000..417f577 --- /dev/null +++ b/hackathon/sidecar/README.md @@ -0,0 +1,47 @@ +# vCon Intelligence Platform โ€” Python Sidecars + +GPU-accelerated services for the hackathon pipeline. + +## Whisper Sidecar (port 8100) + +Audio transcription via OpenAI Whisper on your RTX 4090. + +### Setup + +```powershell +cd E:\data\code\claudecode\vcon-mcp\hackathon\sidecar + +# Install PyTorch with CUDA first (if not already installed): +pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124 + +# Install remaining dependencies: +pip install -r requirements.txt + +# Run: +python whisper_service.py +``` + +### Environment Variables + +| Variable | Default | Description | +|----------|---------|-------------| +| `WHISPER_MODEL` | `medium` | Whisper model size: `tiny`, `base`, `small`, `medium`, `large-v3` | +| `WHISPER_HOST` | `0.0.0.0` | Bind host | +| `WHISPER_PORT` | `8100` | Bind port | + +### Endpoints + +- `POST /transcribe` โ€” Upload audio file โ†’ `{ "text": "...", "segments": [...] }` +- `GET /health` โ€” Model status + GPU memory info + +### VRAM Usage (approximate) + +| Model | VRAM | Quality | +|-------|------|---------| +| tiny | ~1 GB | Low | +| base | ~1 GB | Fair | +| small | ~2 GB | Good | +| medium | ~5 GB | Very Good | +| large-v3 | ~10 GB | Best | + +Default is `medium` (~5GB) to leave room for LLaMA + embeddings. diff --git a/hackathon/sidecar/generate_demo_audio.py b/hackathon/sidecar/generate_demo_audio.py new file mode 100644 index 0000000..4f00311 --- /dev/null +++ b/hackathon/sidecar/generate_demo_audio.py @@ -0,0 +1,109 @@ +""" +Generate a demo audio file for Whisper sidecar testing. 
+Creates a short simulated customer service call as an MP3 file +using edge-tts (Microsoft Edge TTS — free, high quality). + +Usage: + pip install edge-tts + python generate_demo_audio.py + +Output: demo-call.mp3 + demo-call.xml in hackathon/sample-data/ +""" + +import asyncio +import os +import sys + +# Check for edge-tts +try: + import edge_tts +except ImportError: + print("Installing edge-tts...") + os.system(f"{sys.executable} -m pip install edge-tts") + import edge_tts + +OUTPUT_DIR = os.path.join(os.path.dirname(__file__), '..', 'sample-data') +WAV_PATH = os.path.join(OUTPUT_DIR, 'demo-call.mp3') # edge-tts writes MP3 audio +XML_PATH = os.path.join(OUTPUT_DIR, 'demo-call.xml') + +# Simulated call transcript — two speakers +# We'll generate it as a single narrated audio for simplicity +SCRIPT = """ +Hello, thank you for calling Acme Wireless. My name is Jennifer. How can I help you today? + +Hi Jennifer, this is Robert Martinez. I've been having issues with my internet connection for the past three days. It keeps dropping every few hours and the speed is way below what I'm paying for. + +I'm sorry to hear that, Robert. Let me pull up your account. Can you give me your account number? + +Sure, it's A C 7 7 4 2 1. + +Thank you. I can see your account. It looks like there was a network upgrade in your area last week. Some customers have been experiencing intermittent connectivity. Let me run a diagnostic on your line. + +That would be great. I work from home, so reliable internet is critical for me. + +I completely understand. The diagnostic shows some signal degradation on your line. I'm going to reset your connection from our end and also schedule a technician visit for tomorrow morning between 9 and 11. Would that work for you? + +Tomorrow morning works. Will there be any charge for the technician visit? + +No, this will be completely free since the issue is on our end. I'm also going to apply a credit to your account for the three days of service disruption.
+ +Thank you Jennifer, I really appreciate that. You've been very helpful. + +You're welcome, Robert. Is there anything else I can help you with today? + +No, that's everything. Thanks again. + +Thank you for calling Acme Wireless. Have a great day! +""".strip() + +# SIPREC-style XML metadata (no .txt file โ€” forces Whisper transcription) +XML_CONTENT = """ + + + 2026-03-09T09:15:00Z + 2026-03-09T09:17:30Z + + + +15553847291 + Robert Martinez + + + +15551000200 + Agent Jennifer Walsh + + inbound + +""" + + +async def main(): + os.makedirs(OUTPUT_DIR, exist_ok=True) + + print(f"Generating demo audio with Edge TTS...") + print(f"Script length: {len(SCRIPT)} characters") + + # Use a natural-sounding voice + communicate = edge_tts.Communicate(SCRIPT, "en-US-JennyNeural", rate="+5%") + await communicate.save(WAV_PATH) + print(f"Audio saved: {WAV_PATH}") + + # Write XML metadata + with open(XML_PATH, 'w') as f: + f.write(XML_CONTENT) + print(f"XML saved: {XML_PATH}") + + print(f"\nโœ“ Files ready in: {os.path.abspath(OUTPUT_DIR)}") + print(f" demo-call.mp3 โ€” Audio file for Whisper demo") + print(f" demo-call.xml โ€” SIPREC metadata (for folder-drop demo)") + print(f"\nTo demo via Ingest page:") + print(f" 1. Open ingest.html โ†’ Audio Upload tab") + print(f" 2. Enter caller/agent names") + print(f" 3. Browse to: {os.path.abspath(WAV_PATH)}") + print(f" 4. Whisper will auto-transcribe on your GPU") + print(f" 5. Click Submit โ†’ full pipeline fires") + print(f"\nTo demo via folder drop:") + print(f" Copy demo-call.xml + demo-call.mp3 (no .txt!) 
to hackathon/watch/siprec/") + + +if __name__ == '__main__': + asyncio.run(main()) diff --git a/hackathon/sidecar/llama_service.py b/hackathon/sidecar/llama_service.py new file mode 100644 index 0000000..58467f9 --- /dev/null +++ b/hackathon/sidecar/llama_service.py @@ -0,0 +1,290 @@ +""" +LLaMA Inference Sidecar (via Groq) +==================================== +FastAPI service that provides LLM-powered analysis for the vCon pipeline. +Uses Groq's API (free tier) running Llama models for fast inference. + +Endpoints: + POST /analyze โ€” sentiment + summary + topics from transcript text + POST /query โ€” RAG-style Q&A with context chunks and citations + GET /health โ€” API status + +Default: port 8200, model llama-3.1-8b-instant (configurable via GROQ_MODEL) +""" + +import os +import json +import time +import logging +from typing import Optional + +from groq import Groq +from fastapi import FastAPI, HTTPException +from fastapi.middleware.cors import CORSMiddleware +from pydantic import BaseModel +import uvicorn + +# --------------------------------------------------------------------------- +# Config +# --------------------------------------------------------------------------- + +GROQ_API_KEY = os.environ.get("GROQ_API_KEY", "") +GROQ_MODEL = os.environ.get("GROQ_MODEL", "llama-3.1-8b-instant") +HOST = os.environ.get("LLAMA_HOST", "0.0.0.0") +PORT = int(os.environ.get("LLAMA_PORT", "8200")) + +# --------------------------------------------------------------------------- +# Logging +# --------------------------------------------------------------------------- + +logging.basicConfig( + level=logging.INFO, + format="[llama-sidecar] %(asctime)s %(levelname)s %(message)s", + datefmt="%H:%M:%S", +) +log = logging.getLogger("llama-sidecar") + +# --------------------------------------------------------------------------- +# App +# --------------------------------------------------------------------------- + +app = FastAPI(title="LLaMA Sidecar (Groq)", version="1.0.0") + 
+app.add_middleware( + CORSMiddleware, + allow_origins=["*"], + allow_methods=["*"], + allow_headers=["*"], +) + +client: Groq | None = None + + +@app.on_event("startup") +async def startup(): + global client + if not GROQ_API_KEY: + log.warning("GROQ_API_KEY not set — service will return errors on requests") + return + client = Groq(api_key=GROQ_API_KEY) + log.info(f"Groq client initialized, model: {GROQ_MODEL}") + + +# --------------------------------------------------------------------------- +# Request / Response models +# --------------------------------------------------------------------------- + +class AnalyzeRequest(BaseModel): + text: str + vcon_uuid: Optional[str] = None + +class AnalyzeResponse(BaseModel): + sentiment: float # -1.0 to 1.0 + sentiment_label: str # negative / neutral / positive + summary: str + topics: list[str] + key_phrases: list[str] + processing_time_seconds: float + +class QueryRequest(BaseModel): + question: str + context_chunks: list[dict] # [{ "text": "...", "vcon_uuid": "...", "source": "..." }] + max_tokens: int = 1024 + +class QueryResponse(BaseModel): + answer: str + citations: list[dict] # [{ "vcon_uuid": "...", "excerpt": "..." }] + confidence: float + processing_time_seconds: float + + +# --------------------------------------------------------------------------- +# POST /analyze +# --------------------------------------------------------------------------- + +ANALYZE_SYSTEM_PROMPT = """You are an expert conversation analyst. Given a transcript or conversation text, provide a structured analysis. + +You MUST respond with valid JSON only — no markdown, no explanation, no preamble. Use this exact schema: + +{ + "sentiment": <number between -1.0 and 1.0>, + "sentiment_label": "<negative|neutral|positive>", + "summary": "<2-3 sentence summary of the conversation>", + "topics": ["<topic 1>", "<topic 2>", ...], + "key_phrases": ["<phrase 1>", "<phrase 2>", ...]
+} + +Rules: +- sentiment must be a number between -1.0 and 1.0 +- sentiment_label: negative if < -0.2, positive if > 0.2, neutral otherwise +- topics: 2-5 main topics discussed +- key_phrases: 3-6 notable phrases or terms from the conversation +- summary: concise, factual, 2-3 sentences""" + + +@app.post("/analyze", response_model=AnalyzeResponse) +async def analyze(req: AnalyzeRequest): + if client is None: + raise HTTPException(503, "Groq client not initialized โ€” check GROQ_API_KEY") + + if not req.text.strip(): + raise HTTPException(400, "Empty text") + + t0 = time.time() + + try: + # Truncate very long transcripts to stay within token limits + text = req.text[:12000] if len(req.text) > 12000 else req.text + + response = client.chat.completions.create( + model=GROQ_MODEL, + messages=[ + {"role": "system", "content": ANALYZE_SYSTEM_PROMPT}, + {"role": "user", "content": f"Analyze this conversation:\n\n{text}"}, + ], + temperature=0.1, + max_tokens=1024, + response_format={"type": "json_object"}, + ) + + raw = response.choices[0].message.content + result = json.loads(raw) + elapsed = time.time() - t0 + + # Normalize and validate + sentiment = max(-1.0, min(1.0, float(result.get("sentiment", 0)))) + if sentiment < -0.2: + label = "negative" + elif sentiment > 0.2: + label = "positive" + else: + label = "neutral" + + log.info(f"Analyzed {len(text)} chars in {elapsed:.1f}s โ€” " + f"sentiment={sentiment:.2f} ({label}), " + f"{len(result.get('topics', []))} topics") + + return AnalyzeResponse( + sentiment=sentiment, + sentiment_label=result.get("sentiment_label", label), + summary=result.get("summary", "No summary available."), + topics=result.get("topics", [])[:5], + key_phrases=result.get("key_phrases", [])[:6], + processing_time_seconds=round(elapsed, 2), + ) + + except json.JSONDecodeError as e: + log.error(f"JSON parse error from Groq: {e}") + raise HTTPException(500, f"LLM returned invalid JSON: {str(e)}") + except Exception as e: + log.error(f"Analysis 
failed: {e}") + raise HTTPException(500, f"Analysis failed: {str(e)}") + + +# --------------------------------------------------------------------------- +# POST /query (RAG-style Q&A) +# --------------------------------------------------------------------------- + +QUERY_SYSTEM_PROMPT = """You are a conversation intelligence assistant. Answer the user's question using ONLY the provided context chunks from vCon conversation records. + +Rules: +- Base your answer strictly on the provided context +- Cite specific conversations by their vcon_uuid when making claims +- If the context doesn't contain enough information, say so clearly +- Be concise and direct + +You MUST respond with valid JSON only — no markdown, no explanation: + +{ + "answer": "<your answer>", + "citations": [ + {"vcon_uuid": "<uuid>", "excerpt": "<short supporting excerpt>"} + ], + "confidence": <number between 0.0 and 1.0> +}""" + + +@app.post("/query", response_model=QueryResponse) +async def query(req: QueryRequest): + if client is None: + raise HTTPException(503, "Groq client not initialized — check GROQ_API_KEY") + + if not req.question.strip(): + raise HTTPException(400, "Empty question") + + t0 = time.time() + + try: + # Format context chunks + context_parts = [] + for i, chunk in enumerate(req.context_chunks[:10]): # Limit to 10 chunks + uuid = chunk.get("vcon_uuid", f"unknown-{i}") + source = chunk.get("source", "unknown") + text = chunk.get("text", "")[:2000] # Truncate individual chunks + context_parts.append( + f"[vCon {uuid}] (source: {source})\n{text}" + ) + + context_text = "\n\n---\n\n".join(context_parts) + + response = client.chat.completions.create( + model=GROQ_MODEL, + messages=[ + {"role": "system", "content": QUERY_SYSTEM_PROMPT}, + {"role": "user", "content": ( + f"Context from conversation records:\n\n{context_text}\n\n" + f"---\n\nQuestion: {req.question}" + )}, + ], + temperature=0.2, + max_tokens=req.max_tokens, + response_format={"type": "json_object"}, + ) + + raw = response.choices[0].message.content + result = json.loads(raw) + elapsed
= time.time() - t0 + + confidence = max(0.0, min(1.0, float(result.get("confidence", 0.5)))) + + log.info(f"Query answered in {elapsed:.1f}s โ€” " + f"confidence={confidence:.2f}, " + f"{len(result.get('citations', []))} citations") + + return QueryResponse( + answer=result.get("answer", "Unable to answer from the provided context."), + citations=result.get("citations", [])[:5], + confidence=confidence, + processing_time_seconds=round(elapsed, 2), + ) + + except json.JSONDecodeError as e: + log.error(f"JSON parse error from Groq: {e}") + raise HTTPException(500, f"LLM returned invalid JSON: {str(e)}") + except Exception as e: + log.error(f"Query failed: {e}") + raise HTTPException(500, f"Query failed: {str(e)}") + + +# --------------------------------------------------------------------------- +# GET /health +# --------------------------------------------------------------------------- + +@app.get("/health") +async def health(): + return { + "status": "ok" if client is not None else "no_api_key", + "model": GROQ_MODEL, + "provider": "groq", + "api_key_set": bool(GROQ_API_KEY), + } + + +# --------------------------------------------------------------------------- +# Main +# --------------------------------------------------------------------------- + +if __name__ == "__main__": + log.info(f"Starting LLaMA sidecar (Groq) on {HOST}:{PORT}") + log.info(f"Model: {GROQ_MODEL}, API key: {'set' if GROQ_API_KEY else 'NOT SET'}") + uvicorn.run(app, host=HOST, port=PORT, log_level="info") diff --git a/hackathon/sidecar/requirements.txt b/hackathon/sidecar/requirements.txt new file mode 100644 index 0000000..3bc5c14 --- /dev/null +++ b/hackathon/sidecar/requirements.txt @@ -0,0 +1,16 @@ +# Whisper + LLaMA sidecar dependencies +# Install: pip install -r requirements.txt + +fastapi==0.115.* +uvicorn[standard]==0.34.* +python-multipart==0.0.* + +# Whisper (OpenAI) - GPU accelerated +openai-whisper==20240930 + +# Groq API client (runs Llama on Groq's cloud โ€” free tier) 
+groq>=0.9.0 + +# PyTorch with CUDA 12.x (install separately if needed): +# pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124 +# torch should be pulled in by whisper, but if you need CUDA specifically, use the above. diff --git a/hackathon/sidecar/whisper_service.py b/hackathon/sidecar/whisper_service.py new file mode 100644 index 0000000..44a37df --- /dev/null +++ b/hackathon/sidecar/whisper_service.py @@ -0,0 +1,185 @@ +""" +Whisper Audio Transcription Sidecar +==================================== +FastAPI service that accepts audio file uploads and returns transcriptions +using OpenAI Whisper running locally on GPU. + +Endpoint contract (matches SIPREC adapter + ingest.html expectations): + POST /transcribe โ€” multipart file upload โ†’ { "text": "...", "segments": [...] } + GET /health โ€” model status + GPU info + +Default: port 8100, Whisper 'medium' model (configurable via WHISPER_MODEL env var) +""" + +import os +import sys +import time +import tempfile +import logging +from pathlib import Path + +import torch +import whisper +from fastapi import FastAPI, File, UploadFile, Form, HTTPException +from fastapi.middleware.cors import CORSMiddleware +from fastapi.responses import JSONResponse +import uvicorn + +# --------------------------------------------------------------------------- +# Config +# --------------------------------------------------------------------------- + +WHISPER_MODEL = os.environ.get("WHISPER_MODEL", "medium") +HOST = os.environ.get("WHISPER_HOST", "0.0.0.0") +PORT = int(os.environ.get("WHISPER_PORT", "8100")) +DEVICE = "cuda" if torch.cuda.is_available() else "cpu" + +# --------------------------------------------------------------------------- +# Logging +# --------------------------------------------------------------------------- + +logging.basicConfig( + level=logging.INFO, + format="[whisper-sidecar] %(asctime)s %(levelname)s %(message)s", + datefmt="%H:%M:%S", +) +log = 
logging.getLogger("whisper-sidecar") + +# --------------------------------------------------------------------------- +# App +# --------------------------------------------------------------------------- + +app = FastAPI(title="Whisper Sidecar", version="1.0.0") + +app.add_middleware( + CORSMiddleware, + allow_origins=["*"], + allow_methods=["*"], + allow_headers=["*"], +) + +# Global model reference — loaded once at startup +model: whisper.Whisper | None = None +model_load_time: float = 0.0 + + +@app.on_event("startup") +async def load_model(): + global model, model_load_time + log.info(f"Loading Whisper '{WHISPER_MODEL}' model on {DEVICE}...") + t0 = time.time() + model = whisper.load_model(WHISPER_MODEL, device=DEVICE) + model_load_time = time.time() - t0 + log.info(f"Model loaded in {model_load_time:.1f}s") + + if DEVICE == "cuda": + mem_alloc = torch.cuda.memory_allocated() / 1024**3 + mem_total = torch.cuda.get_device_properties(0).total_memory / 1024**3 + log.info(f"GPU memory: {mem_alloc:.1f} / {mem_total:.1f} GB") + + +# --------------------------------------------------------------------------- +# POST /transcribe +# --------------------------------------------------------------------------- + +@app.post("/transcribe") +async def transcribe( + file: UploadFile = File(...), + model_name: str | None = Form(default=None, alias="model"), + language: str | None = Form(default=None), +): + """ + Accepts an audio file (WAV, MP3, OGG, WEBM, M4A, etc.) and returns + the Whisper transcription. + + Form fields: + - file: audio file (required) + - model: ignored (we use the pre-loaded model), kept for API compat + - language: optional language hint (e.g.
"en") + """ + if model is None: + raise HTTPException(503, "Model not loaded yet") + + # Save upload to a temp file (Whisper needs a file path) + suffix = Path(file.filename or "audio.wav").suffix or ".wav" + with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as tmp: + content = await file.read() + tmp.write(content) + tmp_path = tmp.name + + try: + log.info(f"Transcribing: {file.filename} ({len(content)} bytes)") + t0 = time.time() + + # Build transcribe options + options = {} + if language: + options["language"] = language + + result = model.transcribe(tmp_path, **options) + elapsed = time.time() - t0 + + # Build segment list + segments = [] + for seg in result.get("segments", []): + segments.append({ + "id": seg["id"], + "start": round(seg["start"], 2), + "end": round(seg["end"], 2), + "text": seg["text"].strip(), + }) + + log.info(f"Done in {elapsed:.1f}s โ€” {len(segments)} segments, " + f"language={result.get('language', '?')}") + + return { + "text": result["text"].strip(), + "segments": segments, + "language": result.get("language"), + "duration_seconds": round(segments[-1]["end"], 2) if segments else 0, + "processing_time_seconds": round(elapsed, 2), + } + + except Exception as e: + log.error(f"Transcription failed: {e}") + raise HTTPException(500, f"Transcription failed: {str(e)}") + + finally: + # Clean up temp file + try: + os.unlink(tmp_path) + except OSError: + pass + + +# --------------------------------------------------------------------------- +# GET /health +# --------------------------------------------------------------------------- + +@app.get("/health") +async def health(): + gpu_info = {} + if torch.cuda.is_available(): + gpu_info = { + "device": torch.cuda.get_device_name(0), + "memory_allocated_gb": round(torch.cuda.memory_allocated() / 1024**3, 2), + "memory_total_gb": round(torch.cuda.get_device_properties(0).total_memory / 1024**3, 2), + } + + return { + "status": "ok" if model is not None else "loading", + "model": 
WHISPER_MODEL, + "device": DEVICE, + "model_load_time_seconds": round(model_load_time, 1), + "gpu": gpu_info, + } + + +# --------------------------------------------------------------------------- +# Main +# --------------------------------------------------------------------------- + +if __name__ == "__main__": + log.info(f"Starting Whisper sidecar on {HOST}:{PORT}") + log.info(f"Model: {WHISPER_MODEL}, Device: {DEVICE}") + uvicorn.run(app, host=HOST, port=PORT, log_level="info") diff --git a/package-lock.json b/package-lock.json index 488e829..4e763b0 100644 --- a/package-lock.json +++ b/package-lock.json @@ -12,6 +12,7 @@ "@aws-sdk/client-s3": "^3.922.0", "@aws-sdk/credential-providers": "^3.927.0", "@aws-sdk/s3-request-presigner": "^3.922.0", + "@jsonld-ex/core": "^0.1.1", "@koa/cors": "^5.0.0", "@koa/router": "^13.1.0", "@modelcontextprotocol/sdk": "^1.19.1", @@ -26,9 +27,13 @@ "@opentelemetry/semantic-conventions": "^1.25.0", "@supabase/supabase-js": "^2.74.0", "dotenv": "^16.4.0", + "fast-json-stable-stringify": "^2.1.0", "ioredis": "^5.4.1", "koa": "^3.1.1", "koa-bodyparser": "^4.4.1", + "mongodb": "^7.1.0", + "mqtt": "^5.15.0", + "neo4j-driver": "^6.0.1", "p-limit": "^7.2.0", "pino": "^10.1.0", "pino-pretty": "^13.1.3", @@ -36,10 +41,12 @@ "zod": "^3.25.76" }, "devDependencies": { + "@types/fast-json-stable-stringify": "^2.1.2", "@types/koa": "^2.15.0", "@types/koa__cors": "^5.0.0", "@types/koa__router": "^12.0.4", "@types/koa-bodyparser": "^4.3.12", + "@types/mqtt": "^2.5.0", "@types/node": "^24.7.0", "@types/pino": "^7.0.4", "@typescript-eslint/eslint-plugin": "^8.46.0", @@ -2135,6 +2142,14 @@ "node": ">=6.0.0" } }, + "node_modules/@babel/runtime": { + "version": "7.28.6", + "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.28.6.tgz", + "integrity": "sha512-05WQkdpL9COIMz4LjTxGpPNCdlpyimKppYNoJ5Di5EUObifl8t4tuLuUBBZEpoLYOmfvIWrsp9fCl0HoPRVTdA==", + "engines": { + "node": ">=6.9.0" + } + }, "node_modules/@babel/types": { "version": 
"7.28.4", "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.28.4.tgz", @@ -2159,6 +2174,91 @@ "node": ">=18" } }, + "node_modules/@cbor-extract/cbor-extract-darwin-arm64": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/@cbor-extract/cbor-extract-darwin-arm64/-/cbor-extract-darwin-arm64-2.2.0.tgz", + "integrity": "sha512-P7swiOAdF7aSi0H+tHtHtr6zrpF3aAq/W9FXx5HektRvLTM2O89xCyXF3pk7pLc7QpaY7AoaE8UowVf9QBdh3w==", + "cpu": [ + "arm64" + ], + "optional": true, + "os": [ + "darwin" + ] + }, + "node_modules/@cbor-extract/cbor-extract-darwin-x64": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/@cbor-extract/cbor-extract-darwin-x64/-/cbor-extract-darwin-x64-2.2.0.tgz", + "integrity": "sha512-1liF6fgowph0JxBbYnAS7ZlqNYLf000Qnj4KjqPNW4GViKrEql2MgZnAsExhY9LSy8dnvA4C0qHEBgPrll0z0w==", + "cpu": [ + "x64" + ], + "optional": true, + "os": [ + "darwin" + ] + }, + "node_modules/@cbor-extract/cbor-extract-linux-arm": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/@cbor-extract/cbor-extract-linux-arm/-/cbor-extract-linux-arm-2.2.0.tgz", + "integrity": "sha512-QeBcBXk964zOytiedMPQNZr7sg0TNavZeuUCD6ON4vEOU/25+pLhNN6EDIKJ9VLTKaZ7K7EaAriyYQ1NQ05s/Q==", + "cpu": [ + "arm" + ], + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@cbor-extract/cbor-extract-linux-arm64": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/@cbor-extract/cbor-extract-linux-arm64/-/cbor-extract-linux-arm64-2.2.0.tgz", + "integrity": "sha512-rQvhNmDuhjTVXSPFLolmQ47/ydGOFXtbR7+wgkSY0bdOxCFept1hvg59uiLPT2fVDuJFuEy16EImo5tE2x3RsQ==", + "cpu": [ + "arm64" + ], + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@cbor-extract/cbor-extract-linux-x64": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/@cbor-extract/cbor-extract-linux-x64/-/cbor-extract-linux-x64-2.2.0.tgz", + "integrity": "sha512-cWLAWtT3kNLHSvP4RKDzSTX9o0wvQEEAj4SKvhWuOVZxiDAeQazr9A+PSiRILK1VYMLeDml89ohxCnUNQNQNCw==", + "cpu": [ 
+ "x64" + ], + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@cbor-extract/cbor-extract-win32-x64": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/@cbor-extract/cbor-extract-win32-x64/-/cbor-extract-win32-x64-2.2.0.tgz", + "integrity": "sha512-l2M+Z8DO2vbvADOBNLbbh9y5ST1RY5sqkWOg/58GkUPBYou/cuNZ68SGQ644f1CvZ8kcOxyZtw06+dxWHIoN/w==", + "cpu": [ + "x64" + ], + "optional": true, + "os": [ + "win32" + ] + }, + "node_modules/@digitalbazaar/http-client": { + "version": "3.4.1", + "resolved": "https://registry.npmjs.org/@digitalbazaar/http-client/-/http-client-3.4.1.tgz", + "integrity": "sha512-Ahk1N+s7urkgj7WvvUND5f8GiWEPfUw0D41hdElaqLgu8wZScI8gdI0q+qWw5N1d35x7GCRH2uk9mi+Uzo9M3g==", + "dependencies": { + "ky": "^0.33.3", + "ky-universal": "^0.11.0", + "undici": "^5.21.2" + }, + "engines": { + "node": ">=14.0" + } + }, "node_modules/@docsearch/css": { "version": "3.8.2", "resolved": "https://registry.npmjs.org/@docsearch/css/-/css-3.8.2.tgz", @@ -2841,6 +2941,14 @@ "node": "^18.18.0 || ^20.9.0 || >=21.1.0" } }, + "node_modules/@fastify/busboy": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/@fastify/busboy/-/busboy-2.1.1.tgz", + "integrity": "sha512-vBZP4NlzfOlerQTnba4aqZoMhE/a9HY7HRqoOPaETQcSQuWEIyZMHGfVu6w9wGtGK5fED5qRs2DteVCjOH60sA==", + "engines": { + "node": ">=14" + } + }, "node_modules/@grpc/grpc-js": { "version": "1.14.0", "resolved": "https://registry.npmjs.org/@grpc/grpc-js/-/grpc-js-1.14.0.tgz", @@ -2878,6 +2986,17 @@ "integrity": "sha512-Waj1cwPXJDucOib4a3bAISsKJVb15MKi9IvmTI/7ssVEm6sywXGjVJDhl6/umt1pK1ZS7PacXU3A1PmFKHEZ2w==", "license": "BSD-3-Clause" }, + "node_modules/@hono/node-server": { + "version": "1.19.9", + "resolved": "https://registry.npmjs.org/@hono/node-server/-/node-server-1.19.9.tgz", + "integrity": "sha512-vHL6w3ecZsky+8P5MD+eFfaGTyCeOHUIFYMGpQGbrBTSmNNoxv0if69rEZ5giu36weC5saFuznL411gRX7bJDw==", + "engines": { + "node": ">=18.14.1" + }, + "peerDependencies": { + "hono": "^4" + } + }, 
"node_modules/@humanfs/core": { "version": "0.19.1", "resolved": "https://registry.npmjs.org/@humanfs/core/-/core-0.19.1.tgz", @@ -2957,7 +3076,7 @@ "version": "8.0.2", "resolved": "https://registry.npmjs.org/@isaacs/cliui/-/cliui-8.0.2.tgz", "integrity": "sha512-O8jcjabXaleOG9DQ0+ARXWZBTfnP4WNAqzuiJK7ll44AmxGKv/J2M4TPjxjY3znBCfvBXFzucm1twdyFybFqEA==", - "dev": true, + "devOptional": true, "license": "ISC", "dependencies": { "string-width": "^5.1.2", @@ -3030,6 +3149,41 @@ "url": "https://opencollective.com/js-sdsl" } }, + "node_modules/@jsonld-ex/core": { + "version": "0.1.1", + "resolved": "https://registry.npmjs.org/@jsonld-ex/core/-/core-0.1.1.tgz", + "integrity": "sha512-3O226UFj0lCUkDc/CvFGjEnqEiFogx/u7beALjU3OcfqDDPnSa6PHJwC/GlPekO7i+4TQCKqLfXFKFPnrqHxFQ==", + "dependencies": { + "@modelcontextprotocol/sdk": "^1.26.0", + "cbor-x": "^1.6.0", + "jsonld": "^8.3.2", + "uuid": "^13.0.0", + "zod": "^4.3.6" + }, + "bin": { + "jsonld-mcp": "bin/mcp-server.js" + } + }, + "node_modules/@jsonld-ex/core/node_modules/uuid": { + "version": "13.0.0", + "resolved": "https://registry.npmjs.org/uuid/-/uuid-13.0.0.tgz", + "integrity": "sha512-XQegIaBTVUjSHliKqcnFqYypAd4S+WCYt5NIeRs6w/UAry7z8Y9j5ZwRRL4kzq9U3sD6v+85er9FvkEaBpji2w==", + "funding": [ + "https://github.com/sponsors/broofa", + "https://github.com/sponsors/ctavan" + ], + "bin": { + "uuid": "dist-node/bin/uuid" + } + }, + "node_modules/@jsonld-ex/core/node_modules/zod": { + "version": "4.3.6", + "resolved": "https://registry.npmjs.org/zod/-/zod-4.3.6.tgz", + "integrity": "sha512-rftlrkhHZOcjDwkGlnUtZZkvaPHCsDATp4pGpuOOMDaTdDDXF91wuVDJoWoPsKX/3YPQ5fHuF3STjcYyKr+Qhg==", + "funding": { + "url": "https://github.com/sponsors/colinhacks" + } + }, "node_modules/@koa/cors": { "version": "5.0.0", "resolved": "https://registry.npmjs.org/@koa/cors/-/cors-5.0.0.tgz", @@ -3065,26 +3219,70 @@ "license": "MIT" }, "node_modules/@modelcontextprotocol/sdk": { - "version": "1.20.0", - "resolved": 
"https://registry.npmjs.org/@modelcontextprotocol/sdk/-/sdk-1.20.0.tgz", - "integrity": "sha512-kOQ4+fHuT4KbR2iq2IjeV32HiihueuOf1vJkq18z08CLZ1UQrTc8BXJpVfxZkq45+inLLD+D4xx4nBjUelJa4Q==", - "license": "MIT", + "version": "1.26.0", + "resolved": "https://registry.npmjs.org/@modelcontextprotocol/sdk/-/sdk-1.26.0.tgz", + "integrity": "sha512-Y5RmPncpiDtTXDbLKswIJzTqu2hyBKxTNsgKqKclDbhIgg1wgtf1fRuvxgTnRfcnxtvvgbIEcqUOzZrJ6iSReg==", "dependencies": { - "ajv": "^6.12.6", + "@hono/node-server": "^1.19.9", + "ajv": "^8.17.1", + "ajv-formats": "^3.0.1", "content-type": "^1.0.5", "cors": "^2.8.5", "cross-spawn": "^7.0.5", "eventsource": "^3.0.2", "eventsource-parser": "^3.0.0", - "express": "^5.0.1", - "express-rate-limit": "^7.5.0", + "express": "^5.2.1", + "express-rate-limit": "^8.2.1", + "hono": "^4.11.4", + "jose": "^6.1.3", + "json-schema-typed": "^8.0.2", "pkce-challenge": "^5.0.0", "raw-body": "^3.0.0", - "zod": "^3.23.8", - "zod-to-json-schema": "^3.24.1" + "zod": "^3.25 || ^4.0", + "zod-to-json-schema": "^3.25.1" }, "engines": { "node": ">=18" + }, + "peerDependencies": { + "@cfworker/json-schema": "^4.1.1", + "zod": "^3.25 || ^4.0" + }, + "peerDependenciesMeta": { + "@cfworker/json-schema": { + "optional": true + }, + "zod": { + "optional": false + } + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/ajv": { + "version": "8.17.1", + "resolved": "https://registry.npmjs.org/ajv/-/ajv-8.17.1.tgz", + "integrity": "sha512-B/gBuNg5SiMTrPkC+A2+cW0RszwxYmn6VYxB/inlBStS5nx6xHIt/ehKRhIMhqusl7a8LjQoZnjCs5vhwxOQ1g==", + "dependencies": { + "fast-deep-equal": "^3.1.3", + "fast-uri": "^3.0.1", + "json-schema-traverse": "^1.0.0", + "require-from-string": "^2.0.2" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/epoberezkin" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/json-schema-traverse": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-1.0.0.tgz", + 
"integrity": "sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug==" + }, + "node_modules/@mongodb-js/saslprep": { + "version": "1.4.6", + "resolved": "https://registry.npmjs.org/@mongodb-js/saslprep/-/saslprep-1.4.6.tgz", + "integrity": "sha512-y+x3H1xBZd38n10NZF/rEBlvDOOMQ6LKUTHqr8R9VkJ+mmQOYtJFxIlkkK8fZrtOiL6VixbOBWMbZGBdal3Z1g==", + "dependencies": { + "sparse-bitfield": "^3.0.3" } }, "node_modules/@nodelib/fs.scandir": { @@ -4379,6 +4577,61 @@ "@opentelemetry/api": "^1.0.0" } }, + "node_modules/@opentelemetry/resource-detector-gcp/node_modules/gaxios": { + "version": "6.7.1", + "resolved": "https://registry.npmjs.org/gaxios/-/gaxios-6.7.1.tgz", + "integrity": "sha512-LDODD4TMYx7XXdpwxAVRAIAuB0bzv0s+ywFonY46k126qzQHT9ygyoa9tncmOiQmmDrik65UYsEkv3lbfqQ3yQ==", + "dependencies": { + "extend": "^3.0.2", + "https-proxy-agent": "^7.0.1", + "is-stream": "^2.0.0", + "node-fetch": "^2.6.9", + "uuid": "^9.0.1" + }, + "engines": { + "node": ">=14" + } + }, + "node_modules/@opentelemetry/resource-detector-gcp/node_modules/gcp-metadata": { + "version": "6.1.1", + "resolved": "https://registry.npmjs.org/gcp-metadata/-/gcp-metadata-6.1.1.tgz", + "integrity": "sha512-a4tiq7E0/5fTjxPAaH4jpjkSv/uCaU2p5KC6HVGrvl0cDjA8iBZv4vv1gyzlmK0ZUKqwpOyQMKzZQe3lTit77A==", + "dependencies": { + "gaxios": "^6.1.1", + "google-logging-utils": "^0.0.2", + "json-bigint": "^1.0.0" + }, + "engines": { + "node": ">=14" + } + }, + "node_modules/@opentelemetry/resource-detector-gcp/node_modules/google-logging-utils": { + "version": "0.0.2", + "resolved": "https://registry.npmjs.org/google-logging-utils/-/google-logging-utils-0.0.2.tgz", + "integrity": "sha512-NEgUnEcBiP5HrPzufUkBzJOD/Sxsco3rLNo1F1TNf7ieU8ryUzBhqba8r756CjLX7rn3fHl6iLEwPYuqpoKgQQ==", + "engines": { + "node": ">=14" + } + }, + "node_modules/@opentelemetry/resource-detector-gcp/node_modules/node-fetch": { + "version": "2.7.0", + "resolved": 
"https://registry.npmjs.org/node-fetch/-/node-fetch-2.7.0.tgz", + "integrity": "sha512-c4FRfUm/dbcWZ7U+1Wq0AwCyFL+3nt2bEw05wfxSz+DWpWsitgmSgYmy2dQdWyKC1694ELPqMs/YzUSNozLt8A==", + "dependencies": { + "whatwg-url": "^5.0.0" + }, + "engines": { + "node": "4.x || >=6.0.0" + }, + "peerDependencies": { + "encoding": "^0.1.0" + }, + "peerDependenciesMeta": { + "encoding": { + "optional": true + } + } + }, "node_modules/@opentelemetry/resources": { "version": "1.30.1", "resolved": "https://registry.npmjs.org/@opentelemetry/resources/-/resources-1.30.1.tgz", @@ -6097,6 +6350,16 @@ "@types/send": "*" } }, + "node_modules/@types/fast-json-stable-stringify": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/@types/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.2.tgz", + "integrity": "sha512-vsxcbfLDdjytnCnHXtinE40Xl46Wr7l/VGRGt7ewJwCPMKEHOdEsTxXX8xwgoR7cbc+6dE8SB4jlMrOV2zAg7g==", + "deprecated": "This is a stub types definition. fast-json-stable-stringify provides its own type definitions, so you do not need this installed.", + "dev": true, + "dependencies": { + "fast-json-stable-stringify": "*" + } + }, "node_modules/@types/hast": { "version": "3.0.4", "resolved": "https://registry.npmjs.org/@types/hast/-/hast-3.0.4.tgz", @@ -6236,6 +6499,16 @@ "@types/node": "*" } }, + "node_modules/@types/mqtt": { + "version": "2.5.0", + "resolved": "https://registry.npmjs.org/@types/mqtt/-/mqtt-2.5.0.tgz", + "integrity": "sha512-n+0/ErBin30j+UbhcHGK/STjHjh65k85WNR6NlUjRG0g9yctpF12pS+SOkwz0wmp+7momAo9Cyi4Wmvy8UsCQg==", + "deprecated": "This is a stub types definition for MQTT (https://github.com/mqttjs/MQTT.js). 
MQTT provides its own type definitions, so you don't need @types/mqtt installed!", + "dev": true, + "dependencies": { + "mqtt": "*" + } + }, "node_modules/@types/mysql": { "version": "2.15.22", "resolved": "https://registry.npmjs.org/@types/mysql/-/mysql-2.15.22.tgz", @@ -6304,6 +6577,14 @@ "dev": true, "license": "MIT" }, + "node_modules/@types/readable-stream": { + "version": "4.0.23", + "resolved": "https://registry.npmjs.org/@types/readable-stream/-/readable-stream-4.0.23.tgz", + "integrity": "sha512-wwXrtQvbMHxCbBgjHaMGEmImFTQxxpfMOR/ZoQnXxB1woqkUbdLGFDgauo00Py9IudiaqSeiBiulSV9i6XIPig==", + "dependencies": { + "@types/node": "*" + } + }, "node_modules/@types/send": { "version": "1.2.1", "resolved": "https://registry.npmjs.org/@types/send/-/send-1.2.1.tgz", @@ -6354,6 +6635,19 @@ "dev": true, "license": "MIT" }, + "node_modules/@types/webidl-conversions": { + "version": "7.0.3", + "resolved": "https://registry.npmjs.org/@types/webidl-conversions/-/webidl-conversions-7.0.3.tgz", + "integrity": "sha512-CiJJvcRtIgzadHCYXw7dqEnMNRjhGZlYK05Mj9OyktqV8uVT8fD2BFOB7S1uwBE3Kj2Z+4UyPmFw/Ixgw/LAlA==" + }, + "node_modules/@types/whatwg-url": { + "version": "13.0.0", + "resolved": "https://registry.npmjs.org/@types/whatwg-url/-/whatwg-url-13.0.0.tgz", + "integrity": "sha512-N8WXpbE6Wgri7KUSvrmQcqrMllKZ9uxkYWMt+mCSGwNc0Hsw9VQTW7ApqI4XNrx6/SaM2QQJCzMPDEXE058s+Q==", + "dependencies": { + "@types/webidl-conversions": "*" + } + }, "node_modules/@types/ws": { "version": "8.18.1", "resolved": "https://registry.npmjs.org/@types/ws/-/ws-8.18.1.tgz", @@ -7033,11 +7327,21 @@ "url": "https://github.com/sponsors/antfu" } }, + "node_modules/abort-controller": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/abort-controller/-/abort-controller-3.0.0.tgz", + "integrity": "sha512-h8lQ8tacZYnR3vNQTgibj+tODHI5/+l06Au2Pcriv/Gmet0eaj4TwWH41sO9wnHDiQsEj19q0drzdWdeAHtweg==", + "dependencies": { + "event-target-shim": "^5.0.0" + }, + "engines": { + "node": ">=6.5" + } + }, 
"node_modules/accepts": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/accepts/-/accepts-2.0.0.tgz", "integrity": "sha512-5cvg6CtKwfgdmVqY1WIiXKc3Q1bkRqGLi+2W/6ao+6Y7gu/RCwRuAhGEzh5B4KlszSuTLgZYuqFqo5bImjNKng==", - "license": "MIT", "dependencies": { "mime-types": "^3.0.0", "negotiator": "^1.0.0" @@ -7081,7 +7385,6 @@ "version": "7.1.4", "resolved": "https://registry.npmjs.org/agent-base/-/agent-base-7.1.4.tgz", "integrity": "sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ==", - "license": "MIT", "engines": { "node": ">= 14" } @@ -7090,6 +7393,7 @@ "version": "6.12.6", "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz", "integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==", + "dev": true, "license": "MIT", "dependencies": { "fast-deep-equal": "^3.1.1", @@ -7102,6 +7406,42 @@ "url": "https://github.com/sponsors/epoberezkin" } }, + "node_modules/ajv-formats": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/ajv-formats/-/ajv-formats-3.0.1.tgz", + "integrity": "sha512-8iUql50EUR+uUcdRQ3HDqa6EVyo3docL8g5WJ3FNcWmu62IbkGUue/pEyLBW8VGKKucTPgqeks4fIU1DA4yowQ==", + "dependencies": { + "ajv": "^8.0.0" + }, + "peerDependencies": { + "ajv": "^8.0.0" + }, + "peerDependenciesMeta": { + "ajv": { + "optional": true + } + } + }, + "node_modules/ajv-formats/node_modules/ajv": { + "version": "8.17.1", + "resolved": "https://registry.npmjs.org/ajv/-/ajv-8.17.1.tgz", + "integrity": "sha512-B/gBuNg5SiMTrPkC+A2+cW0RszwxYmn6VYxB/inlBStS5nx6xHIt/ehKRhIMhqusl7a8LjQoZnjCs5vhwxOQ1g==", + "dependencies": { + "fast-deep-equal": "^3.1.3", + "fast-uri": "^3.0.1", + "json-schema-traverse": "^1.0.0", + "require-from-string": "^2.0.2" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/epoberezkin" + } + }, + "node_modules/ajv-formats/node_modules/json-schema-traverse": { + "version": "1.0.0", + "resolved": 
"https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-1.0.0.tgz", + "integrity": "sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug==" + }, "node_modules/algoliasearch": { "version": "5.40.0", "resolved": "https://registry.npmjs.org/algoliasearch/-/algoliasearch-5.40.0.tgz", @@ -7132,7 +7472,7 @@ "version": "6.2.2", "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-6.2.2.tgz", "integrity": "sha512-Bq3SmSpyFHaWjPk8If9yc6svM8c56dB5BAtW4Qbw5jHTwwXXcTLoRMkpDJp6VL0XzlWaCHTXrkFURMYmD0sLqg==", - "dev": true, + "devOptional": true, "license": "MIT", "engines": { "node": ">=12" @@ -7198,9 +7538,28 @@ "version": "1.0.2", "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz", "integrity": "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==", - "dev": true, + "devOptional": true, "license": "MIT" }, + "node_modules/base64-js": { + "version": "1.5.1", + "resolved": "https://registry.npmjs.org/base64-js/-/base64-js-1.5.1.tgz", + "integrity": "sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ] + }, "node_modules/bignumber.js": { "version": "9.3.1", "resolved": "https://registry.npmjs.org/bignumber.js/-/bignumber.js-9.3.1.tgz", @@ -7220,24 +7579,38 @@ "url": "https://github.com/sponsors/antfu" } }, + "node_modules/bl": { + "version": "6.1.6", + "resolved": "https://registry.npmjs.org/bl/-/bl-6.1.6.tgz", + "integrity": "sha512-jLsPgN/YSvPUg9UX0Kd73CXpm2Psg9FxMeCSXnk3WBO3CMT10JMwijubhGfHCnFu6TPn1ei3b975dxv7K2pWVg==", + "dependencies": { + "@types/readable-stream": "^4.0.0", + "buffer": "^6.0.3", + "inherits": "^2.0.4", + "readable-stream": "^4.2.0" + } + }, 
"node_modules/body-parser": { - "version": "2.2.0", - "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-2.2.0.tgz", - "integrity": "sha512-02qvAaxv8tp7fBa/mw1ga98OGm+eCbqzJOKoRt70sLmfEEi+jyBYVTDGfCL/k06/4EMk/z01gCe7HoCH/f2LTg==", - "license": "MIT", + "version": "2.2.2", + "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-2.2.2.tgz", + "integrity": "sha512-oP5VkATKlNwcgvxi0vM0p/D3n2C3EReYVX+DNYs5TjZFn/oQt2j+4sVJtSMr18pdRr8wjTcBl6LoV+FUwzPmNA==", "dependencies": { "bytes": "^3.1.2", "content-type": "^1.0.5", - "debug": "^4.4.0", + "debug": "^4.4.3", "http-errors": "^2.0.0", - "iconv-lite": "^0.6.3", + "iconv-lite": "^0.7.0", "on-finished": "^2.4.1", - "qs": "^6.14.0", - "raw-body": "^3.0.0", - "type-is": "^2.0.0" + "qs": "^6.14.1", + "raw-body": "^3.0.1", + "type-is": "^2.0.1" }, "engines": { "node": ">=18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" } }, "node_modules/bowser": { @@ -7250,7 +7623,7 @@ "version": "2.0.2", "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-2.0.2.tgz", "integrity": "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==", - "dev": true, + "devOptional": true, "license": "MIT", "dependencies": { "balanced-match": "^1.0.0" @@ -7269,21 +7642,68 @@ "node": ">=8" } }, - "node_modules/bytes": { - "version": "3.1.2", - "resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz", - "integrity": "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==", - "license": "MIT", - "engines": { - "node": ">= 0.8" + "node_modules/broker-factory": { + "version": "3.1.13", + "resolved": "https://registry.npmjs.org/broker-factory/-/broker-factory-3.1.13.tgz", + "integrity": "sha512-H2VALe31mEtO/SRcNp4cUU5BAm1biwhc/JaF77AigUuni/1YT0FLCJfbUxwIEs9y6Kssjk2fmXgf+Y9ALvmKlw==", + "dependencies": { + "@babel/runtime": "^7.28.6", + "fast-unique-numbers": "^9.0.26", + "tslib": 
"^2.8.1", + "worker-factory": "^7.0.48" } }, - "node_modules/cac": { - "version": "6.7.14", - "resolved": "https://registry.npmjs.org/cac/-/cac-6.7.14.tgz", - "integrity": "sha512-b6Ilus+c3RrdDk+JhLKUAQfzzgLEPy6wcXqS7f/xe1EETvsDP6GORG7SFuOs6cID5YkqchW/LXZbX5bc8j7ZcQ==", - "dev": true, - "license": "MIT", + "node_modules/bson": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/bson/-/bson-7.2.0.tgz", + "integrity": "sha512-YCEo7KjMlbNlyHhz7zAZNDpIpQbd+wOEHJYezv0nMYTn4x31eIUM2yomNNubclAt63dObUzKHWsBLJ9QcZNSnQ==", + "engines": { + "node": ">=20.19.0" + } + }, + "node_modules/buffer": { + "version": "6.0.3", + "resolved": "https://registry.npmjs.org/buffer/-/buffer-6.0.3.tgz", + "integrity": "sha512-FTiCpNxtwiZZHEZbcbTIcZjERVICn9yq/pDFkTl95/AxzD1naBctN7YO68riM/gLSDY7sdrMby8hofADYuuqOA==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "dependencies": { + "base64-js": "^1.3.1", + "ieee754": "^1.2.1" + } + }, + "node_modules/buffer-from": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/buffer-from/-/buffer-from-1.1.2.tgz", + "integrity": "sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ==" + }, + "node_modules/bytes": { + "version": "3.1.2", + "resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz", + "integrity": "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/cac": { + "version": "6.7.14", + "resolved": "https://registry.npmjs.org/cac/-/cac-6.7.14.tgz", + "integrity": "sha512-b6Ilus+c3RrdDk+JhLKUAQfzzgLEPy6wcXqS7f/xe1EETvsDP6GORG7SFuOs6cID5YkqchW/LXZbX5bc8j7ZcQ==", + "dev": true, + "license": "MIT", "engines": { "node": ">=8" } @@ -7327,6 +7747,40 @@ "node": ">=6" } 
}, + "node_modules/canonicalize": { + "version": "1.0.8", + "resolved": "https://registry.npmjs.org/canonicalize/-/canonicalize-1.0.8.tgz", + "integrity": "sha512-0CNTVCLZggSh7bc5VkX5WWPWO+cyZbNd07IHIsSXLia/eAq+r836hgk+8BKoEh7949Mda87VUOitx5OddVj64A==" + }, + "node_modules/cbor-extract": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/cbor-extract/-/cbor-extract-2.2.0.tgz", + "integrity": "sha512-Ig1zM66BjLfTXpNgKpvBePq271BPOvu8MR0Jl080yG7Jsl+wAZunfrwiwA+9ruzm/WEdIV5QF/bjDZTqyAIVHA==", + "hasInstallScript": true, + "optional": true, + "dependencies": { + "node-gyp-build-optional-packages": "5.1.1" + }, + "bin": { + "download-cbor-prebuilds": "bin/download-prebuilds.js" + }, + "optionalDependencies": { + "@cbor-extract/cbor-extract-darwin-arm64": "2.2.0", + "@cbor-extract/cbor-extract-darwin-x64": "2.2.0", + "@cbor-extract/cbor-extract-linux-arm": "2.2.0", + "@cbor-extract/cbor-extract-linux-arm64": "2.2.0", + "@cbor-extract/cbor-extract-linux-x64": "2.2.0", + "@cbor-extract/cbor-extract-win32-x64": "2.2.0" + } + }, + "node_modules/cbor-x": { + "version": "1.6.0", + "resolved": "https://registry.npmjs.org/cbor-x/-/cbor-x-1.6.0.tgz", + "integrity": "sha512-0kareyRwHSkL6ws5VXHEf8uY1liitysCVJjlmhaLG+IXLqhSaOO+t63coaso7yjwEzWZzLy8fJo06gZDVQM9Qg==", + "optionalDependencies": { + "cbor-extract": "^2.2.0" + } + }, "node_modules/ccount": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/ccount/-/ccount-2.0.1.tgz", @@ -7507,26 +7961,6 @@ "node": ">=8.0.0" } }, - "node_modules/co-body/node_modules/http-errors": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-2.0.1.tgz", - "integrity": "sha512-4FbRdAX+bSdmo4AUFuS0WNiPz8NgFt+r8ThgNWmlrjQjt1Q7ZR9+zTlce2859x4KSXrwIsaeTqDoKQmtP8pLmQ==", - "license": "MIT", - "dependencies": { - "depd": "~2.0.0", - "inherits": "~2.0.4", - "setprototypeof": "~1.2.0", - "statuses": "~2.0.2", - "toidentifier": "~1.0.1" - }, - "engines": { - "node": ">= 0.8" - }, - "funding": { - 
"type": "opencollective", - "url": "https://opencollective.com/express" - } - }, "node_modules/co-body/node_modules/iconv-lite": { "version": "0.4.24", "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.24.tgz", @@ -7632,6 +8066,11 @@ "url": "https://github.com/sponsors/wooorm" } }, + "node_modules/commist": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/commist/-/commist-3.2.0.tgz", + "integrity": "sha512-4PIMoPniho+LqXmpS5d3NuGYncG6XWlkBSVGiWycL22dd42OYdUGil2CWuzklaJoNxyxUSpO4MKIBU94viWNAw==" + }, "node_modules/concat-map": { "version": "0.0.1", "resolved": "https://registry.npmjs.org/concat-map/-/concat-map-0.0.1.tgz", @@ -7639,16 +8078,43 @@ "dev": true, "license": "MIT" }, - "node_modules/content-disposition": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-1.0.0.tgz", - "integrity": "sha512-Au9nRL8VNUut/XSzbQA38+M78dzP4D+eqg3gfJHMIHHYa3bg067xj1KxMUWj+VULbiZMowKngFFbKczUrNJ1mg==", - "license": "MIT", + "node_modules/concat-stream": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/concat-stream/-/concat-stream-2.0.0.tgz", + "integrity": "sha512-MWufYdFw53ccGjCA+Ol7XJYpAlW6/prSMzuPOTRnJGcGzuhLn4Scrz7qf6o8bROZ514ltazcIFJZevcfbo0x7A==", + "engines": [ + "node >= 6.0" + ], "dependencies": { - "safe-buffer": "5.2.1" + "buffer-from": "^1.0.0", + "inherits": "^2.0.3", + "readable-stream": "^3.0.2", + "typedarray": "^0.0.6" + } + }, + "node_modules/concat-stream/node_modules/readable-stream": { + "version": "3.6.2", + "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.2.tgz", + "integrity": "sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA==", + "dependencies": { + "inherits": "^2.0.3", + "string_decoder": "^1.1.1", + "util-deprecate": "^1.0.1" }, "engines": { - "node": ">= 0.6" + "node": ">= 6" + } + }, + "node_modules/content-disposition": { + "version": "1.0.1", + "resolved": 
"https://registry.npmjs.org/content-disposition/-/content-disposition-1.0.1.tgz", + "integrity": "sha512-oIXISMynqSqm241k6kcQ5UwttDILMK4BiurCfGEREw6+X9jkkpEe5T9FZaApyLGGOnFuyMWZpdolTXMtvEJ08Q==", + "engines": { + "node": ">=18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" } }, "node_modules/content-type": { @@ -7664,7 +8130,6 @@ "version": "0.7.2", "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.2.tgz", "integrity": "sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w==", - "license": "MIT", "engines": { "node": ">= 0.6" } @@ -7673,7 +8138,6 @@ "version": "1.2.2", "resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.2.2.tgz", "integrity": "sha512-D76uU73ulSXrD1UXF4KE2TMxVVwhsnCgfAyTg9k8P6KGZjlXKrOLe4dJQKI3Bxi5wjesZoFXJWElNWBjPZMbhg==", - "license": "MIT", "engines": { "node": ">=6.6.0" } @@ -7747,6 +8211,14 @@ "dev": true, "license": "MIT" }, + "node_modules/data-uri-to-buffer": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/data-uri-to-buffer/-/data-uri-to-buffer-4.0.1.tgz", + "integrity": "sha512-0R9ikRb668HB7QDxT1vkpuUBtqc53YyAwMwGeUFKRojY/NWKvdZ+9UYtRfGmhqNbRkTSVpMbmyhXipFFv2cb/A==", + "engines": { + "node": ">= 12" + } + }, "node_modules/dateformat": { "version": "4.6.3", "resolved": "https://registry.npmjs.org/dateformat/-/dateformat-4.6.3.tgz", @@ -7840,6 +8312,15 @@ "npm": "1.2.8000 || >= 1.4.16" } }, + "node_modules/detect-libc": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/detect-libc/-/detect-libc-2.1.2.tgz", + "integrity": "sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ==", + "optional": true, + "engines": { + "node": ">=8" + } + }, "node_modules/devlop": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/devlop/-/devlop-1.1.0.tgz", @@ -7884,7 +8365,7 @@ "version": "0.2.0", "resolved": 
"https://registry.npmjs.org/eastasianwidth/-/eastasianwidth-0.2.0.tgz", "integrity": "sha512-I88TYZWc9XiYHRQ4/3c5rjjfgkjhLyW2luGIheGERbNQ6OY7yTybanSpDXZa8y7VUP9YmDcYa+eyq4ca7iLqWA==", - "dev": true, + "devOptional": true, "license": "MIT" }, "node_modules/ee-first": { @@ -7897,7 +8378,7 @@ "version": "9.2.2", "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-9.2.2.tgz", "integrity": "sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg==", - "dev": true, + "devOptional": true, "license": "MIT" }, "node_modules/emoji-regex-xs": { @@ -8274,11 +8755,26 @@ "version": "1.8.1", "resolved": "https://registry.npmjs.org/etag/-/etag-1.8.1.tgz", "integrity": "sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg==", - "license": "MIT", "engines": { "node": ">= 0.6" } }, + "node_modules/event-target-shim": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/event-target-shim/-/event-target-shim-5.0.1.tgz", + "integrity": "sha512-i/2XbnSz/uxRCU6+NdVJgKWDTM427+MqYbkQzD321DuCQJUqOuJKIA0IM2+W2xtYHdKOmZ4dR6fExsd4SXL+WQ==", + "engines": { + "node": ">=6" + } + }, + "node_modules/events": { + "version": "3.3.0", + "resolved": "https://registry.npmjs.org/events/-/events-3.3.0.tgz", + "integrity": "sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q==", + "engines": { + "node": ">=0.8.x" + } + }, "node_modules/eventsource": { "version": "3.0.7", "resolved": "https://registry.npmjs.org/eventsource/-/eventsource-3.0.7.tgz", @@ -8311,18 +8807,18 @@ } }, "node_modules/express": { - "version": "5.1.0", - "resolved": "https://registry.npmjs.org/express/-/express-5.1.0.tgz", - "integrity": "sha512-DT9ck5YIRU+8GYzzU5kT3eHGA5iL+1Zd0EutOmTE9Dtk+Tvuzd23VBU+ec7HPNSTxXYO55gPV/hq4pSBJDjFpA==", - "license": "MIT", + "version": "5.2.1", + "resolved": "https://registry.npmjs.org/express/-/express-5.2.1.tgz", + "integrity": 
"sha512-hIS4idWWai69NezIdRt2xFVofaF4j+6INOpJlVOLDO8zXGpUVEVzIYk12UUi2JzjEzWL3IOAxcTubgz9Po0yXw==", "dependencies": { "accepts": "^2.0.0", - "body-parser": "^2.2.0", + "body-parser": "^2.2.1", "content-disposition": "^1.0.0", "content-type": "^1.0.5", "cookie": "^0.7.1", "cookie-signature": "^1.2.1", "debug": "^4.4.0", + "depd": "^2.0.0", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "etag": "^1.8.1", @@ -8353,10 +8849,12 @@ } }, "node_modules/express-rate-limit": { - "version": "7.5.1", - "resolved": "https://registry.npmjs.org/express-rate-limit/-/express-rate-limit-7.5.1.tgz", - "integrity": "sha512-7iN8iPMDzOMHPUYllBEsQdWVB6fPDMPqwjBaFrgr4Jgr/+okjvzAy+UHlYYL/Vs0OsOrMkwS6PJDkFlJwoxUnw==", - "license": "MIT", + "version": "8.2.1", + "resolved": "https://registry.npmjs.org/express-rate-limit/-/express-rate-limit-8.2.1.tgz", + "integrity": "sha512-PCZEIEIxqwhzw4KF0n7QF4QqruVTcF73O5kFKUnGOyjbCCgizBBiFaYpd/fnBLUMPw/BWw9OsiN7GgrNYr7j6g==", + "dependencies": { + "ip-address": "10.0.1" + }, "engines": { "node": ">= 16" }, @@ -8370,8 +8868,7 @@ "node_modules/extend": { "version": "3.0.2", "resolved": "https://registry.npmjs.org/extend/-/extend-3.0.2.tgz", - "integrity": "sha512-fjquC59cD7CyW6urNXK0FBufkZcoiGG80wTuPujX590cB5Ttln20E2UB4S/WARVqhXffZl2LNgS+gQdPIIim/g==", - "license": "MIT" + "integrity": "sha512-fjquC59cD7CyW6urNXK0FBufkZcoiGG80wTuPujX590cB5Ttln20E2UB4S/WARVqhXffZl2LNgS+gQdPIIim/g==" }, "node_modules/fast-copy": { "version": "4.0.2", @@ -8418,8 +8915,7 @@ "node_modules/fast-json-stable-stringify": { "version": "2.1.0", "resolved": "https://registry.npmjs.org/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz", - "integrity": "sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==", - "license": "MIT" + "integrity": "sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==" }, "node_modules/fast-levenshtein": { "version": "2.0.6", @@ -8434,6 +8930,33 @@ 
"integrity": "sha512-W+KJc2dmILlPplD/H4K9l9LcAHAfPtP6BY84uVLXQ6Evcz9Lcg33Y2z1IVblT6xdY54PXYVHEv+0Wpq8Io6zkA==", "license": "MIT" }, + "node_modules/fast-unique-numbers": { + "version": "9.0.26", + "resolved": "https://registry.npmjs.org/fast-unique-numbers/-/fast-unique-numbers-9.0.26.tgz", + "integrity": "sha512-3Mtq8p1zQinjGyWfKeuBunbuFoixG72AUkk4VvzbX4ykCW9Q4FzRaNyIlfQhUjnKw2ARVP+/CKnoyr6wfHftig==", + "dependencies": { + "@babel/runtime": "^7.28.6", + "tslib": "^2.8.1" + }, + "engines": { + "node": ">=18.2.0" + } + }, + "node_modules/fast-uri": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/fast-uri/-/fast-uri-3.1.0.tgz", + "integrity": "sha512-iPeeDKJSWf4IEOasVVrknXpaBV0IApz/gp7S2bb7Z4Lljbl2MGJRqInZiUrQwV16cpzw/D3S5j5Julj/gT52AA==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/fastify" + }, + { + "type": "opencollective", + "url": "https://opencollective.com/fastify" + } + ] + }, "node_modules/fast-xml-parser": { "version": "5.2.5", "resolved": "https://registry.npmjs.org/fast-xml-parser/-/fast-xml-parser-5.2.5.tgz", @@ -8462,6 +8985,28 @@ "reusify": "^1.0.4" } }, + "node_modules/fetch-blob": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/fetch-blob/-/fetch-blob-3.2.0.tgz", + "integrity": "sha512-7yAQpD2UMJzLi1Dqv7qFYnPbaPx7ZfFK6PiIxQ4PfkGPyNyl2Ugx+a/umUonmKqjhM4DnfbMvdX6otXq83soQQ==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/jimmywarting" + }, + { + "type": "paypal", + "url": "https://paypal.me/jimmywarting" + } + ], + "dependencies": { + "node-domexception": "^1.0.0", + "web-streams-polyfill": "^3.0.3" + }, + "engines": { + "node": "^12.20 || >= 14.13" + } + }, "node_modules/file-entry-cache": { "version": "8.0.0", "resolved": "https://registry.npmjs.org/file-entry-cache/-/file-entry-cache-8.0.0.tgz", @@ -8489,10 +9034,9 @@ } }, "node_modules/finalhandler": { - "version": "2.1.0", - "resolved": 
"https://registry.npmjs.org/finalhandler/-/finalhandler-2.1.0.tgz", - "integrity": "sha512-/t88Ty3d5JWQbWYgaOGCCYfXRwV1+be02WqYYlL6h0lEiUAMPM8o8qKGO01YIkOHzka2up08wvgYD0mDiI+q3Q==", - "license": "MIT", + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-2.1.1.tgz", + "integrity": "sha512-S8KoZgRZN+a5rNwqTxlZZePjT/4cnm0ROV70LedRHZ0p8u9fRID0hJUZQpkKLzro8LfmC8sx23bY6tVNxv8pQA==", "dependencies": { "debug": "^4.4.0", "encodeurl": "^2.0.0", @@ -8502,7 +9046,11 @@ "statuses": "^2.0.1" }, "engines": { - "node": ">= 0.8" + "node": ">= 18.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" } }, "node_modules/find-up": { @@ -8557,7 +9105,7 @@ "version": "3.3.1", "resolved": "https://registry.npmjs.org/foreground-child/-/foreground-child-3.3.1.tgz", "integrity": "sha512-gIXjKqtFuWEgzFRJA9WCQeSJLZDjgJUOMCMzxtvFq/37KojM1BFGufqsCy0r4qSQmYLsZYMeyRqzIWOMup03sw==", - "dev": true, + "devOptional": true, "license": "ISC", "dependencies": { "cross-spawn": "^7.0.6", @@ -8570,11 +9118,21 @@ "url": "https://github.com/sponsors/isaacs" } }, + "node_modules/formdata-polyfill": { + "version": "4.0.10", + "resolved": "https://registry.npmjs.org/formdata-polyfill/-/formdata-polyfill-4.0.10.tgz", + "integrity": "sha512-buewHzMvYL29jdeQTVILecSaZKnt/RJWjoZCF5OW60Z67/GmSLBkOFM7qh1PI3zFNtJbaZL5eQu1vLfazOwj4g==", + "dependencies": { + "fetch-blob": "^3.1.2" + }, + "engines": { + "node": ">=12.20.0" + } + }, "node_modules/forwarded": { "version": "0.2.0", "resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz", "integrity": "sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow==", - "license": "MIT", "engines": { "node": ">= 0.6" } @@ -8583,7 +9141,6 @@ "version": "2.0.0", "resolved": "https://registry.npmjs.org/fresh/-/fresh-2.0.0.tgz", "integrity": "sha512-Rx/WycZ60HOaqLKAi6cHRKKI7zxWbJ31MhntmtwMoaTeF7XFH9hhBp8vITaMidfljRQ6eYWCKkaTK+ykVJHP2A==", - 
"license": "MIT", "engines": { "node": ">= 0.8" } @@ -8613,33 +9170,34 @@ } }, "node_modules/gaxios": { - "version": "6.7.1", - "resolved": "https://registry.npmjs.org/gaxios/-/gaxios-6.7.1.tgz", - "integrity": "sha512-LDODD4TMYx7XXdpwxAVRAIAuB0bzv0s+ywFonY46k126qzQHT9ygyoa9tncmOiQmmDrik65UYsEkv3lbfqQ3yQ==", - "license": "Apache-2.0", + "version": "7.1.3", + "resolved": "https://registry.npmjs.org/gaxios/-/gaxios-7.1.3.tgz", + "integrity": "sha512-YGGyuEdVIjqxkxVH1pUTMY/XtmmsApXrCVv5EU25iX6inEPbV+VakJfLealkBtJN69AQmh1eGOdCl9Sm1UP6XQ==", + "optional": true, + "peer": true, "dependencies": { "extend": "^3.0.2", "https-proxy-agent": "^7.0.1", - "is-stream": "^2.0.0", - "node-fetch": "^2.6.9", - "uuid": "^9.0.1" + "node-fetch": "^3.3.2", + "rimraf": "^5.0.1" }, "engines": { - "node": ">=14" + "node": ">=18" } }, "node_modules/gcp-metadata": { - "version": "6.1.1", - "resolved": "https://registry.npmjs.org/gcp-metadata/-/gcp-metadata-6.1.1.tgz", - "integrity": "sha512-a4tiq7E0/5fTjxPAaH4jpjkSv/uCaU2p5KC6HVGrvl0cDjA8iBZv4vv1gyzlmK0ZUKqwpOyQMKzZQe3lTit77A==", - "license": "Apache-2.0", + "version": "7.0.1", + "resolved": "https://registry.npmjs.org/gcp-metadata/-/gcp-metadata-7.0.1.tgz", + "integrity": "sha512-UcO3kefx6dCcZkgcTGgVOTFb7b1LlQ02hY1omMjjrrBzkajRMCFgYOjs7J71WqnuG1k2b+9ppGL7FsOfhZMQKQ==", + "optional": true, + "peer": true, "dependencies": { - "gaxios": "^6.1.1", - "google-logging-utils": "^0.0.2", + "gaxios": "^7.0.0", + "google-logging-utils": "^1.0.0", "json-bigint": "^1.0.0" }, "engines": { - "node": ">=14" + "node": ">=18" } }, "node_modules/get-caller-file": { @@ -8705,7 +9263,7 @@ "version": "10.4.5", "resolved": "https://registry.npmjs.org/glob/-/glob-10.4.5.tgz", "integrity": "sha512-7Bv8RF0k6xjo7d4A/PxYLbUCfb6c+Vpd2/mB2yRDlew7Jb5hEXiCD9ibfO7wpk8i4sevK6DFny9h7EYbM3/sHg==", - "dev": true, + "devOptional": true, "license": "ISC", "dependencies": { "foreground-child": "^3.1.0", @@ -8749,10 +9307,11 @@ } }, "node_modules/google-logging-utils": { - 
"version": "0.0.2", - "resolved": "https://registry.npmjs.org/google-logging-utils/-/google-logging-utils-0.0.2.tgz", - "integrity": "sha512-NEgUnEcBiP5HrPzufUkBzJOD/Sxsco3rLNo1F1TNf7ieU8ryUzBhqba8r756CjLX7rn3fHl6iLEwPYuqpoKgQQ==", - "license": "Apache-2.0", + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/google-logging-utils/-/google-logging-utils-1.1.3.tgz", + "integrity": "sha512-eAmLkjDjAFCVXg7A1unxHsLf961m6y17QFqXqAXGj/gVkKFrEICfStRfwUlGNfeCEjNRa32JEWOUTlYXPyyKvA==", + "optional": true, + "peer": true, "engines": { "node": ">=14" } @@ -8854,6 +9413,14 @@ "integrity": "sha512-7xgomUX6ADmcYzFik0HzAxh/73YlKR9bmFzf51CZwR+b6YtzU2m0u49hQCqV6SvlqIqsaxovfwdvbnsw3b/zpg==", "license": "MIT" }, + "node_modules/hono": { + "version": "4.11.9", + "resolved": "https://registry.npmjs.org/hono/-/hono-4.11.9.tgz", + "integrity": "sha512-Eaw2YTGM6WOxA6CXbckaEvslr2Ne4NFsKrvc0v97JD5awbmeBLO5w9Ho9L9kmKonrwF9RJlW6BxT1PVv/agBHQ==", + "engines": { + "node": ">=16.9.0" + } + }, "node_modules/hookable": { "version": "5.5.3", "resolved": "https://registry.npmjs.org/hookable/-/hookable-5.5.3.tgz", @@ -8927,35 +9494,28 @@ } }, "node_modules/http-errors": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-2.0.0.tgz", - "integrity": "sha512-FtwrG/euBzaEjYeRqOgly7G0qviiXoJWnvEH2Z1plBdXgbyjv34pHTSb9zoeHMyDy33+DWy5Wt9Wo+TURtOYSQ==", - "license": "MIT", + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-2.0.1.tgz", + "integrity": "sha512-4FbRdAX+bSdmo4AUFuS0WNiPz8NgFt+r8ThgNWmlrjQjt1Q7ZR9+zTlce2859x4KSXrwIsaeTqDoKQmtP8pLmQ==", "dependencies": { - "depd": "2.0.0", - "inherits": "2.0.4", - "setprototypeof": "1.2.0", - "statuses": "2.0.1", - "toidentifier": "1.0.1" + "depd": "~2.0.0", + "inherits": "~2.0.4", + "setprototypeof": "~1.2.0", + "statuses": "~2.0.2", + "toidentifier": "~1.0.1" }, "engines": { "node": ">= 0.8" - } - }, - "node_modules/http-errors/node_modules/statuses": { - "version": "2.0.1", 
- "resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz", - "integrity": "sha512-RwNA9Z/7PrK06rYLIzFMlaF+l73iwpzsqRIFgbMLbTcLD6cOao82TaWefPXQvB2fOC4AjuYSEndS7N/mTCbkdQ==", - "license": "MIT", - "engines": { - "node": ">= 0.8" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" } }, "node_modules/https-proxy-agent": { "version": "7.0.6", "resolved": "https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-7.0.6.tgz", "integrity": "sha512-vK9P5/iUfdl95AI+JVyUuIcVtd4ofvtrOr3HNtM2yxC9bnMbEdp3x01OhQNnjb8IJYi38VlTE3mBXwcfvywuSw==", - "license": "MIT", "dependencies": { "agent-base": "^7.1.2", "debug": "4" @@ -8965,17 +9525,39 @@ } }, "node_modules/iconv-lite": { - "version": "0.6.3", - "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.6.3.tgz", - "integrity": "sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw==", - "license": "MIT", + "version": "0.7.2", + "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.7.2.tgz", + "integrity": "sha512-im9DjEDQ55s9fL4EYzOAv0yMqmMBSZp6G0VvFyTMPKWxiSBHUj9NW/qqLmXUwXrrM7AvqSlTCfvqRb0cM8yYqw==", "dependencies": { "safer-buffer": ">= 2.1.2 < 3.0.0" }, "engines": { "node": ">=0.10.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" } }, + "node_modules/ieee754": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/ieee754/-/ieee754-1.2.1.tgz", + "integrity": "sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ] + }, "node_modules/ignore": { "version": "7.0.5", "resolved": "https://registry.npmjs.org/ignore/-/ignore-7.0.5.tgz", @@ -9064,11 +9646,18 @@ "url": 
"https://opencollective.com/ioredis" } }, + "node_modules/ip-address": { + "version": "10.0.1", + "resolved": "https://registry.npmjs.org/ip-address/-/ip-address-10.0.1.tgz", + "integrity": "sha512-NWv9YLW4PoW2B7xtzaS3NCot75m6nK7Icdv0o3lfMceJVRfSoQwqD4wEH5rLwoKJwUiZ/rfpiVBhnaF0FK4HoA==", + "engines": { + "node": ">= 12" + } + }, "node_modules/ipaddr.js": { "version": "1.9.1", "resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz", "integrity": "sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g==", - "license": "MIT", "engines": { "node": ">= 0.10" } @@ -9133,14 +9722,12 @@ "node_modules/is-promise": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/is-promise/-/is-promise-4.0.0.tgz", - "integrity": "sha512-hvpoI6korhJMnej285dSg6nu1+e6uxs7zG3BYAm5byqDsgJNWwxzM6z6iZiAgQR4TJ30JmBTOwqZUw3WlyH3AQ==", - "license": "MIT" + "integrity": "sha512-hvpoI6korhJMnej285dSg6nu1+e6uxs7zG3BYAm5byqDsgJNWwxzM6z6iZiAgQR4TJ30JmBTOwqZUw3WlyH3AQ==" }, "node_modules/is-stream": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/is-stream/-/is-stream-2.0.1.tgz", "integrity": "sha512-hFoiJiTl63nn+kstHGBtewWSKnQLpyb155KHheA1l39uvtO9nWIop1p3udqPcUd/xbF1VLMO4n7OI6p7RbngDg==", - "license": "MIT", "engines": { "node": ">=8" }, @@ -9225,7 +9812,7 @@ "version": "3.4.3", "resolved": "https://registry.npmjs.org/jackspeak/-/jackspeak-3.4.3.tgz", "integrity": "sha512-OGlZQpz2yfahA/Rd1Y8Cd9SIEsqvXkLVoSw/cgwhnhFMDbsQFeZYoJJ7bIZBS9BcamUW96asq/npPWugM+RQBw==", - "dev": true, + "devOptional": true, "license": "BlueOak-1.0.0", "dependencies": { "@isaacs/cliui": "^8.0.2" @@ -9237,6 +9824,14 @@ "@pkgjs/parseargs": "^0.11.0" } }, + "node_modules/jose": { + "version": "6.1.3", + "resolved": "https://registry.npmjs.org/jose/-/jose-6.1.3.tgz", + "integrity": "sha512-0TpaTfihd4QMNwrz/ob2Bp7X04yuxJkjRGi4aKmOqwhov54i6u79oCv7T+C7lo70MKH6BesI3vscD1yb/yzKXQ==", + "funding": { + "url": "https://github.com/sponsors/panva" + } + }, 
"node_modules/joycon": { "version": "3.1.1", "resolved": "https://registry.npmjs.org/joycon/-/joycon-3.1.1.tgz", @@ -9246,6 +9841,15 @@ "node": ">=10" } }, + "node_modules/js-sdsl": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/js-sdsl/-/js-sdsl-4.3.0.tgz", + "integrity": "sha512-mifzlm2+5nZ+lEcLJMoBK0/IH/bDg8XnJfd/Wq6IP+xoCjLZsTOnV2QpxlVbX9bMnkl5PdEjNtBJ9Cj1NjifhQ==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/js-sdsl" + } + }, "node_modules/js-tokens": { "version": "9.0.1", "resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-9.0.1.tgz", @@ -9286,8 +9890,14 @@ "version": "0.4.1", "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz", "integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==", + "dev": true, "license": "MIT" }, + "node_modules/json-schema-typed": { + "version": "8.0.2", + "resolved": "https://registry.npmjs.org/json-schema-typed/-/json-schema-typed-8.0.2.tgz", + "integrity": "sha512-fQhoXdcvc3V28x7C7BMs4P5+kNlgUURe2jmUT1T//oBRMDrqy1QPelJimwZGo7Hg9VPV3EQV5Bnq4hbFy2vetA==" + }, "node_modules/json-stable-stringify-without-jsonify": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/json-stable-stringify-without-jsonify/-/json-stable-stringify-without-jsonify-1.0.1.tgz", @@ -9295,6 +9905,31 @@ "dev": true, "license": "MIT" }, + "node_modules/jsonld": { + "version": "8.3.3", + "resolved": "https://registry.npmjs.org/jsonld/-/jsonld-8.3.3.tgz", + "integrity": "sha512-9YcilrF+dLfg9NTEof/mJLMtbdX1RJ8dbWtJgE00cMOIohb1lIyJl710vFiTaiHTl6ZYODJuBd32xFvUhmv3kg==", + "dependencies": { + "@digitalbazaar/http-client": "^3.4.1", + "canonicalize": "^1.0.1", + "lru-cache": "^6.0.0", + "rdf-canonize": "^3.4.0" + }, + "engines": { + "node": ">=14" + } + }, + "node_modules/jsonld/node_modules/lru-cache": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-6.0.0.tgz", 
+ "integrity": "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA==", + "dependencies": { + "yallist": "^4.0.0" + }, + "engines": { + "node": ">=10" + } + }, "node_modules/keygrip": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/keygrip/-/keygrip-1.1.0.tgz", @@ -9474,6 +10109,41 @@ "node": ">= 0.6" } }, + "node_modules/ky": { + "version": "0.33.3", + "resolved": "https://registry.npmjs.org/ky/-/ky-0.33.3.tgz", + "integrity": "sha512-CasD9OCEQSFIam2U8efFK81Yeg8vNMTBUqtMOHlrcWQHqUX3HeCl9Dr31u4toV7emlH8Mymk5+9p0lL6mKb/Xw==", + "engines": { + "node": ">=14.16" + }, + "funding": { + "url": "https://github.com/sindresorhus/ky?sponsor=1" + } + }, + "node_modules/ky-universal": { + "version": "0.11.0", + "resolved": "https://registry.npmjs.org/ky-universal/-/ky-universal-0.11.0.tgz", + "integrity": "sha512-65KyweaWvk+uKKkCrfAf+xqN2/epw1IJDtlyCPxYffFCMR8u1sp2U65NtWpnozYfZxQ6IUzIlvUcw+hQ82U2Xw==", + "dependencies": { + "abort-controller": "^3.0.0", + "node-fetch": "^3.2.10" + }, + "engines": { + "node": ">=14.16" + }, + "funding": { + "url": "https://github.com/sindresorhus/ky-universal?sponsor=1" + }, + "peerDependencies": { + "ky": ">=0.31.4", + "web-streams-polyfill": ">=3.2.1" + }, + "peerDependenciesMeta": { + "web-streams-polyfill": { + "optional": true + } + } + }, "node_modules/levn": { "version": "0.4.1", "resolved": "https://registry.npmjs.org/levn/-/levn-0.4.1.tgz", @@ -9545,7 +10215,6 @@ "version": "10.4.3", "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-10.4.3.tgz", "integrity": "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ==", - "dev": true, "license": "ISC" }, "node_modules/magic-string": { @@ -9633,11 +10302,15 @@ "node": ">= 0.8" } }, + "node_modules/memory-pager": { + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/memory-pager/-/memory-pager-1.5.0.tgz", + "integrity": 
"sha512-ZS4Bp4r/Zoeq6+NLJpP+0Zzm0pR8whtGPf1XExKLJBAczGMnSi3It14OiNCStjQjM6NU1okjQGSxgEZN8eBYKg==" + }, "node_modules/merge-descriptors": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-2.0.0.tgz", "integrity": "sha512-Snk314V5ayFLhp3fkUREub6WtjBfPdCPY1Ln8/8munuLuiYhsABgBVWsozAG+MWMbVEvcdcpbi9R7ww22l9Q3g==", - "license": "MIT", "engines": { "node": ">=18" }, @@ -9792,7 +10465,7 @@ "version": "9.0.5", "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-9.0.5.tgz", "integrity": "sha512-G6T0ZX48xgozx7587koeX9Ys2NYy6Gmv//P89sEte9V9whIapMNF4idKxnW2QtCcLiTWlb/wfCabAtAFWhhBow==", - "dev": true, + "devOptional": true, "license": "ISC", "dependencies": { "brace-expansion": "^2.0.1" @@ -9817,7 +10490,7 @@ "version": "7.1.2", "resolved": "https://registry.npmjs.org/minipass/-/minipass-7.1.2.tgz", "integrity": "sha512-qOOzS1cBTWYF4BH8fVePDBOO9iptMnGUEZwNc/cMWnTV2nVLZ7VoNWEPHkYczZA0pdoA7dl6e7FL659nX9S2aw==", - "dev": true, + "devOptional": true, "license": "ISC", "engines": { "node": ">=16 || 14 >=14.17" @@ -9843,6 +10516,135 @@ "integrity": "sha512-EGWKgxALGMgzvxYF1UyGTy0HXX/2vHLkw6+NvDKW2jypWbHpjQuj4UMcqQWXHERJhVGKikolT06G3bcKe4fi7w==", "license": "MIT" }, + "node_modules/mongodb": { + "version": "7.1.0", + "resolved": "https://registry.npmjs.org/mongodb/-/mongodb-7.1.0.tgz", + "integrity": "sha512-kMfnKunbolQYwCIyrkxNJFB4Ypy91pYqua5NargS/f8ODNSJxT03ZU3n1JqL4mCzbSih8tvmMEMLpKTT7x5gCg==", + "dependencies": { + "@mongodb-js/saslprep": "^1.3.0", + "bson": "^7.1.1", + "mongodb-connection-string-url": "^7.0.0" + }, + "engines": { + "node": ">=20.19.0" + }, + "peerDependencies": { + "@aws-sdk/credential-providers": "^3.806.0", + "@mongodb-js/zstd": "^7.0.0", + "gcp-metadata": "^7.0.1", + "kerberos": "^7.0.0", + "mongodb-client-encryption": ">=7.0.0 <7.1.0", + "snappy": "^7.3.2", + "socks": "^2.8.6" + }, + "peerDependenciesMeta": { + "@aws-sdk/credential-providers": { + "optional": true + }, + "@mongodb-js/zstd": { + 
"optional": true + }, + "gcp-metadata": { + "optional": true + }, + "kerberos": { + "optional": true + }, + "mongodb-client-encryption": { + "optional": true + }, + "snappy": { + "optional": true + }, + "socks": { + "optional": true + } + } + }, + "node_modules/mongodb-connection-string-url": { + "version": "7.0.1", + "resolved": "https://registry.npmjs.org/mongodb-connection-string-url/-/mongodb-connection-string-url-7.0.1.tgz", + "integrity": "sha512-h0AZ9A7IDVwwHyMxmdMXKy+9oNlF0zFoahHiX3vQ8e3KFcSP3VmsmfvtRSuLPxmyv2vjIDxqty8smTgie/SNRQ==", + "dependencies": { + "@types/whatwg-url": "^13.0.0", + "whatwg-url": "^14.1.0" + }, + "engines": { + "node": ">=20.19.0" + } + }, + "node_modules/mongodb-connection-string-url/node_modules/tr46": { + "version": "5.1.1", + "resolved": "https://registry.npmjs.org/tr46/-/tr46-5.1.1.tgz", + "integrity": "sha512-hdF5ZgjTqgAntKkklYw0R03MG2x/bSzTtkxmIRw/sTNV8YXsCJ1tfLAX23lhxhHJlEf3CRCOCGGWw3vI3GaSPw==", + "dependencies": { + "punycode": "^2.3.1" + }, + "engines": { + "node": ">=18" + } + }, + "node_modules/mongodb-connection-string-url/node_modules/webidl-conversions": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-7.0.0.tgz", + "integrity": "sha512-VwddBukDzu71offAQR975unBIGqfKZpM+8ZX6ySk8nYhVoo5CYaZyzt3YBvYtRtO+aoGlqxPg/B87NGVZ/fu6g==", + "engines": { + "node": ">=12" + } + }, + "node_modules/mongodb-connection-string-url/node_modules/whatwg-url": { + "version": "14.2.0", + "resolved": "https://registry.npmjs.org/whatwg-url/-/whatwg-url-14.2.0.tgz", + "integrity": "sha512-De72GdQZzNTUBBChsXueQUnPKDkg/5A5zp7pFDuQAj5UFoENpiACU0wlCvzpAGnTkj++ihpKwKyYewn/XNUbKw==", + "dependencies": { + "tr46": "^5.1.0", + "webidl-conversions": "^7.0.0" + }, + "engines": { + "node": ">=18" + } + }, + "node_modules/mqtt": { + "version": "5.15.0", + "resolved": "https://registry.npmjs.org/mqtt/-/mqtt-5.15.0.tgz", + "integrity": 
"sha512-KC+wAssYk83Qu5bT8YDzDYgUJxPhbLeVsDvpY2QvL28PnXYJzC2WkKruyMUgBAZaQ7h9lo9k2g4neRNUUxzgMw==", + "dependencies": { + "@types/readable-stream": "^4.0.21", + "@types/ws": "^8.18.1", + "commist": "^3.2.0", + "concat-stream": "^2.0.0", + "debug": "^4.4.1", + "help-me": "^5.0.0", + "lru-cache": "^10.4.3", + "minimist": "^1.2.8", + "mqtt-packet": "^9.0.2", + "number-allocator": "^1.0.14", + "readable-stream": "^4.7.0", + "rfdc": "^1.4.1", + "socks": "^2.8.6", + "split2": "^4.2.0", + "worker-timers": "^8.0.23", + "ws": "^8.18.3" + }, + "bin": { + "mqtt": "build/bin/mqtt.js", + "mqtt_pub": "build/bin/pub.js", + "mqtt_sub": "build/bin/sub.js" + }, + "engines": { + "node": ">=16.0.0" + } + }, + "node_modules/mqtt-packet": { + "version": "9.0.2", + "resolved": "https://registry.npmjs.org/mqtt-packet/-/mqtt-packet-9.0.2.tgz", + "integrity": "sha512-MvIY0B8/qjq7bKxdN1eD+nrljoeaai+qjLJgfRn3TiMuz0pamsIWY2bFODPZMSNmabsLANXsLl4EMoWvlaTZWA==", + "dependencies": { + "bl": "^6.0.8", + "debug": "^4.3.4", + "process-nextick-args": "^2.0.1" + } + }, "node_modules/ms": { "version": "2.1.3", "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz", @@ -9879,29 +10681,95 @@ "version": "1.0.0", "resolved": "https://registry.npmjs.org/negotiator/-/negotiator-1.0.0.tgz", "integrity": "sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg==", - "license": "MIT", "engines": { "node": ">= 0.6" } }, + "node_modules/neo4j-driver": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/neo4j-driver/-/neo4j-driver-6.0.1.tgz", + "integrity": "sha512-8DDF2MwEJNz7y7cp97x4u8fmVIP4CWS8qNBxdwxTG0fWtsS+2NdeC+7uXwmmuFOpHvkfXqv63uWY73bfDtOH8Q==", + "dependencies": { + "neo4j-driver-bolt-connection": "6.0.1", + "neo4j-driver-core": "6.0.1", + "rxjs": "^7.8.2" + }, + "engines": { + "node": ">=18.0.0" + } + }, + "node_modules/neo4j-driver-bolt-connection": { + "version": "6.0.1", + "resolved": 
"https://registry.npmjs.org/neo4j-driver-bolt-connection/-/neo4j-driver-bolt-connection-6.0.1.tgz", + "integrity": "sha512-1KyG73TO+CwnYJisdHD0sjUw9yR+P5q3JFcmVPzsHT4/whzCjuXSMpmY4jZcHH2PdY2cBUq4l/6WcDiPMxW2UA==", + "dependencies": { + "buffer": "^6.0.3", + "neo4j-driver-core": "6.0.1", + "string_decoder": "^1.3.0" + } + }, + "node_modules/neo4j-driver-core": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/neo4j-driver-core/-/neo4j-driver-core-6.0.1.tgz", + "integrity": "sha512-5I2KxICAvcHxnWdJyDqwu8PBAQvWVTlQH2ve3VQmtVdJScPqWhpXN1PiX5IIl+cRF3pFpz9GQF53B5n6s0QQUQ==" + }, + "node_modules/node-domexception": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/node-domexception/-/node-domexception-1.0.0.tgz", + "integrity": "sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ==", + "deprecated": "Use your platform's native DOMException instead", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/jimmywarting" + }, + { + "type": "github", + "url": "https://paypal.me/jimmywarting" + } + ], + "engines": { + "node": ">=10.5.0" + } + }, "node_modules/node-fetch": { - "version": "2.7.0", - "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.7.0.tgz", - "integrity": "sha512-c4FRfUm/dbcWZ7U+1Wq0AwCyFL+3nt2bEw05wfxSz+DWpWsitgmSgYmy2dQdWyKC1694ELPqMs/YzUSNozLt8A==", - "license": "MIT", + "version": "3.3.2", + "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-3.3.2.tgz", + "integrity": "sha512-dRB78srN/l6gqWulah9SrxeYnxeddIG30+GOqK/9OlLVyLg3HPnr6SqOWTWOXKRwC2eGYCkZ59NNuSgvSrpgOA==", "dependencies": { - "whatwg-url": "^5.0.0" + "data-uri-to-buffer": "^4.0.0", + "fetch-blob": "^3.1.4", + "formdata-polyfill": "^4.0.10" }, "engines": { - "node": "4.x || >=6.0.0" + "node": "^12.20.0 || ^14.13.1 || >=16.0.0" }, - "peerDependencies": { - "encoding": "^0.1.0" + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/node-fetch" + } + }, + 
"node_modules/node-gyp-build-optional-packages": { + "version": "5.1.1", + "resolved": "https://registry.npmjs.org/node-gyp-build-optional-packages/-/node-gyp-build-optional-packages-5.1.1.tgz", + "integrity": "sha512-+P72GAjVAbTxjjwUmwjVrqrdZROD4nf8KgpBoDxqXXTiYZZt/ud60dE5yvCSr9lRO8e8yv6kgJIC0K0PfZFVQw==", + "optional": true, + "dependencies": { + "detect-libc": "^2.0.1" }, - "peerDependenciesMeta": { - "encoding": { - "optional": true - } + "bin": { + "node-gyp-build-optional-packages": "bin.js", + "node-gyp-build-optional-packages-optional": "optional.js", + "node-gyp-build-optional-packages-test": "build-test.js" + } + }, + "node_modules/number-allocator": { + "version": "1.0.14", + "resolved": "https://registry.npmjs.org/number-allocator/-/number-allocator-1.0.14.tgz", + "integrity": "sha512-OrL44UTVAvkKdOdRQZIJpLkAdjXGTRda052sN4sO77bKEzYYqWKMBjQvrJFzqygI99gL6Z4u2xctPW1tB8ErvA==", + "dependencies": { + "debug": "^4.3.1", + "js-sdsl": "4.3.0" } }, "node_modules/object-assign": { @@ -10049,7 +10917,7 @@ "version": "1.0.1", "resolved": "https://registry.npmjs.org/package-json-from-dist/-/package-json-from-dist-1.0.1.tgz", "integrity": "sha512-UEZIS3/by4OC8vL3P2dTXRETpebLI2NiI5vIrjaD/5UtrkFX/tNbwjTSRAGC/+7CAo2pIcBaRgWmcBBHcsaCIw==", - "dev": true, + "devOptional": true, "license": "BlueOak-1.0.0" }, "node_modules/parent-module": { @@ -10103,7 +10971,7 @@ "version": "1.11.1", "resolved": "https://registry.npmjs.org/path-scurry/-/path-scurry-1.11.1.tgz", "integrity": "sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA==", - "dev": true, + "devOptional": true, "license": "BlueOak-1.0.0", "dependencies": { "lru-cache": "^10.2.0", @@ -10120,7 +10988,6 @@ "version": "8.3.0", "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-8.3.0.tgz", "integrity": "sha512-7jdwVIRtsP8MYpdXSwOS0YdD0Du+qOoF/AEPIt88PcCFrZCzx41oxku1jD88hZBwbNUIEfpqvuhjFaMAqMTWnA==", - "license": "MIT", "funding": { "type": "opencollective", 
"url": "https://opencollective.com/express" @@ -10381,6 +11248,19 @@ "node": ">= 0.8.0" } }, + "node_modules/process": { + "version": "0.11.10", + "resolved": "https://registry.npmjs.org/process/-/process-0.11.10.tgz", + "integrity": "sha512-cdGef/drWFoydD1JsMzuFf8100nZl+GT+yacc2bEced5f9Rjk4z+WtFUTBu9PhOi9j/jfmBPu0mMEY4wIdAF8A==", + "engines": { + "node": ">= 0.6.0" + } + }, + "node_modules/process-nextick-args": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz", + "integrity": "sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==" + }, "node_modules/process-warning": { "version": "5.0.0", "resolved": "https://registry.npmjs.org/process-warning/-/process-warning-5.0.0.tgz", @@ -10436,7 +11316,6 @@ "version": "2.0.7", "resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.7.tgz", "integrity": "sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg==", - "license": "MIT", "dependencies": { "forwarded": "0.2.0", "ipaddr.js": "1.9.1" @@ -10465,10 +11344,9 @@ } }, "node_modules/qs": { - "version": "6.14.0", - "resolved": "https://registry.npmjs.org/qs/-/qs-6.14.0.tgz", - "integrity": "sha512-YWWTjgABSKcvs/nWBi9PycY/JiPJqOD4JA6o9Sej2AtvSGarXxKC3OQSk4pAarbdQlKAh5D4FCQkJNkW+GAn3w==", - "license": "BSD-3-Clause", + "version": "6.14.2", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.14.2.tgz", + "integrity": "sha512-V/yCWTTF7VJ9hIh18Ugr2zhJMP01MY7c5kh4J870L7imm6/DIzBsNLTXzMwUA3yZ5b/KBqLx8Kp3uRvd7xSe3Q==", "dependencies": { "side-channel": "^1.1.0" }, @@ -10510,40 +11388,48 @@ "version": "1.2.1", "resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.1.tgz", "integrity": "sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==", - "license": "MIT", "engines": { "node": ">= 0.6" } }, "node_modules/raw-body": { - "version": "3.0.1", - "resolved": 
"https://registry.npmjs.org/raw-body/-/raw-body-3.0.1.tgz", - "integrity": "sha512-9G8cA+tuMS75+6G/TzW8OtLzmBDMo8p1JRxN5AZ+LAp8uxGA8V8GZm4GQ4/N5QNQEnLmg6SS7wyuSmbKepiKqA==", - "license": "MIT", + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/raw-body/-/raw-body-3.0.2.tgz", + "integrity": "sha512-K5zQjDllxWkf7Z5xJdV0/B0WTNqx6vxG70zJE4N0kBs4LovmEYWJzQGxC9bS9RAKu3bgM40lrd5zoLJ12MQ5BA==", "dependencies": { - "bytes": "3.1.2", - "http-errors": "2.0.0", - "iconv-lite": "0.7.0", - "unpipe": "1.0.0" + "bytes": "~3.1.2", + "http-errors": "~2.0.1", + "iconv-lite": "~0.7.0", + "unpipe": "~1.0.0" }, "engines": { "node": ">= 0.10" } }, - "node_modules/raw-body/node_modules/iconv-lite": { - "version": "0.7.0", - "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.7.0.tgz", - "integrity": "sha512-cf6L2Ds3h57VVmkZe+Pn+5APsT7FpqJtEhhieDCvrE2MK5Qk9MyffgQyuxQTm6BChfeZNtcOLHp9IcWRVcIcBQ==", - "license": "MIT", + "node_modules/rdf-canonize": { + "version": "3.4.0", + "resolved": "https://registry.npmjs.org/rdf-canonize/-/rdf-canonize-3.4.0.tgz", + "integrity": "sha512-fUeWjrkOO0t1rg7B2fdyDTvngj+9RlUyL92vOdiB7c0FPguWVsniIMjEtHH+meLBO9rzkUlUzBVXgWrjI8P9LA==", "dependencies": { - "safer-buffer": ">= 2.1.2 < 3.0.0" + "setimmediate": "^1.0.5" }, "engines": { - "node": ">=0.10.0" + "node": ">=12" + } + }, + "node_modules/readable-stream": { + "version": "4.7.0", + "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-4.7.0.tgz", + "integrity": "sha512-oIGGmcpTLwPga8Bn6/Z75SVaH1z5dUut2ibSyAMVhmUggWpmDn2dapB0n7f8nwaSiRtepAsfJyfXIO5DCVAODg==", + "dependencies": { + "abort-controller": "^3.0.0", + "buffer": "^6.0.3", + "events": "^3.3.0", + "process": "^0.11.10", + "string_decoder": "^1.3.0" }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/express" + "engines": { + "node": "^12.22.0 || ^14.17.0 || >=16.0.0" } }, "node_modules/real-require": { @@ -10628,6 +11514,14 @@ "node": ">=0.10.0" } }, + 
"node_modules/require-from-string": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/require-from-string/-/require-from-string-2.0.2.tgz", + "integrity": "sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw==", + "engines": { + "node": ">=0.10.0" + } + }, "node_modules/require-in-the-middle": { "version": "7.5.2", "resolved": "https://registry.npmjs.org/require-in-the-middle/-/require-in-the-middle-7.5.2.tgz", @@ -10697,9 +11591,24 @@ "version": "1.4.1", "resolved": "https://registry.npmjs.org/rfdc/-/rfdc-1.4.1.tgz", "integrity": "sha512-q1b3N5QkRUWUl7iyylaaj3kOpIT0N2i9MqIEQXP73GVsN9cw3fdx8X63cEmWhJGi2PPCF23Ijp7ktmd39rawIA==", - "dev": true, "license": "MIT" }, + "node_modules/rimraf": { + "version": "5.0.10", + "resolved": "https://registry.npmjs.org/rimraf/-/rimraf-5.0.10.tgz", + "integrity": "sha512-l0OE8wL34P4nJH/H2ffoaniAokM2qSmrtXHmlpvYr5AVVX8msAyW0l8NVJFDxlSK4u3Uh/f41cQheDVdnYijwQ==", + "optional": true, + "peer": true, + "dependencies": { + "glob": "^10.3.7" + }, + "bin": { + "rimraf": "dist/esm/bin.mjs" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, "node_modules/rollup": { "version": "4.52.4", "resolved": "https://registry.npmjs.org/rollup/-/rollup-4.52.4.tgz", @@ -10746,7 +11655,6 @@ "version": "2.2.0", "resolved": "https://registry.npmjs.org/router/-/router-2.2.0.tgz", "integrity": "sha512-nLTrUKm2UyiL7rlhapu/Zl45FwNgkZGaCpZbIHajDYgwlJCOzLSk+cIPAnsEqV955GjILJnKbdQC1nVPz+gAYQ==", - "license": "MIT", "dependencies": { "debug": "^4.4.0", "depd": "^2.0.0", @@ -10782,6 +11690,14 @@ "queue-microtask": "^1.2.2" } }, + "node_modules/rxjs": { + "version": "7.8.2", + "resolved": "https://registry.npmjs.org/rxjs/-/rxjs-7.8.2.tgz", + "integrity": "sha512-dhKf903U/PQZY6boNNtAGdWbG85WAbjT/1xYoZIC7FAY0yWapOBQVsVrDl58W86//e1VpMNBtRV4MaXfdMySFA==", + "dependencies": { + "tslib": "^2.1.0" + } + }, "node_modules/safe-buffer": { "version": "5.2.1", "resolved": 
"https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz", @@ -10854,32 +11770,34 @@ } }, "node_modules/send": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/send/-/send-1.2.0.tgz", - "integrity": "sha512-uaW0WwXKpL9blXE2o0bRhoL2EGXIrZxQ2ZQ4mgcfoBxdFmQold+qWsD2jLrfZ0trjKL6vOw0j//eAwcALFjKSw==", - "license": "MIT", + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/send/-/send-1.2.1.tgz", + "integrity": "sha512-1gnZf7DFcoIcajTjTwjwuDjzuz4PPcY2StKPlsGAQ1+YH20IRVrBaXSWmdjowTJ6u8Rc01PoYOGHXfP1mYcZNQ==", "dependencies": { - "debug": "^4.3.5", + "debug": "^4.4.3", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "etag": "^1.8.1", "fresh": "^2.0.0", - "http-errors": "^2.0.0", - "mime-types": "^3.0.1", + "http-errors": "^2.0.1", + "mime-types": "^3.0.2", "ms": "^2.1.3", "on-finished": "^2.4.1", "range-parser": "^1.2.1", - "statuses": "^2.0.1" + "statuses": "^2.0.2" }, "engines": { "node": ">= 18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" } }, "node_modules/serve-static": { - "version": "2.2.0", - "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-2.2.0.tgz", - "integrity": "sha512-61g9pCh0Vnh7IutZjtLGGpTA355+OPn2TyDv/6ivP2h/AdAVX9azsoxmg2/M6nZeQZNYBEwIcsne1mJd9oQItQ==", - "license": "MIT", + "version": "2.2.1", + "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-2.2.1.tgz", + "integrity": "sha512-xRXBn0pPqQTVQiC8wyQrKs2MOlX24zQ0POGaj0kultvoOCstBQM5yvOhAVSUwOMjQtTvsPWoNCHfPGwaaQJhTw==", "dependencies": { "encodeurl": "^2.0.0", "escape-html": "^1.0.3", @@ -10888,8 +11806,17 @@ }, "engines": { "node": ">= 18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" } }, + "node_modules/setimmediate": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/setimmediate/-/setimmediate-1.0.5.tgz", + "integrity": "sha512-MATJdZp8sLqDl/68LfQmbP8zKPLQNV6BIZoIgrscFDQ+RsvK/BxeDQOgyxKKoh0y/8h3BqVFnCqQ/gd+reiIXA==" + 
}, "node_modules/setprototypeof": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.2.0.tgz", @@ -11023,7 +11950,7 @@ "version": "4.1.0", "resolved": "https://registry.npmjs.org/signal-exit/-/signal-exit-4.1.0.tgz", "integrity": "sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw==", - "dev": true, + "devOptional": true, "license": "ISC", "engines": { "node": ">=14" @@ -11032,6 +11959,28 @@ "url": "https://github.com/sponsors/isaacs" } }, + "node_modules/smart-buffer": { + "version": "4.2.0", + "resolved": "https://registry.npmjs.org/smart-buffer/-/smart-buffer-4.2.0.tgz", + "integrity": "sha512-94hK0Hh8rPqQl2xXc3HsaBoOXKV20MToPkcXvwbISWLEs+64sBq5kFgn2kJDHb1Pry9yrP0dxrCI9RRci7RXKg==", + "engines": { + "node": ">= 6.0.0", + "npm": ">= 3.0.0" + } + }, + "node_modules/socks": { + "version": "2.8.7", + "resolved": "https://registry.npmjs.org/socks/-/socks-2.8.7.tgz", + "integrity": "sha512-HLpt+uLy/pxB+bum/9DzAgiKS8CX1EvbWxI4zlmgGCExImLdiad2iCwXT5Z4c9c3Eq8rP2318mPW2c+QbtjK8A==", + "dependencies": { + "ip-address": "^10.0.1", + "smart-buffer": "^4.2.0" + }, + "engines": { + "node": ">= 10.0.0", + "npm": ">= 3.0.0" + } + }, "node_modules/sonic-boom": { "version": "4.2.0", "resolved": "https://registry.npmjs.org/sonic-boom/-/sonic-boom-4.2.0.tgz", @@ -11062,6 +12011,14 @@ "url": "https://github.com/sponsors/wooorm" } }, + "node_modules/sparse-bitfield": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/sparse-bitfield/-/sparse-bitfield-3.0.3.tgz", + "integrity": "sha512-kvzhi7vqKTfkh0PZU+2D2PIllw2ymqJKujUcyPMd9Y75Nv4nPbGJZXNhxsgdQab2BmlDct1YnfQCguEvHr7VsQ==", + "dependencies": { + "memory-pager": "^1.0.2" + } + }, "node_modules/speakingurl": { "version": "14.0.1", "resolved": "https://registry.npmjs.org/speakingurl/-/speakingurl-14.0.1.tgz", @@ -11110,11 +12067,19 @@ "dev": true, "license": "MIT" }, + "node_modules/string_decoder": { + "version": "1.3.0", + "resolved": 
"https://registry.npmjs.org/string_decoder/-/string_decoder-1.3.0.tgz", + "integrity": "sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA==", + "dependencies": { + "safe-buffer": "~5.2.0" + } + }, "node_modules/string-width": { "version": "5.1.2", "resolved": "https://registry.npmjs.org/string-width/-/string-width-5.1.2.tgz", "integrity": "sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA==", - "dev": true, + "devOptional": true, "license": "MIT", "dependencies": { "eastasianwidth": "^0.2.0", @@ -11133,7 +12098,7 @@ "version": "4.2.3", "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", - "dev": true, + "devOptional": true, "license": "MIT", "dependencies": { "emoji-regex": "^8.0.0", @@ -11148,7 +12113,7 @@ "version": "5.0.1", "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", - "dev": true, + "devOptional": true, "license": "MIT", "engines": { "node": ">=8" @@ -11158,14 +12123,14 @@ "version": "8.0.0", "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", - "dev": true, + "devOptional": true, "license": "MIT" }, "node_modules/string-width-cjs/node_modules/strip-ansi": { "version": "6.0.1", "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", - "dev": true, + "devOptional": true, "license": "MIT", "dependencies": { "ansi-regex": "^5.0.1" @@ -11193,7 +12158,7 @@ "version": "7.1.2", "resolved": 
"https://registry.npmjs.org/strip-ansi/-/strip-ansi-7.1.2.tgz", "integrity": "sha512-gmBGslpoQJtgnMAvOVqGZpEz9dyoKTCzy2nfz/n8aIFhN/jCE/rCmcxabB6jOOHV+0WNnylOxaxBQPSvcWklhA==", - "dev": true, + "devOptional": true, "license": "MIT", "dependencies": { "ansi-regex": "^6.0.1" @@ -11210,7 +12175,7 @@ "version": "6.0.1", "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", - "dev": true, + "devOptional": true, "license": "MIT", "dependencies": { "ansi-regex": "^5.0.1" @@ -11223,7 +12188,7 @@ "version": "5.0.1", "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", - "dev": true, + "devOptional": true, "license": "MIT", "engines": { "node": ">=8" @@ -11542,6 +12507,11 @@ "node": ">= 0.6" } }, + "node_modules/typedarray": { + "version": "0.0.6", + "resolved": "https://registry.npmjs.org/typedarray/-/typedarray-0.0.6.tgz", + "integrity": "sha512-/aCDEGatGvZ2BIk+HmLf4ifCJFwvKFNb9/JeZPMulfgFracn9QFcAf5GO8B/mweUjSoblS5In0cWhqpfs/5PQA==" + }, "node_modules/typescript": { "version": "5.9.3", "resolved": "https://registry.npmjs.org/typescript/-/typescript-5.9.3.tgz", @@ -11556,6 +12526,17 @@ "node": ">=14.17" } }, + "node_modules/undici": { + "version": "5.29.0", + "resolved": "https://registry.npmjs.org/undici/-/undici-5.29.0.tgz", + "integrity": "sha512-raqeBD6NQK4SkWhQzeYKd1KmIG6dllBOTt55Rmkt4HtI9mwdWtJljnrXjAFUBLTSN67HWrOIZ3EPF4kjUw80Bg==", + "dependencies": { + "@fastify/busboy": "^2.0.0" + }, + "engines": { + "node": ">=14.0" + } + }, "node_modules/undici-types": { "version": "7.14.0", "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-7.14.0.tgz", @@ -11648,11 +12629,17 @@ "version": "4.4.1", "resolved": "https://registry.npmjs.org/uri-js/-/uri-js-4.4.1.tgz", "integrity": 
"sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==", + "dev": true, "license": "BSD-2-Clause", "dependencies": { "punycode": "^2.1.0" } }, + "node_modules/util-deprecate": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz", + "integrity": "sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==" + }, "node_modules/uuid": { "version": "9.0.1", "resolved": "https://registry.npmjs.org/uuid/-/uuid-9.0.1.tgz", @@ -11661,7 +12648,6 @@ "https://github.com/sponsors/broofa", "https://github.com/sponsors/ctavan" ], - "license": "MIT", "bin": { "uuid": "dist/bin/uuid" } @@ -12368,6 +13354,14 @@ } } }, + "node_modules/web-streams-polyfill": { + "version": "3.3.3", + "resolved": "https://registry.npmjs.org/web-streams-polyfill/-/web-streams-polyfill-3.3.3.tgz", + "integrity": "sha512-d2JWLCivmZYTSIoge9MsgFCZrt571BikcWGYkjC1khllbTeDlGqZ2D8vD8E/lJa8WGWbb7Plm8/XJYV7IJHZZw==", + "engines": { + "node": ">= 8" + } + }, "node_modules/webidl-conversions": { "version": "3.0.1", "resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-3.0.1.tgz", @@ -12426,11 +13420,54 @@ "node": ">=0.10.0" } }, + "node_modules/worker-factory": { + "version": "7.0.48", + "resolved": "https://registry.npmjs.org/worker-factory/-/worker-factory-7.0.48.tgz", + "integrity": "sha512-CGmBy3tJvpBPjUvb0t4PrpKubUsfkI1Ohg0/GGFU2RvA9j/tiVYwKU8O7yu7gH06YtzbeJLzdUR29lmZKn5pag==", + "dependencies": { + "@babel/runtime": "^7.28.6", + "fast-unique-numbers": "^9.0.26", + "tslib": "^2.8.1" + } + }, + "node_modules/worker-timers": { + "version": "8.0.30", + "resolved": "https://registry.npmjs.org/worker-timers/-/worker-timers-8.0.30.tgz", + "integrity": "sha512-8P7YoMHWN0Tz7mg+9oEhuZdjBIn2z6gfjlJqFcHiDd9no/oLnMGCARCDkV1LR3ccQus62ZdtIp7t3aTKrMLHOg==", + "dependencies": { + "@babel/runtime": "^7.28.6", + "tslib": "^2.8.1", + "worker-timers-broker": "^8.0.15", + 
"worker-timers-worker": "^9.0.13" + } + }, + "node_modules/worker-timers-broker": { + "version": "8.0.15", + "resolved": "https://registry.npmjs.org/worker-timers-broker/-/worker-timers-broker-8.0.15.tgz", + "integrity": "sha512-Te+EiVUMzG5TtHdmaBZvBrZSFNauym6ImDaCAnzQUxvjnw+oGjMT2idmAOgDy30vOZMLejd0bcsc90Axu6XPWA==", + "dependencies": { + "@babel/runtime": "^7.28.6", + "broker-factory": "^3.1.13", + "fast-unique-numbers": "^9.0.26", + "tslib": "^2.8.1", + "worker-timers-worker": "^9.0.13" + } + }, + "node_modules/worker-timers-worker": { + "version": "9.0.13", + "resolved": "https://registry.npmjs.org/worker-timers-worker/-/worker-timers-worker-9.0.13.tgz", + "integrity": "sha512-qjn18szGb1kjcmh2traAdki1eiIS5ikFo+L90nfMOvSRpuDw1hAcR1nzkP2+Hkdqz5thIRnfuWx7QSpsEUsA6Q==", + "dependencies": { + "@babel/runtime": "^7.28.6", + "tslib": "^2.8.1", + "worker-factory": "^7.0.48" + } + }, "node_modules/wrap-ansi": { "version": "8.1.0", "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-8.1.0.tgz", "integrity": "sha512-si7QWI6zUMq56bESFvagtmzMdGOtoxfR+Sez11Mobfc7tm+VkUckk9bW2UeffTGVUbOksxmSw0AA2gs8g71NCQ==", - "dev": true, + "devOptional": true, "license": "MIT", "dependencies": { "ansi-styles": "^6.1.0", @@ -12449,7 +13486,7 @@ "version": "7.0.0", "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-7.0.0.tgz", "integrity": "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==", - "dev": true, + "devOptional": true, "license": "MIT", "dependencies": { "ansi-styles": "^4.0.0", @@ -12467,7 +13504,7 @@ "version": "5.0.1", "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", - "dev": true, + "devOptional": true, "license": "MIT", "engines": { "node": ">=8" @@ -12477,14 +13514,14 @@ "version": "8.0.0", "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", 
"integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", - "dev": true, + "devOptional": true, "license": "MIT" }, "node_modules/wrap-ansi-cjs/node_modules/string-width": { "version": "4.2.3", "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", - "dev": true, + "devOptional": true, "license": "MIT", "dependencies": { "emoji-regex": "^8.0.0", @@ -12499,7 +13536,7 @@ "version": "6.0.1", "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", - "dev": true, + "devOptional": true, "license": "MIT", "dependencies": { "ansi-regex": "^5.0.1" @@ -12512,7 +13549,7 @@ "version": "6.2.3", "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-6.2.3.tgz", "integrity": "sha512-4Dj6M28JB+oAH8kFkTLUo+a2jwOFkuqb3yucU0CANcRRUbxS0cP0nZYCGjcc3BNXwRIsUVmDGgzawme7zvJHvg==", - "dev": true, + "devOptional": true, "license": "MIT", "engines": { "node": ">=12" @@ -12566,6 +13603,11 @@ "node": ">=10" } }, + "node_modules/yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + "integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==" + }, "node_modules/yargs": { "version": "17.7.2", "resolved": "https://registry.npmjs.org/yargs/-/yargs-17.7.2.tgz", @@ -12656,12 +13698,11 @@ } }, "node_modules/zod-to-json-schema": { - "version": "3.24.6", - "resolved": "https://registry.npmjs.org/zod-to-json-schema/-/zod-to-json-schema-3.24.6.tgz", - "integrity": "sha512-h/z3PKvcTcTetyjl1fkj79MHNEjm+HpD6NXheWjzOekY7kV+lwDYnHw+ivHkijnCSMz1yJaWBD9vu/Fcmk+vEg==", - "license": "ISC", + "version": "3.25.1", + "resolved": 
"https://registry.npmjs.org/zod-to-json-schema/-/zod-to-json-schema-3.25.1.tgz", + "integrity": "sha512-pM/SU9d3YAggzi6MtR4h7ruuQlqKtad8e9S0fmxcMi+ueAK5Korys/aWcV9LIIHTVbj01NdzxcnXSN+O74ZIVA==", "peerDependencies": { - "zod": "^3.24.1" + "zod": "^3.25 || ^4" } }, "node_modules/zwitch": { diff --git a/package.json b/package.json index d48d828..0b8897a 100644 --- a/package.json +++ b/package.json @@ -73,6 +73,7 @@ "@aws-sdk/client-s3": "^3.922.0", "@aws-sdk/credential-providers": "^3.927.0", "@aws-sdk/s3-request-presigner": "^3.922.0", + "@jsonld-ex/core": "^0.1.1", "@koa/cors": "^5.0.0", "@koa/router": "^13.1.0", "@modelcontextprotocol/sdk": "^1.19.1", @@ -87,9 +88,13 @@ "@opentelemetry/semantic-conventions": "^1.25.0", "@supabase/supabase-js": "^2.74.0", "dotenv": "^16.4.0", + "fast-json-stable-stringify": "^2.1.0", "ioredis": "^5.4.1", "koa": "^3.1.1", "koa-bodyparser": "^4.4.1", + "mongodb": "^7.1.0", + "mqtt": "^5.15.0", + "neo4j-driver": "^6.0.1", "p-limit": "^7.2.0", "pino": "^10.1.0", "pino-pretty": "^13.1.3", @@ -97,10 +102,12 @@ "zod": "^3.25.76" }, "devDependencies": { + "@types/fast-json-stable-stringify": "^2.1.2", "@types/koa": "^2.15.0", "@types/koa__cors": "^5.0.0", "@types/koa__router": "^12.0.4", "@types/koa-bodyparser": "^4.3.12", + "@types/mqtt": "^2.5.0", "@types/node": "^24.7.0", "@types/pino": "^7.0.4", "@typescript-eslint/eslint-plugin": "^8.46.0", diff --git a/scripts/inspect-jsonld-ex.ts b/scripts/inspect-jsonld-ex.ts new file mode 100644 index 0000000..73d9a9c --- /dev/null +++ b/scripts/inspect-jsonld-ex.ts @@ -0,0 +1,9 @@ + +import * as core from '@jsonld-ex/core'; +// Try importing from extensions if available +// @ts-ignore +import * as security from '@jsonld-ex/core/dist/extensions/security.js'; +// @ts-ignore +import * as aiml from '@jsonld-ex/core/dist/extensions/ai-ml.js'; + +console.log('AI/ML Exports:', JSON.stringify(Object.keys(aiml || {}), null, 2)); diff --git a/scripts/verify-jsonld.ts b/scripts/verify-jsonld.ts new file mode 
100644 index 0000000..a22265a --- /dev/null +++ b/scripts/verify-jsonld.ts @@ -0,0 +1,100 @@ + +import { randomUUID } from 'crypto'; +import { VCon, Analysis } from '../src/types/vcon.js'; +import { + toJsonLd, + enrichAnalysis, + signVCon, + verifyIntegrity +} from '../src/jsonld/index.js'; + +console.log("Starting JSON-LD Integration Verification..."); +console.log("------------------------------------------------"); + +// 1. Create a sample vCon +const vcon: VCon = { + vcon: "0.3.0", + uuid: randomUUID(), + created_at: new Date().toISOString(), + parties: [ + { name: "Alice", tel: "+15551234567" }, + { name: "AI Agent" } + ], + dialog: [ + { + type: "recording", + start: new Date().toISOString(), + duration: 10, + mimetype: "audio/wav", + body: "base64encodeddata...", + encoding: "base64url" + } + ], + analysis: [ + { + type: "transcript", + vendor: "openai", + schema: "transcript-schema", + body: "Alice: Hello\nBot: Hi there" + } + ] +}; + +console.log("1. Created Base vCon:", vcon.uuid); + +// 2. Test JSON-LD Context +const jsonLdVcon = toJsonLd(vcon); +console.log("\n2. Converted to JSON-LD:"); +if (jsonLdVcon["@context"]) { + console.log(" PASS: @context is present"); +} else { + console.error(" FAIL: @context is missing"); + process.exit(1); +} + +// 3. Test Enrichment +console.log("\n3. Testing Analysis Enrichment:"); +const analysis = vcon.analysis![0]; +const enrichedAnalysis = enrichAnalysis(analysis, 0.95, "https://api.openai.com/v1/chat/completions"); + +console.log(" Enriched Analysis:", JSON.stringify(enrichedAnalysis, null, 2)); + +if (enrichedAnalysis["@confidence"] === 0.95 && enrichedAnalysis["@source"] === "https://api.openai.com/v1/chat/completions") { + console.log(" PASS: Enrichment fields present"); +} else { + console.error(" FAIL: Enrichment fields incorrect"); + process.exit(1); +} + +// 4. Test Integrity +console.log("\n4. 
Testing Integrity Signing & Verification:"); + +// Sign the JSON-LD vCon (which includes the enriched analysis if we update it) +// Let's update the vCon with enriched analysis first +jsonLdVcon.analysis = [enrichedAnalysis]; + +const signedVCon = signVCon(jsonLdVcon); +console.log(" Signed vCon @integrity:", signedVCon["@integrity"]); + +const isValid = verifyIntegrity(signedVCon); +if (isValid) { + console.log(" PASS: Integrity valid immediately after signing"); +} else { + console.error(" FAIL: Integrity check failed"); + process.exit(1); +} + +// Tamper with data +console.log(" Tampering with data..."); +signedVCon.parties[0].name = "Mallory"; + +const isTamperedValid = verifyIntegrity(signedVCon); +if (!isTamperedValid) { + console.log(" PASS: Integrity check failed after tampering (Expected)"); +} else { + console.error(" FAIL: Integrity check PASSED after tampering (Unexpected)"); + process.exit(1); +} + +console.log("------------------------------------------------"); +console.log("Verification SUCCESS"); diff --git a/scripts/verify-mongo-analytics.ts b/scripts/verify-mongo-analytics.ts new file mode 100644 index 0000000..3f611fd --- /dev/null +++ b/scripts/verify-mongo-analytics.ts @@ -0,0 +1,107 @@ + +import { getMongoClient, closeMongoClient } from '../src/db/mongo-client.js'; +import { MongoDatabaseInspector } from '../src/db/mongo-inspector.js'; +import { MongoDatabaseAnalytics } from '../src/db/mongo-analytics.js'; +import { MongoDatabaseSizeAnalyzer } from '../src/db/mongo-size-analyzer.js'; +import * as fs from 'fs'; +import * as path from 'path'; + +const LOG_FILE = path.join(process.cwd(), 'verif_analytics_log.txt'); + +function log(msg: string, ...args: any[]) { + let text = msg; + if (args.length > 0) { + text += ' ' + args.map(a => { + try { + return JSON.stringify(a, null, 2); + } catch (e) { + return String(a); + } + }).join(' '); + } + console.log(text); + try { + fs.appendFileSync(LOG_FILE, text + '\n'); + } catch (e) { + 
console.error('Failed to write to log file', e); + } +} + +async function runVerification() { + try { + fs.writeFileSync(LOG_FILE, 'Starting MongoDB Analytics Verification...\n'); + } catch (e) { + console.error('Failed to init log file', e); + } + + if (!process.env.MONGO_URL) { + log('MONGO_URL not set'); + process.exit(1); + } + + try { + const { db } = await getMongoClient(); + + const inspector = new MongoDatabaseInspector(db); + const analytics = new MongoDatabaseAnalytics(db); + const sizeAnalyzer = new MongoDatabaseSizeAnalyzer(db); + + log('------------------------------------------------'); + log('Testing MongoDatabaseInspector'); + log('------------------------------------------------'); + + log('Getting Connection Info...'); + const connInfo = await inspector.getConnectionInfo(); + log('Connection Info:', connInfo); + + log('Getting Database Shape...'); + const shape = await inspector.getDatabaseShape({ includeCounts: true }); + log('Database Shape (Collections):', shape.collections.map((c: any) => `${c.name} (${c.document_count} docs)`)); + + log('------------------------------------------------'); + log('Testing MongoDatabaseAnalytics'); + log('------------------------------------------------'); + + log('Getting Database Analytics (Summary)...'); + const dbAnalyticsRes = await analytics.getDatabaseAnalytics({ + includeGrowthTrends: false, + includeContentAnalytics: false + }); + log('Analytics Summary:', dbAnalyticsRes.summary); + + log('Getting Attachment Analytics...'); + const attachmentAnalytics = await analytics.getAttachmentAnalytics(); + log('Attachment Types:', attachmentAnalytics.types); + + log('Getting Tag Analytics...'); + const tagAnalytics = await analytics.getTagAnalytics(); + log('Tag Keys:', tagAnalytics.tag_keys); + + log('------------------------------------------------'); + log('Testing MongoDatabaseSizeAnalyzer'); + log('------------------------------------------------'); + + log('Getting Database Size Info...'); + const sizeInfo 
= await sizeAnalyzer.getDatabaseSizeInfo(); + log('Size Info:', { + total_vcons: sizeInfo.total_vcons, + size_category: sizeInfo.size_category, + total_size_pretty: sizeInfo.total_size_pretty + }); + + log('Getting Smart Search Limits (Basic)...'); + const smartLimits = await sizeAnalyzer.getSmartSearchLimits('basic', 'small'); + log('Smart Limits:', smartLimits); + + log('------------------------------------------------'); + log('Verification SUCCESS'); + + } catch (error) { + log('Verification FAILED:', error); + process.exit(1); + } finally { + await closeMongoClient(); + } +} + +runVerification(); diff --git a/scripts/verify-mongo-init.ts b/scripts/verify-mongo-init.ts new file mode 100644 index 0000000..5105301 --- /dev/null +++ b/scripts/verify-mongo-init.ts @@ -0,0 +1,49 @@ + +import { getMongoClient, closeMongoClient } from '../src/db/mongo-client.js'; +import { MongoVConQueries } from '../src/db/mongo-queries.js'; +import * as fs from 'fs'; + +async function verifyInit() { + console.log('Starting MongoDB Initialization Verification...'); + + if (!process.env.MONGO_URL) { + console.error('MONGO_URL not set'); + process.exit(1); + } + + try { + const { db } = await getMongoClient(); + const queries = new MongoVConQueries(db); + + console.log('Calling queries.initialize()...'); + await queries.initialize(); + console.log('Initialization complete.'); + + console.log('Verifying indexes on "vcons" collection...'); + const indexes = await db.collection('vcons').indexes(); + console.log('Indexes found:', indexes.map(i => i.name)); + + const hasUuid = indexes.some(i => i.name === 'uuid_1' || (i.key && i.key.uuid === 1)); + const hasText = indexes.some(i => i.name === 'vcon_text_search' || (i.weights && Object.keys(i.weights).length > 0)); + + if (hasUuid && hasText) { + console.log('PASS: Required indexes present (uuid_1, vcon_text_search)'); + fs.writeFileSync('verification_result.txt', 'SUCCESS: uuid_1 and vcon_text_search found'); + } else { + 
console.error('FAIL: Missing required indexes'); + console.error('Has UUID Index:', hasUuid); + console.error('Has Text Index:', hasText); + fs.writeFileSync('verification_result.txt', `FAILURE: Valid indexes missing (uuid:${hasUuid}, text:${hasText})`); + process.exit(1); + } + + } catch (error) { + console.error('Verification FAILED:', error); + fs.writeFileSync('verification_result.txt', `ERROR: ${error}`); + process.exit(1); + } finally { + await closeMongoClient(); + } +} + +verifyInit(); diff --git a/scripts/verify-mongo.ts b/scripts/verify-mongo.ts new file mode 100644 index 0000000..95a0d64 --- /dev/null +++ b/scripts/verify-mongo.ts @@ -0,0 +1,180 @@ + +import { getMongoClient, closeMongoClient } from '../src/db/mongo-client.js'; +import { MongoVConQueries } from '../src/db/mongo-queries.js'; +import { randomUUID } from 'crypto'; +import { VCon } from '../src/types/vcon.js'; +import * as fs from 'fs'; +import * as path from 'path'; + +const LOG_FILE = path.join(process.cwd(), 'verif_log.txt'); + +function log(msg: string, ...args: any[]) { + let text = msg; + if (args.length > 0) { + text += ' ' + args.map(a => { + try { + return JSON.stringify(a); + } catch (e) { + return String(a); + } + }).join(' '); + } + console.log(text); + try { + fs.appendFileSync(LOG_FILE, text + '\n'); + } catch (e) { + console.error('Failed to write to log file', e); + } +} + +async function runVerification() { + try { + fs.writeFileSync(LOG_FILE, 'Starting MongoDB verification...\n'); + } catch (e) { + console.error('Failed to init log file', e); + } + + if (!process.env.MONGO_URL) { + log('MONGO_URL not set'); + process.exit(1); + + } + + try { + const { db } = await getMongoClient(); + const queries = new MongoVConQueries(db); + + log('Initializing collections...'); + await queries.initialize(); + + const testUuid = randomUUID(); + const testVCon: VCon = { + vcon: '0.3.0', + uuid: testUuid, + created_at: new Date().toISOString(), + updated_at: new Date().toISOString(), + 
subject: 'Test MongoDB VCon Verification', + parties: [ + { + name: 'Alice', + tel: '+1234567890', + uuid: randomUUID() + } + ], + dialog: [], + analysis: [], + attachments: [] + }; + + log('Creating vCon...'); + const result = await queries.createVCon(testVCon); + log('Created vCon:', result); + + log('Retrieving vCon...'); + const retrieved = await queries.getVCon(testUuid); + log('Retrieved vCon subject:', retrieved.subject); + + if (retrieved.uuid !== testUuid) throw new Error('UUID mismatch'); + + log('Searching vCon...'); + // Wait for text index? + await new Promise(r => setTimeout(r, 2000)); + + const searchResults = await queries.keywordSearch({ query: 'Verification' }); + log('Search results count:', searchResults.length); + const found = searchResults.find(r => r.vcon_id === testUuid); + + if (found) { + log('Found vCon in search results'); + } else { + log('ERROR: vCon NOT found in search results'); + // Don't fail yet, just log + } + + log('Adding dialog...'); + await queries.addDialog(testUuid, { + type: 'text', + start: new Date().toISOString(), + duration: 10, + parties: [0], + originator: 0, + mediatype: 'text/plain', + body: 'Hello Mongo Verification', + encoding: 'none' + }); + log('Dialog added'); + + log('Deleting vCon...'); + await queries.deleteVCon(testUuid); + log('vCon deleted'); + + try { + await queries.getVCon(testUuid); + log('Error: vCon should have been deleted but was found'); + } catch (e) { + log('Verified vCon is deleted (got expected error)'); + } + + log('------------------------------------------------'); + log('Starting Phase 2: Vector Search Verification'); + + // Create a new vCon for vector search + const vectorVConUuid = randomUUID(); + const vectorVCon: VCon = { ...testVCon, uuid: vectorVConUuid, subject: 'Vector Search Test VCon' }; + + await queries.createVCon(vectorVCon); + log('Created vCon for vector search'); + + // Insert dummy embedding directly into vcon_embeddings collection + // We need to bypass queries 
interface as it doesn't expose embedding insertion (handled by embed-vcons script) + const embeddingsColl = db.collection('vcon_embeddings'); + const dummyEmbedding = Array(384).fill(0.1); // 384-dim vector + + await embeddingsColl.insertOne({ + vcon_id: vectorVConUuid, + content_type: 'subject', + content_reference: null, + content_text: vectorVCon.subject, + embedding: dummyEmbedding, + embedding_model: 'test-model', + embedding_dimension: 384, + created_at: new Date() + }); + log('Inserted dummy embedding'); + + log('Running semantic search...'); + const semanticResults = await queries.semanticSearch({ + embedding: dummyEmbedding, + limit: 5 + }); + log('Semantic search results:', semanticResults.length); + if (semanticResults.length === 0) { + log('NOTE: Semantic search returned 0 results. This is EXPECTED if the Atlas Vector Search Index is not yet created.'); + log('To fix: Create a Vector Search Index on `vcon_embeddings` collection with definition provided in documentation.'); + } else { + log('SUCCESS: Semantic search returned results!'); + } + + log('Running hybrid search...'); + const hybridResults = await queries.hybridSearch({ + keywordQuery: 'Vector', + embedding: dummyEmbedding, + limit: 5 + }); + log('Hybrid search results:', hybridResults.length); + + // Cleanup + log('Cleaning up vector test data...'); + await queries.deleteVCon(vectorVConUuid); + await embeddingsColl.deleteMany({ vcon_id: vectorVConUuid }); + + log('Verification SUCCESS'); + } catch (error) { + log('Verification FAILED:', error); + process.exit(1); + } finally { + await closeMongoClient(); + } +} + +runVerification(); diff --git a/src/db/database-analytics.ts b/src/db/database-analytics.ts index 1e032ab..8c71992 100644 --- a/src/db/database-analytics.ts +++ b/src/db/database-analytics.ts @@ -6,54 +6,20 @@ import { SupabaseClient } from '@supabase/supabase-js'; -export interface DatabaseAnalyticsOptions { - includeGrowthTrends?: boolean; - includeContentAnalytics?: boolean; - 
includeAttachmentStats?: boolean; - includeTagAnalytics?: boolean; - includeHealthMetrics?: boolean; - monthsBack?: number; -} - -export interface MonthlyGrowthOptions { - monthsBack?: number; - includeProjections?: boolean; - granularity?: 'monthly' | 'weekly' | 'daily'; -} - -export interface AttachmentAnalyticsOptions { - includeSizeDistribution?: boolean; - includeTypeBreakdown?: boolean; - includeTemporalPatterns?: boolean; - topNTypes?: number; -} -export interface TagAnalyticsOptions { - includeFrequencyAnalysis?: boolean; - includeValueDistribution?: boolean; - includeTemporalTrends?: boolean; - topNKeys?: number; - minUsageCount?: number; -} -export interface ContentAnalyticsOptions { - includeDialogAnalysis?: boolean; - includeAnalysisBreakdown?: boolean; - includePartyPatterns?: boolean; - includeConversationMetrics?: boolean; - includeTemporalContent?: boolean; -} +import { + IDatabaseAnalytics, + DatabaseAnalyticsOptions, + MonthlyGrowthOptions, + AttachmentAnalyticsOptions, + TagAnalyticsOptions, + ContentAnalyticsOptions, + DatabaseHealthOptions +} from './types.js'; -export interface DatabaseHealthOptions { - includePerformanceMetrics?: boolean; - includeStorageEfficiency?: boolean; - includeIndexHealth?: boolean; - includeConnectionMetrics?: boolean; - includeRecommendations?: boolean; -} - -export class DatabaseAnalytics { - constructor(private supabase: SupabaseClient) {} +export class SupabaseDatabaseAnalytics implements IDatabaseAnalytics { + constructor(private supabase: SupabaseClient) { } /** * Get comprehensive database analytics @@ -497,7 +463,7 @@ export class DatabaseAnalytics { private async getVConCreationTrends(monthsBack: number, granularity: string) { const dateTrunc = granularity === 'daily' ? 'day' : granularity === 'weekly' ? 
'week' : 'month'; - + const query = ` SELECT DATE_TRUNC('${dateTrunc}', created_at) as period, @@ -520,7 +486,7 @@ export class DatabaseAnalytics { private async getSizeGrowthTrends(monthsBack: number, granularity: string) { const dateTrunc = granularity === 'daily' ? 'day' : granularity === 'weekly' ? 'week' : 'month'; - + const query = ` WITH size_trends AS ( SELECT @@ -557,7 +523,7 @@ export class DatabaseAnalytics { private async getContentVolumeTrends(monthsBack: number, granularity: string) { const dateTrunc = granularity === 'daily' ? 'day' : granularity === 'weekly' ? 'week' : 'month'; - + const query = ` WITH content_trends AS ( SELECT @@ -600,16 +566,16 @@ export class DatabaseAnalytics { const latest = vconTrends[vconTrends.length - 1]; const previous = vconTrends[vconTrends.length - 2]; - - const vconGrowthRate = previous.vcon_count > 0 - ? ((latest.vcon_count - previous.vcon_count) / previous.vcon_count) * 100 + + const vconGrowthRate = previous.vcon_count > 0 + ? ((latest.vcon_count - previous.vcon_count) / previous.vcon_count) * 100 : 0; const latestSize = sizeTrends[sizeTrends.length - 1]; const previousSize = sizeTrends[sizeTrends.length - 2]; - - const sizeGrowthRate = previousSize?.total_size > 0 - ? ((latestSize?.total_size - previousSize?.total_size) / previousSize?.total_size) * 100 + + const sizeGrowthRate = previousSize?.total_size > 0 + ? 
((latestSize?.total_size - previousSize?.total_size) / previousSize?.total_size) * 100 : 0; return { @@ -1179,7 +1145,7 @@ export class DatabaseAnalytics { const first = monthlyData[0]; const last = monthlyData[monthlyData.length - 1]; - + const totalGrowth = last.vcon_count - first.vcon_count; const avgMonthlyGrowth = totalGrowth / monthlyData.length; diff --git a/src/db/database-inspector.ts b/src/db/database-inspector.ts index edf0345..caf13f9 100644 --- a/src/db/database-inspector.ts +++ b/src/db/database-inspector.ts @@ -4,8 +4,10 @@ import { SupabaseClient } from '@supabase/supabase-js'; -export class DatabaseInspector { - constructor(private supabase: SupabaseClient) {} +import { IDatabaseInspector, InspectorOptions, InspectorStatsOptions } from './types.js'; + +export class SupabaseDatabaseInspector implements IDatabaseInspector { + constructor(private supabase: SupabaseClient) { } /** * Get comprehensive database shape information @@ -68,7 +70,7 @@ export class DatabaseInspector { q: `SELECT COUNT(*) as count FROM ${table.tablename}`, params: {} }); - + if (!countError && countData && countData.length > 0) { tableInfo.row_count = parseInt(countData[0].count); } diff --git a/src/db/database-size-analyzer.ts b/src/db/database-size-analyzer.ts index cfbce8e..f943748 100644 --- a/src/db/database-size-analyzer.ts +++ b/src/db/database-size-analyzer.ts @@ -6,39 +6,11 @@ import { SupabaseClient } from '@supabase/supabase-js'; -export interface DatabaseSizeInfo { - total_vcons: number; - total_size_bytes: number; - total_size_pretty: string; - size_category: 'small' | 'medium' | 'large' | 'very_large'; - recommendations: { - max_basic_search_limit: number; - max_content_search_limit: number; - max_semantic_search_limit: number; - max_analytics_limit: number; - recommended_response_format: string; - memory_warning: boolean; - }; - table_sizes: { - [table_name: string]: { - row_count: number; - size_bytes: number; - size_pretty: string; - }; - }; -} - -export 
interface SmartLimits { - query_type: string; - estimated_result_size: string; - recommended_limit: number; - recommended_response_format: string; - memory_warning: boolean; - explanation: string; -} +import { IDatabaseSizeAnalyzer, DatabaseSizeInfo, SmartLimits } from './types.js'; +import { calculateSmartSearchLimits } from './shared-utils.js'; -export class DatabaseSizeAnalyzer { - constructor(private supabase: SupabaseClient) {} +export class SupabaseDatabaseSizeAnalyzer implements IDatabaseSizeAnalyzer { + constructor(private supabase: SupabaseClient) { } /** * Get comprehensive database size information @@ -91,7 +63,7 @@ export class DatabaseSizeAnalyzer { tableData?.forEach((row: any) => { const sizeBytes = parseInt(row.size_bytes); const rowCount = parseInt(row.row_count); - + tableSizes[row.table_name] = { row_count: rowCount, size_bytes: sizeBytes, @@ -144,7 +116,7 @@ export class DatabaseSizeAnalyzer { */ async getSmartSearchLimits(queryType: string, estimatedResultSize: string): Promise<SmartLimits> { const sizeInfo = await this.getDatabaseSizeInfo(false); - + let recommendedLimit: number; let recommendedFormat: string; let memoryWarning: boolean; @@ -178,7 +150,7 @@ const baseLimit = baseLimits[queryType as keyof typeof baseLimits] || 50; const sizeMult = sizeMultiplier[sizeInfo.size_category]; const resultMult = resultMultiplier[estimatedResultSize as keyof typeof resultMultiplier]; - + recommendedLimit = Math.max(1, Math.round(baseLimit * sizeMult * resultMult)); // Determine response format @@ -197,7 +169,7 @@ explanation = `Database has ${sizeInfo.total_vcons.toLocaleString()} vCons (${sizeInfo.size_category} size). 
`; explanation += `For ${queryType} queries with ${estimatedResultSize} results, `; explanation += `recommend limit of ${recommendedLimit} with ${recommendedFormat} format.`; - + if (memoryWarning) { explanation += ' ⚠️ Memory warning: Large dataset detected.'; } diff --git a/src/db/interfaces.ts b/src/db/interfaces.ts new file mode 100644 index 0000000..cb904cb --- /dev/null +++ b/src/db/interfaces.ts @@ -0,0 +1,134 @@ +/** + * Interface for vCon Database Queries + * + * Provides abstraction layer for database operations to support multiple backends + * (Supabase/PostgreSQL and MongoDB) + */ + +import { Analysis, Attachment, Dialog, VCon } from '../types/vcon.js'; + +export interface SearchResult { + vcon_id: string; + score: number; + highlight?: string; + // Fields for specific search types + rank?: number; + snippet?: string | null; + combined_score?: number; + semantic_score?: number; + keyword_score?: number; + content_type?: string; + content_reference?: string | null; + content_text?: string; + similarity?: number; +} + +export interface IVConQueries { + /** + * Initialize the database connection and schema (e.g.
indexes) + */ + initialize(): Promise<void>; + + /** + * Create a new vCon with all related entities + */ + createVCon(vcon: VCon): Promise<{ uuid: string; id: string }>; + + /** + * Get a complete vCon by UUID + */ + getVCon(uuid: string): Promise<VCon>; + + /** + * Add dialog to a vCon + */ + addDialog(vconUuid: string, dialog: Dialog): Promise<void>; + + /** + * Add analysis to a vCon + */ + addAnalysis(vconUuid: string, analysis: Analysis): Promise<void>; + + /** + * Delete a vCon by UUID + */ + deleteVCon(uuid: string): Promise<void>; + + /** + * Add attachment to a vCon + */ + addAttachment(vconUuid: string, attachment: Attachment): Promise<void>; + + /** + * Keyword search + */ + keywordSearch(params: { + query: string; + startDate?: string; + endDate?: string; + tags?: Record<string, string>; + limit?: number; + }): Promise<Array<SearchResult>>; + + /** + * Get count of distinct vCons matching keyword search criteria + */ + keywordSearchCount(params: { + query: string; + startDate?: string; + endDate?: string; + tags?: Record<string, string>; + }): Promise<number>; + + /** + * Semantic search (vector based) + */ + semanticSearch(params: { + embedding: number[]; // vector(384) + tags?: Record<string, string>; + threshold?: number; + limit?: number; + }): Promise<Array<SearchResult>>; + + /** + * Hybrid search (keyword + semantic) + */ + hybridSearch(params: { + keywordQuery?: string; + embedding?: number[]; + tags?: Record<string, string>; + semanticWeight?: number; // 0..1 + limit?: number; + }): Promise<Array<SearchResult>>; + + /** + * Search vCons by metadata filters + */ + searchVCons(filters: { + subject?: string; + partyName?: string; + partyEmail?: string; + partyTel?: string; + startDate?: string; + endDate?: string; + tags?: Record<string, string>; + limit?: number; + }): Promise<VCon[]>; +} diff --git a/src/db/mongo-analytics.ts b/src/db/mongo-analytics.ts new file mode 100644 index 0000000..ca3d1d5 --- /dev/null +++ b/src/db/mongo-analytics.ts @@ -0,0 +1,225 @@ + +import { Db } from 'mongodb'; +import { + IDatabaseAnalytics, + DatabaseAnalyticsOptions, + MonthlyGrowthOptions, + AttachmentAnalyticsOptions, + TagAnalyticsOptions, + 
+  ContentAnalyticsOptions,
+  DatabaseHealthOptions
+} from './types.js';
+
+export class MongoDatabaseAnalytics implements IDatabaseAnalytics {
+  constructor(private db: Db) { }
+
+  async getDatabaseAnalytics(options: DatabaseAnalyticsOptions = {}) {
+    const {
+      includeGrowthTrends = true,
+      includeContentAnalytics = true,
+      includeAttachmentStats = true,
+      includeTagAnalytics = true,
+      includeHealthMetrics = true,
+      monthsBack = 12,
+    } = options;
+
+    const analytics: any = {
+      timestamp: new Date().toISOString(),
+      summary: {},
+    };
+
+    // Summary usage stats
+    const vconsCount = await this.db.collection('vcons').countDocuments();
+
+    // MongoVConQueries stores each vCon as a single document in the 'vcons'
+    // collection, so dialog, analysis, and attachments are embedded arrays
+    // that we aggregate over here.
+    const summaryAggregation = await this.db.collection('vcons').aggregate([
+      {
+        $project: {
+          dialog_count: { $size: { $ifNull: ["$dialog", []] } },
+          analysis_count: { $size: { $ifNull: ["$analysis", []] } },
+          attachment_count: { $size: { $ifNull: ["$attachments", []] } },
+          // Document size is omitted: computing it requires $bsonSize (MongoDB 5.0+)
+        }
+      },
+      {
+        $group: {
+          _id: null,
+          total_dialogs: { $sum: "$dialog_count" },
+          total_analysis: { $sum: "$analysis_count" },
+          total_attachments: { $sum: "$attachment_count" }
+        }
+      }
+    ]).toArray();
+
+    const summary = summaryAggregation[0] || { total_dialogs: 0, total_analysis: 0, total_attachments: 0 };
+
+    analytics.summary = {
+      total_vcons: vconsCount,
+      total_dialogs: summary.total_dialogs,
+      total_analysis: summary.total_analysis,
+      total_attachments: summary.total_attachments,
+    };
+
+    if (includeGrowthTrends) {
+      analytics.growth = await this.getMonthlyGrowthAnalytics({ monthsBack });
+    }
+
+    if (includeContentAnalytics) {
+      analytics.content = await this.getContentAnalytics();
+    }
+
+    if (includeAttachmentStats) {
+      analytics.attachments = await this.getAttachmentAnalytics();
+    }
+
+    if (includeTagAnalytics) {
+      analytics.tags = await this.getTagAnalytics();
+    }
+
+    if (includeHealthMetrics) {
+      analytics.health = await this.getDatabaseHealthMetrics();
+    }
+
+    return analytics;
+  }
+
+  async getMonthlyGrowthAnalytics(options: MonthlyGrowthOptions = {}) {
+    // Group vCons by creation month.
+    const { monthsBack = 12 } = options;
+    const cutoffDate = new Date();
+    cutoffDate.setMonth(cutoffDate.getMonth() - monthsBack);
+
+    const trends = await this.db.collection('vcons').aggregate([
+      // created_at is stored as an ISO-8601 string, which compares lexicographically
+      { $match: { created_at: { $gte: cutoffDate.toISOString() } } },
+      {
+        $group: {
+          _id: {
+            // The first seven characters of the ISO string give the month bucket
+            month: { $substr: ["$created_at", 0, 7] } // YYYY-MM
+          },
+          vcon_count: { $sum: 1 },
+          dialog_count: { $sum: { $size: { $ifNull: ["$dialog", []] } } }
+        }
+      },
+      { $sort: { "_id.month": 1 } }
+    ]).toArray();
+
+    return {
+      timestamp: new Date().toISOString(),
+      trends: trends.map(t => ({
+        period: t._id.month,
+        vcon_count: t.vcon_count,
+        dialog_count: t.dialog_count
+      }))
+    };
+  }
+
+  async getAttachmentAnalytics(options: AttachmentAnalyticsOptions = {}) {
+    // Unwind attachments to analyze them individually
+    const attachmentStats = await this.db.collection('vcons').aggregate([
+      { $unwind: "$attachments" },
+      {
+        $group: {
+          _id: "$attachments.type",
+          count: { $sum: 1 },
+          // Size is omitted: the vCon attachment structure carries no size
+          // field unless metadata provides one
+        }
+      },
+      { $sort: { count: -1 } },
+      { $limit: options.topNTypes || 10 }
+    ]).toArray();
+
+    return {
+      types: attachmentStats.map(a => ({ type: a._id, count: a.count }))
+    };
+  }
+
+  async getTagAnalytics(options: TagAnalyticsOptions = {}) {
+    // Tags are modeled as a root-level object on the vCon document.
+    // Convert the object to an array of { k, v } pairs with $objectToArray
+    // (MongoDB 3.4+); $ifNull guards documents that have no tags field.
+    const tagStats = await this.db.collection('vcons').aggregate([
+      { $project: { tags: { $objectToArray: { $ifNull: ["$tags", {}] } } } },
+      { $unwind: "$tags" },
+      {
+        $group: {
+          _id: "$tags.k",
+          count: { $sum: 1 },
+          values: { $addToSet: "$tags.v" } // careful with cardinality
+        }
+      },
+      { $sort: { count: -1 } },
+      { $limit: options.topNKeys || 20 }
+    ]).toArray();
+
+    return {
+      tag_keys: tagStats.map(t => ({ key: t._id, count: t.count, unique_values: t.values.length }))
+    };
+  }
+
+  async getContentAnalytics(options: ContentAnalyticsOptions = {}) {
+    // Use $facet to compute independent statistics in one pass:
+    // one facet for party counts (per vCon), one for durations (per dialog).
+    const complexStats = await this.db.collection('vcons').aggregate([
+      {
+        $facet: {
+          "general": [
+            {
+              $project: {
+                party_count: { $size: { $ifNull: ["$parties", []] } }
+              }
+            },
+            {
+              $group: {
+                _id: null,
+                avg_parties: { $avg: "$party_count" }
+              }
+            }
+          ],
+          "content": [
+            // Only vCons with dialogs contribute to duration stats
+            { $unwind: { path: "$dialog", preserveNullAndEmptyArrays: false } },
+            {
+              $group: {
+                _id: null,
+                total_duration: { $sum: "$dialog.duration" },
+                avg_duration: { $avg: "$dialog.duration" }
+              }
+            }
+          ]
+        }
+      }
+    ]).toArray();
+
+    const generalStats = (complexStats[0].general && complexStats[0].general[0]) || {};
+    const durationStats = (complexStats[0].content && complexStats[0].content[0]) || {};
+
+    return {
+      avg_parties_per_vcon: generalStats.avg_parties || 0,
+      total_conversation_duration: durationStats.total_duration || 0,
+      avg_dialog_duration: durationStats.avg_duration || 0
+    };
+  }
+
+  async getDatabaseHealthMetrics(options: DatabaseHealthOptions = {}) {
+    // Basic health check via ping
+    try {
+      const ping = await this.db.command({ ping: 1 });
+      return {
status: "healthy", ping: ping };
+    } catch (e) {
+      return { status: "unhealthy", error: e };
+    }
+  }
+}
diff --git a/src/db/mongo-client.ts b/src/db/mongo-client.ts
new file mode 100644
index 0000000..5907b35
--- /dev/null
+++ b/src/db/mongo-client.ts
@@ -0,0 +1,91 @@
+/**
+ * MongoDB Database Client
+ *
+ * Singleton client for connecting to MongoDB
+ */
+
+import { MongoClient, Db, ServerApiVersion } from 'mongodb';
+import { logWithContext } from '../observability/instrumentation.js';
+
+let client: MongoClient | null = null;
+let db: Db | null = null;
+
+/**
+ * Get or create MongoDB client instance
+ * @returns Initialized MongoDB client and database
+ * @throws Error if environment variables are missing
+ */
+export async function getMongoClient(): Promise<{ client: MongoClient; db: Db }> {
+  if (client && db) {
+    return { client, db };
+  }
+
+  const url = process.env.MONGO_URL;
+  const dbName = process.env.MONGO_DB_NAME || 'vcon';
+
+  if (!url) {
+    throw new Error(
+      'Missing MongoDB credentials. Set MONGO_URL environment variable.'
+    );
+  }
+
+  logWithContext('info', 'Initializing MongoDB client', {
+    url_masked: url.replace(/:([^:@]+)@/, ':***@'), // Mask password
+    db_name: dbName,
+  });
+
+  try {
+    client = new MongoClient(url, {
+      serverApi: {
+        version: ServerApiVersion.v1,
+        strict: false,
+        deprecationErrors: true,
+      }
+    });
+
+    await client.connect();
+    db = client.db(dbName);
+
+    // Verify connection
+    await db.command({ ping: 1 });
+    logWithContext('info', 'MongoDB connected successfully');
+
+  } catch (error) {
+    // Reset singleton state so retry is possible
+    client = null;
+    db = null;
+
+    logWithContext('error', 'Failed to connect to MongoDB', {
+      error_message: error instanceof Error ? error.message : String(error),
+    });
+    throw error;
+  }
+
+  return { client, db };
+}
+
+/**
+ * Close and reset the MongoDB client connection
+ */
+export async function closeMongoClient(): Promise<void> {
+  if (client) {
+    await client.close();
+    client = null;
+    db = null;
+    logWithContext('info', 'MongoDB connection closed');
+  }
+}
+
+/**
+ * Test database connectivity
+ */
+export async function testMongoConnection(): Promise<boolean> {
+  try {
+    const { db } = await getMongoClient();
+    await db.command({ ping: 1 });
+    return true;
+  } catch (error) {
+    console.error('MongoDB connection test failed:', error);
+    return false;
+  }
+}
diff --git a/src/db/mongo-inspector.ts b/src/db/mongo-inspector.ts
new file mode 100644
index 0000000..9be89e1
--- /dev/null
+++ b/src/db/mongo-inspector.ts
@@ -0,0 +1,163 @@
+import { Db } from 'mongodb';
+import { IDatabaseInspector, InspectorOptions, InspectorStatsOptions } from './types.js';
+
+export class MongoDatabaseInspector implements IDatabaseInspector {
+  constructor(private db: Db) { }
+
+  /**
+   * Get comprehensive database shape information
+   */
+  async getDatabaseShape(options: InspectorOptions = {}) {
+    const {
+      includeCounts = true,
+      includeSizes = true,
+      includeIndexes = true,
+      includeColumns = false,
+    } = options;
+
+    const shape: any = {
+      timestamp: new Date().toISOString(),
+      collections: [],
+    };
+
+    const collections = await this.db.listCollections().toArray();
+
+    for (const collectionInfo of collections) {
+      const collName = collectionInfo.name;
+      const collection = this.db.collection(collName);
+
+      // Basic info
+      const collStats: any = {
+        name: collName,
+        type: collectionInfo.type,
+      };
+
+      // Sizes and counts (via the collStats command where available, or direct calls)
+      if (includeSizes || includeCounts) {
+        if (includeCounts) {
+          collStats.document_count =
await collection.countDocuments();
+        }
+
+        if (includeSizes) {
+          // Exact per-collection storage size needs the collStats command,
+          // which requires privileges some Atlas tiers do not grant, and
+          // listCollections does not return sizes. Skip per-collection size
+          // here rather than run a heavier, possibly unauthorized command.
+        }
+      }
+
+      // Indexes
+      if (includeIndexes) {
+        collStats.indexes = await collection.indexes();
+      }
+
+      // Columns (schema inference from a sample document)
+      if (includeColumns) {
+        const sample = await collection.findOne({});
+        if (sample) {
+          collStats.schema_sample = Object.keys(sample).map(key => ({
+            name: key,
+            type: typeof sample[key]
+          }));
+        }
+      }
+
+      shape.collections.push(collStats);
+    }
+
+    return shape;
+  }
+
+  /**
+   * Get database performance statistics
+   */
+  async getDatabaseStats(options: InspectorStatsOptions = {}) {
+    const {
+      includeQueryStats = true,
+      includeIndexUsage = true,
+      includeCacheStats = true,
+      tableName, // interpreted as collection name
+    } = options;
+
+    const stats: any = {
+      timestamp: new Date().toISOString(),
+    };
+
+    // Index usage via $indexStats
+    if (includeIndexUsage) {
+      const collections = tableName ? [{ name: tableName }] : await this.db.listCollections().toArray();
+      const indexUsage: Record<string, any> = {};
+
+      for (const coll of collections) {
+        try {
+          const usage = await this.db.collection(coll.name).aggregate([
+            { $indexStats: {} }
+          ]).toArray();
+          indexUsage[coll.name] = usage;
+        } catch (e) {
+          // Ignore if not supported/allowed
+        }
+      }
+      stats.index_usage = indexUsage;
+    }
+
+    // Server status (requires privileges; may fail on shared tiers)
+    try {
+      if (includeCacheStats || includeQueryStats) {
+        const serverStatus = await this.db.command({ serverStatus: 1 });
+
+        if (includeCacheStats && serverStatus.wiredTiger) {
+          stats.cache = serverStatus.wiredTiger.cache;
+        }
+
+        if (includeQueryStats && serverStatus.opcounters) {
+          stats.opcounters = serverStatus.opcounters;
+        }
+      }
+    } catch (e) {
+      stats.error = "Insufficient privileges for serverStatus";
+    }
+
+    return stats;
+  }
+
+  /**
+   * Analyze a query's execution plan.
+   * The inspector interface passes the query as a raw string; MongoDB has no
+   * equivalent of EXPLAIN for arbitrary strings, so a convention (e.g. a JSON
+   * aggregation pipeline) would be needed. Until then this returns a warning.
+   */
+  async analyzeQuery(query: string, analyzeMode: 'explain' | 'explain_analyze' = 'explain') {
+    return {
+      warning: "Query analysis for raw strings not fully implemented for Mongo. Use specific aggregation explain methods.",
+      query: query
+    };
+  }
+
+  /**
+   * Get database connection info
+   */
+  async getConnectionInfo() {
+    try {
+      const buildInfo = await this.db.command({ buildInfo: 1 });
+      const dbStats = await this.db.stats();
+
+      return {
+        database_name: this.db.databaseName,
+        version: buildInfo.version,
+        data_size: dbStats.dataSize,
+        storage_size: dbStats.storageSize,
+        objects: dbStats.objects,
+      };
+    } catch (e) {
+      return {
+        database_name: this.db.databaseName,
+        error: "Could not fetch detailed stats"
+      };
+    }
+  }
+}
diff --git a/src/db/mongo-queries.ts b/src/db/mongo-queries.ts
new file mode 100644
index 0000000..cf60063
--- /dev/null
+++ b/src/db/mongo-queries.ts
@@ -0,0 +1,438 @@
+/**
+ * MongoDB Implementation of VCon Queries
+ */
+
+import { Db, ObjectId } from 'mongodb';
+import { Analysis, Attachment, Dialog, VCon } from '../types/vcon.js';
+import { IVConQueries } from './interfaces.js';
+import { logWithContext, recordCounter, withSpan } from '../observability/instrumentation.js';
+import { ATTR_DB_OPERATION, ATTR_SEARCH_RESULTS_COUNT, ATTR_SEARCH_THRESHOLD, ATTR_SEARCH_TYPE, ATTR_VCON_UUID } from '../observability/attributes.js';
+
+export class MongoVConQueries implements IVConQueries {
+  private db: Db;
+  private readonly VCONS_COLLECTION = 'vcons';
+
+  constructor(db: Db) {
+    this.db = db;
+  }
+
+  /**
+   * Initialize collections and indexes
+   */
+  async initialize(): Promise<void> {
+    const vcons = this.db.collection(this.VCONS_COLLECTION);
+
+    // Unique index on UUID
+    await vcons.createIndex({ uuid: 1 }, { unique: true });
+
+    // Text index for keyword search
+    await vcons.createIndex({
+      subject: 'text',
+      'analysis.body': 'text',
+      'dialog.body': 'text',
+      'parties.name': 'text',
+      'parties.tel': 'text',
+      'parties.mailto': 'text'
+    }, {
+      name: 'vcon_text_search'
+    });
+
+    // Indexes for sorting and filtering
+    await vcons.createIndex({ created_at: -1 });
+    await vcons.createIndex({ 'parties.tel': 1 });
+    await vcons.createIndex({ 'parties.mailto': 1 });
+  }
+
+  /**
+   * Create a new vCon
+   */
+  async createVCon(vcon: VCon): Promise<{ uuid: string; id: string }> {
+    return withSpan('mongo.createVCon', async (span) => {
+      span.setAttributes({
+        [ATTR_VCON_UUID]: vcon.uuid,
+        [ATTR_DB_OPERATION]: 'insert',
+      });
+
+      const collection = this.db.collection(this.VCONS_COLLECTION);
+
+      // Check if exists
+      const existing = await collection.findOne({ uuid: vcon.uuid });
+      if (existing) {
+        throw new Error(`vCon with UUID ${vcon.uuid} already exists`);
+      }
+
+      await collection.insertOne(vcon);
+
+      recordCounter('db.query.count', 1, { operation: 'createVCon', type: 'mongo' });
+
+      return { uuid: vcon.uuid, id: vcon.uuid };
+    });
+  }
+
+  async deleteVCon(uuid: string): Promise<void> {
+    return withSpan('mongo.deleteVCon', async (span) => {
+      span.setAttributes({
+        [ATTR_VCON_UUID]: uuid,
+        [ATTR_DB_OPERATION]: 'delete',
+      });
+
+      const collection = this.db.collection(this.VCONS_COLLECTION);
+      const result = await collection.deleteOne({ uuid });
+
+      if (result.deletedCount === 0) {
+        // Idempotent delete: not-found is treated as success here; the
+        // service layer checks existence before calling.
+      }
+    });
+  }
+
+  async getVCon(uuid: string): Promise<VCon> {
+    return withSpan('mongo.getVCon', async (span) => {
+      span.setAttributes({
+        [ATTR_VCON_UUID]: uuid,
+        [ATTR_DB_OPERATION]: 'find',
+      });
+
+      const collection = this.db.collection(this.VCONS_COLLECTION);
+      const vcon = await collection.findOne({ uuid });
+
+      if (!vcon) {
+        throw new Error(`vCon not found: ${uuid}`);
+      }
+
+      // Remove MongoDB internal _id
+      const { _id, ...rest } = vcon;
+      return rest as unknown as VCon;
+    });
+  }
+
+  async addDialog(vconUuid: string, dialog: Dialog): Promise<void> {
+    return withSpan('mongo.addDialog', async () => {
+      const collection = this.db.collection(this.VCONS_COLLECTION);
+      const result = await collection.updateOne(
+        { uuid: vconUuid },
+        { $push: { dialog: dialog } }
+      );
+
+      if (result.matchedCount === 0) {
+        throw new Error(`vCon not found: ${vconUuid}`);
+      }
+    });
+  }
+
+  async addAnalysis(vconUuid: string, analysis: Analysis): Promise<void> {
+    return withSpan('mongo.addAnalysis', async () => {
+      const collection = this.db.collection(this.VCONS_COLLECTION);
+      const result = await collection.updateOne(
+        { uuid: vconUuid },
+        { $push: { analysis: analysis } }
+      );
+
+      if (result.matchedCount === 0) {
+        throw new Error(`vCon not found: ${vconUuid}`);
+      }
+    });
+  }
+
+  async addAttachment(vconUuid: string, attachment: Attachment): Promise<void> {
+    return withSpan('mongo.addAttachment', async () => {
+      const collection = this.db.collection(this.VCONS_COLLECTION);
+      const result = await collection.updateOne(
+        { uuid: vconUuid },
+        { $push: { attachments: attachment } }
+      );
+
+      if (result.matchedCount === 0) {
+        throw new Error(`vCon not found: ${vconUuid}`);
+      }
+    });
+  }
+
+  async keywordSearch(params: {
+    query: string;
+    startDate?: string;
+    endDate?: string;
+    tags?: Record<string, string>;
+    limit?: number;
+  }): Promise<Array<{
+    vcon_id: string;
+    doc_type: string;
+    rank: number;
+    snippet: string | null;
+    ref_index: number | null;
+  }>> {
+    return withSpan('mongo.keywordSearch', async (span) => {
+      span.setAttributes({
+        [ATTR_SEARCH_TYPE]: 'keyword',
+        [ATTR_DB_OPERATION]: 'search',
+      });
+
+      const
collection = this.db.collection(this.VCONS_COLLECTION);
+      const limit = params.limit || 50;
+
+      const query: any = { $text: { $search: params.query } };
+
+      if (params.startDate || params.endDate) {
+        query.created_at = {};
+        if (params.startDate) query.created_at.$gte = params.startDate;
+        if (params.endDate) query.created_at.$lte = params.endDate;
+      }
+
+      // Note: tag filtering is not implemented in this basic version because
+      // the tags structure in a vCon can vary (attachment vs inline).
+      const results = await collection
+        .find(query)
+        .project({ score: { $meta: 'textScore' }, uuid: 1, created_at: 1, subject: 1 })
+        .sort({ score: { $meta: 'textScore' } })
+        .limit(limit)
+        .toArray();
+
+      span.setAttributes({
+        [ATTR_SEARCH_RESULTS_COUNT]: results.length,
+      });
+
+      return results.map(r => ({
+        vcon_id: r.uuid,
+        doc_type: 'vcon', // Basic implementation treats the whole document as the match
+        rank: r.score as number,
+        snippet: (r as any).subject || null,
+        ref_index: null
+      }));
+    });
+  }
+
+  async keywordSearchCount(params: {
+    query: string;
+    startDate?: string;
+    endDate?: string;
+    tags?: Record<string, string>;
+  }): Promise<number> {
+    const collection = this.db.collection(this.VCONS_COLLECTION);
+    const query: any = { $text: { $search: params.query } };
+
+    if (params.startDate || params.endDate) {
+      query.created_at = {};
+      if (params.startDate) query.created_at.$gte = params.startDate;
+      if (params.endDate) query.created_at.$lte = params.endDate;
+    }
+
+    return await collection.countDocuments(query);
+  }
+
+  async semanticSearch(params: {
+    embedding: number[];
+    tags?: Record<string, string>;
+    threshold?: number;
+    limit?: number;
+  }): Promise<Array<{
+    vcon_id: string;
+    content_type: string;
+    content_reference: string | null;
+    content_text: string | null;
+    similarity: number;
+  }>> {
+    return withSpan('mongo.semanticSearch', async (span) => {
+      span.setAttributes({
+        [ATTR_SEARCH_TYPE]: 'semantic',
+        [ATTR_DB_OPERATION]: 'search',
+        [ATTR_SEARCH_THRESHOLD]: params.threshold || 0.7,
+      });
+
+      const collection = this.db.collection('vcon_embeddings');
+      const limit = params.limit || 50;
+      const indexName = 'vector_index'; // Assumed index name
+
+      // Basic $vectorSearch aggregation.
+      // Note: this requires an Atlas Vector Search index on 'vcon_embeddings'.
+      const pipeline: any[] = [
+        {
+          $vectorSearch: {
+            index: indexName,
+            path: "embedding",
+            queryVector: params.embedding,
+            numCandidates: limit * 10,
+            limit: limit
+          }
+        },
+        {
+          $project: {
+            vcon_id: 1,
+            content_type: 1,
+            content_reference: 1,
+            content_text: 1,
+            score: { $meta: "vectorSearchScore" }
+          }
+        }
+      ];
+
+      // Filter by threshold if provided
+      if (params.threshold) {
+        pipeline.push({
+          $match: {
+            score: { $gte: params.threshold }
+          }
+        });
+      }
+
+      try {
+        const results = await collection.aggregate(pipeline).toArray();
+
+        span.setAttributes({
+          [ATTR_SEARCH_RESULTS_COUNT]: results.length,
+        });
+
+        return results.map(r => ({
+          vcon_id: r.vcon_id,
+          content_type: r.content_type,
+          content_reference: r.content_reference,
+          content_text: r.content_text,
+          similarity: r.score
+        }));
+      } catch (error) {
+        // $vectorSearch errors if the Atlas index does not exist.
+        // Log and return empty so a missing optional feature doesn't crash.
+        logWithContext('error', 'Vector search failed (likely missing Atlas index)', {
+          error: error instanceof Error ? error.message : String(error)
+        });
+        return [];
+      }
+    });
+  }
+
+  async hybridSearch(params: {
+    keywordQuery?: string;
+    embedding?: number[];
+    tags?: Record<string, string>;
+    semanticWeight?: number;
+    limit?: number;
+  }): Promise<Array<{
+    vcon_id: string;
+    keyword_score: number;
+    semantic_score: number;
+    combined_score: number;
+  }>> {
+    // Linear score combination (Reciprocal Rank Fusion is an alternative).
+    // Matching the Supabase implementation, fetch keyword and semantic
+    // results separately and merge in memory, since MongoDB cannot easily
+    // join $text and $vectorSearch in one performant query phase without
+    // complex lookups.
+    const limit = params.limit || 50;
+    const semanticWeight = params.semanticWeight ??
0.6;
+    const keywordWeight = 1.0 - semanticWeight;
+
+    let keywordResults: any[] = [];
+    let semanticResults: any[] = [];
+
+    // Run searches in parallel
+    await Promise.all([
+      // Keyword search
+      (async () => {
+        if (params.keywordQuery) {
+          keywordResults = await this.keywordSearch({
+            query: params.keywordQuery,
+            tags: params.tags,
+            limit: limit
+          });
+        }
+      })(),
+      // Semantic search
+      (async () => {
+        if (params.embedding) {
+          semanticResults = await this.semanticSearch({
+            embedding: params.embedding,
+            tags: params.tags,
+            limit: limit
+          });
+        }
+      })()
+    ]);
+
+    // Normalize scores to an approximate 0-1 range: Mongo text scores are
+    // arbitrary, so divide by the maximum observed score.
+    const maxKeywordScore = Math.max(...keywordResults.map(r => r.rank), 1);
+    const maxSemanticScore = Math.max(...semanticResults.map(r => r.similarity), 1);
+
+    const resultsMap = new Map();
+
+    // Process keyword results
+    keywordResults.forEach(r => {
+      const normalizedScore = r.rank / maxKeywordScore;
+      resultsMap.set(r.vcon_id, {
+        vcon_id: r.vcon_id,
+        semantic_score: 0,
+        keyword_score: normalizedScore
+      });
+    });
+
+    // Process semantic results
+    semanticResults.forEach(r => {
+      const existing = resultsMap.get(r.vcon_id) || {
+        vcon_id: r.vcon_id,
+        semantic_score: 0,
+        keyword_score: 0
+      };
+      existing.semantic_score = r.similarity; // already 0-1 usually (cosine)
+      resultsMap.set(r.vcon_id, existing);
+    });
+
+    // Calculate combined score and sort
+    const finalResults = Array.from(resultsMap.values()).map(r => ({
+      ...r,
+      combined_score: (r.keyword_score * keywordWeight) + (r.semantic_score * semanticWeight)
+    }));
+
+    return finalResults
+      .sort((a, b) => b.combined_score - a.combined_score)
+      .slice(0, limit);
+  }
+
+  async searchVCons(filters: {
+    subject?: string;
+    partyName?: string;
+    partyEmail?: string;
+    partyTel?: string;
+    startDate?: string;
+    endDate?: string;
+    tags?: Record<string, string>;
+    limit?: number;
+  }): Promise<VCon[]> {
+    const collection = this.db.collection(this.VCONS_COLLECTION);
+    const query: any = {};
+
+    if (filters.subject) {
+      query.subject = { $regex: filters.subject, $options: 'i' };
+    }
+
+    if (filters.startDate || filters.endDate) {
+      query.created_at = {};
+      if (filters.startDate) query.created_at.$gte = filters.startDate;
+      if (filters.endDate) query.created_at.$lte = filters.endDate;
+    }
+
+    if (filters.partyName) {
+      query['parties.name'] = { $regex: filters.partyName, $options: 'i' };
+    }
+    if (filters.partyEmail) {
+      query['parties.mailto'] = { $regex: filters.partyEmail, $options: 'i' };
+    }
+    if (filters.partyTel) {
+      query['parties.tel'] = { $regex: filters.partyTel, $options: 'i' };
+    }
+
+    const results = await collection
+      .find(query)
+      .sort({ created_at: -1 })
+      .limit(filters.limit || 10)
+      .toArray();
+
+    return results.map(r => {
+      const { _id, ...rest } = r;
+      return rest as unknown as VCon;
+    });
+  }
+}
diff --git a/src/db/mongo-size-analyzer.ts b/src/db/mongo-size-analyzer.ts
new file mode 100644
index 0000000..f0ddaed
--- /dev/null
+++ b/src/db/mongo-size-analyzer.ts
@@ -0,0 +1,126 @@
+
+import { Db } from 'mongodb';
+import { IDatabaseSizeAnalyzer, DatabaseSizeInfo, SmartLimits } from './types.js';
+import { calculateSmartSearchLimits } from './shared-utils.js';
+
+export class MongoDatabaseSizeAnalyzer implements IDatabaseSizeAnalyzer {
+  constructor(private db: Db) { }
+
+  async getDatabaseSizeInfo(includeRecommendations: boolean = true): Promise<DatabaseSizeInfo> {
+    // Get stats from the main collections
+    const collections = ['vcons', 'vcon_embeddings'];
+    const tableSizes: any = {};
+    let totalSizeBytes = 0;
+    let totalVCons = 0;
+
+    for (const collName of collections) {
+      try {
+        const stats = await this.db.command({ collStats: collName });
+        // Sum storage and index sizes explicitly (collStats' totalSize
+        // already includes indexes, so it is not used here)
+        const sizeBytes = stats.storageSize + (stats.totalIndexSize || 0);
+        const rowCount = stats.count;
+
+        tableSizes[collName] = {
+          row_count: rowCount,
+          size_bytes: sizeBytes,
+          size_pretty: this.formatBytes(sizeBytes)
+        };
+
+        totalSizeBytes += sizeBytes;
+        if (collName === 'vcons') {
+          totalVCons = rowCount;
+        }
+      } catch (e) {
+        // Collection might not exist yet
+        tableSizes[collName] = { row_count: 0, size_bytes: 0, size_pretty: '0 Bytes' };
+      }
+    }
+
+    // Determine size category
+    let sizeCategory: 'small' | 'medium' | 'large' | 'very_large';
+    if (totalVCons < 1000) {
+      sizeCategory = 'small';
+    } else if (totalVCons < 10000) {
+      sizeCategory = 'medium';
+    } else if (totalVCons < 100000) {
+      sizeCategory = 'large';
+    } else {
+      sizeCategory = 'very_large';
+    }
+
+    const info: DatabaseSizeInfo = {
+      total_vcons: totalVCons,
+      total_size_bytes: totalSizeBytes,
+      total_size_pretty: this.formatBytes(totalSizeBytes),
+      size_category: sizeCategory,
+      recommendations: {
+        max_basic_search_limit: 10,
+        max_content_search_limit: 50,
+        max_semantic_search_limit: 50,
+        max_analytics_limit: 100,
+        recommended_response_format: 'metadata',
+        memory_warning: false
+      },
+      table_sizes: tableSizes
+    };
+
+    if (includeRecommendations) {
+      info.recommendations = this.generateRecommendations(totalVCons, totalSizeBytes, sizeCategory);
+    }
+
+    return info;
+  }
+
+  async getSmartSearchLimits(queryType: string, estimatedResultSize: string): Promise<SmartLimits> {
+    const sizeInfo = await this.getDatabaseSizeInfo(false);
+    return calculateSmartSearchLimits(sizeInfo, queryType, estimatedResultSize);
+  }
+
+  private generateRecommendations(totalVCons: number, totalSizeBytes: number, sizeCategory: string) {
+    const recommendations: any = {
+      max_basic_search_limit: 10,
+      max_content_search_limit: 50,
+      max_semantic_search_limit: 50,
+      max_analytics_limit: 100,
+      recommended_response_format: 'metadata',
+      memory_warning: false
+    };
+
+    if (sizeCategory === 'small') {
recommendations.max_basic_search_limit = 100; + recommendations.max_content_search_limit = 200; + recommendations.max_semantic_search_limit = 200; + recommendations.max_analytics_limit = 500; + recommendations.recommended_response_format = 'full'; + } else if (sizeCategory === 'medium') { + recommendations.max_basic_search_limit = 50; + recommendations.max_content_search_limit = 100; + recommendations.max_semantic_search_limit = 100; + recommendations.max_analytics_limit = 200; + recommendations.recommended_response_format = 'metadata'; + } else if (sizeCategory === 'large') { + recommendations.max_basic_search_limit = 25; + recommendations.max_content_search_limit = 50; + recommendations.max_semantic_search_limit = 50; + recommendations.max_analytics_limit = 100; + recommendations.recommended_response_format = 'metadata'; + recommendations.memory_warning = true; + } else { // very_large + recommendations.max_basic_search_limit = 10; + recommendations.max_content_search_limit = 25; + recommendations.max_semantic_search_limit = 25; + recommendations.max_analytics_limit = 50; + recommendations.recommended_response_format = 'metadata'; + recommendations.memory_warning = true; + } + + return recommendations; + } + + private formatBytes(bytes: number): string { + const sizes = ['Bytes', 'KB', 'MB', 'GB', 'TB']; + if (bytes === 0) return '0 Bytes'; + const i = Math.floor(Math.log(bytes) / Math.log(1024)); + return Math.round(bytes / Math.pow(1024, i) * 100) / 100 + ' ' + sizes[i]; + } +} diff --git a/src/db/queries.ts b/src/db/queries.ts index 3828183..4e73e3d 100644 --- a/src/db/queries.ts +++ b/src/db/queries.ts @@ -19,7 +19,9 @@ import { Analysis, Attachment, Dialog, VCon } from '../types/vcon.js'; const logger = createLogger('queries'); -export class VConQueries { +import { IVConQueries } from './interfaces.js'; + +export class SupabaseVConQueries implements IVConQueries { private redis: Redis | null = null; private cacheEnabled: boolean = false; private cacheTTL: 
number = 3600; // Default 1 hour
@@ -43,6 +45,17 @@ export class VConQueries {
     }
   }
 
+  /**
+   * Initialize the database connection
+   * For Supabase, this is mostly a no-op as the client is stateless/HTTP based,
+   * but could be used to verify connection or warm up cache.
+   */
+  async initialize(): Promise<void> {
+    // No specific initialization needed for the Supabase REST client;
+    // connection verification is handled by database-health-check
+    return Promise.resolve();
+  }
+
   /**
    * Create a new vCon with all related entities
    * ✅ Uses corrected field names throughout
@@ -97,45 +110,45 @@ VConQueries {
       throw vconError;
     }
 
-    // Insert parties
-    if (vcon.parties.length > 0) {
-      const partiesData = vcon.parties.map((party, index) => ({
-        vcon_id: vconData.id,
-        party_index: index,
-        tel: party.tel,
-        sip: party.sip,
-        stir: party.stir,
-        mailto: party.mailto,
-        name: party.name,
-        did: party.did, // ✅ Added per spec
-        uuid: party.uuid, // ✅ Added per spec Section 4.2.12
-        validation: party.validation,
-        jcard: party.jcard,
-        gmlpos: party.gmlpos,
-        civicaddress: party.civicaddress,
-        timezone: party.timezone,
-      }));
-
-      const { error: partiesError } = await this.supabase
-        .from('parties')
-        .insert(partiesData);
-
-      if (partiesError) throw partiesError;
-    }
+      // Insert parties
+      if (vcon.parties.length > 0) {
+        const partiesData = vcon.parties.map((party, index) => ({
+          vcon_id: vconData.id,
+          party_index: index,
+          tel: party.tel,
+          sip: party.sip,
+          stir: party.stir,
+          mailto: party.mailto,
+          name: party.name,
+          did: party.did, // ✅ Added per spec
+          uuid: party.uuid, // ✅ Added per spec Section 4.2.12
+          validation: party.validation,
+          jcard: party.jcard,
+          gmlpos: party.gmlpos,
+          civicaddress: party.civicaddress,
+          timezone: party.timezone,
+        }));
+
+        const { error: partiesError } = await this.supabase
+          .from('parties')
+          .insert(partiesData);
+
+        if (partiesError) throw partiesError;
+      }
 
-    // Insert dialog if present
-    if (vcon.dialog && vcon.dialog.length
> 0) { - for (let i = 0; i < vcon.dialog.length; i++) { - await this.addDialog(vconData.uuid, vcon.dialog[i]); + // Insert dialog if present + if (vcon.dialog && vcon.dialog.length > 0) { + for (let i = 0; i < vcon.dialog.length; i++) { + await this.addDialog(vconData.uuid, vcon.dialog[i]); + } } - } - // Insert analysis if present - if (vcon.analysis && vcon.analysis.length > 0) { - for (let i = 0; i < vcon.analysis.length; i++) { - await this.addAnalysis(vconData.uuid, vcon.analysis[i]); + // Insert analysis if present + if (vcon.analysis && vcon.analysis.length > 0) { + for (let i = 0; i < vcon.analysis.length; i++) { + await this.addAnalysis(vconData.uuid, vcon.analysis[i]); + } } - } // Insert attachments if present if (vcon.attachments && vcon.attachments.length > 0) { @@ -613,115 +626,115 @@ export class VConQueries { }, 'Database query count'); // Cache miss - fetch from Supabase - // Get main vcon - const { data: vconData, error: vconError } = await this.supabase - .from('vcons') - .select('*') - .eq('uuid', uuid) - .single(); + // Get main vcon + const { data: vconData, error: vconError } = await this.supabase + .from('vcons') + .select('*') + .eq('uuid', uuid) + .single(); - if (vconError) { - // Handle "not found" case (PGRST116: no rows returned) - if (vconError.code === 'PGRST116') { - throw new Error(`vCon not found: ${uuid}`); + if (vconError) { + // Handle "not found" case (PGRST116: no rows returned) + if (vconError.code === 'PGRST116') { + throw new Error(`vCon not found: ${uuid}`); + } + throw vconError; } - throw vconError; - } - - // Get parties - const { data: parties } = await this.supabase - .from('parties') - .select('*') - .eq('vcon_id', vconData.id) - .order('party_index'); - // Get dialog - const { data: dialogs } = await this.supabase - .from('dialog') - .select('*') - .eq('vcon_id', vconData.id) - .order('dialog_index'); - - // Get analysis - โœ… Queries 'schema' field (NOT 'schema_version') - const { data: analysis } = await 
this.supabase - .from('analysis') - .select('*') - .eq('vcon_id', vconData.id) - .order('analysis_index'); - - // Get attachments - const { data: attachments } = await this.supabase - .from('attachments') - .select('*') - .eq('vcon_id', vconData.id) - .order('attachment_index'); - - // Reconstruct vCon with all correct field names - const vcon: VCon = { - vcon: vconData.vcon_version as '0.3.0', - uuid: vconData.uuid, - extensions: vconData.extensions, - must_support: vconData.must_support, - created_at: vconData.created_at, - updated_at: vconData.updated_at, - subject: vconData.subject, - parties: parties?.map(p => ({ - tel: p.tel, - sip: p.sip, - stir: p.stir, - mailto: p.mailto, - name: p.name, - did: p.did, - uuid: p.uuid, // ✅ Correct field - validation: p.validation, - jcard: p.jcard, - gmlpos: p.gmlpos, - civicaddress: p.civicaddress, - timezone: p.timezone, - })) || [], - dialog: dialogs?.map(d => ({ - type: d.type, - start: d.start_time, - duration: d.duration_seconds, - parties: d.parties, - originator: d.originator, - mediatype: d.mediatype, - filename: d.filename, - body: d.body, - encoding: d.encoding, - url: d.url, - content_hash: d.content_hash, - disposition: d.disposition, - session_id: d.session_id, // ✅ Correct field - application: d.application, // ✅ Correct field - message_id: d.message_id, // ✅ Correct field - })), - analysis: analysis?.map(a => ({ - type: a.type, - dialog: a.dialog_indices?.length === 1 ? 
a.dialog_indices[0] : a.dialog_indices, - mediatype: a.mediatype, - filename: a.filename, - vendor: a.vendor, // ✅ Required field - product: a.product, - schema: a.schema, // ✅ CORRECT: 'schema' NOT 'schema_version' - body: a.body, // ✅ TEXT type - encoding: a.encoding, - url: a.url, - content_hash: a.content_hash, - })), - attachments: attachments?.map(att => ({ - type: att.type, - start: att.start_time, - party: att.party, - dialog: att.dialog, // ✅ Correct field - mediatype: att.mimetype, - filename: att.filename, - body: att.body, - encoding: att.encoding, - url: att.url, - content_hash: att.content_hash, - })), - }; + // Get parties + const { data: parties } = await this.supabase + .from('parties') + .select('*') + .eq('vcon_id', vconData.id) + .order('party_index'); + + // Get dialog + const { data: dialogs } = await this.supabase + .from('dialog') + .select('*') + .eq('vcon_id', vconData.id) + .order('dialog_index'); + + // Get analysis - ✅ Queries 'schema' field (NOT 'schema_version') + const { data: analysis } = await this.supabase + .from('analysis') + .select('*') + .eq('vcon_id', vconData.id) + .order('analysis_index'); + + // Get attachments + const { data: attachments } = await this.supabase + .from('attachments') + .select('*') + .eq('vcon_id', vconData.id) + .order('attachment_index'); + + // Reconstruct vCon with all correct field names + const vcon: VCon = { + vcon: vconData.vcon_version as '0.3.0', + uuid: vconData.uuid, + extensions: vconData.extensions, + must_support: vconData.must_support, + created_at: vconData.created_at, + updated_at: vconData.updated_at, + subject: vconData.subject, + parties: parties?.map(p => ({ + tel: p.tel, + sip: p.sip, + stir: p.stir, + mailto: p.mailto, + name: p.name, + did: p.did, + uuid: p.uuid, // ✅ Correct field + validation: p.validation, + jcard: p.jcard, + gmlpos: p.gmlpos, + civicaddress: p.civicaddress, + timezone: p.timezone, + })) || [], + dialog: dialogs?.map(d => ({ + type: d.type, + 
start: d.start_time, + duration: d.duration_seconds, + parties: d.parties, + originator: d.originator, + mediatype: d.mediatype, + filename: d.filename, + body: d.body, + encoding: d.encoding, + url: d.url, + content_hash: d.content_hash, + disposition: d.disposition, + session_id: d.session_id, // ✅ Correct field + application: d.application, // ✅ Correct field + message_id: d.message_id, // ✅ Correct field + })), + analysis: analysis?.map(a => ({ + type: a.type, + dialog: a.dialog_indices?.length === 1 ? a.dialog_indices[0] : a.dialog_indices, + mediatype: a.mediatype, + filename: a.filename, + vendor: a.vendor, // ✅ Required field + product: a.product, + schema: a.schema, // ✅ CORRECT: 'schema' NOT 'schema_version' + body: a.body, // ✅ TEXT type + encoding: a.encoding, + url: a.url, + content_hash: a.content_hash, + })), + attachments: attachments?.map(att => ({ + type: att.type, + start: att.start_time, + party: att.party, + dialog: att.dialog, // ✅ Correct field + mediatype: att.mimetype, + filename: att.filename, + body: att.body, + encoding: att.encoding, + url: att.url, + content_hash: att.content_hash, + })), + }; // Cache the result for future reads await this.setCachedVCon(uuid, vcon); @@ -730,6 +743,29 @@ export class VConQueries { }); } + /** + * Delete a vCon by UUID + */ + async deleteVCon(uuid: string): Promise<void> { + return withSpan('db.deleteVCon', async (span) => { + span.setAttributes({ + [ATTR_VCON_UUID]: uuid, + [ATTR_DB_OPERATION]: 'delete', + }); + + // Get ID first to delete related data if not cascading + // Supabase cascade delete should handle related tables if configured + const { error } = await this.supabase + .from('vcons') + .delete() + .eq('uuid', uuid); + + if (error) throw error; + + await this.invalidateCachedVCon(uuid); + }); + } + /** * Search vCons by various criteria */ @@ -994,17 +1030,7 @@ export class VConQueries { /** * Delete a vCon and all related entities */ - async deleteVCon(uuid: string): Promise<void> { - 
const { error } = await this.supabase - .from('vcons') - .delete() - .eq('uuid', uuid); - - if (error) throw error; - // Invalidate cache - await this.invalidateCachedVCon(uuid); - } /** * Update vCon metadata diff --git a/src/db/shared-utils.ts b/src/db/shared-utils.ts new file mode 100644 index 0000000..6fc02f8 --- /dev/null +++ b/src/db/shared-utils.ts @@ -0,0 +1,78 @@ + +import { DatabaseSizeInfo, SmartLimits } from './types.js'; + +/** + * Shared utility to calculate smart search limits based on database size. + * Used by both Supabase and MongoDB implementations to ensure consistent behavior. + */ +export function calculateSmartSearchLimits( + sizeInfo: DatabaseSizeInfo, + queryType: string, + estimatedResultSize: string +): SmartLimits { + let recommendedLimit: number; + let recommendedFormat: string; + let memoryWarning: boolean; + let explanation: string; + + // Base limits by query type + const baseLimits = { + basic: 50, + content: 100, + semantic: 100, + hybrid: 100, + analytics: 200 + }; + + // Adjust based on database size + const sizeMultiplier = { + small: 1.0, + medium: 0.8, + large: 0.5, + very_large: 0.3 + }; + + // Adjust based on estimated result size + const resultMultiplier = { + small: 1.0, + medium: 0.7, + large: 0.4, + unknown: 0.5 + }; + + const baseLimit = baseLimits[queryType as keyof typeof baseLimits] || 50; + const sizeMult = sizeMultiplier[sizeInfo.size_category]; + const resultMult = resultMultiplier[estimatedResultSize as keyof typeof resultMultiplier]; + + recommendedLimit = Math.max(1, Math.round(baseLimit * sizeMult * resultMult)); + + // Determine response format + if (sizeInfo.size_category === 'very_large' || estimatedResultSize === 'large') { + recommendedFormat = 'metadata'; + memoryWarning = true; + } else if (sizeInfo.size_category === 'large' || estimatedResultSize === 'medium') { + recommendedFormat = 'metadata'; + memoryWarning = false; + } else { + recommendedFormat = 'full'; + memoryWarning = false; + } + + // 
Generate explanation + explanation = `Database has ${sizeInfo.total_vcons.toLocaleString()} vCons (${sizeInfo.size_category} size). `; + explanation += `For ${queryType} queries with ${estimatedResultSize} results, `; + explanation += `recommend limit of ${recommendedLimit} with ${recommendedFormat} format.`; + + if (memoryWarning) { + explanation += ' ⚠️ Memory warning: Large dataset detected.'; + } + + return { + query_type: queryType, + estimated_result_size: estimatedResultSize, + recommended_limit: recommendedLimit, + recommended_response_format: recommendedFormat, + memory_warning: memoryWarning, + explanation + }; +} diff --git a/src/db/types.ts b/src/db/types.ts new file mode 100644 index 0000000..641be3e --- /dev/null +++ b/src/db/types.ts @@ -0,0 +1,124 @@ + +/** + * Database Inspector Interface + * Defines contracts for analyzing database structure and performance + */ +export interface IDatabaseInspector { + getDatabaseShape(options?: InspectorOptions): Promise; + getDatabaseStats(options?: InspectorStatsOptions): Promise; + analyzeQuery(query: string, analyzeMode?: 'explain' | 'explain_analyze'): Promise; + getConnectionInfo(): Promise; +} + +export interface InspectorOptions { + includeCounts?: boolean; + includeSizes?: boolean; + includeIndexes?: boolean; + includeColumns?: boolean; +} + +export interface InspectorStatsOptions { + includeQueryStats?: boolean; + includeIndexUsage?: boolean; + includeCacheStats?: boolean; + tableName?: string; +} + +/** + * Database Analytics Interface + * Defines contracts for business intelligence and data growth analysis + */ +export interface IDatabaseAnalytics { + getDatabaseAnalytics(options?: DatabaseAnalyticsOptions): Promise; + getMonthlyGrowthAnalytics(options?: MonthlyGrowthOptions): Promise; + getAttachmentAnalytics(options?: AttachmentAnalyticsOptions): Promise; + getTagAnalytics(options?: TagAnalyticsOptions): Promise; + getContentAnalytics(options?: ContentAnalyticsOptions): Promise; + 
getDatabaseHealthMetrics(options?: DatabaseHealthOptions): Promise; +} + +export interface DatabaseAnalyticsOptions { + includeGrowthTrends?: boolean; + includeContentAnalytics?: boolean; + includeAttachmentStats?: boolean; + includeTagAnalytics?: boolean; + includeHealthMetrics?: boolean; + monthsBack?: number; +} + +export interface MonthlyGrowthOptions { + monthsBack?: number; + includeProjections?: boolean; + granularity?: 'monthly' | 'weekly' | 'daily'; +} + +export interface AttachmentAnalyticsOptions { + includeSizeDistribution?: boolean; + includeTypeBreakdown?: boolean; + includeTemporalPatterns?: boolean; + topNTypes?: number; +} + +export interface TagAnalyticsOptions { + includeFrequencyAnalysis?: boolean; + includeValueDistribution?: boolean; + includeTemporalTrends?: boolean; + topNKeys?: number; + minUsageCount?: number; +} + +export interface ContentAnalyticsOptions { + includeDialogAnalysis?: boolean; + includeAnalysisBreakdown?: boolean; + includePartyPatterns?: boolean; + includeConversationMetrics?: boolean; + includeTemporalContent?: boolean; +} + +export interface DatabaseHealthOptions { + includePerformanceMetrics?: boolean; + includeStorageEfficiency?: boolean; + includeIndexHealth?: boolean; + includeConnectionMetrics?: boolean; + includeRecommendations?: boolean; +} + +/** + * Database Size Analyzer Interface + * Defines contracts for analyzing database size and providing smart recommendations + */ +export interface IDatabaseSizeAnalyzer { + getDatabaseSizeInfo(includeRecommendations?: boolean): Promise; + getSmartSearchLimits(queryType: string, estimatedResultSize: string): Promise; +} + +export interface DatabaseSizeInfo { + total_vcons: number; + total_size_bytes: number; + total_size_pretty: string; + size_category: 'small' | 'medium' | 'large' | 'very_large'; + recommendations: { + max_basic_search_limit: number; + max_content_search_limit: number; + max_semantic_search_limit: number; + max_analytics_limit: number; + 
recommended_response_format: string; + memory_warning: boolean; + }; + table_sizes: { + [table_name: string]: { + row_count: number; + size_bytes: number; + size_pretty: string; + }; + }; +} + +export interface SmartLimits { + query_type: string; + estimated_result_size: string; + recommended_limit: number; + recommended_response_format: string; + memory_warning: boolean; + explanation: string; +} diff --git a/src/jsonld/context.ts b/src/jsonld/context.ts new file mode 100644 index 0000000..d307972 --- /dev/null +++ b/src/jsonld/context.ts @@ -0,0 +1,53 @@ + +/** + * JSON-LD Context for vCon + * + * Defines the mapping of vCon terms to IRIs. + * Includes @jsonld-ex/core extensions for AI/ML metadata. + */ + +export const VCON_CONTEXT = { + "@context": [ + "https://schema.org/docs/jsonldcontext.json", + { + "vcon": "https://vcon.dev/ns/", + "xsd": "http://www.w3.org/2001/XMLSchema#", + + // vCon Core Terms + "parties": "vcon:parties", + "dialog": "vcon:dialog", + "analysis": "vcon:analysis", + "attachments": "vcon:attachments", + "group": "vcon:group", + "redacted": "vcon:redacted", + "appended": "vcon:appended", + + // Analysis Terms + "type": "@type", + "vendor": "vcon:vendor", + "product": "vcon:product", + "schema": "vcon:schema", + "body": "vcon:body", + "encoding": "vcon:encoding", + + // @jsonld-ex Extensions + "@confidence": { + "@id": "https://w3id.org/jsonld-ex/confidence", + "@type": "xsd:float" + }, + "@source": { + "@id": "https://w3id.org/jsonld-ex/source", + "@type": "@id" + }, + "@integrity": { + "@id": "https://w3id.org/jsonld-ex/integrity", + "@type": "xsd:string" + } + } + ] +}; + +export interface JsonLdDocument { + "@context"?: any; + [key: string]: any; +} diff --git a/src/jsonld/enrichment.ts b/src/jsonld/enrichment.ts new file mode 100644 index 0000000..e1b3eed --- /dev/null +++ b/src/jsonld/enrichment.ts @@ -0,0 +1,63 @@ + +import { Analysis, VCon } from '../types/vcon.js'; +import { VCON_CONTEXT, JsonLdDocument } from './context.js'; + +/** 
+ * Extended Analysis type including JSON-LD extensions + */ +export interface EnrichedAnalysis extends Analysis, JsonLdDocument { + "@confidence"?: number; + "@source"?: string; +} + +/** + * Extended vCon type including JSON-LD context and extensions + */ +export interface EnrichedVCon extends VCon, JsonLdDocument { + "@integrity"?: string; +} + +/** + * Enriches an Analysis object with confidence score and source provenance. + * + * @param analysis The original Analysis object + * @param confidence Confidence score (0.0 to 1.0) + * @param source URI of the model or agent that generated the analysis + * @returns EnrichedAnalysis object + */ +export function enrichAnalysis( + analysis: Analysis, + confidence?: number, + source?: string +): EnrichedAnalysis { + const enriched: EnrichedAnalysis = { ...analysis }; + + if (confidence !== undefined) { + if (confidence < 0 || confidence > 1) { + throw new Error("Confidence score must be between 0.0 and 1.0"); + } + enriched["@confidence"] = confidence; + } + + if (source) { + enriched["@source"] = source; + } + + return enriched; +} + +/** + * Converts a standard vCon to a JSON-LD EnrichedVCon. + * Adds the @context definition. 
+ * + * @param vcon The original vCon object + * @returns EnrichedVCon with @context + */ +export function toJsonLd(vcon: VCon): EnrichedVCon { + return { + "@context": VCON_CONTEXT["@context"], + ...vcon, + // Safely cast analysis array if it exists to allow for EnrichedAnalysis + analysis: vcon.analysis as EnrichedAnalysis[] | undefined + }; +} diff --git a/src/jsonld/index.ts b/src/jsonld/index.ts new file mode 100644 index 0000000..8ea6e6c --- /dev/null +++ b/src/jsonld/index.ts @@ -0,0 +1,4 @@ + +export * from './context.js'; +export * from './enrichment.js'; +export * from './integrity.js'; diff --git a/src/jsonld/integrity.ts b/src/jsonld/integrity.ts new file mode 100644 index 0000000..536b8e2 --- /dev/null +++ b/src/jsonld/integrity.ts @@ -0,0 +1,65 @@ + +import { createHash } from 'crypto'; +import stringify from 'fast-json-stable-stringify'; +import { VCon } from '../types/vcon.js'; +import { EnrichedVCon } from './enrichment.js'; + +/** + * Calculates a SHA-256 hash of the vCon content. + * Uses fast-json-stable-stringify to ensure deterministic serialization. + * Excludes the @integrity field from the hash calculation. + * + * @param vcon The vCon to hash + * @returns SHA-256 hash in hex format + */ +export function calculateHash(vcon: VCon | EnrichedVCon): string { + // Create a shallow copy to modify + const vconCopy = { ...vcon } as EnrichedVCon; + + // Remove @integrity field if it exists + delete vconCopy["@integrity"]; + + // Serialize deterministically + const serialized = stringify(vconCopy); + + // Compute SHA-256 hash + return createHash('sha256').update(serialized).digest('hex'); +} + +/** + * Adds an @integrity field to the vCon containing its SHA-256 hash. 
+ * + * @param vcon The vCon to sign + * @returns EnrichedVCon with @integrity field + */ +export function signVCon(vcon: VCon | EnrichedVCon): EnrichedVCon { + const hash = calculateHash(vcon); + return { + ...vcon, + "@integrity": `sha256-${hash}` + }; +} + +/** + * Verifies the integrity of a vCon by recalculating the hash + * and comparing it with the @integrity field. + * + * @param vcon The vCon to verify + * @returns true if integrity is valid, false otherwise + */ +export function verifyIntegrity(vcon: EnrichedVCon): boolean { + if (!vcon["@integrity"]) { + return false; + } + + const providedHash = vcon["@integrity"]; + + // Support incomplete/flexible prefixes if needed, but for now enforce sha256- prefix + if (!providedHash.startsWith('sha256-')) { + return false; // Unsupported algo + } + + const calculatedHash = `sha256-${calculateHash(vcon)}`; + + return providedHash === calculatedHash; +} diff --git a/src/server/setup.ts b/src/server/setup.ts index d082469..c1430a4 100644 --- a/src/server/setup.ts +++ b/src/server/setup.ts @@ -6,10 +6,21 @@ import { Server } from '@modelcontextprotocol/sdk/server/index.js'; import { getRedisClient, getSupabaseClient } from '../db/client.js'; -import { DatabaseAnalytics } from '../db/database-analytics.js'; -import { DatabaseInspector } from '../db/database-inspector.js'; -import { DatabaseSizeAnalyzer } from '../db/database-size-analyzer.js'; -import { VConQueries } from '../db/queries.js'; +// Supabase implementations +import { SupabaseDatabaseAnalytics } from '../db/database-analytics.js'; +import { SupabaseDatabaseInspector } from '../db/database-inspector.js'; +import { SupabaseDatabaseSizeAnalyzer } from '../db/database-size-analyzer.js'; +import { SupabaseVConQueries } from '../db/queries.js'; +// Interfaces +import { IVConQueries } from '../db/interfaces.js'; +import { + IDatabaseInspector, + IDatabaseAnalytics, + IDatabaseSizeAnalyzer +} from '../db/types.js'; // Updated import path +// Mongo implementations 
(dynamic import or static if we convert to ESM fully) +// We will use dynamic imports for Mongo to avoid hard dependency if not needed, or stick to static if preferred. +// verification script uses static import. import { debugTenantVisibility, setTenantContext, verifyTenantContext } from '../db/tenant-context.js'; import { PluginManager } from '../hooks/plugin-manager.js'; import { createLogger } from '../observability/logger.js'; @@ -20,10 +31,10 @@ const logger = createLogger('server-setup'); export interface ServerContext { server: Server; - queries: VConQueries; - dbInspector: DatabaseInspector; - dbAnalytics: DatabaseAnalytics; - dbSizeAnalyzer: DatabaseSizeAnalyzer; + queries: IVConQueries; + dbInspector: IDatabaseInspector; + dbAnalytics: IDatabaseAnalytics; + dbSizeAnalyzer: IDatabaseSizeAnalyzer; supabase: any; redis: any; pluginManager: PluginManager; @@ -54,42 +65,89 @@ export function createServer(): Server { * Initialize database and cache clients */ export async function initializeDatabase(): Promise<{ - queries: VConQueries; - dbInspector: DatabaseInspector; - dbAnalytics: DatabaseAnalytics; - dbSizeAnalyzer: DatabaseSizeAnalyzer; + queries: IVConQueries; + dbInspector: IDatabaseInspector; + dbAnalytics: IDatabaseAnalytics; + dbSizeAnalyzer: IDatabaseSizeAnalyzer; supabase: any; redis: any; + mongoClient?: { client: any; db: any }; }> { - const supabase = getSupabaseClient(); - const redis = getRedisClient(); // Optional - returns null if not configured - const queries = new VConQueries(supabase, redis); - const dbInspector = new DatabaseInspector(supabase); - const dbAnalytics = new DatabaseAnalytics(supabase); - const dbSizeAnalyzer = new DatabaseSizeAnalyzer(supabase); - - logger.info({ - has_redis: !!redis, - cache_enabled: !!redis - }, 'Database client initialized'); - - // Set tenant context for RLS if enabled - try { - await setTenantContext(supabase); - await verifyTenantContext(supabase); - - // Debug tenant visibility - if 
(process.env.MCP_DEBUG === 'true' || process.env.RLS_DEBUG === 'true') { - await debugTenantVisibility(supabase); + const dbType = process.env.DB_TYPE || 'supabase'; + + let queries: IVConQueries; + let dbInspector: IDatabaseInspector; + let dbAnalytics: IDatabaseAnalytics; + let dbSizeAnalyzer: IDatabaseSizeAnalyzer; + + let supabase: any = null; + let redis: any = null; + let mongoClient: any = null; + + if (dbType === 'mongodb') { + // Dynamic imports for Mongo modules + const { getMongoClient } = await import('../db/mongo-client.js'); + const { MongoVConQueries } = await import('../db/mongo-queries.js'); + const { MongoDatabaseInspector } = await import('../db/mongo-inspector.js'); + const { MongoDatabaseAnalytics } = await import('../db/mongo-analytics.js'); + const { MongoDatabaseSizeAnalyzer } = await import('../db/mongo-size-analyzer.js'); + + mongoClient = await getMongoClient(); + const db = mongoClient.db; + + queries = new MongoVConQueries(db); + dbInspector = new MongoDatabaseInspector(db); + dbAnalytics = new MongoDatabaseAnalytics(db); + dbSizeAnalyzer = new MongoDatabaseSizeAnalyzer(db); + + logger.info({ db_type: 'mongodb' }, 'Initialized MongoDB backend'); + } else { + // Default to Supabase + supabase = getSupabaseClient(); + redis = getRedisClient(); // Optional + queries = new SupabaseVConQueries(supabase, redis); + dbInspector = new SupabaseDatabaseInspector(supabase); + dbAnalytics = new SupabaseDatabaseAnalytics(supabase); + dbSizeAnalyzer = new SupabaseDatabaseSizeAnalyzer(supabase); + + logger.info({ + db_type: 'supabase', + has_redis: !!redis + }, 'Initialized Supabase backend'); + } + + // To avoid breaking existing code that expects Supabase to exist (e.g. some plugins might need it?): + if (!supabase && process.env.SUPABASE_URL && dbType === 'mongodb') { + try { + // Optional: Initialize Supabase client even in Mongo mode if creds exist, + // but don't use it for core queries. 
+ supabase = getSupabaseClient(); + } catch (e) { + // Ignore } - } catch (error) { - logger.warn({ - err: error, - error_message: error instanceof Error ? error.message : String(error) - }, 'Failed to set tenant context'); - // Continue anyway - the server should still work for non-RLS scenarios } + // Set tenant context for RLS if enabled (Supabase only) + if (supabase) { + try { + await setTenantContext(supabase); + await verifyTenantContext(supabase); + + // Debug tenant visibility + if (process.env.MCP_DEBUG === 'true' || process.env.RLS_DEBUG === 'true') { + await debugTenantVisibility(supabase); + } + } catch (error) { + logger.warn({ + err: error, + error_message: error instanceof Error ? error.message : String(error) + }, 'Failed to set tenant context'); + } + } + + // Initialize DB (create indexes, etc) + await queries.initialize(); + return { queries, dbInspector, @@ -97,6 +155,7 @@ export async function initializeDatabase(): Promise<{ dbSizeAnalyzer, supabase, redis, + mongoClient }; } diff --git a/src/services/vcon-service.ts b/src/services/vcon-service.ts index d5c7472..ef05529 100644 --- a/src/services/vcon-service.ts +++ b/src/services/vcon-service.ts @@ -7,7 +7,7 @@ */ import { randomUUID } from 'crypto'; -import { VConQueries } from '../db/queries.js'; +import { IVConQueries } from '../db/interfaces.js'; import { PluginManager } from '../hooks/plugin-manager.js'; import { RequestContext } from '../hooks/plugin-interface.js'; import { ATTR_VCON_UUID } from '../observability/attributes.js'; @@ -20,7 +20,7 @@ import { validateVCon } from '../utils/validation.js'; // ============================================================================ export interface VConServiceContext { - queries: VConQueries; + queries: IVConQueries; pluginManager: PluginManager; } @@ -76,7 +76,7 @@ export interface DeleteVConOptions { // ============================================================================ export class VConService { - constructor(private context: 
VConServiceContext) {} + constructor(private context: VConServiceContext) { } /** * Normalize partial RequestContext to full RequestContext @@ -276,7 +276,7 @@ export class VConService { * Search vCons with hook support */ async search( - filters: Parameters[0], + filters: Parameters[0], options: { requestContext?: Partial; skipHooks?: boolean } = {} ): Promise { const requestContext = this.normalizeRequestContext(options.requestContext, 'search'); diff --git a/tests/mongo-crud.test.ts b/tests/mongo-crud.test.ts new file mode 100644 index 0000000..04f7fbc --- /dev/null +++ b/tests/mongo-crud.test.ts @@ -0,0 +1,104 @@ +import { describe, it, expect, beforeAll, afterAll } from 'vitest'; +import { getMongoClient, closeMongoClient } from '../src/db/mongo-client.js'; +import { MongoVConQueries } from '../src/db/mongo-queries.js'; +import { randomUUID } from 'crypto'; +import { VCon } from '../src/types/vcon.js'; + +// Only run if MONGO_URL is set +const runTest = process.env.MONGO_URL ? describe : describe.skip; + +runTest('MongoDB VCon CRUD', () => { + let queries: MongoVConQueries; + const testUuid = randomUUID(); + const testVCon: VCon = { + vcon: '0.3.0', + uuid: testUuid, + created_at: new Date().toISOString(), + updated_at: new Date().toISOString(), + subject: 'Test MongoDB VCon', + parties: [ + { + name: 'Alice', + tel: '+1234567890', + uuid: randomUUID() + } + ], + dialog: [], + analysis: [], + attachments: [] + }; + + beforeAll(async () => { + const { db } = await getMongoClient(); + queries = new MongoVConQueries(db); + await queries.initialize(); + }); + + afterAll(async () => { + // Clean up + if (queries) { + try { + await queries.deleteVCon(testUuid); + } catch (e) { + console.error('Cleanup failed', e); + } + } + await closeMongoClient(); + }); + + it('should create a vCon', async () => { + const result = await queries.createVCon(testVCon); + expect(result.uuid).toBe(testUuid); + }); + + it('should retrieve the created vCon', async () => { + const 
retrieved = await queries.getVCon(testUuid); + expect(retrieved.uuid).toBe(testUuid); + expect(retrieved.subject).toBe(testVCon.subject); + expect(retrieved.parties).toHaveLength(1); + expect(retrieved.parties[0].name).toBe('Alice'); + }); + + it('should find vCon by keyword search', async () => { + // MongoDB text index updates are asynchronous (near real time, not atomic + // with the write), so retry the search a few times before failing. + let found = false; + for (let i = 0; i < 5; i++) { + const results = await queries.keywordSearch({ query: 'MongoDB' }); + if (results.some(r => r.vcon_id === testUuid)) { + found = true; + break; + } + await new Promise(r => setTimeout(r, 1000)); + } + + expect(found).toBe(true); + }); + + it('should add a dialog', async () => { + const dialog = { + type: 'text' as const, + start: new Date().toISOString(), + duration: 10, + parties: [0], + originator: 0, + mediatype: 'text/plain', + body: 'Hello Mongo', + encoding: 'none' as const + }; + + await queries.addDialog(testUuid, dialog); + + const updated = await queries.getVCon(testUuid); + expect(updated.dialog).toHaveLength(1); + expect(updated.dialog![0].body).toBe('Hello Mongo'); + }); + + it('should get search count', async () => { + const count = await queries.keywordSearchCount({ query: 'MongoDB' }); + expect(count).toBeGreaterThan(0); + }); +}); diff --git a/verification_result.txt b/verification_result.txt new file mode 100644 index 0000000..6899a54 --- /dev/null +++ b/verification_result.txt @@ -0,0 +1 @@ +SUCCESS: uuid_1 and vcon_text_search found \ No newline at end of file
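Review note: the `@integrity` scheme added in `src/jsonld/integrity.ts` reduces to "hash everything except the hash field itself, prefix with the algorithm, compare on verify". A minimal self-contained sketch of that round trip, with the assumption that plain `JSON.stringify` is deterministic enough for a single process (the PR correctly uses `fast-json-stable-stringify` to get stable key ordering across producers):

```typescript
import { createHash } from 'crypto';

// Hypothetical stand-in for EnrichedVCon; only the "@integrity" field matters here.
type Doc = { [key: string]: unknown; "@integrity"?: string };

// Hash the document with the @integrity field excluded, mirroring calculateHash().
function calculateHash(doc: Doc): string {
  const copy: Doc = { ...doc };
  delete copy["@integrity"];
  // NOTE: JSON.stringify for brevity; key order must already be stable.
  return createHash('sha256').update(JSON.stringify(copy)).digest('hex');
}

// Mirrors signVCon(): attach the algorithm-prefixed hash.
function signDoc(doc: Doc): Doc {
  return { ...doc, "@integrity": `sha256-${calculateHash(doc)}` };
}

// Mirrors verifyIntegrity(): reject missing/unsupported prefixes, then recompute.
function verifyDoc(doc: Doc): boolean {
  const provided = doc["@integrity"];
  if (!provided || !provided.startsWith('sha256-')) return false;
  return provided === `sha256-${calculateHash(doc)}`;
}

const signed = signDoc({ uuid: 'demo-uuid', subject: 'Test call' });
console.log(verifyDoc(signed));                             // true
console.log(verifyDoc({ ...signed, subject: 'tampered' })); // false
```

Because the hash covers the serialized form rather than the JSON-LD graph, two semantically equal but differently keyed documents produce different hashes; that is the trade-off the stable-stringify dependency papers over.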