# llm_affector

An async Rust library for LLM-based content analysis, providing hallucination detection and code critique. Built with Tokio for high-performance concurrent operations.
## Features

- Hallucination Detection: Analyze text for factual errors and unsupported claims
- Code Critique: Review Rust code for bugs, style issues, and missing tests
- Async/Await Support: Built on Tokio for maximum performance
- Concurrent Execution: Run multiple analyses simultaneously with `tokio::join!`
- Multiple LLM Providers: Currently supports OpenAI, easily extensible
- Type-Safe: Strong typing for all data structures and responses
- Error Handling: Comprehensive error types with detailed messages
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
llm_affector = "0.1.0"
tokio = { version = "1.0", features = ["full"] }
```
Set your OpenAI API key:

```bash
export LLM_API_KEY="sk-your-openai-api-key-here"
```

Or create a `.env` file:

```
LLM_API_KEY=sk-your-openai-api-key-here
```
## Quick Start

Basic usage:

```rust
use llm_affector::{detect_hallucination, critique_code, Verdict};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let suspicious_text = "Scientists have proven that coffee beans grow on the moon.";
    let risky_code = "fn divide(a: i32, b: i32) -> i32 { a / b }";

    // Run both analyses concurrently for maximum performance
    let (hallucination_result, critique_result) = tokio::join!(
        detect_hallucination(suspicious_text),
        critique_code(risky_code)
    );

    // Handle hallucination detection
    match hallucination_result? {
        Verdict::Pass => println!("✅ No hallucinations detected"),
        Verdict::Fail(issues) => {
            println!("❌ Hallucinations found:");
            for issue in issues {
                println!("  - {}: {}", issue.claim, issue.explanation);
            }
        }
    }

    // Handle code critique
    let report = critique_result?;
    if !report.risks.is_empty() {
        println!("⚠️ Code risks identified:");
        for risk in report.risks {
            println!("  - {}", risk);
        }
    }

    Ok(())
}
```

## Hallucination Detection

```rust
use llm_affector::{detect_hallucination, Verdict, Issue};

let result = detect_hallucination("Your text to analyze").await?;
match result {
    Verdict::Pass => {
        // No issues found
    }
    Verdict::Fail(issues) => {
        for issue in issues {
            println!("Problematic claim: {}", issue.claim);
            println!("Explanation: {}", issue.explanation);
        }
    }
}
```
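From the fields used above you can infer the rough shape of the result types. This is a sketch based on that usage, not the crate's actual source:

```rust
// Inferred from usage: Verdict is a pass/fail enum whose failure
// variant carries the list of problematic claims.
pub enum Verdict {
    Pass,
    Fail(Vec<Issue>),
}

// Each issue pairs the suspect claim with the model's explanation.
pub struct Issue {
    pub claim: String,
    pub explanation: String,
}
```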
## Code Critique

```rust
use llm_affector::{critique_code, CritiqueReport};

let report = critique_code(r#"
fn unsafe_function() {
    let data = std::ptr::null();
    // Potential issues here...
}
"#).await?;
println!("Risks: {:?}", report.risks);
println!("Improvements: {:?}", report.improvements);
println!("Missing tests: {:?}", report.missing_tests);use llm_affector::{detect_hallucination, LlmAffectorError};
## Error Handling

```rust
use llm_affector::{detect_hallucination, LlmAffectorError};

match detect_hallucination("text").await {
    Ok(verdict) => { /* handle success */ }
    Err(LlmAffectorError::ApiKeyNotFound) => {
        eprintln!("Please set LLM_API_KEY environment variable");
    }
    Err(LlmAffectorError::HttpError(e)) => {
        eprintln!("Network error: {}", e);
    }
    Err(e) => {
        eprintln!("Other error: {}", e);
    }
}
```

## Configuration

| Variable | Description | Default |
|---|---|---|
| `LLM_API_KEY` | OpenAI API key (required) | - |
| `LLM_BASE_URL` | Custom API endpoint | `https://api.openai.com/v1` |
| `LLM_MODEL` | Model to use | `gpt-4` |
| `LLM_TIMEOUT_SECONDS` | Request timeout in seconds | `30` |
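`LLM_TIMEOUT_SECONDS` bounds the HTTP request inside the library. If a particular call needs a tighter deadline, you can also wrap it in `tokio::time::timeout` yourself; this sketch assumes nothing beyond the public `detect_hallucination` function:

```rust
use std::time::Duration;
use llm_affector::{detect_hallucination, Verdict};

// Abort an analysis that takes longer than 10 seconds, independently
// of the library's own LLM_TIMEOUT_SECONDS setting.
async fn check_with_deadline(text: &str) -> Option<Verdict> {
    match tokio::time::timeout(Duration::from_secs(10), detect_hallucination(text)).await {
        Ok(Ok(verdict)) => Some(verdict), // finished in time, API succeeded
        Ok(Err(e)) => { eprintln!("analysis failed: {e}"); None }
        Err(_) => { eprintln!("analysis timed out"); None }
    }
}
```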
Create a `.env` file in your project root:

```
LLM_API_KEY=sk-your-openai-api-key-here
LLM_MODEL=gpt-4
LLM_TIMEOUT_SECONDS=60
```

To load it automatically, add the dotenv crate to your `Cargo.toml`:

```toml
[dependencies]
dotenv = "0.15"
```
## Usage Patterns

### Sequential Execution

```rust
use llm_affector::{detect_hallucination, critique_code};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Run analyses one after another
    let hallucination_result = detect_hallucination("Text to check").await?;
    let critique_result = critique_code("fn example() {}").await?;

    // Process results...
    Ok(())
}
```

### Concurrent Execution

```rust
use llm_affector::{detect_hallucination, critique_code};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Run analyses concurrently for better performance
    let (hallucination_result, critique_result) = tokio::join!(
        detect_hallucination("Text to check"),
        critique_code("fn example() {}")
    );

    let verdict = hallucination_result?;
    let report = critique_result?;

    // Process results...
    Ok(())
}
```
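`tokio::join!` always waits for both futures and hands you two `Result`s. If you would rather stop at the first failure, `tokio::try_join!` short-circuits; this assumes both functions share the library's error type, which the common `LlmAffectorError` suggests:

```rust
use llm_affector::{detect_hallucination, critique_code};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // try_join! polls both futures concurrently but returns early
    // with the first Err instead of waiting for the slower call.
    let (verdict, report) = tokio::try_join!(
        detect_hallucination("Text to check"),
        critique_code("fn example() {}")
    )?;

    // Process verdict and report...
    Ok(())
}
```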
### Batch Processing

```rust
use llm_affector::{detect_hallucination, Verdict};
use futures::future::join_all;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let texts = vec![
        "Text 1 to analyze",
        "Text 2 to analyze",
        "Text 3 to analyze",
    ];

    // Analyze all texts concurrently
    let futures = texts.iter().map(|text| detect_hallucination(text));
    let results = join_all(futures).await;

    for (i, result) in results.into_iter().enumerate() {
        match result? {
            Verdict::Pass => println!("Text {}: ✅ Clean", i + 1),
            Verdict::Fail(issues) => println!("Text {}: ❌ {} issues", i + 1, issues.len()),
        }
    }

    Ok(())
}
```
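`join_all` fires every request at once, which can trip provider rate limits on large batches. One common way to cap concurrency is `StreamExt::buffer_unordered` from the futures crate; the limit of 4 here is an arbitrary illustration:

```rust
use futures::stream::{self, StreamExt};
use llm_affector::detect_hallucination;

#[tokio::main]
async fn main() {
    let texts = vec!["Text 1", "Text 2", "Text 3"];

    // Keep at most 4 analyses in flight; results arrive in
    // completion order, not input order.
    let results: Vec<_> = stream::iter(texts)
        .map(|text| detect_hallucination(text))
        .buffer_unordered(4)
        .collect()
        .await;

    for result in results {
        match result {
            Ok(_verdict) => println!("analysis finished"),
            Err(e) => eprintln!("analysis failed: {e}"),
        }
    }
}
```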
## Project Structure

```
llm_affector/
├── src/
│   ├── lib.rs            # Public API exports
│   ├── client.rs         # HTTP client for LLM APIs
│   ├── types.rs          # Data structures (Verdict, Issue, etc.)
│   ├── errors.rs         # Error types and handling
│   ├── hallucination.rs  # Hallucination detection logic
│   ├── critique.rs       # Code critique logic
│   └── main.rs           # Example binary
├── examples/             # Usage examples
├── tests/                # Integration tests
└── docs/                 # Additional documentation
```
## Testing

Run the test suite:

```bash
cargo test
```
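Integration tests that call a real API need a key available; a common pattern is to skip quietly when it is missing. A hypothetical test along those lines (the exact assertion depends on the model's nondeterministic answer):

```rust
use llm_affector::{detect_hallucination, Verdict};

#[tokio::test]
async fn flags_an_obvious_fabrication() {
    // Skip rather than fail when no key is configured, e.g. in CI.
    if std::env::var("LLM_API_KEY").is_err() {
        eprintln!("LLM_API_KEY not set; skipping");
        return;
    }

    let verdict = detect_hallucination("The moon is made entirely of cheese.")
        .await
        .expect("API call failed");

    assert!(matches!(verdict, Verdict::Fail(_)));
}
```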
Run the examples:

```bash
# Basic usage
cargo run --example basic_usage
# Concurrent analysis
cargo run --example concurrent_analysis
# Error handling
cargo run --example error_handling
```

## Performance

The library is designed for high performance:
- Async/Await: Non-blocking I/O operations
- Concurrent Execution: Run multiple analyses simultaneously
- HTTP Connection Pooling: Reuse connections via `reqwest` (sketched below)
- Efficient JSON Parsing: Streaming JSON with `serde`
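Pooling here is simply the default behavior of a long-lived `reqwest::Client`. As an illustration (not the crate's actual code), an internal client honoring `LLM_TIMEOUT_SECONDS` might be built like this:

```rust
use std::time::Duration;

// A single long-lived Client reuses its internal connection pool,
// avoiding a fresh TCP/TLS handshake per request.
fn build_client() -> reqwest::Result<reqwest::Client> {
    let timeout_secs: u64 = std::env::var("LLM_TIMEOUT_SECONDS")
        .ok()
        .and_then(|s| s.parse().ok())
        .unwrap_or(30); // default from the configuration table

    reqwest::Client::builder()
        .timeout(Duration::from_secs(timeout_secs))
        .build()
}
```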
### Benchmarks

| Operation | Time (avg) | Concurrent Speedup |
|---|---|---|
| Single hallucination check | ~2.1s | - |
| Single code critique | ~1.8s | - |
| Both analyses (sequential) | ~3.9s | - |
| Both analyses (concurrent) | ~2.3s | 1.7x faster |
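These figures are illustrative; latency depends on the model, prompt size, and network. You can measure your own setup with `std::time::Instant`:

```rust
use std::time::Instant;
use llm_affector::{critique_code, detect_hallucination};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let start = Instant::now();
    let (verdict, report) = tokio::join!(
        detect_hallucination("Text to check"),
        critique_code("fn example() {}")
    );
    let (_verdict, _report) = (verdict?, report?);
    println!("concurrent pair took {:?}", start.elapsed());
    Ok(())
}
```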
## Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
### Adding a New LLM Provider

The library is designed to be extensible. To add a new provider:

1. Implement the client interface in `src/client.rs` (a hypothetical sketch follows this list)
2. Add provider-specific types in `src/types.rs`
3. Update the configuration system
4. Add tests and examples
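What that client interface looks like is up to the crate; as a hypothetical shape only (illustrative names, not the real API), a provider abstraction might resemble:

```rust
use std::future::Future;

// Hypothetical provider trait; the crate's actual interface in
// src/client.rs may differ in names and error type.
pub trait LlmProvider: Send + Sync {
    /// Send a raw prompt to the backing model and return its text reply.
    fn complete(
        &self,
        prompt: &str,
    ) -> impl Future<Output = Result<String, Box<dyn std::error::Error + Send + Sync>>> + Send;
}
```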
### Development Setup

```bash
git clone https://github.com/Mattbusel/llm_affector
cd llm_affector
cp .env.example .env
# Edit .env with your API key
cargo build
cargo test
```

## Acknowledgments

- Built with Tokio for the async runtime
- HTTP client powered by reqwest
- JSON handling via serde
- Error handling with thiserror
## Changelog

### 0.1.0

- Initial release
- Hallucination detection functionality
- Code critique functionality
- OpenAI API integration
- Async/await support with Tokio
- Comprehensive error handling
- Example usage and documentation
Made with ❤️ in Rust 🦀