UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM hallucination detection
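The core idea behind this kind of UQ-based detection can be sketched without the package itself: sample several answers to the same prompt and treat low mutual consistency as a hallucination signal. The embedding model, threshold, and sample_llm hook below are illustrative assumptions, not UQLM's API.

    # Illustrative black-box consistency check (not UQLM's actual API):
    # sample several answers to one prompt, embed them, and treat low
    # mutual semantic similarity as a hallucination signal.
    from itertools import combinations
    from sentence_transformers import SentenceTransformer, util

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

    def consistency_score(responses):
        """Mean pairwise cosine similarity of the sampled responses."""
        emb = embedder.encode(responses, convert_to_tensor=True)
        sims = [util.cos_sim(emb[i], emb[j]).item()
                for i, j in combinations(range(len(responses)), 2)]
        return sum(sims) / len(sims)

    def flag_hallucination(sample_llm, prompt, k=5, threshold=0.7):
        """sample_llm(prompt) -> str is any chat call with temperature > 0 (assumed)."""
        responses = [sample_llm(prompt) for _ in range(k)]
        score = consistency_score(responses)
        return {"consistency": score, "likely_hallucination": score < threshold}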
[NeurIPS 2025] SECA: Semantically Equivalent and Coherent Attacks for Eliciting LLM Hallucinations
RAG hallucination detection via LRP (Layer-wise Relevance Propagation)
CRoPS (TMLR)
A novel hallucination detection method
Semi-supervised pipeline to detect LLM hallucinations. Uses Mistral-7B for zero-shot pseudo-labeling and DeBERTa for efficient classification.
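A rough sketch of how such a pseudo-labeling pipeline can be wired together with Hugging Face transformers. The model names match the description; the prompt wording, label mapping, and training setup are assumptions, not the repository's code.

    # Sketch of the semi-supervised recipe: Mistral-7B produces zero-shot
    # pseudo-labels, then a small DeBERTa classifier is fine-tuned on those
    # labels for cheap inference.
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              Trainer, TrainingArguments, pipeline)
    from datasets import Dataset

    # 1) Zero-shot pseudo-labeling with an instruction-tuned Mistral model.
    labeler = pipeline("text-generation",
                       model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed checkpoint
                       max_new_tokens=5)

    def pseudo_label(question, answer):
        prompt = (f"Question: {question}\nAnswer: {answer}\n"
                  "Is the answer factually supported? Reply yes or no:")
        reply = labeler(prompt)[0]["generated_text"][len(prompt):].lower()
        return 0 if "yes" in reply else 1   # 1 = likely hallucination

    # 2) Fine-tune DeBERTa on the pseudo-labeled pairs.
    ckpt = "microsoft/deberta-v3-base"  # assumed classifier backbone
    tok = AutoTokenizer.from_pretrained(ckpt)
    clf = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=2)

    def build_dataset(pairs):  # pairs: list of (question, answer) tuples
        labels = [pseudo_label(q, a) for q, a in pairs]
        enc = tok([f"{q} [SEP] {a}" for q, a in pairs], truncation=True, padding=True)
        return Dataset.from_dict({**enc, "labels": labels})

    def train(pairs):
        args = TrainingArguments(output_dir="halluc-clf", num_train_epochs=1,
                                 per_device_train_batch_size=8)
        Trainer(model=clf, args=args, train_dataset=build_dataset(pairs)).train()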
Build your own open-source REST API endpoint to detect hallucination in LLM-generated responses.
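A minimal sketch of what such an endpoint could look like with FastAPI. The NLI checkpoint, label names, and entailment-based scoring are illustrative assumptions, not the repository's implementation.

    # Minimal REST endpoint: POST a context and an LLM answer, get back an
    # entailment-based hallucination score from an off-the-shelf NLI model.
    from fastapi import FastAPI
    from pydantic import BaseModel
    from transformers import pipeline

    app = FastAPI()
    # Assumed NLI checkpoint; any premise/hypothesis entailment model works.
    nli = pipeline("text-classification", model="microsoft/deberta-large-mnli")

    class CheckRequest(BaseModel):
        context: str   # retrieved or reference text
        answer: str    # LLM-generated response to check

    @app.post("/detect")
    def detect(req: CheckRequest):
        # Score whether the context entails the answer; low entailment
        # probability is treated as a hallucination signal.
        result = nli({"text": req.context, "text_pair": req.answer}, top_k=None)
        scores = {r["label"].lower(): r["score"] for r in result}
        entail = scores.get("entailment", 0.0)
        return {"hallucination_score": 1.0 - entail, "nli_scores": scores}

    # Run with: uvicorn app:app --reload   (assuming this file is app.py)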
Source code for the paper: Mitigating Hallucination in Security Policy Generation in Large Language Models: A Prompt Ensembling Approach
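The general prompt-ensembling idea (several differently worded prompts for the same task, with agreement across their outputs used to accept or reject a generation) can be sketched as below. The prompt variants, voting rule, and call_llm hook are assumptions, not the paper's method; real systems would compare outputs semantically rather than by exact string match.

    # Generic prompt-ensembling sketch: ask for the same policy with several
    # prompt variants and keep an output only if a majority of variants agree.
    from collections import Counter

    PROMPT_VARIANTS = [
        "Write a security policy rule for: {req}",
        "As a security engineer, state one policy rule that satisfies: {req}",
        "Produce a single, precise security policy statement for: {req}",
    ]

    def ensemble_generate(call_llm, requirement, min_votes=2):
        """call_llm(prompt) -> str is any chat-completion wrapper (assumed)."""
        outputs = [call_llm(p.format(req=requirement)).strip() for p in PROMPT_VARIANTS]
        best, count = Counter(outputs).most_common(1)[0]
        # Accept only if enough prompt variants converge on the same output;
        # otherwise return None so the case is flagged for human review.
        return best if count >= min_votes else None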
This repository contains the codebase for a proof of concept (PoC) of LLM package hallucination and the associated vulnerabilities.
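Package hallucination refers to an LLM suggesting install or import names that do not exist on the package index, which an attacker can then register. A simple defensive check, sketched under the assumption that suggestions target PyPI, is to verify every suggested name against the PyPI JSON API before installing:

    # Check LLM-suggested package names against PyPI before installing them.
    # Names that return 404 do not exist and may be hallucinated; installing
    # them blindly is exactly the attack surface such a PoC explores.
    import requests

    def pypi_exists(name: str) -> bool:
        resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
        return resp.status_code == 200

    def audit_suggestions(packages):
        return {pkg: ("exists" if pypi_exists(pkg)
                      else "NOT FOUND (possible hallucination)")
                for pkg in packages}

    # Example: audit_suggestions(["requests", "definitely-not-a-real-pkg-123"])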