We build tools to audit and improve the legal compliance of large language models through explainable AI (XAI) and causal inference. Privacy laws increasingly require causal evidence of compliance, which existing XAI methods cannot provide. Our Compliance Audit Framework addresses this gap, anchored in the principles of data minimization and purpose limitation.
- Compliance Audit Benchmark (CAB): a causal, counterfactual evaluation suite covering finance, healthcare, and employment.
- Empirical evaluation: LLMs assessed against CAB using causal factor analysis.
- Minimal-cause Explainer: actionable, legally meaningful audit guidance.
- Causal Intervention Component: a real-time guardrail that mitigates detected compliance issues during deployment.
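The core counterfactual idea behind the components above can be sketched as follows: query the model on two prompts that differ only in a single sensitive factor, and flag a compliance issue if the decision changes. This is a minimal illustration, not the CAB implementation; the `model` interface, the loan-decision template, and the attribute values are all hypothetical.

```python
def counterfactual_flip(model, template, factual, counterfactual):
    """Return True if swapping one sensitive factor changes the model's
    decision -- causal evidence that the factor influenced the outcome."""
    out_factual = model(template.format(attr=factual))
    out_counter = model(template.format(attr=counterfactual))
    return out_factual != out_counter

# Toy deterministic "model" standing in for an LLM decision endpoint.
def toy_model(prompt):
    return "deny" if "age 67" in prompt else "approve"

template = "Loan applicant, {attr}, income $80k. Decision:"
flagged = counterfactual_flip(toy_model, template, "age 35", "age 67")
```

In a real audit, the single probe would be replaced by many paired prompts per protected factor, with the flip rate estimated over sampled model outputs rather than a single deterministic call.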
📊 compliance-audit-benchmark: data, scripts, and artifacts for CAB. MIT licensed.
- Prof. Sebastian Zimmeck, Wesleyan University, Privacy Tech Lab
- Prof. Baishakhi Ray, Columbia University
- Prof. Pooyan Jamshidi, University of South Carolina
Contact: sebastian@privacytechlab.org