Official code of "StyleT2I: Toward Compositional and High-Fidelity Text-to-Image Synthesis" (CVPR 2022)
Official code of "StyleT2I: Toward Compositional and High-Fidelity Text-to-Image Synthesis" (CVPR 2022)
Official code of "Discover the Unknown Biased Attribute of an Image Classifier" (ICCV 2021)
Introduction to trusted AI. Learn to use fairness algorithms to reduce and mitigate bias in data and models with aif360, and to explain models with aix360.
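As a rough illustration of the kind of workflow covered there, the sketch below computes two standard group-fairness metrics with aif360; the toy DataFrame and its column names are invented for the example.

```python
# Minimal sketch of measuring group fairness with aif360.
# The toy data and column names are illustrative assumptions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "sex":   [1, 1, 0, 0, 1, 0],          # protected attribute (1 = privileged)
    "score": [0.9, 0.4, 0.7, 0.2, 0.8, 0.5],
    "label": [1, 0, 1, 0, 1, 0],          # favorable outcome = 1
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Ratio / difference of favorable-outcome rates between the two groups:
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```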
Demographic bias in misdiagnosis by computational pathology models - Nature Medicine
Notes, references, and materials on AI fairness that I found useful and that helped me in my academic research.
Ethical AI Governance Platform | Bias Detection | Compliance | Fairness Testing for ML, LLM & Multimodal AI | Open Source
This repository contains the dataset and code used in our paper, “MENA Values Benchmark: Evaluating Cultural Alignment and Multilingual Bias in Large Language Models.” It provides tools to evaluate how large language models represent Middle Eastern and North African cultural values across 16 countries, multiple languages, and perspectives.
FairWell is a Responsible AI tool developed using Streamlit
Fairness in data and machine learning algorithms is critical to building safe and responsible AI systems from the ground up by design. Both technical and business AI stakeholders are in constant pursuit of fairness to ensure they meaningfully address problems like AI bias. While accuracy is one metric for evaluating a machine learning model…
Fairness Analysis in US Mortgage Lending with Machine Learning Algorithms
Sample project using IBM's AI Fairness 360, an open-source toolkit for detecting, examining, and mitigating discrimination and bias in machine learning (ML) models throughout the AI application lifecycle.
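To make the "mitigating" part of that lifecycle concrete, here is a hedged sketch of a pre-processing step with aif360's Reweighing, continuing the toy `dataset` built in the earlier sketch (the group definitions are the same assumptions):

```python
# Sketch of pre-processing mitigation with aif360's Reweighing, which
# reweights samples so the protected attribute and the label decorrelate.
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

privileged   = [{"sex": 1}]
unprivileged = [{"sex": 0}]

rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)   # `dataset` from the earlier sketch

# After reweighing, the weighted parity difference should move toward 0.
metric = BinaryLabelDatasetMetric(dataset_transf,
                                  privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
print("Post-reweighing parity difference:", metric.statistical_parity_difference())
```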
Here we address fairness in machine learning classification and exploit a regularization technique to attain it.
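The blurb doesn't say which regularizer the repo uses; as one common instantiation, the PyTorch sketch below adds a demographic-parity penalty to a standard classification loss. The function name, tensor layout, and `lambda_fair` weight are all assumptions for illustration.

```python
# Sketch of fairness-by-regularization: standard loss plus a penalty on the
# gap in mean predicted positive rate between two protected groups.
import torch
import torch.nn.functional as F

def fair_regularized_loss(logits, labels, group, lambda_fair=1.0):
    """BCE loss plus a demographic-parity regularizer.

    logits: (N,) raw scores; labels: (N,) in {0, 1};
    group:  (N,) protected attribute in {0, 1}.
    """
    bce = F.binary_cross_entropy_with_logits(logits, labels.float())
    p = torch.sigmoid(logits)
    gap = p[group == 1].mean() - p[group == 0].mean()  # parity gap
    return bce + lambda_fair * gap.abs()
```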
A project on bias detection in transformer-based LLMs, with a weakly supervised approach.
This project introduces a method to mitigate demographic bias in generative face aging models. We adapt the StyleGAN-based SAM model by adding a race-consistency loss, enforced by the DeepFace classifier, to promote more equitable and identity-preserving age transformations across different racial groups.
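The repo's exact loss isn't reproduced here; as a hedged sketch of the idea, the snippet below penalizes divergence between a race classifier's predictions on the input face and on the aged output. `race_model` is a hypothetical differentiable classifier standing in for DeepFace, whose high-level API is not differentiable and so could not sit directly in a training loop.

```python
# Sketch of a race-consistency loss for a face-aging generator: keep the
# race prediction of the aged output close to that of the input image.
# `race_model` is an assumed differentiable classifier returning logits.
import torch
import torch.nn.functional as F

def race_consistency_loss(race_model, x_input, x_aged):
    with torch.no_grad():                        # target distribution from input
        p_target = F.softmax(race_model(x_input), dim=-1)
    log_p_aged = F.log_softmax(race_model(x_aged), dim=-1)
    # KL(target || aged): penalize drift in predicted race across the age edit
    return F.kl_div(log_p_aged, p_target, reduction="batchmean")
```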
A benchmark of different word embedding techniques with respect to fairness and bias in AI models.
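A common building block in such benchmarks is a WEAT-style association score; the small NumPy sketch below shows one (the function names and attribute-set convention are assumptions, not this repo's code):

```python
# Sketch of a WEAT-style association score: how much a word vector leans
# toward attribute set A versus attribute set B, by mean cosine similarity.
import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """s(w, A, B) for word vector w and lists of attribute vectors A, B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])
```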
Multi-label toxic comment classification using DistilBERT with explainable AI via Captum Integrated Gradients (IG). Trained on the Jigsaw dataset, the model predicts six toxicity categories (toxic, severe toxic, obscene, threat, insult, and identity hate) while highlighting the key words driving each prediction.
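A hedged sketch of that attribution setup with Hugging Face transformers and Captum follows; the checkpoint name and example text are placeholders, and a DistilBERT model fine-tuned on the six Jigsaw labels is assumed.

```python
# Sketch: token attributions for one toxicity label via LayerIntegratedGradients.
import torch
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification
from captum.attr import LayerIntegratedGradients

name = "distilbert-base-uncased"  # placeholder; use the fine-tuned checkpoint
tokenizer = DistilBertTokenizer.from_pretrained(name)
model = DistilBertForSequenceClassification.from_pretrained(
    name, num_labels=6, problem_type="multi_label_classification")
model.eval()

def forward_for_label(input_ids, attention_mask, label_idx=0):
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
    return torch.sigmoid(logits[:, label_idx])   # score for one toxicity class

enc = tokenizer("example comment here", return_tensors="pt")
baseline = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)

lig = LayerIntegratedGradients(forward_for_label, model.distilbert.embeddings)
attributions = lig.attribute(enc["input_ids"],
                             baselines=baseline,
                             additional_forward_args=(enc["attention_mask"],))
# Sum over embedding dims to get one attribution score per token.
token_scores = attributions.sum(dim=-1).squeeze(0)
```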
A model card inspired by Derman & Wilmott's "Modelers' Hippocratic Oath", adapted for responsible and nuanced ML.
Fairness and bias detection library for Elixir AI/ML systems
Deep-learning approach for generating fair and accurate input representations for crime-rate estimation with continuous protected attributes and continuous targets.
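The blurb doesn't detail the method; one common way to learn fair representations when both the protected attribute and the target are continuous is an encoder trained against an adversarial regressor, roughly as in this assumed sketch (all module sizes and the trade-off weight are illustrative):

```python
# Hedged sketch: fair representation learning with continuous protected
# attribute a and continuous target y. An adversary regresses a from the
# representation z; the encoder is penalized when the adversary succeeds.
import torch
import torch.nn as nn

encoder   = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
predictor = nn.Linear(8, 1)   # estimates the target y from z
adversary = nn.Linear(8, 1)   # tries to recover the protected attribute a

mse = nn.MSELoss()

def encoder_loss(x, y, a, lam=1.0):
    z = encoder(x)
    task = mse(predictor(z).squeeze(-1), y)
    leak = mse(adversary(z).squeeze(-1), a)
    # Good representation: low task error, high adversary error. In practice
    # the adversary is trained separately (alternating updates) to minimize leak.
    return task - lam * leak
```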