Introduction to Fairness Indicators

Fairness Indicators is a suite of tools built on top of TensorFlow Model Analysis (TFMA) that enable regular evaluation of fairness metrics in product pipelines. TFMA is a library for evaluating both TensorFlow and non-TensorFlow machine learning models. It allows you to evaluate your models on large amounts of data in a distributed manner, compute in-graph and other metrics over different slices of data, and visualize them in notebooks.
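The TFMA API itself is too large to show here, but the core idea of sliced evaluation can be illustrated in plain Python. This is a minimal sketch, not TFMA code; the example rows and the language-based slice keys are made up for illustration:

```python
from collections import defaultdict

# Toy evaluation data: (slice_key, true_label, predicted_label).
# Slice keys and values are illustrative, not a real dataset.
examples = [
    ("en", 1, 1), ("en", 0, 0), ("en", 1, 0),
    ("fr", 1, 1), ("fr", 0, 1), ("fr", 0, 0),
]

def accuracy_by_slice(rows):
    """Group examples by slice key and compute accuracy per slice."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for key, y_true, y_pred in rows:
        totals[key] += 1
        hits[key] += int(y_true == y_pred)
    return {key: hits[key] / totals[key] for key in totals}

print(accuracy_by_slice(examples))
```

TFMA performs this kind of per-slice aggregation at scale (e.g. on Apache Beam) and renders the results interactively in a notebook, but the metric-per-slice structure of the output is the same.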

Fairness Indicators: Scalable Infrastructure for Fair ML Systems

While industry and academia continue to explore the benefits of using machine learning (ML) to build better products and tackle important problems, algorithms and the datasets on which they are trained can also reflect or reinforce unfair biases. For example, a moderation system that consistently flags non-toxic text comments from certain groups as "spam" or "high toxicity" effectively excludes those groups from the conversation.

Fairness Indicators

Fairness Indicators is a library that enables easy computation of commonly-identified fairness metrics for binary and multiclass classifiers. With the Fairness Indicators tool suite, you can:

Compute commonly-identified fairness metrics for classification models

Compare model performance across subgroups to a baseline, or to other models

Use confidence intervals to surface statistically significant disparities

Perform evaluation over multiple thresholds
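One of the most common such metrics is the false positive rate (FPR) compared across subgroups: a large FPR gap is exactly the "non-toxic comments flagged as toxic" failure described above. The sketch below is a plain-Python illustration of the concept, not the Fairness Indicators API; the group labels and data are hypothetical:

```python
def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN), i.e. the share of true negatives
    that the classifier incorrectly flags as positive."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return fp / negatives if negatives else 0.0

def fpr_by_group(groups, y_true, y_pred):
    """Compute the false positive rate separately for each subgroup."""
    result = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        result[g] = false_positive_rate(
            [y_true[i] for i in idx], [y_pred[i] for i in idx]
        )
    return result

# Hypothetical binary-classifier output for two subgroups "a" and "b".
groups = ["a", "a", "a", "b", "b", "b"]
y_true = [0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1]
print(fpr_by_group(groups, y_true, y_pred))
```

Here group "a" has an FPR of 0.5 while group "b" has 0.0, the kind of disparity the tool surfaces; Fairness Indicators additionally reports confidence intervals and lets you sweep the decision threshold, which this sketch omits.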

Fairness Indicators for Systematic Assessments of Visual Feature Extractors

Concepts on AI fairness

An overview of some available Fairness Frameworks & Packages

AI Fairness 360

Fairness Indicators Demo: Scalable Infrastructure for Fair ML Systems

Google AI introduces Fairness Indicators for ML systems

products/ict/ai/fainess_indicators.txt · Last modified: 2022/04/26 14:19 by 127.0.0.1