
Open source implementation

Responsible AI Toolkits for AI Ethics & Privacy

TensorFlow Privacy

TensorFlow Privacy is a Python library that includes implementations of TensorFlow optimizers for training machine learning models with differential privacy.
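The core idea behind differentially private training, clipping each example's gradient and adding calibrated Gaussian noise, can be sketched in plain NumPy. This is a conceptual illustration of one DP-SGD step, not TensorFlow Privacy's actual API; the function name and parameters are ours.

```python
import numpy as np

def dp_sgd_step(per_example_grads, l2_norm_clip=1.0, noise_multiplier=1.1, rng=None):
    """One differentially private gradient step: clip each example's
    gradient to l2_norm_clip, average, then add Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clip bound.
        clipped.append(g * min(1.0, l2_norm_clip / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise scale is proportional to the clip bound, per the Gaussian mechanism.
    noise = rng.normal(0.0, noise_multiplier * l2_norm_clip / len(per_example_grads),
                       size=mean_grad.shape)
    return mean_grad + noise

grads = [np.array([3.0, 4.0]), np.array([0.1, 0.2])]  # toy per-example gradients
update = dp_sgd_step(grads)
```

Because every per-example gradient is clipped before averaging, no single training example can move the model by more than a bounded amount, which is what makes the privacy accounting possible.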

TensorFlow Federated

TFF has been developed to facilitate open research and experimentation with Federated Learning (FL), an approach to machine learning where a shared global model is trained across many participating clients that keep their training data locally.
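The paradigm can be illustrated with a minimal Federated Averaging loop in NumPy: each client computes an update on its private data, and only model weights (never raw data) are sent back and averaged. This is a simplified sketch, not TFF's API.

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """One local gradient step on a client's private data (least-squares loss)."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, client_datasets):
    """One round of Federated Averaging: clients train locally, and only
    the updated weights are averaged, weighted by dataset size."""
    client_weights = [local_update(global_weights.copy(), d) for d in client_datasets]
    sizes = np.array([len(d[1]) for d in client_datasets], dtype=float)
    return np.average(client_weights, axis=0, weights=sizes)

rng = np.random.default_rng(0)
# Four clients, each holding its own private (X, y) shard.
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
w = np.zeros(3)
for _ in range(5):
    w = federated_round(w, clients)
```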


deon

deon is a command-line tool that allows you to easily add an ethics checklist to your data science projects. The goal of deon is to push the ethics conversation forward and provide concrete, actionable reminders to the developers who influence how data science gets done.

AI Transparency & Bias 

Transparent AI allows people to look under the hood of AI models so that a model can be properly explained and communicated. As with explainable AI, providing transparency into the motives, data, and intent behind a model takes the guesswork out of interpreting it.

Model Card Toolkit

MCT streamlines and automates the generation of Model Cards [1], machine learning documents that provide context and transparency into a model’s development and performance.
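To make the concept concrete, here is a hand-rolled sketch of the kind of information a Model Card captures and how it might be rendered for reviewers. The field names and rendering function are illustrative stand-ins, not the Model Card Toolkit's actual schema or API.

```python
# Illustrative model card structure; field names are ours, not MCT's schema.
model_card = {
    "model_details": {
        "name": "toy-classifier",
        "version": "0.1",
        "owners": ["ml-team"],
    },
    "considerations": {
        "intended_use": "Demonstration only",
        "limitations": ["Not evaluated on production data"],
    },
    "quantitative_analysis": {
        "metrics": [{"name": "accuracy", "value": 0.91, "slice": "overall"}],
    },
}

def render_markdown(card):
    """Render the card as a short Markdown document for human review."""
    lines = [f"# Model Card: {card['model_details']['name']}"]
    lines.append(f"Intended use: {card['considerations']['intended_use']}")
    for metric in card["quantitative_analysis"]["metrics"]:
        lines.append(f"- {metric['name']} ({metric['slice']}): {metric['value']}")
    return "\n".join(lines)

doc = render_markdown(model_card)
```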

TensorFlow Model Remediation

TensorFlow Model Remediation is a library that provides solutions for machine learning practitioners working to create and train models in a way that reduces or eliminates user harm resulting from underlying performance biases.
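One remediation idea the library implements, MinDiff, penalizes differences between the model's score distributions for two groups during training. A greatly simplified NumPy sketch of such a penalty (the real MinDiff loss is more sophisticated; this function is our illustration):

```python
import numpy as np

def mindiff_penalty(preds, group, weight=1.0):
    """Simplified MinDiff-style penalty: squared gap between the mean
    prediction for two groups. Added to the training loss, it pushes the
    model toward similar score distributions across groups."""
    preds, group = np.asarray(preds, float), np.asarray(group)
    gap = preds[group == 0].mean() - preds[group == 1].mean()
    return weight * gap ** 2

preds = [0.9, 0.8, 0.3, 0.2]   # model scores
group = [0, 0, 1, 1]           # sensitive group membership
penalty = mindiff_penalty(preds, group)
```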


AI Fairness

A core principle of ethical AI, fairness means protecting individuals and groups from discrimination, bias, or mistreatment. Models need to be evaluated for fairness so that there is no bias toward any group, factor, or variable.
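A common starting point for such an evaluation is demographic parity: comparing the rate at which a model makes positive predictions across groups. A minimal sketch in NumPy (the fairness toolkits below provide richer, production-grade versions of metrics like this):

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Gap between positive-prediction rates across two groups;
    0 means the model selects both groups at the same rate."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

y_pred    = [1, 1, 1, 0, 1, 0, 0, 0]  # binary predictions
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]  # group membership
gap = demographic_parity_difference(y_pred, sensitive)  # 0.75 vs 0.25
```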

AI Fairness 360

The AI Fairness 360 toolkit from IBM is an extensible open-source library containing techniques developed by the research community to help detect and mitigate bias in machine learning models throughout the AI application lifecycle.


Fairlearn

Fairlearn is a Python package that empowers developers of artificial intelligence (AI) systems to assess their system's fairness and mitigate any observed unfairness issues. Fairlearn contains mitigation algorithms as well as metrics for model assessment.

Responsible AI Toolbox

From Microsoft, the Responsible AI Toolbox is a suite of model and data exploration and assessment user interfaces that enable a better understanding of AI systems. It supports assessing, developing, and deploying AI systems in a safe, trustworthy, and ethical manner, and making responsible decisions along the way.

AI Explainability

Explainable AI (XAI) is a set of processes and methods that allows human users to understand and trust the results and output created by machine learning algorithms.


DALEX

The moDel Agnostic Language for Exploration and eXplanation (DALEX) package X-rays any model, helping to explore and explain its behavior and to understand how complex models work.
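A classic model-agnostic explanation of the kind such packages offer is permutation feature importance: shuffle one input column at a time and measure how much the model's error degrades. A self-contained sketch (our own illustration, not DALEX's API):

```python
import numpy as np

def permutation_importance(predict, X, y, rng=None):
    """Model-agnostic feature importance: shuffle one column at a time
    and measure how much the mean squared error increases."""
    rng = rng or np.random.default_rng(0)
    base = np.mean((predict(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])  # destroy the information in column j
        scores.append(np.mean((predict(Xp) - y) ** 2) - base)
    return np.array(scores)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.1 * X[:, 2]                # feature 1 is irrelevant
model = lambda X: 3.0 * X[:, 0] + 0.1 * X[:, 2]  # stand-in for any fitted model
importance = permutation_importance(model, X, y)
```

Because the procedure only calls `predict`, it works identically for linear models, tree ensembles, or neural networks, which is the whole point of model-agnostic explanation.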

TensorFlow Data Validation

TensorFlow Data Validation (TFDV) is a library for exploring and validating machine learning data. It is designed to be highly scalable and to work well with TensorFlow and TensorFlow Extended (TFX).
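TFDV's central workflow is comparing data against a schema and reporting anomalies. A toy validator in that spirit, written from scratch for illustration (the schema format and function here are ours, not TFDV's):

```python
# A toy schema-based validator in the spirit of TFDV: describe what each
# feature should look like, then report rows that violate the schema.
SCHEMA = {
    "age":     {"min": 0, "max": 130},
    "country": {"vocab": {"US", "DE", "CN"}},
}

def validate(records, schema=SCHEMA):
    """Return a list of human-readable anomalies found in the records."""
    anomalies = []
    for i, row in enumerate(records):
        for name, spec in schema.items():
            value = row.get(name)
            if value is None:
                anomalies.append(f"row {i}: missing feature '{name}'")
            elif "vocab" in spec and value not in spec["vocab"]:
                anomalies.append(f"row {i}: '{value}' not in vocabulary for '{name}'")
            elif "min" in spec and not (spec["min"] <= value <= spec["max"]):
                anomalies.append(f"row {i}: '{name}'={value} out of range")
    return anomalies

records = [
    {"age": 34.0, "country": "US"},
    {"age": -3.0, "country": "FR"},  # out-of-range age, unknown country
    {"country": "DE"},               # missing age
]
report = validate(records)
```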


XAI

XAI is a machine learning library that is designed with AI explainability at its core. XAI contains various tools that enable analysis and evaluation of data and models.

Adversarial Machine Learning & Trusted AI

Adversarial machine learning is a technique that attempts to exploit models by taking advantage of obtainable model information and using it to craft malicious inputs. Responsible AI toolkits can help prevent such attacks and recover the affected systems should one occur.
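The canonical example of such an attack is the Fast Gradient Sign Method (FGSM): perturb each input dimension by a small epsilon in the direction that increases the model's loss. A NumPy sketch on a toy logistic model (our own illustration; the toolkits below implement this and many stronger attacks):

```python
import numpy as np

def fgsm_perturb(x, grad_wrt_x, epsilon=0.1):
    """Fast Gradient Sign Method: nudge each input dimension by epsilon
    in the direction that increases the model's loss."""
    return x + epsilon * np.sign(grad_wrt_x)

# Toy target: logistic model p(y=1|x) = sigmoid(w.x); the loss gradient
# w.r.t. x for a point with true label y=+1 is proportional to -w.
w = np.array([2.0, -1.0])
x = np.array([0.5, 0.5])   # classified positive: w.x = 0.5 > 0
grad = -w                  # loss gradient direction for the true label y=+1
x_adv = fgsm_perturb(x, grad, epsilon=0.4)
```

A perturbation of only 0.4 per dimension is enough to flip this toy model's decision, which is exactly the fragility adversarial robustness research studies.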


Fawkes

Fawkes is an algorithm and software tool that gives individuals the ability to limit how unknown third parties can track them by building facial recognition models from their publicly available photos. It works by subtly distorting, or cloaking, personal images so that models trained on them fail to recognize the person.


TextAttack

TextAttack is a Python framework for adversarial attacks, adversarial training, and data augmentation in NLP. It makes experimenting with the robustness of NLP models seamless, fast, and easy.
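A flavor of the data-augmentation side: generate sentence variants by swapping words for near-synonyms and check whether the model's prediction stays stable. A naive sketch in that spirit; the synonym table and function are made-up stand-ins, not TextAttack's API or recipes.

```python
import random

# Made-up synonym table for illustration only.
SYNONYMS = {
    "good": ["great", "fine"],
    "movie": ["film", "picture"],
}

def augment(sentence, n=3, seed=0):
    """Produce n variants of a sentence by randomly swapping words for
    synonyms, a cheap way to stress-test an NLP model's robustness."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n):
        words = sentence.split()
        for i, word in enumerate(words):
            if word in SYNONYMS and rng.random() < 0.5:
                words[i] = rng.choice(SYNONYMS[word])
        variants.append(" ".join(words))
    return variants

variants = augment("a good movie overall")
```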


AdverTorch

AdverTorch is a Python toolbox for adversarial robustness research, with its primary functionality implemented in PyTorch. It contains modules for generating adversarial perturbations and defending against adversarial examples, as well as scripts for adversarial training.


