AI Ethics: Building Responsible & Ethical AI Systems

“AI systems (will) take decisions that have ethical grounds and consequences.”

Prof. Dr. Virginia Dignum from Umeå University

On March 23, 2016, Microsoft released its AI-based chatbot Tay via Twitter. The bot was designed to learn its responses from interactions with users. But there was a catch. Users started posting offensive tweets at the bot, and Tay began replying in the same language. The bot basically turned into a racist, sexist hate machine. Less than a day after the initial release, Microsoft took Tay offline and issued an official apology for the bot’s controversial tweets.

Tay is an example of the dark side of AI use. One of many, to be honest. The world knows quite a few cases when AI went wrong.

Though it’s not Darth Vader, AI has its dark side

To prevent such negative outcomes, companies must regulate their artificial intelligence programming and set a clear governance framework in advance. In other words, they need to implement responsible AI.

In this post, we explain what responsible AI is, describe its principles, and review the approaches that help implement AI projects with responsibility in mind.

What is responsible AI and why does it matter?

Responsible AI (sometimes referred to as ethical or trustworthy AI) is a set of principles and normative declarations used to document and regulate how artificial intelligence systems should be developed, deployed, and governed to comply with ethics and laws. In other words, organizations attempting to deploy AI models responsibly first build a framework with pre-defined principles, ethics, and rules to govern AI.

Why is responsibility in AI programming important?

As AI advances, more and more companies use various machine learning (ML) models to automate and improve tasks that used to require human intervention.

An ML model is an algorithm (e.g., Decision Trees, Random Forests, or Neural Networks) that has been trained on data to generate predictions and help a computer system, a human, or their tandem make decisions. The decisions can include anything from flagging fraudulent transactions to accepting or declining loan applications to detecting brain tumors on MRIs and supporting doctors’ diagnoses (see the short sketch after the list below).

The models and the data they are trained on are far from perfect. They may introduce both intended and unintended negative outcomes, not only positive ones. Remember Tay? That’s what we’re talking about.

By taking a responsible approach, companies will be able to

  • create AI systems that are efficient and compliant with regulations;
  • ensure that development processes consider all the ethical, legal, and societal implications of AI;
  • track and mitigate bias in AI models;
  • build trust in AI;
  • prevent or minimize negative effects of AI; and
  • get rid of ambiguity about “whose fault it is” if something in AI goes wrong.
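
To make the idea of an ML model supporting a decision more concrete, here is a minimal sketch in Python with scikit-learn. The features, toy numbers, and the lending scenario are invented purely for illustration; they are not taken from any real credit-scoring system.

```python
# A tiny decision-support example: a decision tree is trained on
# historical loan outcomes and then scores a new application.
# All feature names and values below are made up for illustration.
from sklearn.tree import DecisionTreeClassifier

# Toy training data: [annual_income_k, debt_to_income_ratio, years_employed]
X_train = [
    [85, 0.20, 6],
    [42, 0.55, 1],
    [60, 0.35, 3],
    [30, 0.70, 0],
    [95, 0.15, 10],
    [50, 0.60, 2],
]
y_train = [1, 0, 1, 0, 1, 0]  # 1 = loan repaid, 0 = defaulted

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X_train, y_train)

# A new application arrives; the model's output supports the decision.
applicant = [[55, 0.40, 4]]
print("Recommend approval?", bool(model.predict(applicant)[0]))
```

In a responsible setup, such a prediction informs rather than replaces the final decision, and the data the model learned from is checked for the kinds of bias discussed in the next section.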

Taken together, these benefits help organizations pursuing AI power prevent potential reputational and financial damage down the road.

Responsible AI isn’t something that exists only in theory. Three major tech giants — Google, Microsoft, and IBM — have called for artificial intelligence to be regulated and built their own governance frameworks and guidelines. Google CEO Sundar Pichai stressed the importance of developing international regulatory principles, saying, “We need to be clear-eyed about what could go wrong with AI.”

This brings us to the heart and soul of responsible artificial intelligence — its principles.

Key principles of responsible AI

Responsible AI frameworks aim to mitigate or eliminate the risks and dangers that machine learning poses. To achieve this, companies should make their models transparent, fair, secure, and reliable, at the very least.

Responsible AI principles

The picture above represents the core principles, but their number can vary from one organization to another, as can the ways they are interpreted and operationalized. We’ll review the main principles of responsible AI in this section.

Fairness

Principle: AI systems should treat everyone fairly and avoid affecting similarly situated groups of people in different ways. Simply put, they should be unbiased.

Humans are prone to bias in their judgments. Computer systems, in theory, have the potential to be fairer when making decisions. But we shouldn’t forget that ML models learn from real-world data, which is highly likely to contain biases. This can lead to unfair results, as initially equal groups of people may be systematically disadvantaged because of their gender, race, or sexual orientation.

For example, Facebook’s ad-serving algorithm was accused of being discriminatory as it reproduced real-world gender disparities when showing job listings, even among equally qualified candidates.

This case highlights the need to develop systems that are fair and inclusive for all, without favoring or discriminating against anyone, and without causing harm. To enable algorithm fairness, you can:

  • research biases and their causes in data, e.g., an unequal representation of classes in training data, as when a recruitment tool is trained mostly on resumes from men, making women a minority class (see the sketch after this list);
  • identify and document what impact the technology may have and how it may behave;
  • define what fairness means for your model in different use cases (e.g., across a certain set of age groups); and
  • update training and testing data based on feedback from the people who use the model and on how they use it.
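
As a minimal illustration of the first point on this list, the sketch below (Python with pandas; the column names and toy records are hypothetical) checks how well each group is represented in a hiring dataset and compares outcome rates between groups, which serves as a rough proxy for demographic parity.

```python
# Checking representation of a sensitive attribute and comparing
# outcome rates between groups. Column names and data are made up.
import pandas as pd

df = pd.DataFrame({
    "gender": ["male", "male", "female", "male", "female", "male", "male", "female"],
    "hired":  [1, 1, 0, 1, 1, 0, 1, 0],
})

# 1. Representation: is one group a small minority in the training data?
print(df["gender"].value_counts(normalize=True))

# 2. Outcome rates per group: a large gap can signal historical bias
#    that a model trained on this data is likely to reproduce.
rates = df.groupby("gender")["hired"].mean()
print(rates)

# 3. A simple disparity measure (difference in selection rates).
print("Selection-rate gap:", rates.max() - rates.min())
```

Dedicated toolkits such as Fairlearn or AIF360 provide richer fairness metrics and mitigation techniques once basic checks like these flag a problem.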

Questions to be answered to ensure fairness: Is your AI fair? Is there any bias in training data? Are those responsible for developing AI systems fair-minded?

Privacy and security

Principle: AI systems should be able to protect private information and resist attacks just like any other technology.

As mentioned, ML models learn from training data to make predictions on new data inputs. In many cases, especially in the healthcare industry, training data can be quite sensitive. For example, CT scans and MRIs contain details that identify a patient and are known as protected health information. When using patient data for AI purposes, companies must do additional data preprocessing work such as anonymization and de-identification so as not to violate HIPAA rules (the Health Insurance Portability and Accountability Act, which protects the privacy of health records in the US).
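
To illustrate what this kind of preprocessing might look like, here is a minimal sketch that assumes tabular patient data in Python with pandas. The column names and records are entirely made up, and dropping or hashing a few fields is only an illustration of the idea; it is not a substitute for a full HIPAA de-identification procedure such as Safe Harbor or expert determination.

```python
# De-identifying a toy patient table before it is used for model training.
# All column names and records are fictitious.
import hashlib

import pandas as pd

records = pd.DataFrame({
    "patient_name": ["Jane Doe", "John Roe"],
    "ssn":          ["123-45-6789", "987-65-4321"],
    "patient_id":   ["P-001", "P-002"],
    "age":          [54, 61],
    "diagnosis":    ["glioma", "meningioma"],
})

# Drop direct identifiers the model does not need.
deidentified = records.drop(columns=["patient_name", "ssn"])

# Replace the internal ID with a one-way pseudonym so records stay
# linkable during training without exposing the original identifier.
deidentified["patient_id"] = deidentified["patient_id"].apply(
    lambda pid: hashlib.sha256(pid.encode()).hexdigest()[:16]
)

print(deidentified)
```

For production use, the anonymization strategy and any remaining re-identification risk should be reviewed by privacy and compliance specialists.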