We Must Design AI Governance Frameworks That Promote Well-Being

Sep 14, 2022

Decades of gathering big data through network-connected technologies, combined with advances in high-performance computing, have enabled the widespread adoption of machine learning and artificial intelligence (AI). With these technologies comes a great opportunity to advance equity and promote well-being. At the same time, we face a great threat to equity if we do nothing to govern AI, or if we allow new laws to perpetuate the status quo, as we have seen with data privacy laws in some states. Governments and organizations can and should develop AI frameworks that protect privacy, advance equity, and serve the public good.

The danger of continuing the status quo

Since their development, AI technologies have been used to make life-altering decisions for and about people, often without consideration of how those decisions affect lives and without options for redress. From higher mortgage interest rates for Black and Latino communities, to discriminatory job and housing advertising on Facebook, to biased recidivism predictions, the absence of meaningful guardrails has exacerbated unfair impacts.

If we fail to make swift and meaningful efforts to govern AI, we are likely to see increased inequity, along with deepening invasions of our privacy. For example, Gartner, a well-known research firm that advises corporate executives, included the Internet of Behaviors (IoB) in its Top Strategic Technology Trends for 2021. The IoB is the result of combining personally identifiable information with browser data and smart-device data, such as heart rate, sleeping patterns, blood pressure, and geotracking data, to predict behavior. The IoB is being touted as a tool that will help organizations achieve the flexibility needed to adapt to an unpredictable economy, though Gartner admits the IoB brings significant “ethical and societal implications.” Given some of the inequitable outcomes we’ve seen from the use of search behavior and Internet profiles, one can imagine how incorporating Internet of Things (IoT) data could have other life-altering consequences, from health insurance rates that rise based on smart watch data to exclusion from job opportunities due to pregnancy.

Setting the gold standard for AI governance

Two important AI frameworks now in development can serve as guides for organizations building their own frameworks to help prevent these consequences: the European Union’s proposed AI Act and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF). The AI Act is focused on building trustworthiness and minimizing harm, but it also prioritizes increased well-being and the protection of fundamental rights, including the right to human dignity, respect for private life, non-discrimination, gender equality, and more.

The EU’s General Data Protection Regulation (GDPR) set the bar for data privacy and individual privacy protections on an international scale—the same will likely be true for the AI Act if it is adopted and enforced. As we saw with GDPR, the AI Act will affect companies around the world, as it will be more cost-effective to develop products according to one set of standards rather than multiple ones.

NIST is currently drafting the AI RMF. Its framework seeks to promote the trustworthiness of AI and focuses on accountability, fairness, and equity as guiding principles. It recognizes that individuals, groups, communities, and organizations are affected by AI, and that mitigating risks and gauging impact requires input from a broad set of stakeholders, including representatives from communities that would be affected.

Developing an AI framework that addresses fairness and bias

Perhaps some of the most important guidance provided in the AI RMF is, “Decision making throughout the AI lifecycle [should be] informed by a demographically and disciplinarily diverse team, including internal and external personnel.” An AI framework should provide for an iterative process that continuously receives input from diverse perspectives. For this to become a reality, organizations may have to reevaluate their values to ensure that diversity, equity, and inclusion are prioritized and established across the organization. In an environment that fosters fairness and inclusion, organizations can follow these steps to begin their AI governance journey.

  1. Define objectives and engage stakeholders.
    It’s impossible to make AI fair without first understanding the objectives of an AI tool and the potential positive and negative impacts of its results or actions. What problem will it solve, or what predictions will it make? Will the model affect people’s lives? If so, whom will it affect and in what ways? Once the questions are outlined, gather the relevant stakeholders to help find the answers. Within the organization, these should always include business, research, design, development, and data science stakeholders. Outside the organization, stakeholders may include members of the public across affected groups; other organizations, such as advocacy or trade groups; and external auditors. Tap members of these audiences to answer the questions until you have clear objectives and impacts outlined before moving on to data collection and analysis.
  2. Examine the data for biases and gaps.
    An organization’s values must guide its approach to collecting, organizing, and analyzing data. For Mathematica, these values include objectivity and a diversity of perspectives. These values can guide questions such as: How were the data collected? What historical biases could be reflected in the data? Are these the right data to answer the questions we need to answer, or are we looking at proxies (for example, using zip codes in place of income to predict buying power)? Do we have sufficient data on all affected groups? What about people who fall into more than one group? There may be a need to create synthetic data to compensate for unfair outcomes from the past that are captured in the data. Other questions concern the data collection process itself: Did the data contributors provide their consent? Have the data been anonymized? (A simple sketch of a representation check appears after this list.)
  3. Make models fair, transparent, and explainable.
    Auditors and users, not just model builders, should be able to understand how models make decisions, and those who are affected by models should be able to understand how they were evaluated. Organizations should develop and maintain documentation of the criteria a model uses to make decisions, the weaknesses or limitations of the tool, and how to interpret the tool’s output. Visualization tools can be helpful for explaining a model’s outputs. Evaluators should also ensure a model’s predictions are equitable across protected classes and social groups, and they should account for intersectionality, the fact that people belong to more than one group. Research on intersectionality shows that even when biases against individual groups are taken into account, algorithms can produce biased results for those who are members of multiple groups. (A sketch of an intersectional fairness check appears after this list.)
  4. Plan for continuous evaluation or auditing.
    Continuous evaluation should be built into the AI lifecycle so that models are continuously measured and kept under human oversight as they evolve in the real world. The first step is to identify measures for accuracy, reliability, explainability, interpretability, bias, and privacy. These can be, and most likely will be, a combination of qualitative and quantitative measures. Subject matter experts should help test and measure model performance to ensure models perform as expected in specific settings. They should also set up processes for monitoring and evaluating model performance over time and watch for concept drift and data drift (a sketch of a simple drift check appears after this list).
  5. Keep an open feedback loop.
    Ensure that affected individuals can appeal to a human being if they think they were evaluated unfairly, felt unsafe or were harmed, or if they have feedback about the way a model works in general. Keep channels open for feedback internally. Employees should feel comfortable challenging a model’s design or development at any time and should understand the process for doing so. Put plans in place for prioritizing feedback and deactivating and decommissioning an AI product, should they ever be needed.
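
To make the data review in step 2 more concrete, here is a minimal sketch of a representation check, assuming a pandas DataFrame and hypothetical demographic columns such as `race` and `gender`; the column names and the 5 percent threshold are illustrative only, not a recommended standard.

```python
import pandas as pd

def representation_report(df: pd.DataFrame, group_cols: list[str],
                          min_share: float = 0.05) -> pd.DataFrame:
    """Summarize how well each group, and each intersection of groups, is represented.

    Flags any subgroup whose share of the data falls below `min_share`, which may
    signal a gap that calls for more data collection or extra caution in modeling.
    """
    counts = (
        df.groupby(group_cols, dropna=False)  # keep missing values visible as their own category
          .size()
          .rename("n")
          .reset_index()
    )
    counts["share"] = counts["n"] / len(df)
    counts["underrepresented"] = counts["share"] < min_share
    return counts.sort_values("share")

# Example usage with hypothetical data:
# report = representation_report(applicants, ["race", "gender"])
# print(report[report["underrepresented"]])
```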
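
For step 3, a minimal sketch of an intersectional fairness check follows, again with hypothetical column names (`y_true`, `y_pred`, and the demographic columns). The two metrics shown, selection rate and accuracy per subgroup, are only examples of the many fairness measures an evaluator might use.

```python
import pandas as pd

def subgroup_metrics(df: pd.DataFrame, group_cols: list[str],
                     y_true: str = "y_true", y_pred: str = "y_pred") -> pd.DataFrame:
    """Compute simple fairness diagnostics for every intersectional subgroup.

    Reports each subgroup's size, positive-prediction (selection) rate, and accuracy,
    so disparities across single groups and their intersections are both visible.
    Assumes binary 0/1 labels and predictions.
    """
    grouped = df.groupby(group_cols, dropna=False)
    metrics = grouped.apply(
        lambda g: pd.Series({
            "n": len(g),
            "selection_rate": g[y_pred].mean(),           # share predicted positive
            "accuracy": (g[y_true] == g[y_pred]).mean(),  # share of correct predictions
        })
    ).reset_index()
    # Disparity relative to the best-off subgroup; a large gap warrants investigation.
    metrics["selection_rate_ratio"] = metrics["selection_rate"] / metrics["selection_rate"].max()
    return metrics

# Example usage with hypothetical test data:
# print(subgroup_metrics(test_set, ["race", "gender"]))
```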
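
For the drift monitoring described in step 4, one common diagnostic is the population stability index (PSI), which compares a feature’s distribution at training time with its recent distribution in production. The sketch below is illustrative; the 0.2 threshold is a widely used rule of thumb rather than a formal standard, and the feature name is hypothetical.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline (training-time) sample and a recent sample of one feature.

    Values near 0 mean the distributions are similar; a common rule of thumb
    treats PSI above roughly 0.2 as drift worth investigating.
    """
    # Bin both samples with edges defined by the baseline distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_counts, _ = np.histogram(expected, bins=edges)
    actual_counts, _ = np.histogram(actual, bins=edges)
    expected_pct = expected_counts / expected_counts.sum()
    actual_pct = actual_counts / actual_counts.sum()
    # Avoid division by zero and log(0) in sparse bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Example usage with hypothetical training-time and production samples:
# psi = population_stability_index(train_df["income"].to_numpy(), recent_df["income"].to_numpy())
# if psi > 0.2:
#     print("Possible data drift in 'income'; trigger human review.")
```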

We can advance equity and well-being with AI

If AI is built with equity and well-being in mind from data collection to deployment, we can build public trust in AI and advance evidence-based decision making. Some examples of how Mathematica is using and evaluating AI to advance well-being include creating agent-based models to predict the spread of disease, predicting and preventing health emergencies, and helping overwhelmed child welfare agencies determine when children are at risk while increasing transparency and reducing bias.

While we likely won’t see a final draft of the complete AI RMF or the AI Act until next year, organizations can and should start working on AI frameworks that center diversity, equity, and well-being. Let’s seize the opportunity to build consideration for the public good into corporate and organizational risk management and decision making.
