
When Algorithms Decide Whose Voices Will Be Heard

By Theodora (Theo) Lau and Uday Akkaraju

What was the first thing that you did this morning when you woke up? And what was the last thing that you did before you went to bed last night?

Chances are that many of us — probably most of us — were on our smartphones. Our day-to-day consumption of all things digital is increasingly analyzed and dictated by algorithms: what we see (or don’t see) in our news and social media feeds, the products we buy, the music we listen to. When we type a query into a search engine, the results are determined and ranked based on what is deemed to be “useful” and “relevant.” Serendipity has often been replaced by curated content, with all of us enveloped inside our own personalized bubbles.


Are we giving up our freedom of expression and action in the name of convenience? While we may have the perceived power to express ourselves digitally, our ability to be seen is increasingly governed by algorithms — lines of code and logic — programmed by fallible humans. Unfortunately, what dictates and controls the outcomes of such programs is more often than not a black box.

Consider a recent write-up in Wired, which illustrated how dating app algorithms reinforce bias. Apps such as Tinder, Hinge, and Bumble use “collaborative filtering,” which generates recommendations based on majority opinion. Over time, such algorithms reinforce societal bias by limiting what we can see. A review by researchers at Cornell University identified similar design features in some of the same dating apps — and their algorithms’ potential for introducing more subtle forms of bias. They found that most dating apps employ algorithms that generate matches based on users’ past personal preferences and on the matching history of similar users.

But what if algorithms operating in a black box start to impact more than just dating or hobbies? What if they decide whose voice is prioritized? What if, instead of a public square where free speech flourishes, the internet becomes a guarded space where only a select group of individuals get heard — and our society in turn gets shaped by those voices? To take this even further, what if every citizen were to receive a social score based on a set of values, and the services we receive were then governed by that score — how would we fare? One example of such a system — called the Social Credit System — is expected to become fully operational in China in 2020. While the full implications of China’s program are yet to be understood, imagine a world where access to credit is gauged not just by our credit history but by the friends in our social media circle; where our worthiness is deemed by an algorithm with no transparency or human recourse; and where our eligibility for insurance could be determined by machine learning systems based on our DNA and our perceived digital profiles.

In these cases, whose values will the algorithm be based on? Whose ethics will be embedded in the calculation? What types of historical data will be used? And will we be able to maintain transparency around these and other questions? Without clear answers — and without standardized definitions of what bias is and what fairness means — human and societal bias will unconsciously seep through. This becomes even more worrisome when institutions lack staff whose diversity reflects the demographics they serve. The outcomes of such algorithms can disproportionately impact those who don’t belong.

So how does society prevent this — or scale back on it when it occurs? By paying attention to who owns the data. In a world where data is the oxygen that fuels the AI engine, those who own the most useful data will win. In this world, we must decide who will be the gatekeepers as big technology giants increasingly play a central role in every aspect of our lives, and where the line is drawn between public and private interests. (In the U.S., the gatekeepers are generally the tech companies themselves. In other regions, like Europe, the government is starting to step into that role.)


Further, as AI continues to learn, and as the stakes rise when people’s health and wealth are involved, there are a few checks and balances these gatekeepers should focus on. They must ensure that AI does not use historical data to pre-judge outcomes; implemented incorrectly, AI will only repeat the mistakes of the past. It is imperative that data and computational scientists integrate input from experts in other domains, such as behavioral economics, sociology, cognitive science, and human-centered design, in order to calibrate the intangible dimensions of the human brain and to predict context rather than outcome. Performing validity checks for bias — with the data source and the data owner — at various points in the development process becomes even more crucial as we design AI to anticipate interactions and correct biases.

Organizations also play a role. While they may not want to disclose what is inside their own black box, they must be open and transparent in disclosing what defines fairness and bias — that is, the boundaries of the black box. To this end, organizations should adopt universal guidelines for creating and using AI, such as the ones proposed by the Organization for Economic Co-operation and Development (OECD): “The principles would require that AI respects human rights, democratic values, and the law. It should also be safe, open, and obvious to users, while those who make and use AI should be held responsible for their actions and offer transparency.” And Steve Andriole, professor of Business Technology at the Villanova School of Business, recently asked a thought-provoking question: “What if we engineered AI to be faithful to one simple principle: Human beings, regardless of age, gender, race, origin, religion, location, intelligence, income or wealth, should be treated equally, fairly and consistently?”

What if. Or if only.

We can’t wait to address these issues. For example, it’s very likely that human bankers will soon be augmented with AI. New York’s Department of Financial Services (NYDFS) has released new guidelines that allow life insurance companies to use customers’ social media data to determine their premiums (as long as they don’t discriminate). It likely won’t be long before insurers begin to use other alternative data sources as well, following in the footsteps of many fintech lending startups. So as we continue the journey of technological evolution, we must ensure that we are not sacrificing fairness and transparency in the name of efficiency gains. We must ensure that we address our technologies’ blind spots, that inequality is not exacerbated, and that history does not repeat itself. Collectively, we must hold leaders, technologists, and those with immense power accountable for their actions, so that technology and data can be used for good, and for improving the well-being of all citizens.

We must work towards a future where our fate is not determined by an algorithm working in a black box.


Theodora (Theo) Lau is the founder of Unconventional Ventures and co-host of Rhetoriq, a podcast on stories with purpose and thought-provoking discussions on longevity, technology, and innovation. As a speaker, writer, and advisor, she seeks to spark innovation to improve consumer financial well-being and make banking better. Follow her on Twitter @psb_dc.

Uday Akkaraju is a human-centered designer specializing in cognitive science. His work focuses on making machine intelligence empathetic. He is the CEO at BOND.AI. Find him on Twitter @Uday_Akkaraju.


This article originally appeared in Harvard Business Review. Photo by Brett Jordan on Unsplash.
