
In new book, Wharton prof shows how Facebook, Amazon, and Netflix algorithms shape our decisions

Kartik Hosanagar, a technology and digital business professor at Wharton, proposes an Algorithmic Bill of Rights to protect consumers.

Professor Kartik Hosanagar speaks with students at the Wharton Business School on Thursday. ANTHONY PEZZOTTI / Staff Photographer

For centuries, philosophers have debated whether humans have free will. That debate takes on new meaning in the age of algorithms, as artificially intelligent machines make more and more of our decisions.

Consider how algorithms on Facebook, Google, and Amazon shape our choices about what we read, see, or buy. While we may have the final say, the reality is that nearly all possible options have already been excluded by personalization algorithms, notes Kartik Hosanagar, a technology and digital business professor at the Wharton School of the University of Pennsylvania.

“In this brave new world, many of our choices are in fact predestined, and all the seemingly small effects that algorithms have on our decisions add up to a transformative impact on our lives,” Hosanagar writes in his new book, A Human’s Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control.

Inspired by our Founding Fathers, the Philadelphia resident proposes an “Algorithmic Bill of Rights” to help protect consumers. The Inquirer spoke with Hosanagar about algorithms and how they affect us. The interview was edited for space and clarity.

What choices are algorithms making for us?

"Algorithms at Amazon are recommending products we could buy. ‘People who bought this also bought this” or “people who viewed this eventually bought that.’ And at Amazon they drive over a third of the product choices there. At Netflix, studies suggest that more than 80 percent of the viewing hours are driven by algorithmic recommendations.

"Now you go to something slightly more consequential: whom we date and marry. At dating apps like Tinder, almost all of the matches originate from algorithmic recommendations. Let’s say we have a mortgage application. An algorithm decides whether it gets approved or not and what interest rate we are charged. You apply for a job; an algorithm decides which resumés to short list and invite for interviews.

“Even life and death decisions: In courtrooms in the U.S., judges are asked to consult algorithms that produce a risk score for a defendant, such as the likelihood [a person] will reoffend. We are moving toward personalized medicine, where the treatment for one individual differs from that for another individual with similar diseases or symptoms, based on their DNA profiles and so on. That’s again algorithmically driven.”
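
To make “people who bought this also bought this” concrete, here is a minimal sketch of item-to-item co-purchase counting in Python. It is an illustration only: the orders are made up, and Amazon’s actual production system is proprietary and far more sophisticated.

```python
from collections import Counter
from itertools import permutations

# Hypothetical order history: each basket is the set of items one
# customer bought together.
orders = [
    {"book", "lamp"},
    {"book", "lamp", "desk"},
    {"book", "desk"},
    {"lamp", "mug"},
]

# For every ordered pair (a, b) in a basket, count how often b was
# bought alongside a.
co_purchases = {}
for basket in orders:
    for a, b in permutations(basket, 2):
        co_purchases.setdefault(a, Counter())[b] += 1

def also_bought(item, k=2):
    """Return the k items most often bought with `item`."""
    return [other for other, _ in co_purchases.get(item, Counter()).most_common(k)]

print(also_bought("book"))  # e.g. ['lamp', 'desk']
```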

What are some of the biases these algorithms have?

"For the most part, these algorithms provide a lot of value to their users. But there are also many instances where these algorithms go wrong, and we don’t usually suspect that, because people tend to think of algorithms as being rational, objective decision-makers.

"Reuters reported last year that Amazon’s resumé-screening algorithm had a gender bias, but Amazon detected it. ProPublica did a study of one of these algorithms used in Florida courtrooms, and they found that the algorithm had a race bias. Look at chat bots: Microsoft had this famous 'Microsoft Tay’ that was sexist, racist, and fascist.

“Everyone is familiar with Facebook’s news feed algorithm, which was curating trending news stories. Previously, human editors used to do that, but human editors were accused of having a political bias, so they shifted to an algorithm. While it didn’t have a political bias, unfortunately it [couldn’t] detect false news stories.”
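
How is a bias like the one ProPublica reported actually measured? One common check, sketched below with fabricated data, compares an algorithm’s selection rate across groups. The group names and outcomes here are hypothetical, and real audits are far more involved.

```python
from collections import defaultdict

# Fabricated screening outcomes: (group, was_shortlisted) pairs
# produced by some hypothetical hiring algorithm.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, shortlisted = defaultdict(int), defaultdict(int)
for group, ok in outcomes:
    totals[group] += 1
    shortlisted[group] += ok  # True counts as 1

rates = {g: shortlisted[g] / totals[g] for g in totals}
for g in sorted(rates):
    print(f"{g}: selection rate {rates[g]:.2f}")

# The "four-fifths rule" of thumb flags ratios below 0.8.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 here
```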

How does nature vs. nurture explain algorithmic behavior?

"Most of us have come across the nature vs. nurture analogy for human behavior. ‘Nature’ meaning our genetic code and what we inherit from our parents, and ‘nurture’ being our environment. If you look at problems like, say, alcoholism, it has a basis in nature and a basis in nurture.

"Algorithms are similar. Their behavior are driven in part by their code. It’s not the genetic code, but it is the code the engineer gives it, and that’s the nature. And it’s also driven in part by the data from which they learn, and that’s their environment. That’s the nurture for algorithms.

“It used to be the case that algorithms were more nature than nurture because engineers determined every step an algorithm would take, and it was all in the code. But increasingly, we are moving toward machine learning, which involves learning from data, so it’s less nature and a lot more nurture now.”
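
A minimal sketch of that contrast, assuming a toy spam-filtering task: the first classifier’s rule is fixed in code (nature), while the second derives its rule from whatever labeled examples it is shown (nurture). Both classifiers and the data are invented for illustration.

```python
# Nature: the engineer writes the rule directly into the code.
def rule_based_is_spam(msg):
    return "free money" in msg.lower()

# Nurture: the rule is learned from labeled examples, so different
# training data yields different behavior from the same code.
training = [
    ("win FREE MONEY now", True),
    ("lunch at noon?", False),
    ("free money inside!!!", True),
    ("quarterly report attached", False),
]

# Learn: keep words that appear in spam examples but never in ham.
spam_words = set()
for msg, is_spam in training:
    if is_spam:
        spam_words |= set(msg.lower().split())
for msg, is_spam in training:
    if not is_spam:
        spam_words -= set(msg.lower().split())

def learned_is_spam(msg):
    return bool(spam_words & set(msg.lower().split()))

print(rule_based_is_spam("claim your free money"))  # True, by hand-written rule
print(learned_is_spam("claim your free money"))     # True, learned from data
```

Retrain the same code on different examples and `learned_is_spam` behaves differently; that is the sense in which its behavior is nurture rather than nature.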

How do we get people to trust algorithms?

"We may be willing to trust YouTube’s algorithms on which video to watch, but maybe we are less willing to trust an algorithm to invest our savings.

"There are several factors that play a role here. One is that a study shows people are willing to trust algorithms, but when they see it fail, they are not as forgiving about algorithm failures as they are about human failures. One implication of that is we have to raise the bar for algorithms before we deploy them. We have to say users have higher expectations from algorithms than humans, so it has to perform much better.

"The other thing is, when people have some control over the algorithm they’re more likely to trust it. Several studies have shown that. Amazon’s recommendations and Netflix’s recommendations often fail.But we have a lot of control there, because in the end we can reject the suggestion, so we don’t have a problem with that. If it’s an autonomous algorithm and it makes one mistake, because we don’t have control, we are willing to walk away from it immediately.

“Another aspect is interpretability. We want to be able to understand a system a little bit before we trust it and use it. If these systems are black boxes that are presented to us with no information, then we tend to doubt them a little bit more.”

What rights should consumers have in the age of algorithms?

"Algorithms and the technology companies deploying these algorithms have a lot of clout and impact on consumers, and we need some consumer protection that helps address the concerns that have been coming up lately. I’ve proposed an Algorithmic Bill of Rights.

"The first pillar is related to transparency with regard to the data that firms are using to make decisions for us and about us. So for example, you apply for a job and it’s rejected. First of all, inform the user that an algorithm made the decision, and secondly, what kinds of data are being used by that algorithm. Sometimes it might be that a system is using information beyond what we provided it, like it might be looking at our social media posts.

"Also, transparency with regard to the actual decision-making, that is, give explanations regarding what drove this decision. You apply for a loan, and it’s rejected by an algorithm. Inform us that an algorithm made the decision and explain, ‘This algorithm tends to look heavily at these sets of things.’

"Another pillar is that we should keep a human in the loop. We shouldn’t go to a world where users have no way of affecting how algorithms make decisions for them, or even giving feedback to an algorithm. For example, Facebook’s news feed algorithm two years back had no feature where users could report to Facebook, ‘This post in my news feed is false news.’ Today, with just two clicks you can alert Facebook’s algorithm.

“The last pillar is not a right so much as a responsibility. A lot of people tend to use algorithms and technology very passively, without recognizing how they are changing our decisions or decisions for others. As users, we have to have a little bit more insight into these algorithms, and firms should provide us that insight as well.”