Is that tweet from a human? N.J. lawmaker wants bots to identify themselves, but experts have concerns.

“This is a thoughtful effort to address a significant problem. But it has legal perils.”

Humanoid communication robot Kirobo talks with Fuminori Kataoka, a Toyota Motor Corp. manager, at a media unveiling in Tokyo. As lawmakers seek to regulate new technologies, such as a proposal in New Jersey to require social media bots to disclose their non-human nature, lawyers and technologists warn those regulations could have unintended consequences. Could a law aimed at today’s bots stop the malicious bots of the future? Or, on the flip side, could it unintentionally affect other technologies in the future? (SHIZUO KAMBAYASHI / AP)

As part of a sprawling Russian influence operation in the 2016 presidential election — and since — armies of “bots” on social media helped spread fake news stories, drown out legitimate conversation, and exploit existing political and social tensions.

Lawmakers and tech companies have been scrambling to catch up.

Now, as a new presidential election season begins and evidence of bots’ malicious influence continues to emerge, legislators have begun proposing regulations.

In New Jersey, Assemblyman Andrew Zwicker (D., Middlesex) this fall introduced a bill to require upfront identification of online communication bots — a term derived from the word robots to describe automated accounts that generate messages, particularly on social media. The bill won approval from an Assembly committee this month, and Zwicker is hopeful it could get a full vote early next year.

“I’m very strongly opposed to using technology to hide your true intentions, to use technology to deceive people in a way that is unfair to the person who doesn’t know what’s going on,” Zwicker said in an interview. “And I believe if that is your intent — to deceive people — you should disclose you are not a human being.”

But legal experts and technologists warned that this proposal, and others like it, might not address the problems it seeks to solve, while also raising troubling questions about free speech: What exactly is a bot, and when is its speech distinguishable from the speech of its creator? What is political speech? Could disclosure lead to a loss of anonymity online? Could disclosure in the United States lead to censorship elsewhere?

“This is a thoughtful effort to address a significant problem. But it has legal perils,” said Toni M. Massaro, a constitutional law professor at the University of Arizona whose work in recent years has explored issues related to free expression and artificial intelligence.

What the bot bill says

Zwicker’s proposal is modeled on a new law in California that will require bot disclosure when it takes effect in July. At the federal level, Sen. Dianne Feinstein (D., Calif.) has introduced a proposal that would similarly require disclosure by bots.

On its face, the New Jersey bill is straightforward and takes up less than two pages: You can’t use a bot posing as a human to try to deceptively influence people’s purchases or votes, and bot accounts must identify themselves as such.
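
The bill does not prescribe any technical mechanism for how that disclosure would work; as a purely illustrative sketch, compliance could be as simple as a bot operator labeling every automated message before posting it. (The label text and the post_message helper below are hypothetical stand-ins, not anything specified by the legislation or any platform.)

```python
# Illustrative sketch only: the New Jersey bill prescribes no technical
# mechanism for disclosure. The label text and post_message() helper are
# hypothetical stand-ins for a real social media platform API.

BOT_DISCLOSURE = "[automated account] "

def disclose(message: str) -> str:
    """Prefix an automated message with a bot-disclosure label."""
    if message.startswith(BOT_DISCLOSURE):
        return message  # already labeled
    return BOT_DISCLOSURE + message

def post_message(message: str) -> None:
    # Stand-in for a real posting API call.
    print(disclose(message))

post_message("Polls open at 6 a.m. on Tuesday.")
# Output: [automated account] Polls open at 6 a.m. on Tuesday.
```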

“The average American is every single day online and doing things,” said Zwicker, who chairs the Assembly science and technology committee and works at Princeton University’s plasma physics laboratory. “As big of a story as this has been in general in 2018, I think it’s going to continue to be a bigger story in 2019 and beyond, and it’s beholden on us to get a handle on the right public policies for just the everyday working person.”

Legal issues raised

Legal and technological experts, while recognizing the desire to combat the malicious use of bots, said Zwicker’s proposal raises complicated free-speech concerns.

One of the most concrete examples is the potential unmasking of anonymous accounts, said Ryan Calo, a law professor at the University of Washington whose work on emerging technologies includes a paper this year examining bot disclosure issues.

A human behind an account accused of being a bot could be forced to reveal his or her identity.

“So while on its face it doesn’t require someone to say who they are, as enforced it has that potential, and it creates a tool to unmask people just by calling them bots,” Calo said.

Even if that doesn’t happen, Calo said, he worries about the chilling effect: What accounts might never get made, what person’s speech might never get heard?

Calo’s coauthor, Madeline Lamo, speaking generally, said bot disclosure also raises questions of whether the government is unconstitutionally compelling speech.

Forcing disclosure also creates a structure that companies or other governments could exploit to censor some accounts, she said. For example, if bot disclosures are required in the United States, another country could use that to identify and completely block bots.

“Any regulation we do will have a ripple effect around the world,” Lamo said. “So if you are requiring that bots that interact with the United States or a certain state here to disclose they are bots, you implement a structure that enables other entities, governments, etc., that don’t value free speech in the same way to use and manipulate that information.”

Would it even work?

There are also practical concerns.

For one, if the concern is a foreign country using bots to interfere with an election, Calo said, then the real problem is the foreign country, not the technology.

And bots that effectively alter discourse largely do so at scale, such as by flooding a hashtag to hijack the conversation or retweeting fringe views to make them seem mainstream.

“Knowing something is a bot doesn’t stop it from swamping and skewing discourse,” Calo said.

In addition, regulations written for bots as they exist now could fail to encompass what bots do in the future. On the other hand, those regulations could unintentionally restrict technologies that have yet to appear.

“We know how today’s technology works,” said Jeremy Gillula, technology projects director at the nonprofit Electronic Frontier Foundation, a civil liberties group focused on the digital world. “It’s a lot harder to predict how regulation will affect technology in the future.” The group is neutral on this bill.

Massaro agreed that unintended consequences are a serious concern. That means lawmakers must be flexible, willing to adapt the law as circumstances evolve.

Lawmakers, she said, “should always err on the side of caution when the downsides of legislation may include serious liberty losses or other harms. They should walk, not run, into the shadows.”

What comes next

Zwicker said he recognizes his proposal is imperfect, but he hopes the conversation it has sparked can lead to reasonable measures that limit bad actors while still protecting freedoms and allowing new technology to flourish.

“I’m going to do what I can, but I don’t know the absolute right answer,” Zwicker said.

This month, the bill passed unanimously out of the Assembly committee Zwicker chairs. It has not been scheduled for a floor vote in the Assembly, but Zwicker said he was “optimistic” it would pass the chamber “in the early part of next year.”

Its Senate counterpart was introduced by State Sen. Linda R. Greenstein (D., Middlesex) and referred to a committee.