If you're an Internet troll, you may feel a little put-upon right now.

Used to be you had a home. You could always find an online discussion to muck up with a disruptive, offensive, or off-point rant. Some sites - like celebrity-gossip and sports venues - even welcomed the snark, the rough-and-tumble, the occasional hater. Under the anonymity granted to most commenters, trolls have hidden in the bushes as long as there's been a Web.

But that is changing.

To raise the level of talk, more and more online newspapers, blogs, and other websites are instituting anti-troll measures.

Christine Taylor, digital editor of the Hartford Courant, recalls the moment in February 2012 when the trollery "got out of hand. We said, 'That's it. We have to do something else.' " Like several sites, the Courant farmed out its comment system to Facebook, which requires registration.

This fall, Inquirer.com, PhillyDailyNews.com and Philly.com will roll out a new comment system to "address ongoing problems with trolls," says Robert Cauthorn, architect of digital strategy for parent company Interstate General Media, which owns all three entities.

In the anti-troll movement, The Inquirer joins some of the most recognizable names in media: the New York Times, NBC News, National Public Radio, ESPN.com, Gawker.

Could we be witnessing a Web turning point? More adult conversation, less middle-school lunchroom?

Anti-trolling is "definitely something you're seeing more and more," says Marie Shanahan, assistant professor of journalism at the University of Connecticut and former deputy online editor at the Courant. "People are deciding that if comment streams aren't working, we have to do something different."

Enter Twitter. Last month, a caustic, harassing flood of tweets struck at two women who fought for a female face to appear on U.K. banknotes. More than 120,000 people signed petitions for Twitter to clamp down on abuse. Celebrity tweeters did one-day boycotts.

On Aug. 3, Twitter announced it had set new worldwide rules to police tweets and make it easier for users to report abuse.

Calling such attacks "not acceptable," general manager Tony Wang said in a statement: "There is more we can and will be doing to protect our users."

On its main website and in its apps, Twitter added an "abuse button" that lets users report an abusive tweet with a single click.

Twitter also added staff to monitor content in certain sensitive areas, and updated its rules to forbid impersonation, targeted abuse, violence and threats, and the revelation of others' personal information.

In the late 1990s, excitement about online comments ran high. "We liked the reader connection," Taylor says. But "we didn't know what we were getting into."

Discussion sites and comment threads proliferated. So did trolls, ruining it for the rest.

"Everyone has a right to their opinion, and with their commitment to the First Amendment, newspapers don't want to stifle that," Shanahan says. "But we struggled with how we were becoming associated with toxic commentary."

Such soul-searching isn't confined to print media online. National Public Radio opened its blogs to comment in the early 2000s and its entire website in 2008. By 2010, "while we liked the conversations, we also had a higher and higher number of comments that just did not meet our goals," says Kate Myers, product manager for social media.

So, do we want quantity (more people talking, more hits on the website) or quality?

"We decided that what we wanted was comment quality," Myers says. "We valued it higher than mere conversation."

So NPR brought in Disqus, a popular online discussion-management platform, "to foster a good, intelligent discussion, on-topic."

Different sites have different strategies. Some have shut down comments. At the New York Times, editors select a limited number of articles daily that may receive comments; two moderators keep an eye on things.

Like the Courant, some have turned to Facebook. To comment, you must register with Facebook and get a profile page and user name. Others can complain if your comments are abusive; you can be banned.

Many sites create self-policing communities. Call it "crowd-sourced moderation." Users vote comments up or down, and repeat offenders can be penalized or blocked. The Huffington Post uses such a system, combined with automatic and human moderation. On the website Slashdot, a system called Karma lets users grade responses, so readers see only comments rated at or above their chosen level.
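The mechanics of that kind of crowd-sourced moderation are simple. Here is a minimal sketch in Python of the general idea, not the actual Slashdot or Huffington Post systems: each vote nudges a comment's score, and each reader picks a threshold below which comments stay hidden. All names here (ModeratedThread, post, vote, visible) are illustrative inventions.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    score: int = 0  # running tally of up- and down-votes

class ModeratedThread:
    """A toy model of crowd-sourced moderation: readers vote,
    and each reader filters by a score threshold of their choosing."""

    def __init__(self):
        self.comments = []

    def post(self, author, text):
        c = Comment(author, text)
        self.comments.append(c)
        return c

    def vote(self, comment, up=True):
        # Each vote moves the score by one in either direction
        comment.score += 1 if up else -1

    def visible(self, threshold=0):
        # A reader sees only comments at or above their threshold;
        # low-scored comments are hidden, not deleted
        return [c for c in self.comments if c.score >= threshold]
```

A reader with a threshold of 0 never sees a comment the crowd has voted down, while a reader who sets a negative threshold can still wade into the muck if they want to. That opt-in quality is the appeal: the site never has to delete anything.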

Venues that have changed their comment systems report satisfaction. Every plus, however, has a minus. Some folks are uncomfortable ceding control (and user information) to Facebook. And while NPR reports that comments are up, many sites see a downturn when new constraints kick in. At ESPN.com, pages that once got hundreds of comments now get merely dozens.

Claire Hardaker studies troll psychology. A member of the linguistics and English language department at Lancaster University in the U.K., she noted in an e-mail that the site TechCrunch saw trolling plummet after using the Facebook option, but it "also lost the 'sparkle' and interesting debate that anonymous users had felt safe to engage in."

Trolls have many motivations, she says. Some are ornery, some sick, some dislike authority.

And they reflect deep currents in our culture.

"This is a litmus test for our society," Cauthorn says. "This is the id speaking. A lot of people see name-calling as entertainment."

Horrors . . . we like snark? Some "may actually enjoy this 'spice' in the comments, and be very disappointed to see it go," Hardaker writes. "Humans are, by their very nature, entertained by conflict and aggression."

So, she's lukewarm on the anti-troll movement. No matter how good your strategy, "it is almost impossible to prevent people from registering with fake accounts and trolling. . . .

"If they're psychologically motivated to do so, then they will find a way."