Leatherneck Blogger

Google’s new “hate speech” algorithm is anti-Semitic and pro-jihad


By Robert Spencer
Jihad Watch
July 30, 2017

As I revealed here several days ago, Google has bowed to Muslim pressure and changed its search results to conceal criticism of Islam and jihad. Muslim leaders (whose motives Google apparently never questions or investigates) have made sure that sites such as Jihad Watch are buried in search results, with numerous sites dissimulating about the nature and magnitude of the jihad threat appearing above them.

And now this, which comes as no surprise given that those who are manipulating Google are Muslims, and anti-Semitism is deeply embedded in the Qur’an.

Find out the full extent of what is happening in my book The Complete Infidel’s Guide to Free Speech (and Its Enemies).

“Google’s New Hate Speech Algorithm Has a Problem With Jews,” by Liel Leibovitz, The Tablet, July 26, 2017 (thanks to the Geller Report):

Don’t you just hate how vile some people are on the Internet? How easy it’s become to say horrible and hurtful things about other groups and individuals? How this tool that was supposed to spread knowledge, amity, and good cheer is being used to promulgate hate? No need to worry anymore: Google’s on it.

Earlier this year, Silicon Valley’s overlords introduced Perspective API, the latter term being nerd-speak for Application Programming Interface, a set of tools for building software. The idea behind it is simple: because it’s impossible for an online publisher to manually monitor all the comments left on its website, Perspective will use advanced machine learning to help moderators track down comments that are likely to be “toxic.” Here’s how the company describes it: “The API uses machine learning models to score the perceived impact a comment might have on a conversation.”
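For the technically curious, a call to Perspective looks roughly like the minimal sketch below, which follows the API’s published v1alpha1 REST endpoint. The API key is a placeholder (a real one comes from the Google Cloud console), and error handling is kept to a bare minimum; treat it as an illustration, not a production client.

```python
# A minimal sketch of a Perspective API request (v1alpha1 REST interface).
# Assumes a valid API key and the third-party "requests" HTTP library.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text: str) -> float:
    """Return Perspective's 0-1 perceived-toxicity score for text."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity("Have a nice day."))
```

The summary score is a probability that a reader would perceive the comment as toxic, which is how the percentages quoted below should be read.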

That’s a strange sentiment. How do you measure the perceived impact of a conversation? And how can you tell if a conversation is good or bad? The answers, in Perspective’s case, are simple: machine learning works by giving computers access to vast databases and letting them figure out the likely patterns. If a machine read all the cookbooks published in the English language in the last 100 years, say, it would be able to tell us interesting things about how we cook, like the peculiar fact that when we serve rice we’re very likely to serve beans as well. What can machines tell us about the way we converse and about what we may find offensive? That, of course, depends on which databases you let the machines learn from. In Google’s case, the machines learned from the comments sections of The New York Times, the Economist, and the Guardian.

What did the machines learn? Only one way to find out. I asked Perspective to rate the following sentiment: “Jews control the banks and the media.” This old chestnut, Perspective reported, had a 10 percent chance of being perceived as toxic.

Maybe Perspective was just relaxed about sweeping generalizations that have been used to stain entire ethnic and religious groups, I thought. Maybe the nuance of harmful stereotypes was lost on Google’s algorithms. I tried again, this time with another group of people, typing “Many terrorists are radical Islamists.” The comment, Perspective informed me, was 92 percent likely to be seen as toxic.

What about straightforward statements of facts? I reached for the news, which, sadly, has been very grim lately, and wrote: “Three Israelis were murdered last night by a knife-wielding Palestinian terrorist who yelled ‘Allah hu Akbar.’” This, too, was 92 percent likely to be seen as toxic.
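These three probes are easy to re-run. Continuing the hedged sketch above (the `toxicity` helper and its assumptions carry over; the scores come back as probabilities, so “92 percent” corresponds to a value of 0.92):

```python
# Re-running the article's three probe sentences with the toxicity()
# helper sketched earlier (this snippet reuses that definition).
probes = [
    "Jews control the banks and the media.",
    "Many terrorists are radical Islamists.",
    "Three Israelis were murdered last night by a knife-wielding "
    "Palestinian terrorist who yelled 'Allah hu Akbar.'",
]
for text in probes:
    print(f"{toxicity(text):.0%}  {text}")
```

Perspective’s models have been retrained since 2017, so scores returned today may differ from the figures Leibovitz reported.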

You, too, can go online and have your fun, but the results shouldn’t surprise you. The machines learn from what they read, and when what they read are the Guardian and the Times, they’re going to inherit the inherent biases of these publications as well. Like most people who read the Paper of Record, the machine, too, has come to believe that statements about Jews being slaughtered are controversial, that addressing radical Islamism is verboten, and that casual anti-Semitism is utterly forgivable. The very term itself, toxicity, should’ve been enough of a giveaway: the only groups that talk about toxicity—see under: toxic masculinity—are those on the regressive left who creepily apply the metaphors of physical harm to censor speech, not celebrate or promote it. No words are toxic, but the idea that we now have an algorithm replicating, amplifying, and automatizing the bigotry of the anti-Jewish left may very well be….


