Wednesday, April 19, 2017 by Ethan Huff
http://www.suppressed.news/2017-04-19-google-invents-snowflake-technology-making-computers-feel-offended-and-intolerant-just-like-college-students.html
Be careful what you search for online, because you might just end up “triggering” your search engine. In an effort to combat what it deems inappropriate internet content, tech giant Google has reportedly tweaked its search platform to get “offended,” like a human would, whenever someone searches for something that Google has flagged as vulgar, such as videos of the terrorist group ISIS beheading or shooting its victims.
Similar to how a college student majoring in women’s studies might respond to a fellow classmate who suggests that men are the more rational of the two sexes, Google now has the ability to get upset and throw a fit whenever someone searches for something that the system doesn’t like, effectively hiding it from view.
Following the recent terror attack in Westminster, public outcry over the ease with which anyone can search Google for, say, a handbook on how to blow up a building has led to efforts to block such information from ever appearing in Google search results. It’s censorship disguised as “terror prevention,” and something the world is likely to see more of as corporations like Google acquiesce to it.
According to reports, Google is hard at work refining this tracking tool to better identify “inappropriate” content and effectively pull it from the web. Consequently, users who search for anything on the Google blacklist, even for educational or other purposes, will now have their search results filtered and sanitized without their consent.
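Mechanically, this kind of filtering amounts to dropping any result that matches a blocked-term list before the user ever sees it. Here is a minimal sketch of the idea; the blacklist and results below are invented stand-ins, since Google’s actual pipeline and lists are not public:

```python
# Hypothetical illustration of blacklist-based result filtering.
# The terms and search results here are made up for demonstration;
# they are not Google's actual blacklist or data.
BLACKLIST = {"beheading", "bomb-making"}

def sanitize(results):
    # Drop any result whose text mentions a blacklisted term,
    # regardless of the searcher's intent (educational or otherwise).
    return [r for r in results
            if not any(term in r.lower() for term in BLACKLIST)]

print(sanitize(["History of bomb-making manuals", "Cookie recipes"]))
# -> ['Cookie recipes']
```

Note that a filter like this cannot distinguish a researcher from a bad actor; it matches terms, not intent, which is exactly the complaint about sanitizing results “without their consent.”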
“Currently teams of humans are checking the systems to see if they are doing a good job,” reports the Daily Mail Online about this first phase of the program. “Google now wants the computers which monitor content being uploaded through YouTube and other channels to understand the nuances of what makes a video offensive.”
As part of a dual effort that also includes trying to stamp out so-called “fake news,” Google hopes to eventually convert its censorship system, which currently relies on human oversight to identify “offensive” content, into one in which computers alone make that call. In other words, AI-equipped computer systems will eventually do all the censoring on their own, with no human input necessary.
The way Google hopes to construct such a system is to keep feeding it a stream of human-vetted examples of both “safe” and “unsafe” content, from which it can gradually learn to tell the two apart. YouTube videos, for instance, will continue to be broken down frame by frame and analyzed by human reviewers whose job is to teach the system using the resulting data patterns.
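In machine-learning terms, what is being described is ordinary supervised classification: humans label examples by hand, a model is trained on those labels, and new content is then scored automatically. A minimal sketch of that idea, assuming scikit-learn; the examples, labels, and model choice are illustrative stand-ins, not Google’s actual training data or architecture:

```python
# A minimal sketch of supervised "safe vs. unsafe" classification.
# Everything here (examples, labels, model) is a hypothetical
# stand-in for the human-vetted stream described in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human-vetted examples: 1 = flagged "unsafe", 0 = "safe".
texts = [
    "tutorial on building explosives",     # unsafe
    "graphic execution footage",           # unsafe
    "recipe for chocolate chip cookies",   # safe
    "highlights from last night's game",   # safe
]
labels = [1, 1, 0, 0]

# TF-IDF features plus a linear classifier: the system "learns the
# differences" between the two classes from the labeled stream.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# New, unlabeled content is then scored with no human in the loop.
print(model.predict_proba(["homemade bomb instructions"])[0][1])
```

Once trained, the human reviewers move from labeling every item to spot-checking the model’s calls, which matches the first-phase setup the Daily Mail describes below.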
As for the program’s current success, Google claims that its system is already flagging five times more content than it did previously, though the company has yet to release official figures to back this up.
“Computers have a much harder time understanding context, and that’s why we’re actually using all of our latest and greatest learning abilities now to get a better feel for this,” says Philipp Schindler, Google’s chief business officer, about the endeavor.
Taking the program a step further, Google also hopes to eventually develop a system in which potential ISIS recruits and other suspected terrorists are shown customized content in their Google advertising banners designed to deter them from joining jihadist groups. Experts claim there is apparently very high demand for ISIS material online, which, they argue, needs to be better addressed for national security purposes.
Tagged Under: Censorship, Google, robots