Fake news is the boldest sign of a post-truth society. Post-truth is the state of affairs when “objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.” When we can’t agree on basic facts or even that there are such things as facts, how do we talk to each other?
Fake news is fabricated material, carefully dressed up to look like a credible journalistic report and easily spread online to large audiences willing to believe the invention and pass it along.
Unlike news satire, fake news websites seek to mislead, rather than entertain, readers for financial, political, or other gain.
Some fake news websites use website spoofing, structured to make visitors believe they are visiting trusted sources like Techunzipped. The New York Times defined “fake news” on the Internet as fictitious articles deliberately fabricated to deceive readers, generally with the goal of profiting through clickbait. PolitiFact described fake news as fabricated content designed to fool readers and subsequently spread virally through the Internet to audiences that amplify its dissemination.
Google tweaks the computer code of its search engine every year, but in its fight against the plague of fake news and offensive content its engineers are beginning to collect data from a new source: regular humans.
Google has also tweaked its search algorithms to ensure that “low-quality” content shows up lower in search results, which should minimize its reach.
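Demotion of this kind can be pictured as a penalty applied at ranking time. The sketch below is purely illustrative, not Google's actual algorithm; the field names, the relevance scores, and the 0.5 penalty factor are assumptions invented for the example.

```python
def rank_results(results, quality_penalty=0.5):
    """Sort results by relevance, demoting pages flagged as low-quality.

    A multiplicative penalty pushes low-quality pages down the list
    without removing them from the results outright.
    """
    def score(result):
        base = result["relevance"]
        return base * quality_penalty if result["low_quality"] else base

    return sorted(results, key=score, reverse=True)


results = [
    {"url": "https://example.com/clickbait", "relevance": 0.9, "low_quality": True},
    {"url": "https://example.com/report", "relevance": 0.7, "low_quality": False},
]
ranked = rank_results(results)
# The clickbait page's effective score drops to 0.45, so the
# 0.7-relevance report now outranks it despite a lower raw score.
```

The design point is that demotion, unlike outright removal, keeps the page findable while shrinking its audience, which matches the article's phrase “minimize its reach.”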
Google said Tuesday that it will make it much easier for anyone to give it feedback on its search results, the core of how most people use Google. For Google users, that means that if you see a result featured on Google’s pages that you think is wrong or offensive, you can actually do something about it.
“Today, in a world where tens of thousands of pages are coming online every minute of every day, there are new ways that people try to game the system,” Ben Gomes, vice president of engineering, said.
Users will be able to see and report bad information that shows up in “Featured Snippets,” the little summary boxes that appear at the top or sides of Google searches. Users will also be able to report offensive autocomplete suggestions, the suggested phrases that show up when you begin typing a query into the search engine.
After today, a user who spots an offensive autocomplete result will be able to flag it for Google’s engineers to review.
Users can report suggestions for being hateful, explicit, or violent.
The company blog described “low-quality ‘content farms,’ hidden text and other deceptive practices,” among the tactics. In that environment, Google’s challenge is to guard against abuse of the new feedback buttons. For instance, if Google guaranteed that flagging content would remove a search result, unethical users could wield a “banhammer,” flagging content they dislike to have it blocked from autocomplete, or flagging rivals in order to favour their own content.
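One common defence against that kind of abuse is to treat flags as a noisy signal and require agreement from many distinct users before a result is even queued for human review. The sketch below is a hypothetical illustration of that idea, not Google's real mechanism; the function name, the user IDs, and the threshold of five are assumptions made up for the example.

```python
def should_review(flag_user_ids, min_unique_users=5):
    """Queue a flagged result for human review only once enough
    *distinct* users have reported it.

    Counting unique users rather than raw flag volume blunts a single
    abuser who repeatedly flags content they simply dislike.
    """
    return len(set(flag_user_ids)) >= min_unique_users


# One user spamming ten flags does not trigger review...
print(should_review(["u1"] * 10))  # False
# ...but five independent users reporting the same result do.
print(should_review(["u1", "u2", "u3", "u4", "u5"]))  # True
```

Note that even a passing result only reaches a human reviewer; flags never remove content automatically, which is exactly the guarantee an abuser would need.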
“There is likely to be [helpful] signal in there, even through all the noise [from] abuse,” says Mr. Nayak, a Google research fellow in search. “We don’t expect the problem will completely disappear.”