According to the Obama administration, social media is a threat to national security.

On Friday, White House Chief of Staff Denis McDonough will lead a summit between top federal officials and the giants of social media in San Jose, California. Attorney General Loretta Lynch, FBI Director James Comey and National Intelligence Director James Clapper are scheduled to participate in the discussion.

The talks between the government and Facebook, Twitter, Google and Microsoft will center on the use of social media to recruit terrorists and, according to CNN, “find potential agents, inspire them to become violent, and coordinate attacks.”

The feds will likely insist that tech companies do more to remove content, and the companies will respond that they do not have the resources. The bottom line will probably be a taxpayer-funded handout to tech companies to combat what the government defines as terrorism.

“For a start, social media companies themselves need to do more. It is not good enough to only pay attention when bad press threatens a company’s public image after something truly horrific is posted online. Instead, companies not only have a public responsibility but a legal obligation to do more,” Rep. Ted Poe, a Republican from Texas, wrote last February.

Poe does not address the monumental task involved. Facebook has more than 1.5 billion users, and over a billion pieces of content are posted to it daily. Twitter, cited as the Islamic State's favorite platform, has over 300 million users who post around 500 million tweets per day, roughly 200 billion per year. In 2013 Google+ reported approximately 300 million monthly active users.

It would be virtually impossible for social media companies to track and monitor this huge amount of traffic and ferret out what the government considers terrorist content.

“The lack of child pornography or stolen copyrighted material on social media platforms—content that is quickly removed if it appears at all—demonstrates what these companies can do,” Poe argues.

In the case of child pornography, the task is comparatively tractable: social media companies have developed technology that scans posted images for known child sexual exploitation material.

“But that is a challenge for several reasons. The child-exploitation scans employ a database of known images, created by the National Center for Missing and Exploited Children. There is no similar database for terror-related images,” notes The Wall Street Journal.
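To make the comparison concrete, the sketch below shows, in rough terms, what matching uploads against a database of known image hashes looks like. It is an illustration only: the hash value and function names are hypothetical, and production systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding, whereas the cryptographic hash used here only catches byte-identical copies.

```python
import hashlib
from pathlib import Path

# Hypothetical set of hashes of known prohibited images. In a real system this
# would be populated from a vetted database such as NCMEC's; the value below is
# a placeholder.
KNOWN_IMAGE_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def image_fingerprint(path: Path) -> str:
    """Return a SHA-256 digest of the raw image bytes.

    Note: real systems use perceptual hashes that tolerate resizing and
    re-encoding; an exact cryptographic hash like this only catches
    byte-identical copies of a known file.
    """
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known_prohibited(path: Path) -> bool:
    """Check an uploaded image against the known-hash list."""
    return image_fingerprint(path) in KNOWN_IMAGE_HASHES
```

The essential point is that this approach only works because a curated list of known material already exists to compare against.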

Moreover, much of the suspected terrorist content is textual, and it will be extremely difficult to develop technology that can distinguish what the government considers terrorism from non-terrorist content.
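A toy example illustrates the difficulty. The naive keyword filter below (the word list and sample posts are invented for illustration) flags a news report and a political opinion just as readily as a piece of recruitment propaganda, because the same vocabulary appears in all three.

```python
# A crude keyword filter, for illustration only. The same words show up in
# propaganda, news reporting and criticism, so text matching alone cannot
# separate "terrorist content" from protected speech.
WATCHLIST = {"jihad", "isis", "attack", "martyr"}

def looks_suspicious(post: str) -> bool:
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & WATCHLIST)

posts = [
    "Join the glorious jihad, brothers, and strike the unbelievers.",  # propaganda
    "ISIS claimed responsibility for the attack, officials said.",     # news report
    "U.S. policy in the Middle East fuels groups like ISIS.",          # political opinion
]

for post in posts:
    print(looks_suspicious(post), "-", post)
# All three are flagged, though only the first resembles recruitment material.
```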

In December Sen. Dianne Feinstein and Sen. Richard Burr introduced a bill requiring social media companies to inform law enforcement when they find “terrorist activity on their platforms.”

Companies responded by saying the legislation would pose an “impossible compliance problem.”

A practical solution would be to turn social media users into informants. Social media companies already do this with varying results. For instance YouTube users flag about 100,000 posts each day that are suspected of being in violation of the company’s terms of service. Facebook has  deployed teams of people around the world to review content flagged as terrorist-related and determine whether the posts are in fact from terrorist groups in violation of Facebook’s terms of service, reports The Washington Post.
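The flag-and-review workflow the Post describes can be pictured as a simple queue: users flag posts, flagged posts pile up, and human reviewers decide which ones actually violate the terms of service. The sketch below is a hypothetical illustration of that pipeline, not Facebook’s or YouTube’s actual system.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: int
    text: str
    flags: int = 0

@dataclass
class ReviewQueue:
    pending: deque = field(default_factory=deque)

    def flag(self, post: Post) -> None:
        """A user reports a post; enqueue it for human review on the first flag."""
        post.flags += 1
        if post.flags == 1:
            self.pending.append(post)

    def review(self, violates_terms) -> list:
        """Drain the queue; return ids of posts a human reviewer removed."""
        removed = []
        while self.pending:
            post = self.pending.popleft()
            if violates_terms(post):
                removed.append(post.post_id)
        return removed

# Usage: flag a post, then have a (here, trivial) reviewer decide its fate.
queue = ReviewQueue()
queue.flag(Post(1, "suspected propaganda"))
print(queue.review(lambda p: "propaganda" in p.text))  # -> [1]
```

Even in this toy form, the bottleneck is obvious: every flagged item still needs a human judgment call at the end.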

But what is terrorist-related content? Is it strictly IS material and content posted by groups designated as terrorist organizations by the government? If users criticize the policies of Israel in regard to the Palestinians, Turkey’s treatment of the Kurds, or China’s persecution of Uyghurs, would that be considered terrorism against those governments? “Besides free expression concerns, regulation of the Internet raises collective action and baseline definition problems,” writes Paulina Wu.

It is not difficult to imagine the problems that will result when tech companies more actively encourage users to report what they consider to be terrorism. If legislation mandating such a system is enacted, the companies will undoubtedly ask the government for money to offset the cost of hiring additional staff and implementing technology designed to wade through flagged content.

“If social media companies must identify and remove terrorist content, they’re going to err on the side of caution. They are unlikely to take a chance on liability, not to mention the public relations fallout, if any content on their sites was even remotely related to a terrorist plot,” writes Faiza Patel for Fortune.

“Given that terrorism is a political crime, talk of U.S. foreign policy, the wars in the Middle East, climate change, race, religion and the like—the types of topics that are the reason for free speech protections—would be particularly suspect. The global marketplace of ideas, which the Internet embodies, would shrink.”

Corporations watching the bottom line in a highly competitive market, and operating under the weight of a government mandate, will err on the side of caution. That caution may mean this post never appears on Facebook or gets linked on Twitter.

It will almost certainly result in conspiracy theories—that is to say material at odds with the establishment’s narrative on a vast array of political issues—being removed and accounts shut down.

