IF SOCIAL MEDIA COMPANIES WON’T STOP EXTREMISM, CONGRESS CAN (AND SHOULD)!

On Wednesday, January 17, the Senate Commerce Committee will hold a hearing: “Terrorism and Social Media: #IsBigTechDoingEnough?”

Simple answer: #BigTechIsNOTDoingEnough!

The sad fact is that the American public simply cannot trust social media companies to protect them against the very online extremism that facilitates terrorism in the U.S. As Governor Andrew Cuomo asserted following the awful truck-ramming attack on a Manhattan bike path: “The internet is the training ground” for violent attackers. What can one say about all the empty pledges and promises from Twitter, Facebook, YouTube, and YouTube’s parent, Google? Social media continues to facilitate the commission of terrorism.

Following a string of deadly terrorist attacks in Europe and the U.S. in 2017, social media companies “voluntarily” agreed (largely against their will, mind you) to step up, more aggressively flag extremist content, and kick terrorist recruiters off their platforms. They even pledged to protect corporate America, whose ads were appearing alongside extremist content. Yet despite YouTube losing over $700 million in ad revenue last January, a new wave of advertisers suspended ad buys at the end of 2017 because their ads were once again appearing alongside extremist content, including neo-Nazi websites.

Meanwhile, the defenseless American public is caught between the half-hearted, half-empty pledges of social media companies to improve their efforts and only marginal progress in interdicting terrorist content. YouTube has hired more “flaggers” to identify extremist content, and Facebook now uses artificial intelligence (AI), additional third-party flaggers, and reports from its user community to detect terrorist content. But the sad fact is that these efforts don’t amount to a hill of beans when the bad stuff is still online to train and enable terrorists.

After a massive lobbying campaign, a well-respected NGO finally convinced a recalcitrant YouTube to end its stubborn refusal and remove the worst sermons of the radical Islamist cleric Anwar al-Awlaki. That is a small victory against a company that simply refuses to proactively adopt the best technical fixes currently available to expedite the identification and removal of extremist content.

Al-Awlaki’s sermons were found on the devices of the perpetrators of the last two successful terrorist attacks in the U.S.: 1) Sayfullo Saipov, responsible for the truck-ramming attack that left eight dead (according to USA Today, 11/3/17, the FBI found 90 videos and 3,800 photos of ISIS-related propaganda on one of the two cellphones recovered from the truck he had rented in New Jersey); and 2) Akayed Ullah, whose pipe-bomb attack in a New York subway tunnel injured five, and who learned how to build the bomb through ordinary, unencrypted video searches on YouTube.

And it’s not just the claptrap of ISIS propaganda and radical Islamist videos that remains online. The suicide bomber who killed 22 people at the end of the Ariana Grande concert in Manchester, England, in May used “Inspire” magazine and YouTube videos to learn how to build his explosive device and plan his getaway.

Those very tactical videos are still on YouTube today. Imagine: after all the terrorism in Europe and the U.S., social media companies refuse to proactively remove those videos unless folks like me flag them.

Where is the outcry?!

How can anyone in the management of Google (YouTube’s parent) justify the presence of tactical videos instructing terrorists how to make bombs from everyday hardware-store supplies?

On Wednesday, the Senate Commerce Committee will hear from social media companies about how much more they are doing to combat extremism. Senators will hear lofty assurances that the companies are doing much better than before by:

· Removing “more” extremist content

· Using artificial intelligence to help track down content

· Relying on a beefed-up staff and outside consultants to ID extremist content

· Providing more support to law enforcement

· Using targeted advertising to divert bad actors away from ISIS videos

· Using new software to take down extremist accounts

· Citing Twitter’s takedown of 300,000 extremist accounts in the first half of last year (which is also evidence of how thoroughly Twitter’s platform has been hijacked by extremists).

That, in a nutshell, is what social media considers adequate progress against online terrorism, a threat which, by all accounts from law enforcement officials, is becoming more dangerous and alarming as ISIS shifts from its physically destroyed caliphate to a new, virtual one. That, my friends, is over-the-counter medicine, not a 21st-century technology fix.

This, I propose, is where Congress’s measurement should begin. If I were asking the questions at Wednesday’s hearing:

1. Why have social media companies, notably YouTube, refused to remove tactical videos explaining how to construct bombs, how to rent vehicles to commit terrorism, and how to use encrypted apps to circumvent law enforcement?

2. Why are corporate ads still appearing alongside extremist content despite YouTube/Google pledges a year ago to fix their algorithms to prevent that from happening?

3. Why are there no “diversionary videos” directing viewers away from the ISIS videos uploaded daily, particularly on YouTube?

4. Why are third parties better able than the companies themselves to detect extremist content that Facebook’s own algorithms and monitoring systems fail to identify?

5. Why do social media companies refuse to embrace readily available, off-the-shelf third-party technology that would greatly expedite the identification and interdiction of genuine extremist content?

6. Why shouldn’t Congress amend the 1996 Communications Decency Act, which provides social media companies blanket immunity from content liability, to require them to exercise “best efforts” to remove extremist content, and provide legislative guidance on what the American people minimally expect in order to be protected?

7. What is Facebook doing to stop existing but dormant accounts from being hacked and then used to promote ISIS content?

8. Shouldn’t all social media companies adopt a clear, concise definition of “terrorism” and related violence by which to judge their respective transparency (or lack thereof) in revealing what has and has not been removed?

9. ISIS and Al Qaeda are increasingly gravitating to Google+ and Google Drive to move their social media campaigns away from platforms under closer corporate surveillance. What is Google doing to counteract these efforts?

10. Shouldn’t there be new, industry-wide standards to ensure the timely and permanent removal of dangerous content, especially when it is produced by groups and individuals on the State Department’s Foreign Terrorist Organization list?

11. How do social media companies determine when the sermons of radical Islamists rise to the level of incitement? How do they even know who these extremists are, such as Turki al-Binali, Abdullah Faisal, Yusuf al-Qaradawi, and Ahmad Musa Jibril?

12. Why shouldn’t Congress compel social media companies to provide it a regular, six-month report explaining what each of them is doing to expedite the identification and removal of extremist content?

Ironically, Americans are inadvertently condoning all this by failing to hold social media companies accountable through their elected representatives. They can, and should, demand more from Congress to rein in a stubborn Silicon Valley that is too defensive and too easily offended by anyone who questions social media’s commitment and motives. There is simply too much evidence linking social media platforms to online radicalization and terrorist training not to compel them to bear some of the responsibility when terrorists attack.

Silicon Valley has a duty and responsibility to make the American people’s safety from extremist content paramount. Whether Twitter, Google, YouTube, or Facebook is prepared to fulfill that duty to its customers and the public at large remains in doubt, because the evidence points otherwise. That is why it is time for Congress to regulate them.

These social media companies should be no different from the utilities or roads that are regulated to protect the public interest and public safety. The Senate Commerce Committee recently passed legislation targeting online sex trafficking under the very same law (the CDA) that governs social media companies on the internet. Surely the threat of more ISIS-inspired terror in our major cities and transportation links is as dangerous, if not more so, than the threat of online sex trafficking!
