YouTube's Blood Diamond Terror & Hate Profits

YouTube Acts Only When A Dollar Is At Stake

For over a decade YouTube has engaged in a scorched-earth policy against anyone cajoling it to police its platform for hate and incitement content directly implicated in the radicalization of American ISIS terrorists. Shamefully hiding behind the legal protection of the Communications Decency Act of 1996 (CDA) -- passed in a pre-terrorism era -- YouTube sports “see-no-evil” blinders. The CDA shields internet service providers from liability for the content users post. YouTube has financially thrived in the online equivalent of marketing blood diamonds -- earning ad revenue from radical Islamic content and racist hate speech uploaded to incite and inspire terrorism.

Despite a torrent of appeals by the U.S. government, law enforcement authorities, and private citizens urging it to make reasonable best efforts to cleanse its platform of radical Islamic content directly implicated in virtual terrorism, YouTube’s management has chosen instead to attack anyone challenging its policies and to have its cadre of lawyers hold up the CDA as an alibi for refusing to act in the best interest of the American people. The tragedy is that while YouTube financially benefits from a massive amount of terror content on its platform, it has bobbed and weaved to avoid taking a real hit to its bottom line.

Until now.

YouTube’s digital Maginot Line was finally breached this week when major American corporate advertisers, including AT&T, Johnson & Johnson, and Lyft yanked their ads from YouTube because in its corporate avarice YouTube was knowingly enabling these ads to appear next to offensive content, including radical Islamic incitement sermons, hate speech, and racist material. I commend these advertisers for wising up to YouTube’s shenanigans.

As the New York Times reported today, YouTube’s management has known for some time that its “ad matching” programmatic advertising has resulted in ads appearing alongside uploaded videos of Anwar al-Awlaki -- the Al Qaeda terrorist responsible for inciting most of the lone-wolf attacks in the U.S. -- and alongside videos promoting donations to the terrorist organization Hezbollah.

YouTube’s management claims it simply does not have the technological capacity to capture and decontaminate its platform from such content in real time. That is false, false, false -- fake news from its communications team. On the contrary, YouTube’s management is well aware that software technology exists -- including software known as “eGLYPH” -- which can tag and identify in real time the very hate and terror speech YouTube claims it cannot intercept. That technology was developed by Counter Extremism Project (CEP) advisor and Dartmouth College computer science professor Dr. Hany Farid, the world’s foremost authority on digital forensics and robust hashing -- a technology that can identify the “worst of the worst” extremist images, video, and audio quickly and accurately for removal from the internet and social media platforms. No technology is foolproof or 100 percent effective, but YouTube’s management has used that as an alibi to avoid doing the right thing.
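To make the idea concrete: eGLYPH’s actual algorithm is not public, but robust-hashing systems of this kind work by fingerprinting known extremist content and flagging new uploads whose fingerprints are close enough to a match. The toy Python sketch below illustrates the general principle with a simple “average hash” on an 8x8 grayscale image; the function names, the hash scheme, and the distance threshold are illustrative assumptions, not eGLYPH itself.

```python
# Illustrative sketch only: eGLYPH's real algorithm is proprietary.
# This toy "average hash" demonstrates the general idea of robust
# hashing -- matching near-duplicate uploads against a database of
# fingerprints of previously identified content.

def average_hash(pixels):
    """64-bit perceptual hash of an 8x8 grayscale image.

    Each bit is 1 if the pixel is brighter than the image's mean, so
    small perturbations (re-encoding, slight brightness shifts) tend
    to yield identical or near-identical hashes.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

def matches_known_content(candidate_hash, known_hashes, threshold=5):
    """Flag an upload whose hash is within `threshold` bits of any
    fingerprint in the database of known content."""
    return any(hamming_distance(candidate_hash, h) <= threshold
               for h in known_hashes)

# Example: a known image and a slightly brightened re-upload of it.
known = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
reupload = [[min(255, p + 3) for p in row] for row in known]

db = [average_hash(known)]
print(matches_known_content(average_hash(reupload), db))  # True
```

Because matching is by distance rather than exact equality, a cosmetically altered re-upload still hits the database -- which is why such systems can catch content that simple checksum blocking would miss.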

CEP and Dr. Farid announced eGLYPH in June 2016 and offered it to technology companies free of charge. For those who may attempt to come to YouTube’s defense: it was the Obama White House that encouraged YouTube over three years ago to consider adopting such technology -- even as a pilot project. It mischievously declined the President’s recommendation. Why? Why wouldn’t YouTube do the right thing?

Like we always say, it’s always about money! YouTube’s management is deathly afraid of having U.S. courts breach the CDA’s shield of liability, since a number of cases brought by victims of terrorist attacks are winding their way through U.S. federal courts -- cases with direct evidence linking the radical Islamic inspiration and incitement of terrorists to content on YouTube’s platform.

Even with a loss of ad revenue, YouTube’s response has been feckless and half-hearted. Its pledges to “tighten safeguards” and “review its guidelines” are corporate pablum and doublespeak, because the egregious content is still there.

The latest terrorist outrage in London points again to the threat we face from an ISIS leadership which may be exiled from the decaying ISIS caliphate, but determined as ever to engage in virtual terrorism.

Congress cannot trust YouTube to voluntarily clean up its act. The CDA is outdated and no longer serves the best interests of the American people. It is time for Congress to remove its shield from content liability and compel internet service providers to assume a reasonable duty and responsibility to prevent hate speech and terror talk on their platforms.

Ultimately, this is an urgent matter of protecting American lives. Do we want to find out yet again that another lone-wolf attack was perpetrated by a deranged radical Islamic convert who, with a click of a mouse, was able to radicalize himself courtesy of YouTube’s dark content? How many San Bernardinos and Orlandos are we prepared to endure until YouTube is forced by the President and Congress to take the action it refuses to take voluntarily?

American corporate sponsors are finally hitting YouTube where it hurts. Let the pain begin!
