https://www.cnn.com/2018/05/02/us/hate-groups-using-internet/index.html

1. Social networking

Mainstream social networking outlets such as Twitter and Facebook have struggled with how to handle hate groups on their platforms. These sites often find themselves trying to balance the right to share and debate ideas with the responsibility to protect society against potential attacks.

After Charlottesville, Facebook CEO Mark Zuckerberg said Facebook was already taking down "any post that promotes or celebrates hate crimes or acts of terrorism." In addition to removing posts connected to specific groups and events, Facebook said it was paying closer attention to its content in the wake of the Virginia attacks. Facebook said it has its own internal guidelines about what constitutes a hate group. Simply being white supremacists or identifying as "alt-right" doesn't necessarily qualify. A person or group must threaten violence, declare it has a violent mission or actually take part in acts of violence.

Twitter also reacted following Charlottesville. Last October, in an internal email, Twitter CEO Jack Dorsey detailed more aggressive policies, including treating hateful imagery and hate symbols on Twitter as "sensitive media." Like adult content and graphic violence, the content will be blurred and users will need to manually opt in to view. But Twitter didn't detail what it considers to be a hate symbol.

#MeToo?

2. Video platforms

Countless messages of hate are posted worldwide on various video platforms. According to a statement posted on YouTube last June, YouTube and its owner Google promised to do more to identify and remove hateful content: "... (W)e will be taking a tougher stance on videos that do not clearly violate our policies — for example, videos that contain inflammatory religious or supremacist content. In future these will appear behind an interstitial warning and they will not be monetized, recommended or eligible for comments or user endorsements."

How is this not social networking? Isn't the whole online thing social networking?

And I still see feminist shit on there; how come that hate speech isn't removed? I notice videos discussing cultural and societal issues get removed, but only if men are the ones posting them.

3. Online funding

Without the convenience of using the internet to raise funds, many hate groups would be crippled. The Southern Poverty Law Center, a nonprofit that monitors hate groups in the US, said organizers, speakers and individuals attending last year's Charlottesville rally used PayPal to move money ahead of the event.

Like the retards at the SPLC? Where do they get the funding or authority to determine what "hate speech" is or what a "hate group" is?

Let's see: people used PayPal to move money. Would a briefcase filled with hundreds have been a better choice?

Dipshits.

PayPal said in a blog post last August it works to make sure its services aren't used to accept payments or donations that promote hate, violence or racial intolerance. That includes groups that encourage racist views, such as the KKK and white-supremacist organizations.

So non-SJWs? Gotcha. White Marxists only.

4. Websites/Webhosting

Websites are a basic piece of the hate propaganda machine. But Charlottesville may have made it more difficult for white supremacist and neo-Nazi websites to remain online.

Yeah, no one can communicate on any medium. Sure. I wonder how those groups used to do it? Carrier pigeon?

Also, how come you don't remove anti-fa? That's a violent terrorist group. Don't see them being removed.

5. The dark web

If you've never heard of the dark web, you should be made aware. The dark web is a part of the internet that can't be searched by Google or most common search engines. It can only be viewed with a special Tor browser. After being banned by GoDaddy, Google Domains and a Russian hosting outfit, The Daily Stormer was forced onto the dark web, where it couldn't be accessed through standard web browsers. Later the site was able to find a legitimate host and return to the internet.

Everyone has heard of the dark web, and you don't need a special "Tor Browser" to access it. Idiots wouldn't use a series of cutouts, but you don't have to use the Tor Browser specifically. Is this another of those "it's illegal to look at WikiLeaks" lies, CNN? Or just your complete stupidity when it comes to using the inter-tubes?
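For what it's worth, the technical point here is that reaching a .onion site doesn't require the Tor Browser specifically; any Tor client that exposes a SOCKS proxy will do. Here's a minimal Python sketch, assuming a local Tor daemon listening on its default SOCKS port 9050 (the `is_onion` helper name and the proxy settings are illustrative, not from any article):

```python
import re

# V3 onion addresses are 56 base32 characters followed by ".onion".
ONION_RE = re.compile(r"^[a-z2-7]{56}\.onion$")

def is_onion(host: str) -> bool:
    """Return True if host looks like a v3 Tor hidden-service address."""
    return bool(ONION_RE.match(host.lower()))

# Proxy settings that route HTTP traffic through a local Tor client's
# SOCKS proxy (default port 9050). The "socks5h" scheme tells the proxy
# to resolve hostnames itself, which .onion lookups require — a plain
# "socks5" scheme would leak the DNS query and fail on .onion names.
TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

if __name__ == "__main__":
    print(is_onion("a" * 56 + ".onion"))  # True: well-formed v3 address
    print(is_onion("example.com"))        # False: ordinary clearnet host
```

A dict like `TOR_PROXIES` can be passed to an HTTP library that supports SOCKS proxies (e.g. `requests` with the `requests[socks]` extra installed); the point is that Tor is a network proxy, not a magic browser.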

Why haven't tech companies done more to combat hate groups online? One reason: tech platforms are protected under Section 230 of the Communications Decency Act, a US legal provision that gives online companies a broad layer of immunity from being held liable for content posted by users.

Because "hate" requires a specific definition, and just because you define "hate" as being "not ctrl-left with social justice warrior bullshit" does not automatically make something "hate". Again, don't see those BLM/Anti-Fa sites taken down, and they're all about hate and committing actual illegal acts against the public. Those hate groups get a pass? Terrified of them I suppose CNN? Interfere with your "hate" perhaps?

There are exceptions, but the law is meant to preserve freedom of expression. Companies are supposed to act in good faith to protect users.

Something the press is so fucking retarded about. How horrible that freedom of expression, even expression you disagree with, must be protected. But I guess you tyrannical totalitarian tools will make it all better when you overthrow democracy.

Which, of course, brings us to MGTOW. With the recent incel drama going on, I'm sure we'll be "looked at" as a hate group in no time, because of random women's precious fee-fees about not being worshipped 25/8.