
The Good, Bad, and Sometimes Ugly of Like-minded Groups


We have tackled the issue of “like-mindedness” in this space before, but with the events of the last few weeks, we feel it’s important to bring it up again.

We all want to feel like we belong. We all want to be seen and have our voices heard. The amazing author and scholar Brené Brown says in her book “Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead”: “Connection is why we're here; it is what gives purpose and meaning to our lives. The power that connection holds in our lives was confirmed when the main concern about connection emerged as the fear of disconnection; the fear that something we have done or failed to do, something about who we are or where we come from, has made us unlovable and unworthy of connection.”

But what happens when connected, like-minded people turn into hate groups? It happens on every platform. The list of social networks that keep full-time staff busy policing hateful content is long: Facebook, Instagram, Snapchat, and perhaps the most prolific and virulent corner for hate mongers, Twitter.

And while it’s beyond our pay grade to get into the nitty-gritty of the legality of free speech (suffice it to say that we are, you know, FOR IT), free speech does not mean freedom from consequences. GoDaddy is one example of action taken as a consequence of like-minded hatred: it booted a particularly unsavory alt-right website off its domain registration. Likewise, people identified at recent rallies are being fired from their jobs for their participation.

What is often overlooked is that “the internet” (i.e., social platforms, search engines, etc.) doesn’t “owe” you free speech. The popular platforms are owned by companies that answer to shareholders and have the right to pull the plug on anything they deem unsavory or in violation of their corporate culture. Take Instagram’s “Mr. Nice Guy” initiative: the company put 20 employees in a room to go through thousands of posts and sort them into two buckets, toxic and non-toxic. It then “taught” Instagram’s systems to start doing the sorting automatically.
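For the curious, here’s roughly what that “two buckets” step looks like in code. To be clear, Instagram’s actual system isn’t public, so this is just a minimal sketch in Python, with scikit-learn standing in for whatever they really use and made-up posts standing in for the hand-sorted content:

```python
# A minimal sketch of the "two buckets" approach, assuming a scikit-learn
# style setup. Instagram's real pipeline isn't public; the posts and labels
# below are hypothetical stand-ins for content sorted by human reviewers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Posts hand-sorted by reviewers: 1 = toxic bucket, 0 = non-toxic bucket.
posts = [
    "you people don't belong here",   # toxic
    "great shot, love the colors",    # non-toxic
    "get out of our country",         # toxic
    "congrats on the new job",        # non-toxic
]
labels = [1, 0, 1, 0]

# Turn each post into word-weight features, then fit a simple classifier.
# This is the "teaching" step: the model learns from the hand-sorted buckets.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The trained model can now bucket posts nobody has reviewed yet. This one
# shares vocabulary with the toxic bucket, so it should come back as 1.
print(model.predict(["you don't belong on this app"]))
```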

Using machine learning to tag things as “toxic” and “not toxic” is a nice start, but on its own it’s far too simple. Natural Language Processing (NLP), which is exactly what it sounds like (using computers to process human language), will be a huge factor in how we better understand and filter out the “stuff” we don’t want to see or hear.
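To see why the simple version falls short, here’s a toy illustration (the blocklist and posts are invented): a bare keyword filter has no sense of context, so it flags harmless posts while letting genuinely hateful ones sail through. Context-aware NLP is what closes that gap.

```python
# Why tagging on keywords alone is "far too simple": a blocklist can't
# read context. The blocklist and posts here are invented for illustration.
BLOCKLIST = {"hate", "trash"}

def keyword_flag(post: str) -> bool:
    """Flag a post if it contains any blocklisted word."""
    return any(word in post.lower().split() for word in BLOCKLIST)

print(keyword_flag("i hate when my souffle collapses"))  # True: a false alarm
print(keyword_flag("people like you should disappear"))  # False: a clean miss
# An NLP model that weighs words in context -- like the classifier sketched
# above -- can learn to tell these two cases apart.
```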

All that said, words are just words. They only have as much power to hurt you as you give them. What we see as the future is the filtering of information down to what is pertinent to you. There WILL be dark corners of the internet. Hate will find hate. It is inevitable. And if you like that sort of stuff, we say enjoy!

Whether it’s an underground bunker or a basement with an internet connection, it will happen. But it doesn’t have to have the prominence that we are seeing now. We believe that the more people select what is of interest to them, the more the “noise” currently permeating the internet will diminish, and the more people will find a place where they can discover, learn, share and create. That is what VISVA is all about.
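For a feel of what that interest-first filtering might look like under the hood, here’s a bare-bones sketch. To be clear, this isn’t VISVA’s actual code; the posts, tags and interests are invented for illustration.

```python
# A bare-bones sketch of interest-based filtering: the user picks topics,
# and only posts tagged with those topics make the feed. All names invented.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    tags: set

def personal_feed(posts, interests):
    """Keep only posts matching at least one of the user's chosen interests."""
    return [p for p in posts if p.tags & interests]

feed = [
    Post("Street photography basics", {"photography", "art"}),
    Post("Why the rally turned ugly", {"politics", "news"}),
    Post("Sourdough starter tips", {"baking", "food"}),
]

# A user who picked photography and baking never sees the shouting match.
for post in personal_feed(feed, {"photography", "baking"}):
    print(post.title)
```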

Take this opportunity to be among the first to test VISVA. Get on our Beta list and you could win some stuff: https://getvisva.visva.com/. Share with others and you could increase your chances to win.

#CMO #troymickle #marketing #TroyMickle #media #socialmedia #video #newsmedia
