YouTube’s (And Other Tech Giants’) Eventual Downfall Due to Their Poor Practices

The troubles began back in November of 2017. The New York Times reported that YouTube content labeled as safe for children was, in fact, anything but: characters from popular children’s shows committing suicide or dancing in a strip club, a child being abused in a video claiming to be about colors, and pirated children’s shows and movies were just a small portion of the content you would have found, even on the presumably safe YouTube Kids app.

Two days after this, writer James Bridle wrote a scathing piece highlighting the same problems but digging much deeper:

“Someone, something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level.”

Bridle explains in disturbing detail how deep and vast the issues with YouTube Kids were: bots not only creating content but also inflating view counts, videos showing widely popular characters like Peppa Pig being tortured or drinking bleach, and “verified” accounts that mean absolutely nothing.

That same year, Harvard University and the Federal University of Minas Gerais published a study showing that children were not only watching inappropriate content but were also being tricked into watching hours of advertising under the guise of educational videos.

YouTube responded to the reports the same month, saying they would enforce their policies more quickly and strictly, remove inappropriate ads, and block inappropriate comments on videos featuring minors, among other things. They banned accounts, deleted comments, and worked on taking the abusive material off the site.

This seemed like a step in the right direction; perhaps things would change for the better on YouTube.

Except these same problems exist even today.

On February 17th of this year, YouTube made major headlines yet again when YouTuber MattWhatItIs posted a video exposing the vile world of soft pedophilia operating on the site. For my own sake and the reader’s, I will not go into detail about what the video shows. It’s extremely uncomfortable and disgusting.

Consumers, of course, were horrified. Within days of the video being posted, major companies like Disney and McDonald’s froze their ad spending on the site. YouTube quickly responded as well, but with the same rhetoric they’ve used before: they have banned users, they’ll be stricter with their content, they’re deleting accounts that post inappropriate comments on videos featuring minors, and so on.

This soft pedophilia problem is nothing new, not by a long shot. It has been going on for years, people knew about it, and nothing YouTube did stopped it.

Going back to 2017, the BBC and The Times of London published reports exposing the exact same problem Matt’s video showed, just two days after YouTube responded to the New York Times article about disturbing children’s content. YouTube never officially responded to those reports; their next blog post instead addressed extremism and bullying with similar fixes: removing videos, hiring more people to review content, and so on.

The BBC’s reporting back then showed that the issue was not simply that these inappropriate comments existed; it was that user reports were barely being reviewed. Of the thousands of comments and videos reported, very few were reviewed and taken down.

One volunteer said that he made more than 9,000 reports in December 2016, and that none had been processed by the company.

Perhaps YouTube tried to address the issue behind closed doors and failed. More likely, not enough people (read: advertisers) noticed these reports for YouTube to care.

This problem, however, affects more than just children. It also affects adults.

In April of last year, YouTube faced yet another problem: conspiracy theories and fake news. Guillaume Chaslot, a software engineer who used to work for Google, along with the Media Manipulation Initiative at Data & Society, showed how easy it was for a new viewer to have conspiracy and far-right videos pop up in the recommended section, regardless of whether they started on a left-leaning or right-leaning channel. Ben Popken, a writer for NBC News, was recommended a pro-Putin video after watching a single video about the planet Saturn.

All of these issues trace back to the same source: the algorithm.
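To make that point concrete, here is a deliberately tiny Python sketch of the kind of engagement-driven ranking logic critics like Chaslot describe. Everything in it is invented for illustration: the video list, the predicted_watch_minutes scores, and the recommend function are assumptions, not YouTube’s actual code or data.

```python
# Toy illustration only: a recommender whose sole objective is predicted
# watch time keeps surfacing whatever holds attention longest, with no
# notion of whether that content is appropriate. All data here is made up.

videos = [
    {"title": "Planet Saturn documentary", "predicted_watch_minutes": 4.0},
    {"title": "Nursery rhyme compilation", "predicted_watch_minutes": 3.0},
    {"title": "Outrage-bait conspiracy video", "predicted_watch_minutes": 9.5},
]

def recommend(candidates, k=2):
    """Rank candidates purely by one engagement signal and return the top k."""
    ranked = sorted(candidates,
                    key=lambda v: v["predicted_watch_minutes"],
                    reverse=True)
    return ranked[:k]

for video in recommend(videos):
    print(video["title"])
# The conspiracy video comes out on top simply because it is predicted to
# hold attention longest; nothing in the objective asks whether it should
# be recommended at all.
```

That single-minded objective is the whole problem in miniature: the system is very good at what it is asked to do, and what it is asked to do has nothing to do with keeping viewers safe.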

YouTube is a huge sea with billions of fish in it, and so far the tech giant’s responses have hurt content creators the most rather than addressing a question we have yet to discuss: how is any company supposed to build AIs and algorithms that can police billions of people and a never-ending flood of issues? A rough calculation below shows why throwing human reviewers at the problem doesn’t work either.
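The upload rate in this sketch is an assumption based on the roughly 400 hours per minute figure widely cited in press coverage around this period, and the review speed is a guess, so treat the output as an order-of-magnitude illustration, not a real staffing estimate.

```python
# Back-of-envelope sketch; every figure below is an assumption, not an
# official YouTube statistic.

HOURS_UPLOADED_PER_MINUTE = 400    # assumed upload rate, per widely cited press figures
REVIEW_HOURS_PER_VIDEO_HOUR = 1    # assume one reviewer-hour to vet one hour of video
SHIFT_HOURS_PER_DAY = 8            # one full-time reviewer shift

video_hours_per_day = HOURS_UPLOADED_PER_MINUTE * 60 * 24
reviewers_needed = (video_hours_per_day * REVIEW_HOURS_PER_VIDEO_HOUR
                    / SHIFT_HOURS_PER_DAY)

print(f"{video_hours_per_day:,} hours of new video per day")
print(f"~{reviewers_needed:,.0f} full-time reviewers just to watch it all once")
```

Under those assumptions, that is on the order of half a million hours of new video per day and tens of thousands of reviewers just to watch everything once, before a single judgment call is made.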

YouTube boasts fancy new tech as a means to fix problem after problem, but these are minor band-aids on a dam waiting to burst. It might not burst now, or even for several years, but it will happen, and it will leave YouTube about as useful as MySpace is now: a shell of what it used to be. YouTube needs to find a way to fix these problems now before it dies in 10 to 15 years.

This isn’t to say it will be easy. How can a pedophile ring be stopped online without overstepping the boundaries of an individual’s privacy? How can trolls and conspiracy theorists be thwarted without stifling the voice of the people? How can content creators be protected from large corporations killing their ability to create, while also disallowing pirated, stolen, or copyright-infringing content?

The largest issue is that, at this time, there is no algorithm or AI capable of safely handling the volume of content, views, and users that YouTube (and other tech giants) have. Not a single tech company today can keep a good balance when billions of connected people are using its websites, and YouTube’s troubles are exposing this weakness the most.
