YouTube has firmly held a place at the centre of online culture for over a decade. Now, it sits at the heart of an ever-growing debate over how to weed offensive videos and content out of the mass of daily uploads.
The reason behind the recent corporate backlash is simple: major advertisers are finding their ads on unsavoury and downright disturbing videos. This has prompted Google, the owner of YouTube, to review the way it identifies this content and how to improve the process.
Major brands have pulled back advertising from YouTube recently.
But what does all of this have to do with IT jobs?
Reviewing the issue at large
The YouTube advertisement issue holds similarities with the problems Facebook has recently encountered with fake news. Both platforms have become such incredibly large repositories of user-submitted information that it's becoming difficult to keep track of it all. YouTube alone receives nearly 600,000 hours' worth of submissions each day, according to Wired.
At the core of the issue, advertisers want reassurance that YouTube’s automated ad placement system won’t inadvertently place an ad on an offensive video. But it’s not that simple. Algorithms are getting smarter with the help of artificial intelligence (AI), but they have little ability to discern what’s actually objectionable.
That responsibility falls on a group of temporary workers employed by Google known as ad raters. These employees are tasked with quickly skimming through videos to identify whether there’s profane language, nudity or other disturbing images.
There’s still a problem
In the wake of a number of high-profile companies leaving the ranks of YouTube's advertisers, Google's business chief, Philipp Schindler, told Bloomberg that the company would be deploying AI to help identify offensive videos at a higher rate than ever before.
Under this approach, ad raters would work closely with the AI software, feeding it examples of the images or language that constitute grounds for removal from the video-sharing website. But there's an inherent issue with this tactic: user-submitted content is becoming more graphic and obscene by the day.
AI relies on human input to learn what’s right or wrong.
“The graphic stuff is far more graphic lately,” one ad rater told Wired. This presents a challenge, because AI software can only identify offensive videos if it has a previous example to learn from.
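The limitation described above can be illustrated with a toy sketch. This is not YouTube's system; it's a deliberately simple, hypothetical word-count classifier showing why a model trained on labelled examples cannot flag content that resembles nothing it has seen before.

```python
# Toy illustration (hypothetical, not YouTube's actual system): a supervised
# classifier only recognises content that resembles examples human raters
# have already labelled. All labels and phrases below are made up.

def train(labelled_examples):
    """Count how often each word appears under each label."""
    counts = {"offensive": {}, "safe": {}}
    for text, label in labelled_examples:
        for word in text.lower().split():
            counts[label][word] = counts[label].get(word, 0) + 1
    return counts

def classify(counts, text):
    """Score unseen text against the learned word counts."""
    scores = {label: 0 for label in counts}
    for word in text.lower().split():
        for label in counts:
            scores[label] += counts[label].get(word, 0)
    best = max(scores, key=scores.get)
    # No overlap with any training example: the model is blind.
    return best if scores[best] > 0 else "unknown"

examples = [
    ("graphic violence footage", "offensive"),
    ("disturbing gore clip", "offensive"),
    ("cooking tutorial pasta", "safe"),
    ("travel vlog sunset", "safe"),
]
model = train(examples)
print(classify(model, "violence clip"))          # resembles trained examples
print(classify(model, "brand new shock genre"))  # nothing learned: "unknown"
```

The second query shows the gap the ad raters are describing: content in a genuinely new style shares no features with past labels, so the system has no basis for a decision until humans label fresh examples.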
What does this mean for IT jobs?
All of this leads us to the question: if YouTube is integrating AI into its ad review process, will it replace ad raters? And if it works well, how will it affect the job market at large? These are common questions spurred by the introduction of AI. We’ve talked recently about how a Japanese insurance firm introduced IBM Watson, making 34 of its employees redundant, as reported in The Guardian.
Robots, driven by AI, are expected to replace nearly half of all current jobs over the course of the next decade, Kai-Fu Lee, founder of Sinovation Ventures, told CNBC. It’s a very real issue, and some ad raters are already wondering if, by working with the AI software, they are essentially training their successors.
The one saving grace in all of this is the aforementioned fact that new and disturbing videos are being created faster than ad raters can train the software.
“As people get more innovative about such gruesome activity, the system needs to be trained on that,” Abhijit Shanbhag, CEO of Graymatics, told Fortune. This essentially means AI can’t exist without personnel feeding it information. Another issue with the implementation is the natural subjectivity of offensive content, which is easy for humans to catch but much more difficult for software to pick up.
For instance, Shanbhag pointed out that psychological torture presents a dilemma for AI, as it’s not a clear-cut, binary mode of dangerous activity. Similarly, if a news or history clip were to show images of wartime violence purely to inform the audience, the program would still classify it as offensive.
This brings us to a point that many in the industry are conceding, though from an IT jobs perspective it may be difficult to believe. AI is enhancing these companies’ capability to do the job, whether that’s catching fake news or disturbing footage, rather than replacing the mechanism currently in place. Without human staff members to teach the program what’s right and what’s wrong, the software can’t learn or keep up with the speed at which new graphically violent images are being posted.
For now it seems as though IT jobs at large are safe, especially for those working to create the algorithms behind AI. But the fact that this software can’t yet learn on its own is, in a sense, just another error in the system. As we all know, coders tend to figure out ways to fix bugs.
For more artificial intelligence market insights, contact your local 920 office.