Automation has already proven itself the next stage of industrial and commercial expansion, with manufacturers, food service operators, and distributors all adopting it at some level to cut costs and reduce wait times. But what happens when we allow automation to run unchecked? Running unchecked is, after all, the point: automating part of the process to streamline production.
So what happens when those algorithms, or a neural network, or some form of artificial intelligence, begin producing products that are off-message or even harmful? YouTube recently had to pull more than 2 million kids' videos, close 270 accounts, and remove ads from more than 50,000 channels in response to exactly that problem.
Starting in Design
Algorithmic design drew real attention when Amazon began using a new generative adversarial network (GAN), a machine-learning system that scans and analyzes images, to generate fashion in line with the trends it identifies. Algorithmic fashion hasn't been without its pitfalls, however: think of an adult romper printed with Kim Jong-un's face, or Guy Fieri's face stretched comically huge to cover an entire article of clothing.
Algorithm-influenced design has already expanded beyond clothing and fashion and into content generation. Recently, James Bridle's viral essay "Something Is Wrong on the Internet" hit the nail on the head for a lot of people. I personally noticed some of the things he draws attention to while watching the YouTube Kids app with my two-year-old son. Sometimes he would stumble onto content that didn't make sense: floating heads, unlicensed characters doing odd things, even real-life actors staging scenes that reflected the title of the video, but not much else.
That's not to mention the strange fascination my son has with watching other people, usually just a pair of hands and a voice, play with toys, instead of simply playing with toys himself. Nor is this limited to my personal experience. These videos carry a sense of overt product placement coupled with an unnerving uncertainty about where the content is actually coming from, which brings me to my next point:
Low-budget production companies are churning out content based solely on terms that YouTube Kids' algorithms push to the top of recommendations, in order to make money quickly. The result is nonsensical, but very lucrative, videos that have begun tipping over the edge from harmless into disconcerting, even disturbing, territory. Part of this comes down to keeping operational costs low to maximize profit: the channels tend to use pirated footage, sometimes ripped from video games, in place of actual program footage for licensed properties such as Peppa Pig or Paw Patrol. They do this both to inherit the halo of trust those properties command, so that parents see the content and assume it is safe, and to avoid the cost of actually licensing the properties. This opens a rift of trust between parents and the brands being mimicked, because parents begin associating the knock-off content with the brand itself.
Human intelligence is more important than ever in the context of algorithms and automation. YouTube's latest effort is to add 10,000 employees to better police its content, placing a lens of human reasoning over the computational tastemaker that is the YouTube recommendation algorithm. The goal is to ensure that content with a familiar face, but a very unfamiliar context, no longer reaches the eyes and screens of kids around the world.
Note: CultureWaves is what we call an HI (human intelligence) machine. We have used human intelligence from the beginning to ensure that bots aren't making our decisions about what is meaningful in a trend. It's interesting for us to see the automation world begin to recognize that a little human "weeding out" can be a good thing.