YouTube seems to prioritise engagement over curbing toxic videos

A year ago, Susan Wojcicki was on stage to defend YouTube. Her company, hammered for months for fueling falsehoods online, was reeling from another flare-up involving a conspiracy theory video about the Parkland, Florida, high school shooting that suggested the victims were “crisis actors.”

Wojcicki, YouTube’s chief executive officer, is a reluctant public ambassador, but she was in Austin at the South by Southwest conference to unveil a solution that she hoped would help quell conspiracy theories: a tiny text box from websites like Wikipedia that would sit below videos that questioned well-established facts like the moon landing and link viewers to the truth. 

Wojcicki’s media behemoth, bent on overtaking television, is estimated to rake in sales of more than $16 billion a year. But on that day, Wojcicki compared her video site to a different kind of institution.

“We’re really more like a library,” she said, staking out a familiar position as a defender of free speech. “There have always been controversies, if you look back at libraries.”

Since Wojcicki took the stage, prominent conspiracy theories on the platform—including one on child vaccinations; another tying Hillary Clinton to a Satanic cult—have drawn the ire of lawmakers eager to regulate technology companies.

And YouTube is, a year later, even more associated with the darker parts of the web.

The conundrum isn’t just that videos questioning the moon landing or the efficacy of vaccines are on YouTube. The massive “library,” generated by users with little editorial oversight, is bound to have untrue nonsense.

Instead, YouTube’s problem is that it allows the nonsense to flourish. And, in some cases, through its powerful artificial intelligence system, it even provides the fuel that lets it spread.

Susan Wojcicki speaks at SXSW Interactive in 2018. Photographer: David Paul Morris/Bloomberg

Wojcicki and her deputies know this. In recent years, scores of people inside YouTube and Google, its owner, raised concerns about the mass of false, incendiary and toxic content that the world’s largest video site surfaced and spread.

One employee wanted to flag troubling videos that fell just short of violating the hate speech rules, and to stop recommending them to viewers. Another wanted to track these videos in a spreadsheet to chart their popularity.

A third, fretful of the spread of “alt-right” video bloggers, created an internal vertical that showed just how popular they were. Each time, they got the same basic response: don’t rock the boat.

The company spent years chasing one business goal above others: “Engagement,” a measure of the views, time spent and interactions with online videos.
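
For illustration only, here is a minimal sketch of what an engagement-style metric might look like, assuming it simply aggregates views, watch time and interactions with arbitrary weights. YouTube has never published the actual formula; every field name and weight below is invented.

```python
# Hypothetical sketch of an "engagement"-style metric. The fields, weights and
# formula are assumptions for illustration; the real metric is not public.
from dataclasses import dataclass


@dataclass
class VideoStats:
    views: int
    watch_seconds: float
    likes: int
    dislikes: int
    comments: int
    shares: int


def engagement_score(stats: VideoStats) -> float:
    """Toy score combining views, time spent and interactions."""
    interactions = stats.likes + stats.dislikes + stats.comments + stats.shares
    return 0.2 * stats.views + stats.watch_seconds / 60.0 + 0.5 * interactions
```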

Conversations with over twenty people who work at, or recently left, YouTube reveal a corporate leadership unable or unwilling to act on these internal alarms for fear of throttling engagement. 

In response to criticism about prioritizing growth over safety, Facebook Inc. has proposed a dramatic shift in its core product. YouTube, by contrast, has struggled to explain any new corporate vision to the public and investors, and sometimes to its own staff.

Five senior personnel who left YouTube and Google in the last two years privately cited the platform’s inability to tame extreme, disturbing videos as the reason for their departure.

Within Google, YouTube’s inability to fix its problems has remained a major gripe. Google shares slipped in late morning trading in New York on Tuesday, leaving them up 15 percent so far this year.

Facebook stock has jumped more than 30 percent in 2019, after getting hammered last year. 

YouTube’s inertia was illuminated again several weeks ago, after a deadly measles outbreak drew public attention to vaccination conspiracies on social media.

New data from Moonshot CVE, a London-based firm that studies extremism, found that fewer than twenty YouTube channels spreading these lies reached over 170 million viewers, many of whom were then recommended other videos laden with conspiracy theories.

The company’s lackluster response to explicit videos aimed at kids has drawn criticism from the tech industry itself.

Patrick Copeland, a former Google director who left in 2016, recently posted a damning indictment of his old company on LinkedIn. While watching YouTube, Copeland’s daughter was recommended a clip that featured both a Snow White character drawn with exaggerated sexual features and a horse engaged in a sexual act.

“Most companies would fire someone for watching this video at work,” he wrote. “Unbelievable!!” Copeland, who spent a decade at Google, decided to block the YouTube.com domain.

Micah Schaffer joined YouTube in 2006, nine months before it was acquired by Google and well before it had become part of the cultural firmament.

He was assigned the task of writing policies for the freewheeling site. Back then, YouTube was focused on convincing people why they should watch videos from amateurs and upload their own. 

A few years later, when he left YouTube, the site was still unprofitable and largely known for frivolity. (A clip of David, a rambling seven-year-old groggy after a trip to the dentist, was the second most-watched video that year.) But even then there were problems with malicious content.

Around that time, YouTube noticed an uptick in videos praising anorexia. In response, staff moderators began furiously combing through the clips, placing age restrictions on them, cutting them from recommendations or pulling them down entirely. The videos “threatened the health of our users,” Schaffer recalled.

He was reminded of that episode recently, when videos sermonizing about the so-called perils of vaccinations began spreading on YouTube. That, he thought, would have been a no-brainer back in the earlier days.

“We would have severely restricted them or banned them entirely,” Schaffer said. “YouTube should never have allowed dangerous conspiracy theories to become such a dominant part of the platform’s culture.”

At some point in the last decade, he added, YouTube began prioritizing the chase for profits over the safety of its users. “We may have been hemorrhaging money,” he said. “But at least dogs riding skateboards never killed anyone.”

Beginning around 2009, Google took tighter control of YouTube. It brought in executives, such as sales chief Robert Kyncl, formerly of Netflix, to craft a technical strategy and business plan that could sustain the site’s exploding growth.

In 2012, YouTube concluded that the more people watched, the more ads it could run—and that recommending videos, alongside a clip or after one was finished, was the best way to keep eyes on the site. 

So YouTube, then run by Google veteran Salar Kamangar, set a company-wide objective to reach one billion hours of viewing a day, and rewrote its recommendation engine to maximize for that goal.
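
In broad strokes, a recommender tuned to that objective ranks candidate videos by how long it expects each one to keep the viewer watching. The sketch below is a toy illustration under that assumption; predicted_watch_minutes is a made-up placeholder for a learned model, not YouTube’s actual system.

```python
# Toy watch-time-maximising ranker. "predicted_watch_minutes" stands in for a
# learned model; the real system is a large neural network and is not public.
from typing import List


def predicted_watch_minutes(user_id: str, video_id: str) -> float:
    """Placeholder for a model estimating expected minutes watched."""
    return (hash((user_id, video_id)) % 600) / 10.0  # fake 0-60 minute estimate


def rank_by_expected_watch_time(user_id: str, candidates: List[str], k: int = 10) -> List[str]:
    """Order candidates so the videos expected to hold the viewer longest come first."""
    ranked = sorted(candidates, key=lambda v: predicted_watch_minutes(user_id, v), reverse=True)
    return ranked[:k]
```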

When Wojcicki took over, in 2014, YouTube was a third of the way to the goal, she recalled in investor John Doerr’s 2018 book Measure What Matters.

YouTube doesn’t give an exact recipe for virality. But in the race to one billion hours, a formula emerged: Outrage equals attention.

It’s one that people on the political fringes have easily exploited, said Brittan Heller, a fellow at Harvard University’s Carr Center. “They don’t know how the algorithm works,” she said. “But they do know that the more outrageous the content is, the more views.”

People inside YouTube knew about this dynamic. Over the years, there were many tortured debates about what to do with troublesome videos—those that don’t violate its content policies and so remain on the site. Some software engineers have nicknamed the problem “bad virality.” 

Rather than revamp its recommendation engine, YouTube doubled down. The neural network described in a 2016 Google research paper on YouTube’s recommendation system went into effect in recommendations starting in 2015. By the measures available, it has achieved its goal of keeping people on YouTube.

“It’s an addiction engine,” said Francis Irving, a computer scientist who has written critically about YouTube’s AI system. 

Irving said he has raised these concerns with YouTube staff. They responded with incredulity, or an indication that they had no incentives to change how its software worked, he said. “It’s not a disastrous failed algorithm,” Irving added. “It works well for a lot of people, and it makes a lot of money.”

A YouTube spokeswoman said that, starting in late 2016, the company added a measure of “social responsibility” to its recommendation algorithm. Those inputs include how many times people share and click the “like” and “dislike” buttons on a video. But YouTube declined to share any more detail on the metric or its impacts. 
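
One plausible reading, offered here only as a hedged sketch, is that such signals are blended into the ranking score alongside predicted watch time. The weighting and formula below are assumptions; neither has been disclosed.

```python
# Hedged illustration of blending "responsibility" signals (shares, likes,
# dislikes) with a watch-time estimate. The weights and formula are invented.
def blended_score(watch_minutes: float, likes: int, dislikes: int, shares: int,
                  responsibility_weight: float = 0.3) -> float:
    satisfaction = (likes - dislikes) + shares  # crude proxy for positive reception
    return (1.0 - responsibility_weight) * watch_minutes + responsibility_weight * satisfaction
```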

In November of 2017, YouTube finally took decisive action against channels pushing pernicious videos, cutting thousands off from receiving advertisements or from the site altogether virtually overnight. Creators dubbed it “The Purge.”

The company was facing an ongoing advertiser boycott, but the real catalyst was an explosion of media coverage over disturbing videos aimed at children.

The worst was “Toy Freaks,” a channel where a father posted videos with his two daughters, sometimes showing them vomiting or in extreme pain. YouTube removed Toy Freaks, and quickly distanced itself from it. 

YouTube had also wrestled with another debate around its programming for kids. Before the launch of a dedicated app for minors, YouTube Kids, several people advocated that the company only offer hand-picked videos in the service to avoid any content kerfuffles. Those arguments lost, and the app has since picked videos algorithmically.

YouTube did plow money into combating its content problems. It hired thousands more people to sift through videos to find those that violated the site’s rules.

But to some inside, those fixes took too long to arrive or paled next to the scale of the problem. As of 2017, YouTube had no policy for how content moderators should handle conspiracy theories, according to a former moderator who specialized in foreign-language content.

In February of 2018, the video calling the Parkland shooting victims “crisis actors” went viral on YouTube’s trending page. Policy staff suggested soon after limiting recommendations on the page to vetted news sources. YouTube management rejected the proposal, according to a person with knowledge of the event.

The person didn’t know the reasoning behind the rejection, but noted that YouTube was then intent on boosting viewing time for news-related videos.

However, YouTube did soon address its issues around news-related content. Last July, YouTube announced it would add links to Google News results inside of YouTube search, and began to feature “authoritative” sources, from established media outlets, in its news sections. YouTube also gave $25 million in grants to news organizations making videos. In the last quarter of 2018, YouTube said it removed over 8.8 million channels for violating its guidelines. Those measures are meant to help bury troubling videos on its site, and the company now points to the efforts as a sign of its attention to its content problems.

This past January, YouTube followed the advice of former Google privacy engineer Yonatan Zunger and created a new tier for problematic videos.

So-called “borderline content,” which doesn’t violate the terms of service, can stay on the site, but will no longer be recommended to viewers. A month later, after a spate of press about vaccination conspiracies, YouTube said it was placing some of these videos in the category.

In February, Google also released a lengthy document detailing how it addresses misinformation on its services, including YouTube. “The primary goal of our recommendation systems today is to create a trusted and positive experience for our users,” the document reads. “The YouTube company-wide goal is framed not just as ‘Growth’, but as ‘Responsible Growth.’”

The company has been applying the fix Wojcicki proposed a year ago. YouTube said the information panels from Wikipedia and other sources, which Wojcicki debuted in Austin, are now shown “tens of millions of times a week.”

A 2015 clip about vaccination from iHealthTube.com, a “natural health” YouTube channel, is one of the videos that now sports a small gray box. The text links to a Wikipedia entry for the MMR vaccine. Moonshot CVE, the London-based anti-extremism firm, identified the channel as one of the most consistent generators of anti-vaccination theories on YouTube. 

But YouTube appears to be applying the fix only sporadically. One of iHealthTube.com’s most popular videos isn’t about vaccines. It’s a seven-minute clip titled: “Every cancer can be cured in weeks.” While YouTube said it no longer recommends the video to viewers, there is no Wikipedia entry on the page. It has been viewed over 7 million times.

source: www.bloomberg.com

