Advertisers haven’t been shy about pulling money out of Facebook or YouTube campaigns following exposés of controversial content hosted on the platforms, but they seem to come crawling back every time.
Last week at the Association of National Advertisers Media Seminar in Orlando, Procter & Gamble Chief Brand Officer Marc Pritchard said his company plans to direct spending toward platforms that exercise control over content and comments, including linking opinions with a poster’s true identity and ensuring balanced perspectives. He didn’t say he’d pull all spend from platforms that don’t do those things, but said P&G’s “preferred providers of choice” will “elevate quality, ensure brand safety and have control over their content.”
Rajamannar said he agrees with Pritchard but noted that it isn’t necessarily easy to accomplish.
“In principle, I agree with Marc,” he said. “How you get there, there is not one single route.”
He said the industry can try multiple approaches. One might be scrapping the entire ad ecosystem to “rebuild it from zero.” He acknowledged that’s a possibility, but a challenging one. “You have a business to run. You have results to deliver.”
He said the advent of technologies like blockchain also holds promise.
“But you start with theory and then see how you can make it practical,” he said. “Even as we are trying to re-imagine the entire ad ecosystem, you also want to make sure that you … refine and make the current ecosystem very viable, very safe and it should be transparent.”
Facebook’s VP of global marketing solutions Carolyn Everson said in a statement in response to Pritchard’s comments: “We applaud and support Marc Pritchard’s sentiments for again making a bold call for our industry to collectively do more for the people we serve. We continue to invest heavily in the safety and security of our community and are deeply committed to ensuring our platforms are safe for everyone using them.”
In response to the WFA’s call for platforms to better manage the harmful content ads can appear next to, Facebook pointed to a recent blog post from its COO Sheryl Sandberg, which outlined steps including restrictions on who can go “Live” and using artificial intelligence tools to identify and remove hate groups.
Google didn’t respond to a request for comment.
Brands are obviously invested in ensuring their ads don’t appear next to objectionable content. But Rajamannar said social media companies share those interests.
“The social media company doesn’t want that to happen either,” he said. “The intentions are not bad. The intentions are very good. But the key thing is how do you translate that good intention into action that gives you the right outcomes that you’re looking for, which is brand safety.”
Before these issues are ironed out, he said, marketers do have some options, like choosing trusted publishers or using whitelisting and blacklisting. He said another nascent option is third-party technology that works within a programmatic placement so an advertiser doesn’t bid on an ad if it sits on a bad piece of content; a rough sketch of that pre-bid logic appears below.
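To make the mechanics concrete, here is a minimal sketch of how such a pre-bid filter might combine whitelists, blacklists and a third-party content score. Every name, keyword list, threshold and data source below is an illustrative assumption, not a description of any vendor’s actual product or API.

```python
# Hypothetical pre-bid brand-safety filter (illustrative assumptions only).

BLACKLIST = {"badsite.example"}         # domains the brand refuses outright
WHITELIST = {"trustednews.example"}     # vetted publishers that always pass
UNSAFE_KEYWORDS = {"violence", "hate"}  # toy stand-in for a real classifier
RISK_THRESHOLD = 0.7                    # assumed cutoff for the risk score

def score_page_risk(page_url: str) -> float:
    """Toy stand-in for a third-party content classifier that rates a page
    from 0.0 (safe) to 1.0 (unsafe). A real system would call a vendor API
    or a trained model rather than matching keywords in the URL."""
    return 1.0 if any(word in page_url for word in UNSAFE_KEYWORDS) else 0.1

def should_bid(bid_request: dict) -> bool:
    """Decide, before bidding, whether an impression looks brand-safe."""
    domain = bid_request["site_domain"]
    if domain in BLACKLIST:
        return False   # never bid on explicitly blocked domains
    if domain in WHITELIST:
        return True    # vetted publishers pass without further scoring
    # Otherwise fall back to the page-level risk score.
    return score_page_risk(bid_request["page_url"]) < RISK_THRESHOLD

# Example: an unknown blog with an innocuous article passes the filter.
request = {"site_domain": "randomblog.example",
           "page_url": "https://randomblog.example/articles/cooking-tips"}
print(should_bid(request))  # True under these toy rules
```

The point of running this check before the bid, rather than after the ad serves, is that an unsafe impression is never purchased in the first place.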
“The social media folks, they are trying to put more people on it to look after content, they’re trying to improve their algorithms, and all this stuff,” he said. “But the whole situation for brand safety, there has been some movement in a positive direction, but it’s not adequate.”