Facebook, YouTube and Twitter have agreed to adopt a common set of definitions for hate speech and other harmful content and to collaborate with a view to monitoring industry efforts to improve.

The move follows 15 months of intensive talks within the Global Alliance for Responsible Media (GARM), a cross-industry initiative founded and led by the World Federation of Advertisers (WFA) that brings together advertisers, agencies and platforms.

The first changes are set to be introduced this month and have been welcomed by senior figures at leading advertisers. “This is a significant milestone in the journey to rebuild trust online,” said Luis Di Como, Executive Vice President, Global Media, Unilever.

Four key areas for action have been identified, designed to boost consumer and advertiser safety, with individual timelines agreed for each platform to implement them.

1. Adoption of GARM common definitions for harmful content

Definitions of harmful content have varied by platform, which makes it hard for brand owners to make informed decisions about where their ads are placed and to hold platforms to account.

The GARM definitions will establish a shared baseline for harmful content. They have been developed to add depth and breadth to specific types of harm, such as hate speech, acts of aggression and bullying.

All platforms will now adopt the common definitions as part of their advertising content standards, labelling and enforcing them consistently.

2. Development of GARM reporting standards on harmful content

A harmonised reporting framework is a critical step in ensuring that policies on harmful content are enforced effectively. All parties have now agreed to harmonised metrics covering consumer safety, advertiser safety and platform effectiveness in addressing harmful content.

Over the next two months, work will continue on harmonising metrics and reporting formats, with the system due to launch in the second half of next year.

3. Commitment to have independent oversight on operations, integrations and reporting

An independent view of how individual participants categorise, eliminate and report harmful content will drive better implementation and build trust. The goal is for all major platforms to be fully audited, or in the process of being audited, by year end.

4. Commitment to develop and deploy tools to better manage advertising adjacency

Advertisers need visibility and control to ensure that their advertising does not appear adjacent to harmful or unsuitable content, and the ability to take corrective action quickly when necessary.

Platforms that have not yet implemented an adjacency solution will provide a development roadmap in Q4 2020. Platforms will deliver a solution through their own systems, via third-party providers or a combination of the two. In addition to Facebook, YouTube and Twitter, TikTok, Pinterest and Snap have made firm commitments to provide development plans for similar controls by year end.

Given the increased polarisation of content across all channels, the WFA believes the standards should apply to all media, not just digital platforms, and is encouraging its members to apply the same adjacency criteria to all their media spend decisions.

Sourced from WFA