Progress by social media platforms

Web Foundation · September 20, 2022

All four companies (Google, Meta, TikTok and Twitter) engaged in the Tech Policy Design Labs (TPDL) in 2021 are working on product innovations and prototypes related to online gender-based violence (OGBV), in line with their commitments. The following changes have been made since the platforms’ engagement with the TPDL in April 2021. Most of the progress updates listed below are based on public announcements made by each of the four companies.

Summary of progress against the commitments across the four tech companies

Curation

  • More granular settings: 2 out of 4 tech companies announced they have introduced features globally that make it easier to control who can see, share, comment or reply to posts (e.g. Limits on Instagram; Safety Mode, Twitter Circle and Unmentioning on Twitter)
  • Notifications: 2 out of 4 have announced testing notifications when posts go viral, with reminders about curation options (Twitter, TikTok)
  • Filtering specific sensitive content: 2 out of 4 have announced improved curation of potentially sensitive content via filtered or muted words (Sensitive Content Control and Hidden Words on Instagram)

Reporting

  • Changes to reporting: 2 out of 4 tech companies announced changes to reporting (Twitter: an overhauled reporting process centred on human-first design; Meta: more limited changes, with more “prominent reporting features on Messenger”)
  • Jigsaw (a Google unit) developed Harassment Manager with Twitter and open-sourced it; the tool is intended for journalists and public figures.

Changes made by each tech company

Twitter

Twitter has made substantial progress against the commitments in both areas. On curation, it has announced the following features – now all available globally:

  • Safety Mode: temporarily blocks accounts for seven days for using potentially harmful language – such as insults or hateful remarks – or sending repetitive and uninvited replies or mentions (optional setting). Twitter is currently testing it directly with users and tracking adoption rates to understand its potential impact.
  • Twitter Circle: lets users add up to 150 people who can see their Tweets when they want to share “with a smaller crowd.” (Instagram introduced a similar feature, “Close Friends”, in 2018.)
  • Unmentioning: allows users to remove themselves from conversations they don’t want to be a part of

On reporting, Twitter started testing an overhauled reporting process in December 2021, aiming to make it easier for people to alert the company to harmful behavior. This new Report Tweet flow is now available globally (Twitter communicated that it was available “in all languages”; further research is needed to clarify this). Built on human-centered design, the new approach lifts from the individual the burden of interpreting the violation at hand; instead, it asks them what happened, a method known as “symptoms-first”. Twitter communicated that this enabled a 50% increase in actionable reports during testing.

The sector saw Twitter’s December 2021 update as an important change from previous reporting mechanisms, which did not centre the experience of the victim/survivor.

Beyond its specific TPDL commitments around curation and reporting, the following work by Twitter over the past year is worth noting:

  • Continued experiments with prompts asking users to reconsider harmful tweets. This approach aligns with demands from civil society for platforms to intervene before offensive content is posted. Twitter communicated about “a shift towards more proactive strategies”. The prompt has now been developed in English, Portuguese, Spanish and Arabic. Peer-reviewed research published in May 2022 concluded this feature led to 6% fewer offensive Tweets (based on the analysis of 200,000 tweets). The study also suggests that people who are exposed to a prompt are slightly less likely to compose future offensive replies.
  • Communication on safety tools: Twitter created and published an updated safety playbook and have launched various promoted tweet campaigns featuring existing and new safety tools-related videos. They found innovative ways to educate people via partnerships to highlight recent features like removing followers, Safety Mode, and conversation settings.
  • Open API: Twitter is also unique among this cohort of tech companies in its open API, which allows researchers to access data more easily and entrepreneurs to build their own tools on the platform, including for countering OGBV (a minimal sketch of such a query follows this list). Innovations in this space are available to the public beyond those produced by Twitter’s own product team.
  • Work on hate speech lexicons: Twitter was noted for its approach to hate speech lexicons, which directly links to protected characteristics and therefore to people who are marginalized and more vulnerable to online gender-based violence. Its policy prohibits content that “promote[s] violence against or directly attack[s] or threaten[s] other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.”
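
As a rough illustration of what this openness enables, here is a minimal, hypothetical sketch of a researcher pulling public replies to an account via Twitter’s v2 recent-search endpoint. The account name and query are placeholders, and a valid bearer token is assumed; this is a sketch, not a description of any specific research pipeline.

```python
import os
import requests

# Minimal sketch: querying Twitter's v2 recent-search endpoint for public
# replies directed at an account, as a researcher studying abusive replies
# might. Assumes a bearer token in TWITTER_BEARER_TOKEN; the query below
# is a placeholder, not taken from the report.
BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]

def search_recent_tweets(query: str, max_results: int = 100) -> list:
    """Return recent public tweets matching `query` (7-day window)."""
    resp = requests.get(
        "https://api.twitter.com/2/tweets/search/recent",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={
            "query": query,
            "max_results": max_results,
            "tweet.fields": "created_at,author_id",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

# Example: English-language replies sent to a (placeholder) public figure.
replies = search_recent_tweets("to:example_account is:reply lang:en")
```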

Twitter also has a Trust and Safety Council with a relatively stable membership over the last year. However, civil society experts questioned whether the format of relatively short calls with Trust and Safety Council members constituted meaningful engagement with CSOs, particularly where OGBV is not a primary focus of discussion in these forums. Some Global South members of the Trust and Safety Council explicitly expressed frustration that they are not engaged as co-creators but are instead presented with policy recommendations after Twitter has produced them and asked for advice, with no accountability on whether their input is integrated.

Case Study: Tweets That Chill: Analyzing Online Violence Against Women in Politics in Indonesia, Colombia, and Kenya


Problem addressed: When attacks against politically-active women are channelled online, the expansive reach of social media platforms magnifies the effects of psychological abuse by making those effects seem anonymous, borderless, and sustained, undermining women’s sense of personal security in ways not experienced by men. Research, advocacy, and policy development must ensure that women are able to meaningfully participate in two key spaces – the political realm and the online world – and the areas where they intersect.

Approach: Working with in-country partners, NDI developed lexicons of both gender-based harassing language and the political language of the moment, in order to examine the online violence experienced by politically-active women. These lexicons, each developed in local languages (Bahasa Indonesia, Colombian Spanish, and a mix of Swahili and English in Kenya), were then used to conduct data scraping of a sample group of Twitter accounts within the target population of college-aged women and men who took part in the research.

Twitter was selected as the social media platform for quantitative data analysis because the majority of Twitter interactions are public, thereby enabling NDI to analyze large data sets retrospectively. The timeframe for the Twitter scraping was set within a six-month window of a significant political event – an election, referendum, political scandal or crisis – in each country. This quantitative Twitter analysis was complemented by qualitative analysis of the workshop discussions and responses from surveys administered to the same populations. 
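
As a rough illustration of the lexicon-matching step described above, the sketch below flags scraped tweets containing any term from a lexicon. The terms and tweets are placeholders, since NDI’s actual lexicons and pipeline are not reproduced in this report.

```python
import re

# Illustrative sketch of lexicon matching over scraped tweets. The lexicon
# terms and sample tweets below are placeholders; NDI's real lexicons were
# built in Bahasa Indonesia, Colombian Spanish, and Swahili/English.
def build_lexicon_pattern(terms):
    # Case-insensitive, whole-word matching; longer terms are tried first
    # so multi-word phrases win over their substrings.
    escaped = sorted((re.escape(t) for t in terms), key=len, reverse=True)
    return re.compile(r"\b(?:" + "|".join(escaped) + r")\b", re.IGNORECASE)

def flag_tweets(tweets, pattern):
    """Return (tweet, matched_terms) pairs for tweets that hit the lexicon."""
    flagged = []
    for text in tweets:
        hits = pattern.findall(text)
        if hits:
            flagged.append((text, hits))
    return flagged

lexicon = ["placeholder_slur", "placeholder insult"]  # stand-in terms
tweets = ["a sample scraped tweet containing a placeholder insult"]
for text, hits in flag_tweets(tweets, build_lexicon_pattern(lexicon)):
    print(hits, "->", text)
```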

Stakeholders involved: A diverse range of civil society partners from each of the respective countries: Indonesia (5 partners), Colombia (13 partners), and Kenya (16 partners).

Impact: The research called for interventions that were still being echoed in 2021 as part of the TPDL, including the following:
• Contextually- and linguistically-specific lexicons of online violence must be created and then evolve
• Attention to minority communities and intersecting identities is essential
• If women’s rights initiatives are action- and solution-oriented, otherwise fatigued partners are eager and enthusiastic to engage
• Under-reporting of VAW-P in online spaces exists and merits investigation

On 28th July 2022, Twitter announced that it “aims to continually evaluate and improve the way we share information with the public. This year, we are launching the Twitter Moderation Research Consortium (TMRC). Through the Consortium, Twitter shares large-scale datasets concerning platform moderation issues with a global group of public interest researchers from across academia, civil society, NGOs and journalism studying platform governance issues.”


Meta

Based on public announcements, Meta’s progress against the commitments over the past year seems mostly to relate to curation on Instagram, with the following new features available globally:

  • Hidden Words: automatically filters DM requests that contain offensive words, phrases and emojis.
  • Limits: designed to help protect people when they experience or anticipate a rush of abusive comments and DMs, this optional feature automatically hides comments and DM requests from people who don’t follow the user, or who only recently followed them. This feature was developed mostly for creators and public figures, who expressed experiencing sudden spikes of comments and DM requests from people they don’t know. It complements existing tools such as Block and Restrict, which are actions that can be taken by the user after viewing hurtful or triggering content.
  • User agency over filtering graphic content has also notably improved, including Instagram’s “Sensitive Content Control”, which allows the user to decide how much sensitive content shows up on “Explore” pages (driven by the algorithm, although the algorithm itself hasn’t changed the amount of graphic content it feeds users).
  • The policy on violent and graphic content in Meta’s Transparency Centre shows which content the company filters on Facebook under its ‘community standards’, an important first step in transparency around this content.

On reporting, Meta communicated “more prominent” reporting features on Messenger.

Beyond the commitments but linked to OGBV, Meta has been working closely with the UK Revenge Porn Helpline on the launch of StopNCII.org to support victims of Non-Consensual Intimate Image (NCII) abuse.

Case study: Stop NCII – a tool to detect and block the sharing of intimate images online 

Problem addressed: Non-consensual intimate images are sexually explicit images and videos that are captured, published or circulated without the consent of one or more persons in the frame. They can have a lasting and devastating impact on victims.
For years, photo- and video-matching technology has been used to remove non-consensual intimate images (NCII). Victims and experts expressed the need for a stronger platform, adopted across tech companies, that puts victims first and reduces the risk of further spread of an image or video.

Approach: Launched in December 2021, StopNCII.org is a free tool designed to help victims stop the proliferation of their intimate images. When someone is concerned their intimate images have been posted or might be posted to online platforms like Facebook, they can create a case through StopNCII.org to proactively detect them. 

The tool uses ground-breaking technology enabling the image to be identified and blocked without the user having to send the photo or a link to anyone: it creates hashes (or digital fingerprints) of intimate images directly on the user’s device. StopNCII.org then shares the hashes with participating tech platforms so that they can detect and block the images from being shared online.
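
A rough sketch of the hash-and-match idea is below, using the open-source imagehash library as a stand-in. StopNCII’s production system uses its own hashing technology, so this is illustrative only; the file names and distance threshold are assumptions.

```python
from PIL import Image      # pip install Pillow
import imagehash           # pip install ImageHash

# Illustrative sketch of the hash-and-match idea behind StopNCII. The
# open-source `imagehash` library stands in for StopNCII's own hashing.
# The image itself never leaves the device; only the hash is shared.

def fingerprint(path: str) -> str:
    """Compute a perceptual hash locally; only this hex string is shared."""
    return str(imagehash.phash(Image.open(path)))

def matches_reported(candidate_hash: str, reported_hashes: set,
                     max_distance: int = 8) -> bool:
    """Platform-side check: is an uploaded image near any reported hash?
    Perceptual hashes tolerate small edits (resizing, re-encoding), so a
    Hamming-distance threshold is used instead of exact equality."""
    candidate = imagehash.hex_to_hash(candidate_hash)
    return any(candidate - imagehash.hex_to_hash(h) <= max_distance
               for h in reported_hashes)

# On the victim's device (hypothetical file name):
#     reported = fingerprint("private_photo.jpg")
# On a participating platform, at upload time:
#     if matches_reported(fingerprint("upload.jpg"), {reported}): block()
```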

Stakeholders involved: This work is the result of a collaboration between Meta and “more than 50 civil society organizations from across the world”. Building on technology developed through Facebook and Instagram NCII pilots, the tool is operated by the UK Revenge Porn Helpline.

Impact: Participating tech platforms are currently limited to Facebook and Instagram. The initial version was launched in English. Meta and UK Revenge Porn Helpline are in the process of working with partners across the globe to translate the tool and adapt it to local contexts. One of the first non-English languages the tool was launched in was Urdu. 



There is no data yet available on the impact of this tool. The operational partner – UK Revenge Porn Helpline – has successfully removed over 200,000 individual non-consensual intimate images from the internet. 

It is also important to highlight Meta’s Oversight Board as a relevant development around transparency for Meta. Meta’s Oversight Board is an independent group looking to work with civil society organizations internationally, while exploring particular issues around safety and privacy. This Board published its first quarterly update in June 2021. While civil society organizations noted the potential impact of such a Board, its effectiveness, particularly in relation to OGBV, is unclear and yet to be evidenced.


TikTok

TikTok has not publicly announced any changes to reporting processes or timelines. However, the development of clearer guidance on how to report different forms of content shows progress in supporting users to navigate existing processes.

  • On curation, TikTok is testing a “safety reminder” about curation options for when users appear to be receiving a high proportion of negative comments. It is also doing early testing of a “dislike” button, enabling users to dislike any comments they deem irrelevant or inappropriate (not visible to the author of the comment). The aim is “to help inform how it ranks comments and give creators a way to control which ones are most visible” and eventually help users “feel more in control over comments.” It is unclear whether the dislike button will reduce the amount of abuse that women see. Although the “dislike” would not be visible to the author of the comment, it could also create further tension: YouTube, which had public dislikes for years, has now made dislike counts private, saying that the feature was contributing to targeted harassment on the platform.

Beyond their specific commitments, the following initiatives by TikTok over the past year are positive steps towards a better response to OGBV:

  • the update of its community guidelines to ban misogyny, deadnaming (the act of referring to a transgender or non-binary person by a name they used prior to transitioning, such as their birth name), misgendering, and content that supports or promotes conversion therapy programs;
  • an awareness-raising initiative on Violence Against Women and Girls (VAWG), as part of the 16 Days of Activism against gender-based violence, through the launch of an in-app information hub in partnership with UN Women and NGOs;
  • as title sponsor of the Women’s Six Nations and the UEFA Women’s European Football Championship in 2022, a new series of videos encouraging fans to #SwipeOutHate and keep the negativity off the pitch, plus a temporary public service announcement (PSA) added to football-related hashtags reminding people to report harmful content.

Case Study: TikTok Deadnaming Policy 


Problem addressed: Transgender and non-binary people face hateful ideologies and explicitly dismissive or targeting content, including “through misgendering or deadnaming,” according to TikTok guidelines. Deadnaming refers to the act of calling a transgender person by a name that they no longer use.

According to TikTok, such content was already prohibited. However, creators and civil society organizations pressed TikTok to bring further clarity to its Community Guidelines.
 
Approach: A broader update designed to promote safety and security on the platform and to support the well-being of the TikTok community, particularly users identifying as transgender and non-binary. As part of it, a feature was added allowing users to choose and highlight their preferred pronouns on their profiles. Additionally, for transparency purposes, TikTok published its most recent quarterly Community Guidelines Enforcement Report: more than 91 million videos (about 1% of all uploaded videos) were removed during the third quarter of 2021 because they violated the guidelines.
 
Stakeholders involved: The policy update follows pressure from GLAAD, an LGBTQ media advocacy nonprofit, and UltraViolet, a US national gender justice advocacy group. According to GLAAD, the new policy incorporates recommendations that they made to TikTok for how they could better protect women, people of color and the LGBTQ community through an open letter signed by more than 75 stakeholders.
 
Impact: TikTok’s move to expressly prohibit this harmful content in its Community Guidelines raises the standard for LGBTQ safety online and sends a message that other platforms which claim to prioritize LGBTQ safety should follow suit with substantive actions like these.

TikTok also recently set up its first Trust and Safety advisory council for the Middle East, North Africa, and Turkey (MENAT) region, in February 2022. TikTok previously set up similar councils in the US (March 2020), Asia-Pacific (September 2020) and Europe (March 2021). There is not yet any TikTok advisory council for Sub-Saharan Africa or Latin America.


Google

Google’s progress against the commitments is more difficult to assess based on public information, given the variety of entities within the company. For YouTube, we have not seen any announcements suggesting new positive steps on curation or reporting in relation to OGBV. Very recently, YouTube announced its YouTube Research Program, providing access to its data and tools to external researchers. The potential of this program to support OGBV-related research is yet to be explored.

Jigsaw – which is a Google entity – developed the Harassment Manager tool in collaboration with Twitter and civil society.

Case study: Google Jigsaw Harassment Manager Tool


Problem addressed: Women journalists, activists and politicians are facing disproportionate risks of online harassment. 63% of women journalists said they had been threatened or harassed online. Of those, roughly 40% said they avoided reporting certain stories as a result. 

Although reporting mechanisms exist on social media platforms, their processes and language can make it difficult for victims of abuse to take action. There was a need for a tool that helps users deal with toxic comments following an incident of harassment and document their experience.

Approach: The open-source tool Harassment Manager was developed by Jigsaw (part of Google) and announced in March 2022. It aims to help women journalists document and manage abuse targeted at them on social media, starting on Twitter.

More specifically, it helps users identify and document harmful posts, mute or block perpetrators of harassment, and hide harassing replies to their own tweets. Individuals can review tweets by hashtag, username, keyword or date, and the tool leverages the Perspective API to detect comments that are most likely to be toxic.
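
As a sketch of this toxicity-detection step, the snippet below scores text with the Perspective API’s TOXICITY attribute, the kind of call Harassment Manager relies on. The API key, threshold and sample texts are assumptions for illustration, not details from the tool itself.

```python
import os
import requests

# Minimal sketch: scoring text with the Perspective API's TOXICITY
# attribute. API key, threshold and sample replies are illustrative.
API_KEY = os.environ["PERSPECTIVE_API_KEY"]
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return the TOXICITY summary score in [0, 1] for `text`."""
    resp = requests.post(URL, json={
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }, timeout=30)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Flag replies above an illustrative threshold for the user to review.
replies = ["thanks for this report", "you are a disgrace"]
flagged = [r for r in replies if toxicity_score(r) > 0.8]
```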

Stakeholders involved: This initiative is the fruit of a collaboration between many stakeholders, starting with two tech giants (Google and Twitter). According to Jigsaw, journalists and activists with large Twitter presences have also been involved throughout the whole development cycle. Many NGOs in the journalism and human rights space were also part of this work, including: Article 19, Code for Africa, European Women’s Lobby, Feminist Internet, Glitch, International Center for Journalists (ICFJ), Online SOS, Paradigm Initiative, PEN America, Right To Be (formerly Hollaback!), The Thomson Reuters Foundation.

TPDL may have played a role in the development of this tool. Patricia Georgiou, Director of Partnerships and Business Development at Jigsaw, referred to their post-TPDL commitments as an incentive:

“Harassment Manager is the result of several years of research, development, and cross-industry collaborations to deliver on our commitment to tackle online violence against women.” 

Impact: The code is now available on GitHub, open-sourced for developers to build on and adapt for free. As a first implementation partner, the Thomson Reuters Foundation announced in July 2022 the launch of TRFilter, which builds on Harassment Manager’s code.


For more updates, follow us on Twitter at @webfoundation and sign up to receive our newsletter and The Web This Week, a weekly news brief on the most important stories in tech.

Tim Berners-Lee, our co-founder, gave the web to the world for free, but fighting for it comes at a cost. Please support our work to build a safe, empowering web for everyone.
