Digital Safety Content Report
Digital technologies allow people across the globe to share information, news, and opinions that, together, span a broad range of human expression. Unfortunately, some people use online platforms and services to exploit the darkest sides of humanity, which diminishes both safety and the free exchange of ideas.
At Microsoft, we believe digital safety is a shared responsibility requiring a whole-of-society approach. This means that the private sector, academic researchers, civil society, and governmental and intergovernmental actors all work together to address challenges that are too complex – and too important – for any one group to tackle alone.
For our part, we prohibit certain content and conduct on our services, and we enforce rules that we’ve set to help keep our customers safe. We use a combination of automated detection and human content moderation to remove violating content and suspend accounts. Additional information is available on Microsoft’s Digital Safety site.
The Microsoft Services Agreement includes a Code of Conduct that outlines what’s allowed and what’s prohibited when using a Microsoft account. Some services offer additional guidance, such as the Community Standards for Xbox, to show how the Code of Conduct applies on their services. Reporting violations of the Code of Conduct is critical to helping keep our online communities safe for everyone. More information on how to report content and conduct is included below.
Protecting children online
Practices
Microsoft has a long-standing commitment to online child safety. We develop tools and engage with a variety of stakeholders to help address this issue. As specified in our Code of Conduct and on our content and conduct policies page on Microsoft’s Digital Safety site, we prohibit any child sexual exploitation or abuse, which is content or activity that harms or threatens to harm a child through exploitation, trafficking, extortion, or endangerment, including through the sharing of visual media that contain sexual content that involves or sexualizes a child or through grooming of children for sexual purposes.
Microsoft is a member of the WePROTECT Global Alliance, the multistakeholder organization fighting child sexual exploitation and abuse online. Microsoft also supports the Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse and works closely with WePROTECT to promote them.
Microsoft is a founding member of the Technology Coalition, the tech industry’s non-profit association to combat online child sexual exploitation and abuse. We also support and/or hold leadership and advisory roles with numerous other child safety organizations, including the Family Online Safety Institute, INHOPE, the Internet Watch Foundation, and the National Center for Missing and Exploited Children (NCMEC).
Processes and systems
Child Exploitation Prevention and Detection
Detection and removal of child sexual exploitation and abuse imagery (CSEAI)
We deploy tools to detect child sexual exploitation and abuse imagery (CSEAI), including hash-matching technology (e.g., PhotoDNA) and other forms of proactive detection. Microsoft developed PhotoDNA, a robust hash-matching technology, to help find duplicates of known CSEAI; we continue to make PhotoDNA freely available to qualified organizations, and we leverage it across Microsoft’s consumer services. In-product reporting is also available for services such as OneDrive, Skype, Xbox, and Bing, whereby users can report suspected child exploitation or other violating content.
As a U.S.-based company, Microsoft reports apparent CSEAI or grooming to NCMEC via the CyberTipline, as required by U.S. law. (After this reporting period, in May 2024, the REPORT Act was enacted, expanding the mandatory reporting categories under U.S. law.) We take action on the account(s) associated with the content we have reported to NCMEC. Users can appeal these account actions by visiting the Moderation and enforcement webpage and using this Account Reinstatement webform.
Outcomes – July through December 2023
During the period of July-December 2023, Microsoft submitted 60,749 reports to NCMEC.
For our hosted consumer services – such as OneDrive, Outlook, Skype, and Xbox – Microsoft actioned 61,348 pieces of content and 10,237 consumer accounts associated with CSEAI or the grooming of children for sexual purposes during this period. Microsoft detected 99.2 percent of the actioned content through automated technologies; the remainder was reported to Microsoft by users or third parties. Of the accounts actioned for CSEAI or grooming of children for sexual purposes, 0.8 percent were reinstated upon appeal.
For Bing, Microsoft works to prevent CSEAI from entering the Bing search index by leveraging block lists of sites containing CSEAI identified by credible agencies, and through PhotoDNA scanning of the index and of visual search references when users upload images to one of Bing’s hosted features, such as visual search. During this reporting period, Microsoft actioned 66,603 pieces of content that were confirmed as apparent CSEAI through content moderation processes and reported to NCMEC; 99.1 percent were detected through PhotoDNA scanning and other proactive measures.
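The blocklist portion of the pipeline described above can be illustrated with a short sketch. This is not Microsoft’s implementation; the function names, hostnames, and URLs are hypothetical, and real systems would match at finer granularity than the hostname. It simply shows the general idea: candidate URLs are dropped from an index when their host appears on a blocklist supplied by credible agencies.

```python
# Illustrative sketch only (not Microsoft's implementation): filtering
# candidate URLs against a blocklist of known-bad hosts. All names and
# URLs are hypothetical examples.
from urllib.parse import urlparse

def filter_index(candidate_urls, blocked_hosts):
    """Drop any URL whose hostname appears on the blocklist."""
    kept = []
    for url in candidate_urls:
        host = urlparse(url).hostname
        if host not in blocked_hosts:
            kept.append(url)
    return kept

blocked = {"blocked.example"}
urls = ["https://ok.example/page", "https://blocked.example/page"]
print(filter_index(urls, blocked))  # only the non-blocked URL survives
```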
Note: Data in this report represents the period July-December 2023 and includes Microsoft consumer services such as OneDrive, Outlook, Skype, Xbox and Bing. This report does not include data representing LinkedIn or GitHub, which issue their own transparency reports.
Select previous Digital Safety Content Report to download
Digital Safety Content Report 2023 (July-December)
Digital Safety Content Report 2023 (January-June)
Digital Safety Content Report 2022 (July-December)
Digital Safety Content Report 2022 (January-June)
Digital Safety Content Report 2021 (July–December)
Digital Safety Content Report 2021 (January-June)
FAQ
Questions about Child Sexual Exploitation and Abuse Imagery
What is PhotoDNA and how does it work?
In 2009, Microsoft partnered with Dartmouth College to develop PhotoDNA, a technology that aids in finding and removing known images of child sexual exploitation and abuse.
PhotoDNA creates a unique digital signature (known as a “hash”) of an image which is then compared against signatures (hashes) of other photos to find copies of the same image. When matched with a database containing hashes of previously identified illegal child sexual abuse images, PhotoDNA helps detect, disrupt, and report the distribution of child sexual exploitation material. PhotoDNA is not facial recognition software and cannot be used to identify a person or an object in an image. A PhotoDNA hash is not reversible, meaning it cannot be used to recreate an image.
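The matching step described above can be sketched in a few lines. PhotoDNA’s actual algorithm is proprietary, so this is a generic illustration of perceptual-hash matching under assumed toy data: two hashes are treated as a match when they differ in at most a few bits (Hamming distance), which is what lets near-duplicate copies of the same image still match.

```python
# Illustrative only: PhotoDNA's algorithm is proprietary. This shows the
# general idea of perceptual-hash matching, where near-identical images
# produce hashes that differ in only a few bits. Hash values are toy data.

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two equal-length hash values."""
    return bin(a ^ b).count("1")

def matches_known(hash_value: int, known_hashes: list, threshold: int = 4) -> bool:
    """True if the hash is within `threshold` bits of any known hash."""
    return any(hamming_distance(hash_value, h) <= threshold for h in known_hashes)

known = [0b1011_0110_1100_0011]        # hash of a previously identified image
candidate = 0b1011_0110_1100_0111      # near-duplicate: differs by one bit
print(matches_known(candidate, known))  # True
```

Note that, as the text states, nothing in such a hash can be inverted to recreate the image; matching only tells you whether two hashes are close.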
Microsoft has made PhotoDNA freely available to qualified organizations, including technology companies, law enforcement agencies, developers, and non-profit organizations.
More information can be found on the PhotoDNA site.
What is the NCMEC CyberTipline?
As explained by the National Center for Missing & Exploited Children (NCMEC), the CyberTipline “is the nation’s centralized reporting system” through which “the public and electronic service providers can make reports of suspected online enticement of children for sexual acts, extra-familial child sexual molestation, child pornography, child sex tourism, child sex trafficking, unsolicited obscene materials sent to a child, misleading domain names, and misleading words or digital images on the internet.”
As a U.S.-based company, Microsoft reports all apparent CSEAI to NCMEC, as required by U.S. law. According to NCMEC, staff review each tip to identify a potential location for the reported incident so that the report can be made available to the appropriate law enforcement agency anywhere in the world. A CyberTip report to NCMEC can include one or multiple items.
How does Microsoft comply with laws addressing child sexual exploitation and abuse content?
Microsoft complies with global regulations to take action against child sexual exploitation and abuse content it discovers on its services. For example, pursuant to 18 USC 2258A, we report apparent child sexual exploitation content to the National Center for Missing and Exploited Children, which serves as a clearinghouse to notify law enforcement globally of suspected illegal child sexual exploitation content. Microsoft also leverages the derogation permitted by European Union Regulation (EU) 2021/1232 as required for its use of PhotoDNA and other detection technologies in services governed by EU Directive 2002/58/EC.
Addressing terrorist and violent extremist content
Practices
At Microsoft, we recognize that we have an important role to play in helping to prevent terrorists and violent extremists from exploiting digital platforms, including by addressing terrorist or violent extremist content (TVEC) on our hosted consumer services. As specified in Microsoft’s Code of Conduct and on our Digital Safety site, we do not allow content that praises or supports terrorists or violent extremists, helps them to recruit, or encourages or enables their activities. We look to the United Nations Security Council’s Consolidated List to identify terrorists or terrorist groups. Violent extremists include people who embrace an ideology of violence or violent hatred towards another group.
Microsoft's approach to addressing TVEC is consistent with our responsibility to manage our services in a way that respects fundamental values such as safety, privacy, and freedom of expression. We collaborate with multistakeholder partners—including the Global Internet Forum to Counter Terrorism (GIFCT), the Christchurch Call to Action, and the EU Internet Forum—to work collectively to eliminate terrorist and violent extremist content online.
Microsoft is a founding member of the GIFCT and, in 2024, holds the Chair of the GIFCT Operating Board. Via GIFCT, Microsoft participates in a range of activities, including GIFCT’s Incident Response processes. In the event the GIFCT activates a Content Incident or Content Incident Protocol, Microsoft ingests related hashes from the GIFCT’s hash-sharing database. This allows Microsoft to quickly become aware of, assess, and address content circulating on its consumer services as a result of an offline terrorist or violent extremist event, consistent with Microsoft policies. For further information, see GIFCT’s annual transparency report, which includes information on the hash-sharing database.
Processes and systems
TVEC Prevention and Detection
Detection and enforcement related to TVEC
We review reports from users and third parties on potential TVEC, take action on content, and, if necessary, take action on accounts associated with violations of our Code of Conduct. Users can appeal these account actions by visiting the Moderation and enforcement webpage and using this Account Reinstatement webform. In addition, Microsoft leverages hash-matching technology to address the reappearance of online content previously identified as TVEC in violation of Microsoft’s policies. Hash-matching technology uses a mathematical algorithm to create a unique signature (known as a “hash”) for digital images and videos. The technology then compares hashes generated from user-generated content (UGC) with hashes of previously identified terrorist and violent extremist content, in a process called “hash matching.”
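The hash-matching workflow described above can be sketched as follows. This is a simplified illustration, not production code: SHA-256 stands in for the hashing technologies actually used, and the sample byte strings are hypothetical. New uploads are hashed and checked against a database of hashes of previously identified violating content; a match flags the upload for review and action.

```python
# Illustrative sketch of hash matching, using SHA-256 as a stand-in for the
# hash-matching technologies used in production. Sample data is hypothetical.
import hashlib

def content_hash(data: bytes) -> str:
    """One-way digest of a piece of content."""
    return hashlib.sha256(data).hexdigest()

# Database of hashes of content previously identified as violating.
known_hashes = {content_hash(b"previously identified violating file")}

def is_known_violation(upload: bytes) -> bool:
    """True if the upload's hash matches a previously identified hash."""
    return content_hash(upload) in known_hashes

print(is_known_violation(b"previously identified violating file"))  # True
print(is_known_violation(b"unrelated user upload"))                 # False
```

Exact-match lookups like this catch byte-identical reappearances; perceptual hashing (as with PhotoDNA) extends the same idea to near-duplicate images.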
Outcomes – July through December 2023
During the period, for our hosted consumer services – such as OneDrive, Outlook, Skype, and Xbox – Microsoft actioned 345 pieces of content associated with TVEC. Microsoft detected 97.4 percent of the actioned content through automated technologies; the remainder was reported to Microsoft by users or third parties. Of the accounts actioned for TVEC, none were reinstated upon appeal.
Note: Data in this report represents July-December 2023 and includes Microsoft consumer services such as OneDrive, Outlook, Skype, Xbox and Bing. This report does not include data representing LinkedIn or GitHub, which issue their own transparency reports.
FAQ
Questions about Terrorist and Violent Extremist Content
Does Microsoft use the GIFCT hash-sharing database?
Microsoft both contributes hashes to and consumes some hashes from the GIFCT industry hash-sharing database. We have been contributing hashes since the database became operational in April 2016 and started ingesting hashes in the summer of 2017.
Microsoft leverages hashes to detect duplicates of known terrorist and violent extremist content on our hosted consumer services. Microsoft determines whether to action matching content according to our own Microsoft Services Agreement, Code of Conduct, and/or community guidelines.
For more information on the GIFCT hash-sharing database, including information on total number of hashes and breakdown by type, please refer to the annual GIFCT transparency report.
How does Bing handle terrorist-related content?
Our Bing search engine strives to be an unbiased information and action tool, presenting links to all relevant information available on the Internet. Therefore, we remove links to terrorist-related content from Bing only when the takedown is required of search providers under local law. Government requests for content removal are reported as part of our Government Requests for Content Removal Report.
How can users report terrorist or violent extremist content?
In addition to in-product reporting tools, users can report potential terrorist or violent extremist content on Microsoft services via this link.
Non-consensual intimate imagery
Practices
Microsoft takes seriously the harm caused by the sharing of non-consensual sexually intimate imagery. In many circumstances, sharing sexually intimate images of another person without that person’s consent violates their personal privacy and dignity.
Microsoft prohibits the distribution of non-consensual intimate imagery (NCII). Microsoft also prohibits content soliciting NCII or advocating for the production or redistribution of intimate imagery without the subject’s consent. This includes photorealistic NCII content that was created or altered using technology.
Processes and systems
NCII Prevention and Detection
Any member of the public can request the removal of a nude or sexually explicit image or video of themselves that has been shared without their consent through this Non-consensual Intimate Imagery Reporting web form. Once violating content is reviewed and confirmed, Microsoft removes reported links to photos and videos from search results in Bing globally and/or removes access to the content itself when shared on Microsoft hosted consumer services. This includes both real content and synthetic, “deepfake” imagery.
Outcomes – July through December 2023
Non-consensual intimate imagery removal requests
| | Requests reported | Requests actioned | Percentage of requests actioned |
|---|---|---|---|
| TOTAL | 1,425 | 588 | 41% |
Note: Numbers are aggregated across Bing and Microsoft hosted consumer services for which a content removal request was received during this reporting period.
FAQ
Questions about non-consensual intimate imagery
What is a non-consensual intimate imagery removal request?
In July 2015, Microsoft announced its approach to non-consensual intimate imagery, also referred to as “revenge porn”: the sharing of nude or sexually explicit photos or videos online without consent. At that time, we committed to reporting the number of takedown requests in our transparency reports. A removal request is a request from an individual to have NCII removed from Microsoft services.
In previous years, we have reported this as “non-consensual pornography.” However, we have updated this term to “non-consensual intimate imagery” to ensure that the language we use to refer to this type of violation is respectful to victims and reflects the intrusive and damaging nature of this type of content.
How can someone report NCII?
Microsoft has a dedicated web form for reporting NCII, which gives guidance on what steps can be taken.
General questions about this report
Which Microsoft services does this report cover?
This report addresses Microsoft consumer services including (but not limited to) OneDrive, Outlook, Skype, Xbox, and Bing. Xbox also publishes its own transparency report, outlining our approach to safety in gaming. This report does not include data representing LinkedIn or GitHub, which issue their own transparency reports.
What are “hosted consumer services”?
When we refer to “hosted consumer services,” we are talking about Microsoft services where Microsoft hosts content generated or uploaded by credentialed users (i.e., those logged into a Microsoft account). Examples of these services include OneDrive, Skype, Outlook and Xbox.
What does “content actioned” mean?
For this report, “content actioned” refers to when we remove a piece of user-generated content, such as an image or video, from our services and/or block user access to a piece of user-generated content.
For purposes of Bing, “content actioned” may also mean filtering or de-listing a URL from the search engine index.
What does “account actioned” mean?
For this report, “account actioned” refers to when we suspend or block access to an account, or restrict access to content within the account.
What does “proactive detection” mean?
“Proactive detection” refers to Microsoft-initiated flagging of content on our services, whether through automated or manual review.
What detection technologies does Microsoft use?
Microsoft uses scanning technologies (e.g., PhotoDNA or MD5) and other AI-based technologies, such as text-based classifiers, image classifiers, and grooming detection techniques.
What does “accounts reinstated” mean?
“Accounts reinstated” refers to actioned accounts that were fully restored upon appeal, including content and account access.