As we continue our mission at Xbox to bring the joy and community of gaming to even more people, we remain committed to protecting players from disruptive online behavior, creating experiences that are safer and more inclusive, and continuing to be transparent in our efforts to keep the Xbox community safe.
Our fifth transparency report highlights how we combine player-centric solutions with the responsible use of artificial intelligence to deepen our expertise in detecting and preventing unwanted behavior on the platform, while balancing and meeting the needs of our growing gaming community.
Between January 2024 and June 2024, we focused our efforts on blocking disruptive content in non-friend messages and on detecting spam and advertising, launching two AI tools that reflect our multifaceted approach to player protection.
Key findings from the report include:
- Balancing safety and authenticity in messaging: We implemented a new approach to detect and intercept malicious messages between non-friends, contributing to a significant increase in the amount of disruptive content prevented. From January to June, a total of 19 million pieces of content violating the Xbox Community Standards were prevented from reaching players through text, images, and video. This new approach balances two goals: protecting players from harmful content sent by non-friends while preserving the authentic online gaming experiences our community enjoys. We encourage players to use the new Xbox Friends and Followers Experience, which provides more control and flexibility when connecting with others.
- Safety reinforced by player reports: Player reporting remains a critical part of our approach to safety. During this period, players helped us identify an increase in spam and advertising on the platform. We are constantly evolving our strategy to prevent the creation of inauthentic accounts at the source and to limit their impact on players and the moderation team. In April, we took action against a surge in inauthentic accounts (1.7 million cases, up from 320,000 in January) that affected players in the form of spam and advertising. Players helped us identify this increase and its pattern by reporting messages in Looking for a Group (LFG). LFG reports doubled to 2 million, and player reports across all content types rose 8% to 30 million compared with the previous transparency report period.
- Our dual AI approach: We released two new AI tools built to support our moderation teams. These innovations not only prevent players from being exposed to disruptive material, but also allow our human moderators to focus their efforts on more complex and nuanced issues. The first of these new solutions is Xbox AutoMod, a system launched in February to help moderate reported content. To date, it has handled 1.2 million cases and enabled the team to remove content affecting players 88% faster. The second AI solution, launched in July, works proactively to prevent unwanted communications. We have targeted these solutions at detecting spam and advertising and will expand them to prevent other types of harm in the future.
Underlying all of these new improvements is a safety system that relies on both players and the expertise of human moderators to ensure consistent and fair application of our community standards while improving our overall approach through continuous feedback.
At Microsoft Gaming, our efforts to support security innovation and improve our players’ experience go beyond the transparency report:
Prioritizing player safety with Minecraft: Mojang Studios believes that every player can play their part in keeping Minecraft a safe and welcoming place for everyone. To help with this, Mojang has released a new feature in Minecraft: Bedrock Edition that sends players reminders about the game’s community standards when potentially inappropriate or harmful behavior is detected in text chat. This feature is intended to remind players on servers of expected behavior and create an opportunity for them to reflect and change the way they communicate with others before account suspensions or bans are required. Elsewhere, since launching the official Minecraft server list a year ago, Mojang has partnered with GamerSafer to help hundreds of server owners improve their community management and security measures. This has helped players, parents, and trusted adults find Minecraft servers committed to the safety and security practices they care about.
Call of Duty Anti-Toxicity Tool Upgrades: Call of Duty is committed to fighting toxicity and unfair play. To curb disruptive behavior that violates the franchise's Code of Conduct, the team deploys advanced technology, including artificial intelligence, to strengthen moderation teams and combat toxic behavior. These tools are purpose-built to help foster a more inclusive community where players are treated with respect and compete with integrity. Since November 2023, over 45 million text messages in 20 languages have been blocked, and exposure to voice toxicity has dropped by 43%. With the launch of Call of Duty: Black Ops 6, the team introduced voice moderation support in French and German, in addition to the existing support for English, Spanish, and Portuguese. As part of this ongoing work, the team is also conducting research into prosocial gaming behavior.
As the industry evolves, we continue to build a gaming community of passionate, like-minded and thoughtful gamers who come to our platform to enjoy immersive experiences, have fun and connect with others. We remain committed to platform security and building responsible AI by design, guided by the Microsoft Responsible AI Standard and through our collaboration and partnerships with organizations like the Tech Coalition. Thank you, as always, for contributing to our vibrant community and being with us on our journey.
Additional resources: