Xbox’s moderation team is turning to artificial intelligence to help filter the flood of user content

An artistic interpretation of the creatures talking about your mother on Xbox Live last night.

Aurich Lawson / Thinkstock

Anyone who has worked in community moderation knows that it becomes exponentially harder to find and remove bad content as a communication platform reaches millions of daily users. To address the problem, Microsoft says it’s turning to artificial intelligence tools to help “accelerate” its Xbox moderation efforts, allowing those systems to flag content for human review without requiring a player report first.

Microsoft’s latest Xbox Transparency Report, the company’s third public look at its community standards enforcement, is the first to include a section on “advancing content moderation and platform safety with artificial intelligence.” And the report specifically introduces two tools that the company says will “enable us to achieve greater scale, enhance our human moderator capabilities, and reduce exposure to sensitive content.”

Microsoft says many of its Xbox security systems are now supported by Community Sift, a moderation tool created by Microsoft subsidiary TwoHat. According to Microsoft, among the “billions of human interactions” that the Community Sift system has filtered this year are “more than 36 million” Xbox player reports in 22 languages. The Community Sift system evaluates those player reports to see which ones need more attention from a human moderator.
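Microsoft doesn’t describe Community Sift’s internals, but the triage behavior outlined above matches a familiar severity-scoring pattern: score each report automatically, then escalate only the high-severity ones to humans. The following Python sketch is a hypothetical illustration of that pattern only, with made-up names (classify_report, SEVERITY_THRESHOLD) and toy scoring; it is not TwoHat’s actual API:

```python
from dataclasses import dataclass

# Hypothetical cutoff; Community Sift's real scoring is not public.
SEVERITY_THRESHOLD = 0.7

@dataclass
class PlayerReport:
    report_id: int
    content: str
    language: str

def classify_report(report: PlayerReport) -> float:
    """Stand-in for a learned model that scores how likely a report
    is to describe a genuine community-standards violation (0.0-1.0)."""
    flagged_terms = {"scam", "slur", "threat"}
    hits = sum(term in report.content.lower() for term in flagged_terms)
    return min(1.0, hits / len(flagged_terms))

def triage(reports: list[PlayerReport]) -> list[PlayerReport]:
    """Route only high-severity reports to the human moderation queue."""
    return [r for r in reports if classify_report(r) >= SEVERITY_THRESHOLD]
```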

However, this new filtering system hasn’t had an apparent effect on the total number of “reactive” enforcement actions (i.e., actions taken in response to a player report), which have actually declined slightly. The 2.47 million such enforcement actions taken in the first half of 2023 were down a bit from 2.53 million in the first half of 2022. But that enforcement number now represents a greater proportion of the total number of player reports, which fell from 33.08 million in early 2022 to 27.31 million in early 2023 (both figures are down significantly from the 52.05 million player reports filed in the first half of 2021).
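Worked out from those figures, the shift in proportion is visible in a quick back-of-the-envelope calculation (the percentages below are computed here, not taken from the report itself):

```python
# Reactive enforcement actions as a share of total player reports,
# using the figures Microsoft published (in millions).
share_h1_2022 = 2.53 / 33.08   # ~7.6% of reports led to action
share_h1_2023 = 2.47 / 27.31   # ~9.0% of reports led to action
print(f"H1 2022: {share_h1_2022:.1%}  H1 2023: {share_h1_2023:.1%}")
```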

A look at where the Bletchley system’s “Safety Scans” fit into Microsoft’s Xbox image moderation queue.

The drop in player reports may be due in part to increased “proactive” enforcement, which Microsoft carries out before any player has a chance to report a problem. To help with this process, Microsoft says it uses the Turing Bletchley v3 AI model, an updated version of a tool Microsoft first launched in 2021.

Microsoft says this vision-language model automatically scans all “user-generated images” on the Xbox platform, including custom Gamerpics and other profile imagery. The Bletchley system then draws on its broad world knowledge to judge which images are acceptable under the Xbox platform’s community standards, moving any questionable content to a queue for human moderation.
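Microsoft hasn’t published how Bletchley v3 plugs into that queue. The sketch below assumes the pipeline shape described above (embed the image, score it against policy, auto-block clear violations, queue borderline cases for humans); every name in it (embed_image, policy_score, scan_user_image) is hypothetical:

```python
from queue import Queue

review_queue: Queue = Queue()  # hypothetical human-moderation queue

def embed_image(image_bytes: bytes) -> list[float]:
    """Stand-in for a vision-language model like Turing Bletchley v3,
    which maps an image into an embedding space."""
    return [float(b) / 255 for b in image_bytes[:8]]  # toy embedding

def policy_score(embedding: list[float]) -> float:
    """Stand-in for comparing an embedding against representations of
    content that violates community standards (0 = safe, 1 = violating)."""
    return sum(embedding) / max(len(embedding), 1)

def scan_user_image(image_bytes: bytes, block_at=0.9, review_at=0.5) -> str:
    """Auto-block clear violations; queue borderline images for humans."""
    score = policy_score(embed_image(image_bytes))
    if score >= block_at:
        return "blocked"
    if score >= review_at:
        review_queue.put(image_bytes)
        return "queued_for_review"
    return "allowed"
```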

Microsoft says the Bletchley system contributed to the blocking of 4.7 million images in the first half of 2023, a 39 percent increase from the previous six months, which Microsoft attributes to its AI investment.

Growth in inauthentic accounts

Those reactive enforcement numbers are a small part of the picture, however, next to the 16.3 million largely semi-automated enforcement actions that Microsoft says were “focused on identifying accounts that have been tampered with or used in unauthorized ways.” That includes accounts created by scammers, spammers, friend/follower boosters, and others that “ultimately create an uneven playing field for our players or detract from their experience.”

Actions against these “inauthentic” accounts have surged since last year, up 276 percent from the 4.33 million that were removed in the first half of 2022. The vast majority of these accounts (99.5 percent) were removed proactively, “often … before they can add harmful content to the platform,” Microsoft says.
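(For reference, the arithmetic checks out: a 276 percent increase on 4.33 million works out to 4.33 × 3.76 ≈ 16.3 million.)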

Inauthentic accounts (e.g., scammers, spammers) accounted for the vast majority of Xbox enforcement actions in the first half of 2023.

Elsewhere in the report, Microsoft says it’s still seeing the impact of its 2022 decision to expand its definition of “vulgar content” on the Xbox platform to “include offensive gestures, sexual content and crude humor.” That expanded definition led to 328,000 enforcement actions against “vulgar” content in the first half of 2023, a 236 percent increase from the roughly 98,000 vulgar-content removals in the previous six months (which was itself a 450 percent increase over the six months before that). Even so, vulgar content enforcement still ranks behind old standbys like profanity (886,000 enforcement actions), harassment or bullying (853,000), “adult sexual content” (695,000), and spam (361,000).

Microsoft’s report also contains bad news for players hoping to have bans or suspensions overturned. Only about 4.1 percent of the more than 280,000 case reviews submitted in the first half of 2023 resulted in a reinstatement, down slightly from the 6 percent of 151,000 appeals that succeeded in the first half of 2022.

In the time since the period covered by this transparency report, Microsoft has introduced a new standardized, eight-strike system that assigns a sliding scale of penalties for different types and frequencies of violations. It will be interesting to see whether the next scheduled transparency report shows a change in player behavior or enforcement now that the new rules are in place.
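Microsoft has described the broad shape of that strike system publicly (violations add strikes, and accumulated strikes map to escalating suspensions), but the sketch below uses illustrative strike weights and suspension lengths as placeholders, not the official schedule:

```python
# Illustrative strike ladder; the weights and durations here are
# placeholders, not Microsoft's published schedule.
STRIKE_WEIGHTS = {"profanity": 1, "harassment": 2, "hate_speech": 3}
SUSPENSION_DAYS = {2: 1, 4: 7, 6: 14, 8: 365}  # strikes -> days suspended

def penalty_for(violations: list[str]) -> int:
    """Sum strike weights for an account's violations and return the
    longest suspension tier reached (0 = no suspension)."""
    strikes = sum(STRIKE_WEIGHTS.get(v, 1) for v in violations)
    tiers = [days for needed, days in SUSPENSION_DAYS.items() if strikes >= needed]
    return max(tiers, default=0)

print(penalty_for(["profanity", "harassment"]))  # 3 strikes -> 1-day suspension
```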
