Welcome Champions, to our second blog dedicated to the subject of chat moderation and in-game toxicity!
Last month, we told you about an anti-toxicity measure that has been implemented in Predecessor: an AI-based solution that watches over the in-game chat, flagging and taking action against offenders who say things that do not belong in Predecessor.
It was an important adjustment to add to the game. Complaints about in-game toxicity have been among the most concerning to us, and we knew that we needed to address them ASAP, especially with our highly anticipated console launch on the horizon!
In case you missed our last blog about it, allow me to bring you up to speed!
Predecessor uses an AI-based solution, a tool called GGWP, for in-game chat moderation. The AI reads through everything that is being said in the game to make sure the conversation is clean and pleasant for everyone involved. If it is, it takes no action. If it isn’t, it filters the problematic words and phrases and, if necessary, the players saying those words might get muted.
Since it’s AI, it’s not only phrase-sensitive but also context-sensitive, and can’t be dodged as easily (people will try, of course, but attempts to circumvent the system are flagged and we can rapidly adjust whenever it happens). The system applies moderation actions in near real-time, making unwelcome behaviour in the game much less prevalent.
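For the curious, the flow described above can be sketched in a few lines of Python. This is purely an illustration of the flag-filter-mute idea, not GGWP’s actual API - names like classify() and moderate(), the severity labels, and the three-strike threshold are all hypothetical:

```python
# Hypothetical sketch of the moderation flow - not GGWP's real implementation.
# classify() stands in for the AI model; the labels and thresholds are invented.

def classify(message: str) -> str:
    """Stand-in for the AI classifier: returns a severity label."""
    lowered = message.lower()
    if "slur_example" in lowered:      # placeholder for the worst offences
        return "slur"
    if "threat" in lowered:            # placeholder for threatening language
        return "harassment"
    if "darn" in lowered:              # placeholder for mild profanity
        return "profanity"
    return "clean"

def moderate(message: str, history: list[str]) -> str:
    """Flag the message, record the offence, and decide an action."""
    label = classify(message)
    if label == "clean":
        return "no_action"
    history.append(label)              # offences accumulate per player
    if label == "slur":                # worst offences: immediate mute
        return "mute"
    if len(history) >= 3:              # patterns of misbehaviour, not one-offs
        return "mute"
    return "filter"                    # just filter the word this time
```

The key design point this toy version captures is that most offences only filter the message, while mutes come from either the most severe language or an accumulated history of flags.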
We want to stress that in general, we find our community wonderful, helpful, and capable of improving when we point out something is not okay. And we have statistics to prove that!
Since we launched this tool we have noticed a:
◆ 58.6% drop in harassment incidents.
◆ 58% drop in identity-based incidents.
◆ 55.8% drop in violence-related incidents.
And that’s amazing and exactly what we wanted to see!
Understandably, there’s a lot of misconception and confusion about how the tool works, and a handful of myths have started to float around in the community about what it does and doesn’t do.
So I’m here to bust the most widespread myths for you, to clear up what’s going on behind the scenes.
So without further ado, here we go!
“The solution was released willy-nilly and without testing!”
GGWP as a chat moderation tool was actually implemented into Predecessor way back in March! It was simply observing for a while, as we wanted it to have ample time to learn the way that people talk in our game to help it build a strong understanding of our community and what kind of behaviour was to be considered undesirable. It has been working quietly in the background, learning the lingo, identifying what’s considered violent language and what’s just Pred talk.
When we announced its launch last month, that marked the point at which the AI was finally permitted to take action in game - but it has been working behind the scenes for a few months.
“AI has gotten a lot of people perma-muted!”
To be as clear as possible, the AI has never gotten anyone permanently muted. It actually doesn’t have the power to do that. If someone consistently behaves so badly that such action needs to be taken, a human will review the accuracy of any and all flags raised by the AI tool, then determine and apply the ban.
“AI only takes action against people using slurs!”
We call it an anti-toxicity measure, because it helps us weed out all sorts of undesired attitudes.
Slurs and insults are pretty high up on the list of things we don’t want in our game, but we also don’t want people threatening others, using excessive or grotesque profanities, extreme trash-talking, bullying or even spamming. Those are also mutable offences.
“People get wrongly muted all the time!”
Since its public implementation 6 weeks ago, a crack team here at Omeda Studios HQ has been checking the records of users who claim they have been muted wrongly. But how many of those reported mutes were actually incorrect?
Roughly 3% of the cases we checked.
That myth likely originated from the fact that the moderation tool sometimes needs a moment to process the offence and apply the penalty, so the offending player might still send a few messages before any punishment goes into effect. This can create the impression that the player got muted for the very last thing they said, which is not always the case.
“The AI thing is set in stone and won’t change!”
Since it launched, we’ve been continually tweaking the tool - several times even in the last few weeks alone.
Language and communication trends are living and ever-changing, and we’ll be here to react to those changes, as well as any mistakes our systems could be making.
“I said one thing and I was muted straight away!”
That’s only true for the worst offenders, for the very worst kinds of misbehaviours. Unless someone uses extremely violent or derogatory language, mutes on the first offence are very rare.
If you join a match, say something and receive a chat mute, it’ll most likely be because of historical behaviour records. The AI builds a picture of your behaviour over time, meaning that - as previously mentioned - mutes are nearly always the result of patterns of misbehaviour rather than one-off offences.
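To make the history idea concrete, here is a rough Python illustration of why the same message can mute one player and not another. The running score, the severity values, and the threshold are all hypothetical - this is not how GGWP actually models behaviour:

```python
# Hypothetical illustration of history-based muting - not GGWP's real model.
# Past offences carry over between matches, so a single new flag can tip an
# already-poor record over the mute threshold.

MUTE_THRESHOLD = 5  # invented value for illustration

def record_offence(score: int, severity: int) -> int:
    """Add an offence of the given severity to a player's running score."""
    return score + severity

def should_mute(score: int) -> bool:
    return score >= MUTE_THRESHOLD

# A first-time offender saying something mild stays under the threshold...
fresh = record_offence(0, 1)
# ...but the same mild message from a player with a record of flags tips over it.
repeat = record_offence(4, 1)
```

In other words, “I said one thing and got muted” usually means that one thing landed on top of an existing pile.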
“Trash talking and bullying are consensual in my friend group, we should be able to do it!”
If that’s the case, please make use of external private chat systems where you can speak to your friends without being held to our community standards and expectations.
“My toxicity was in response to someone else’s toxicity - I shouldn’t have been muted!”
We know it can sometimes be hard to stay calm when you see or hear nasty stuff. Being toxic in return, however, is not the solution. If someone in-game is acting inappropriately, breaking our Community Charter or is otherwise contributing to a toxic community environment and you think the system is not picking it up, please report and mute that person.
“Why have chat moderation when you can just mute people? It should be up to a player to mute the people toxic to them!”
The intended use of muting is for you to distance yourself from players that might not have done anything wrong, but that you simply don’t want to interact with in the chat. There can be a number of reasons for this, such as wanting to focus on the game without any distractions, creating content like a YouTube video or livestream and not wanting any chat messages cluttering your gameplay, or maybe you’re simply not in a particularly social mood.
Our intention is to never make it the player’s responsibility to remove potentially hurtful messages. Reactive, player-managed moderation means that players must first be exposed to slurs, insults and threats before they know to mute someone, which is not only unfair for those in our community who don’t use such language or behaviour, but it also goes against our goals for building a healthy, sustainable community.
“Trash talking is the normal way of talking in games!”
Does it have to be?
We’re not out here trying to remove trash-talking entirely. We know that it is - to some degree - a fact of life in games like ours. But we can strive to improve the quality of this kind of communication. By removing unnecessary slurs and threats you can focus on grilling your fellow players over what really matters, such as why the enemy Kallari picked the festive Peppermint skin in August.
With all of that being said, we hope that the way our chat moderation tool works is clearer for you now. As mentioned above, we’ve already seen a huge change for the better in the last month or so alone and have no doubt that we’ll continue to see positive results in the times to come.
We are also aware that toxicity isn’t limited to just the in-game chat, and that players will sometimes abuse other systems to purposely harm the in-game experience for others. All of that means our mission to tackle toxicity does not end here! We’ll share more about other measures that we’re working on in due time.
One of the things we do want to mention is addressing toxicity in other languages. Currently Predecessor is available only in English with the majority of in-game player communication also being in English. Once we embark on the journey of localising Predecessor for multiple languages, we will make sure that our anti-toxicity measures develop further too.
Thank you very much for reading and don’t hesitate to ask questions if you have any!
~Zuzu, Community Experience Manager