Forget About Anti-Cheats, Here's How You Do It

Cheating and toxicity are plaguing most online multiplayer games, and I think we could address them better.

I never cheated, I swear!


In my opinion, two main areas are preventing much improvement to the current situation: the detection process and offense handling.

Detection Process

Anti-cheat software solutions have some major drawbacks:

  • burdensome for developers: anti-cheats are not only complicated[1] but also costly to maintain, since resources need to be continuously invested into researching how to block new cheats as they arise.
  • burdensome for players: client-side anti-cheats can affect performance and cause issues (e.g. freezes and crashes) for the whole player base only to fight a minority of players. This is like using a sledgehammer to kill a fly: legitimate players shouldn't have to pay that price because of a minority of troublemakers.
  • unreliable: the truth is that no anti-cheat is fully reliable. It's a relentless battle against cheat developers (cheat-making is big business[2][3]) and it is simply technically impossible to develop a perfect anti-cheat. Worse, they can produce false positives and end up banning legitimate players.

That seems like a lot of downsides for suboptimal results.

Offense Handling

Even more important than the detection process is how offenses are handled.

Banning a game account is NOT enough when players can simply get back into the game through another account. It's like being put in jail only to be set free upon paying a small $30 fine: why even bother? It's not as if this amounts to anything when most cheaters are probably rich kids and the monthly subscription for these cheats can be more expensive than the game itself.

As long as this issue isn't tackled properly, any effort that goes into fighting toxic players and cheaters will just end up being a waste of time, resources, energy, and trust.


What I'm trying to say here is that we might be tackling these problems the wrong way around.

Empower Humans Over Software

Anti-cheat software is burdensome and unreliable. Humans are much more reliable at detecting cheats and have no negative impact on game performance whatsoever.

So how would a game developer go about building a detection process based around humans? It's simple, really: make better use of the community and reward players for their positive contributions!

When a player reports someone who is then confirmed by the game developer's staff to be a cheater, the player receives some contribution points. When a given number of points is reached, the player gets some nice, exclusive skins or the like. To prevent abuse of the system, any report targeted at a legitimate player results in a loss of points, so you'd better get it right.
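
To make the idea concrete, here is a minimal sketch of such a points ledger. The point values, the reward threshold, and the decision to floor points at zero are all made-up assumptions for illustration, not a reference implementation:

```python
REWARD_THRESHOLD = 100  # assumed: points needed to unlock a reward


class Reporter:
    """Tracks one player's contribution points and earned rewards."""

    def __init__(self):
        self.points = 0   # everyone starts neutral
        self.rewards = 0  # e.g. exclusive skins earned

    def report_confirmed(self):
        """Staff confirmed the reported player was indeed cheating."""
        self.points += 10
        if self.points >= REWARD_THRESHOLD:
            self.points -= REWARD_THRESHOLD
            self.rewards += 1  # grant an exclusive skin or the like

    def report_rejected(self):
        """The reported player turned out to be legitimate."""
        # Flooring at zero is a design choice; one could also go negative.
        self.points = max(0, self.points - 10)
```

A player who keeps filing accurate reports steadily accumulates rewards, while someone who spams bogus reports gets nothing out of the system.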

Reports can then be automatically prioritized to help the staff first assess the reports that are most likely to be positives, while reassuring players that their valuable reports will be assessed sooner rather than later if, and only if, they have a good track record.
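
Such a prioritized review queue could be sketched like this, assuming each report carries a priority value computed from its reporter's track record (e.g. by a formula like the one at the end of this post):

```python
import heapq


class ReviewQueue:
    """Serves pending reports to the staff, most trustworthy first."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker to keep insertion order stable

    def submit(self, priority, report):
        # heapq is a min-heap, so negate the priority to pop the
        # highest-priority report first.
        heapq.heappush(self._heap, (-priority, self._counter, report))
        self._counter += 1

    def next_report(self):
        """Return the pending report with the highest priority."""
        return heapq.heappop(self._heap)[2]
```

Reports from reporters with a strong track record naturally bubble up to the front, while those from unproven or unreliable reporters wait at the back.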

The priority formula could be as simple as the Python snippet found at the end of this post. The result of this formula, run over a small data set, looks like this:

| positive reports | total reports | priority |
|------------------|---------------|----------|

As you can see, both the number of reports and their accuracy are taken into account to provide a priority metric that makes sense: the more meaningful someone's contributions are, the higher priority their reports will have, and vice versa.

From there, client-side anti-cheats could be removed altogether, or at least made less disruptive. This doesn't mean there's no place for software detection at all. It'd always be helpful for a game developer to be able to find cheaters without requiring any assistance from the community. This could be implemented by flagging users with a high K/D ratio or win rate, or even by adding a server-side deep learning system[4] that learns from the in-game behaviour of players who were reported.
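
As a sketch of the statistical flagging idea, one could flag players whose win rate is an outlier relative to the population. The z-score cut-off below is an arbitrary assumption, and flagged players would only be queued for human review:

```python
from statistics import mean, stdev


def flag_outliers(win_rates, cutoff=3.0):
    """Return indices of players whose win rate is a statistical
    outlier, to be queued for human review (never banned outright)."""
    mu = mean(win_rates)
    sigma = stdev(win_rates)
    if sigma == 0:
        return []
    # Only flag the high side: unusually *good* results are suspicious.
    return [i for i, rate in enumerate(win_rates)
            if (rate - mu) / sigma > cutoff]
```

A real system would be far more careful (smurfs and genuinely skilled players also post high win rates), which is exactly why such a flag should feed a human review queue rather than trigger a ban.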

It is important to note that no detection software should have authoritative decision power. It should never directly ban a player, only flag them for human review.

I might be biased, but this seems like a bulletproof detection system to me. I bet any community would be happy to contribute if rewarded accordingly, as long as they were guaranteed that their efforts wouldn't go to waste and that offenders would be banned for good.

Keeping the Offenders Out, For Good

If I were developing an online multiplayer game, I probably wouldn't bother too much with crafting complicated anti-cheats. But the caveat would be: if you get caught, you're out of my game, for good.

How would you do that? How would you uniquely identify each player regardless of which game account they might be using? Not with an IP address, not with an HWID, but with an actual proof of identity and/or physical address.

For example, such a verification system could require either one or a combination of the following:

  • providing a scan of an official document from the government (national ID, passport, driving license, ...).
  • entering a verification code that was sent by the game developer through post mail to a given physical address.
  • taking a selfie next to a verification code that is displayed either on a phone or printed out on a piece of paper.

Of course, you shouldn't have to provide proof of identity just to play a game online. But I bet that the part of the community that truly cares about playing in a good, safe environment (hopefully the majority of us) would understand the benefits of providing such proof and would be fine with it. It could even be handled through an external, independent platform made available to all games, so you'd only have to go through the verification process once.

Such a system would be optional.

You could decide not to opt in and keep access to the ‘normal’ matchmaking queues, or you could opt in and also gain access to the ‘verified’ queues.

When a verified offender gets caught, not only is their game account banned but their actual physical identity is blacklisted as well. It wouldn't be possible for that person to join a ‘verified’ queue ever again, ensuring that these queues become safer and safer over time.
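
One way to blacklist an identity without storing it in the clear could be to keep only a keyed hash of the document number. This is just a sketch of the idea under that assumption, not a full privacy design:

```python
import hashlib
import hmac

# Assumed server-side secret; in practice this would live in a vault.
_SERVER_SECRET = b"replace-with-a-real-secret"

_blacklist = set()


def identity_fingerprint(document_number):
    """Derive a stable, non-reversible fingerprint of an ID document."""
    return hmac.new(_SERVER_SECRET, document_number.encode("utf-8"),
                    hashlib.sha256).hexdigest()


def ban_for_good(document_number):
    """Blacklist a verified offender's physical identity."""
    _blacklist.add(identity_fingerprint(document_number))


def may_join_verified_queue(document_number):
    """Check whether this identity is allowed into 'verified' queues."""
    return identity_fingerprint(document_number) not in _blacklist
```

Because the fingerprint is derived from the identity document rather than from any game account, creating a fresh account changes nothing: the same person always maps to the same blacklisted fingerprint.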

How Does That Sound?

For one, I know I'd be happy to be part of a community where we'd fight toxicity and cheating together, hand in hand with game developers, to provide the best possible online multiplayer gaming experience. Hell, the concept could probably be extended to other types of online communities.

Would you be willing to join the fight against cheating and toxicity if such a concept were to be implemented?

Priority Formula Code

Here is the Python snippet used to compute the priority table from above:

```python
#! /usr/bin/env python
# -*- coding: utf-8 -*-

"""Formula to compute a priority value according to the reports received.

The priority number fits in the range [-1.0, 1.0].

Everyone starts at 0, synonym to a neutral priority level.
"""

# Small data set of (positive report count, total report count) pairs.
_DATA_SET = (
    (0, 0),
    (0, 1),
    (0, 2),
    (1, 1),
    (2, 2),
    (1, 25),
    (3, 8),
    (6, 93),
    (7, 7),
    (7, 12),
    (25, 27),
    (47, 52),
    (52, 53),
)

# Magic number: the higher it is, the more weight is given to the ratio
# `positive_count / total_count` as the number of reports increases.
# The value here is illustrative; tune it to taste.
_THRESHOLD = 10


def _compute_priority(positive_count, total_count):
    if total_count == 0:
        return 0.0

    x = float(total_count) / _THRESHOLD
    weight = 1 - 1 / (x * x + 1)
    ratio = (float(positive_count) / total_count) * 2 - 1
    return ratio * weight


def _print_results(results):
    print("| positive reports | total reports | priority |")

    for priority, (positive_count, total_count) in results:
        print("| {:16d} | {:13d} | {:+8.3f} |".format(
            positive_count, total_count, priority))


def main():
    results = sorted((_compute_priority(*x), x) for x in _DATA_SET)
    _print_results(results)


if __name__ == '__main__':
    main()
```