Tracking High Score Momentum To Detect Suspicious Score Increases

Why Track High Score Momentum?

In games and competitive systems with high score tables, detecting suspicious increases in top scores is important for maintaining the integrity of the rankings. Sudden large gains in top scores can indicate cheating or exploitation of bugs rather than legitimate skill improvement. By tracking metrics such as the momentum of top score changes over time, rapid outlier increases can be identified and flagged for further verification.

High score tables serve dual purposes – to give players achievable goals to strive for, improving engagement, and to provide competitive rankings so players can compare their abilities and accomplishments. If players feel the validity of top scores is questionable, it undermines these purposes and reduces trust and enthusiasm for the game’s ranking system.

Implementing smoothed tracking and validation rules for suspicious gains makes rankings more accurate and fair for skilled players. This keeps the competitive ranking ecosystem healthy and meaningful. Players invest substantial time improving at games they care about, so protecting the integrity of achievement systems keeps them motivated and interested.

Setting Baseline Score Metrics

The first step in detecting suspicious score spikes is gathering baseline metrics on the typical momentum and velocity of score changes over time. By analyzing the historical pace at which a game's high scores improve, sudden accelerations can be identified objectively through statistical comparison to the baseline.

Useful metrics to quantify include:

  • The standard deviation and variance of daily high score changes
  • Short-, medium-, and long-term averages of the rate at which top scores improve per time period
  • The absolute and percentage gaps between each position’s high score on the leaderboard
  • How frequently new #1 overall records are set over time

These baseline metrics capture patterns in the momentum of high score progression. Large deviations from these momentum patterns can then be analyzed when they occur to determine if they are suspicious or reasonable.
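As a sketch of how these baselines might be computed, the following assumes daily #1 scores are available as a pandas Series indexed by date; the function and metric names here are illustrative, not a fixed schema:

```python
import pandas as pd

def baseline_metrics(daily_top_scores):
    """Compute baseline momentum metrics from a pandas Series of
    daily #1 scores. (Illustrative sketch; metric names and
    windows are assumptions.)"""
    daily_change = daily_top_scores.diff().dropna()
    return {
        "change_std": daily_change.std(),                 # typical spread of daily gains
        "change_var": daily_change.var(),
        "avg_7d": daily_change.rolling(7).mean().iloc[-1],   # short-term pace
        "avg_30d": daily_change.rolling(30).mean().iloc[-1], # medium-term pace
        "avg_all": daily_change.mean(),                      # long-term pace
    }
```

These values become the reference points that later detection rules compare new score jumps against.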

Implementing Smoothed Score Tracking

With baseline metrics set, the next key technique is implementing smoothed scoring – tracking scores as exponential moving averages rather than just daily snapshots.

With smoothed scoring, a score is tracked as a weighted blend of a player’s previous smoothed score and their newest score. More formally:

smooth_score = a * new_score + (1 - a) * prev_smooth_score

Where a is a weighting factor between 0 and 1.

This smooths out daily fluctuations, allowing more consistent tracking of a player's overall momentum. By comparing today's smoothed score against the previous smoothed score, sudden accelerations in score velocity can be detected reliably even if a player attempts to hide them with intentional dips.
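As a quick worked example, with a = 0.3, a previous smoothed score of 1000, and three new daily scores (the numbers are invented for illustration):

```python
alpha = 0.3
smooth = 1000.0  # previous smoothed score
for new_score in (1020.0, 1050.0, 2000.0):
    # Weighted blend of the new score and the prior smoothed score
    smooth = alpha * new_score + (1 - alpha) * smooth
    print(round(smooth, 1))  # prints 1006.0, then 1019.2, then 1313.4
```

Note that the modest gains move the smoothed score only slightly, while the sudden jump to 2000 still produces a large smoothed gain – exactly the kind of acceleration the detection rules below look for.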

Choosing an optimal smoothing factor

The weighting factor a determines how responsive the smoothed score is to daily changes relative to long-term trends. A lower a discounts daily fluctuations more heavily. Common values range from 0.2 to 0.5 depending on the pace of improvement.

This can be optimized by analyzing the baseline metrics – the higher the typical variance between scores, the lower alpha should be to smooth appropriately. The goal is removing daily noise to track underlying momentum.
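One simple heuristic for this optimization, sketched below, is to try candidate values from most to least responsive and keep the first one whose smoothed changes carry at most a chosen fraction of the raw day-to-day variance. This is an illustrative rule of thumb, not a standard method, and the candidate values and noise ratio are assumptions:

```python
import statistics

def smoothed_changes(scores, alpha):
    """Apply exponential smoothing to a score history and return
    the day-to-day changes of the smoothed series."""
    smooth = scores[0]
    changes = []
    for s in scores[1:]:
        prev = smooth
        smooth = alpha * s + (1 - alpha) * smooth
        changes.append(smooth - prev)
    return changes

def choose_alpha(scores, candidates=(0.5, 0.4, 0.3, 0.2), noise_ratio=0.5):
    """Pick the most responsive alpha whose smoothed changes keep
    at most `noise_ratio` of the raw daily-change variance."""
    raw_var = statistics.variance([b - a for a, b in zip(scores, scores[1:])])
    for alpha in candidates:  # most responsive first
        if statistics.variance(smoothed_changes(scores, alpha)) <= noise_ratio * raw_var:
            return alpha
    return candidates[-1]  # fall back to the heaviest smoothing
```

Games with noisier daily scores will fail the variance check at higher alphas and end up with heavier smoothing, matching the guidance above.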

Detecting Rapid Outlier Increases

Once baseline metrics on score improvement rates have been set and exponential smoothing tracking implemented, the system can begin automatically flagging suspicious score activity.

Statistical Detection Methods

Simple statistical analysis of the size of recent score jumps compared to averages can identify outliers. Methods include:

  • Standard deviation thresholds: Flag smoothed score improvements more than 2-3 standard deviations above the typical daily change
  • Percentile thresholds: Flag smoothed jumps higher than 98th/99th percentile daily increases
  • IQR method: Flag smoothed gains more than 1.5 times the interquartile range above the 75th percentile

Statistical detection is fast, simple, and interpretable. The challenge is manually coding specific rules and thresholds. Machine learning alternatives can potentially address these limitations.
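The three rules above could be sketched in a single check like this; the default thresholds and the nearest-rank percentile/quartile arithmetic are illustrative choices, not fixed values:

```python
import statistics

def is_suspicious(gain, history, sd_mult=3.0, pctl=0.99, iqr_mult=1.5):
    """Flag a smoothed gain that trips any of the three statistical
    rules. `history` is a list of past daily smoothed gains."""
    hist = sorted(history)
    n = len(hist)
    mean = statistics.mean(hist)
    sd = statistics.stdev(hist)
    # Rule 1: standard-deviation threshold
    if gain > mean + sd_mult * sd:
        return True
    # Rule 2: percentile threshold (crude nearest-rank index)
    if gain > hist[min(n - 1, int(pctl * n))]:
        return True
    # Rule 3: IQR rule above the 75th percentile
    q1, q3 = hist[n // 4], hist[(3 * n) // 4]
    if gain > q3 + iqr_mult * (q3 - q1):
        return True
    return False
```

In production these thresholds would be tuned against the baseline metrics gathered earlier rather than hard-coded.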

Machine Learning Classification

Machine learning models can automatically learn subtle patterns predictive of suspicious vs normal high score increases:

  • Collect labeled data from known illegitimate and verified legitimate score jumps
  • Input features: Size of smoothed gain, player experience, etc.
  • Output the probability that an instance is abusive
  • Recurrent neural networks or random forests are useful model choices

Benefits include better detection accuracy and less manual rule tuning. Challenges include black box predictions, annotation costs, and overfitting risks.
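A minimal sketch of this classification pipeline on synthetic labeled data follows; the two features (smoothed gain and days played) and all the numbers are invented for illustration, not a recommended feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical labeled history: each row is [smoothed_gain, days_played];
# label 1 = confirmed abusive jump, 0 = verified legitimate.
rng = np.random.default_rng(0)
legit = np.column_stack([rng.normal(5, 2, 200), rng.integers(30, 400, 200)])
abusive = np.column_stack([rng.normal(60, 10, 40), rng.integers(1, 30, 40)])
X = np.vstack([legit, abusive])
y = np.array([0] * 200 + [1] * 40)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Output a probability of abuse rather than a hard 0/1 call,
# so review priority can scale with confidence.
p = model.predict_proba([[55.0, 10]])[0, 1]
```

A probability output lets the response policy treat a 0.95 flag differently from a borderline 0.55 one, which helps manage the false-positive risks discussed below.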

Responding to Suspicious High Scores

Once a player has been flagged by the detection system for suspicious score increases, there are two key next steps.

Further Analysis Techniques

Before any accusations, more rigorous analysis is valuable to validate an outlier score. Methods include:

  • Manual inspection by expert players to assess plausibility
  • Video review for signs of cheating or bot use
  • Activity pattern analysis to flag volume or timing anomalies
  • Player interviews to detect unlikely skill gains
  • Having the player attempt an easier variant level to further test their scoring ability

Due diligence analysis protects legitimate players and identifies false positives from over-eager statistical detection triggers.

Setting Score Validation Policies

After final determination of whether an outlier high score is abusive, score policy enforcement options include:

  • Reset the score to its prior level if cheating is clear
  • Add warning points toward an account ban for repeated issues
  • Reset the suspect score and issue a formal warning to discourage repeat behavior
  • Take no action when evidence is minimal, but increase monitoring

Policy should balance strictness that discourages manipulation against impacts on player experience, so legitimate players aren't discouraged by false flags.

Example Code for Smoothed High Score Tracking

Here is Python sample code implementing smoothed score tracking with a classification model for detecting outlier gains as suspicious:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Placeholder storage helpers assumed elsewhere: load_smoothed_data()
# returns a DataFrame indexed by username with 'smooth_score' and
# 'prev_smooth_score' columns; save_smooth_score() persists updates.

# Smoothed scoring class
class SmoothScorer:

  def __init__(self, alpha=0.3):
    self.alpha = alpha

  def update(self, username, new_score):
    # Fetch previous smoothed score
    df = load_smoothed_data()
    prev_smooth_score = df.loc[username]['smooth_score']

    # Compute new smoothed score
    smooth_score = self.alpha * new_score + (1 - self.alpha) * prev_smooth_score

    # Save to data store
    save_smooth_score(username, smooth_score)

    return smooth_score

# Get size of today's smoothed score increase
def get_smooth_gain(username):
  df = load_smoothed_data()
  prev_smooth = df.loc[username]['prev_smooth_score']
  today_smooth = df.loc[username]['smooth_score']
  return today_smooth - prev_smooth

# Random forest model for flagging outliers
# (X_train and y_train are labeled legitimate/abusive cases
# prepared elsewhere)
model = RandomForestClassifier()
model.fit(X_train, y_train)

# Make predictions for one user
features = preprocess_user_features(username)  # Extract user activity patterns
gain = get_smooth_gain(username)
prediction = model.predict([[gain] + list(features)])

if prediction[0] == 1:
  flag_for_review(username, gain)  # Flag as likely abusive for review
