
Aggregate Performance Metric, Pt. II - Breakdown

Posted by: source (Saturday, November 7, 2020)

For the previous article, see Aggregate Performance Metric, Pt. I - Overview.


Solution

Aggregate Performance Metric is a system that calculates a player’s individual impact on the game from their performance in a number of weighted statistical categories, then factors in team success relative to the difference between the sums of each team’s rank points in order to account for quality of competition. A player’s rank before a match serves as the expectation for their performance: by default, a player is expected to retain the same rank after the match. The difference between a player’s pre-match (antecedent) and post-match (consequent) rank is a portion of the difference between how the player was expected to play and how they actually played.

Rank displacement is calculated using the following formulas:

ε’ = ε + δλ
λ = P’ + T’ - ε
P’ = α * Σ( εM ) * P
T’ = (( 1 - μ( T )) * β * Σ( εM ) / ( ⌈ T ⌉ * m + ⌊ 1 - T ⌋ * n )) + ( ⌊ T ⌋ * μ( T ) * β * Σ( εM ) / n )

where...

ε is the antecedent rank;
ε’ is the consequent rank;
δ is the rank displacement scale;
λ is the rank displacement function;
P’ is the player’s impact points;
P is the impact value for the player;
T is the success value for the player’s team;
μ( T ) is the expected success value for the player’s team;
T’ is the player’s team’s success points;
n is the number of players in the match;
m is the number of players on the team;
Σ( εM ) is the total pool of points to be distributed (match rank points);
α is the ratio of a player’s rank points contributed to impact points = 1 - 2 / n ;
β is the ratio of a player’s rank points contributed to success points = 2 / n ;
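
The full update can be sketched in Python. This is a minimal sketch, not the article's implementation; the variable names and the use of math.ceil/math.floor for the ⌈⌉/⌊⌋ terms are my own.

```python
import math

def rank_update(eps, delta, P, T, mu_T, n, m, match_points):
    """Return the consequent rank ε' = ε + δλ for one player.

    eps          -- antecedent rank (ε)
    delta        -- rank displacement scale (δ)
    P            -- the player's impact value
    T            -- team success value: 1 win, 0.5 tie, 0 loss
    mu_T         -- expected success value for the player's team, μ(T)
    n, m         -- players in the match / players on the team
    match_points -- total pool of points to distribute, Σ(εM)
    """
    alpha = 1 - 2 / n  # ratio of rank points contributed to impact points
    beta = 2 / n       # ratio of rank points contributed to success points

    # P' = α Σ(εM) P
    P_prime = alpha * match_points * P

    # T' = ((1 - μ(T)) β Σ(εM) / (⌈T⌉m + ⌊1 - T⌋n)) + (⌊T⌋ μ(T) β Σ(εM) / n)
    T_prime = ((1 - mu_T) * beta * match_points
               / (math.ceil(T) * m + math.floor(1 - T) * n)
               + math.floor(T) * mu_T * beta * match_points / n)

    lam = P_prime + T_prime - eps  # λ = P' + T' - ε
    return eps + delta * lam
```

For an evenly matched 2 v 2 (every player ranked 25, Σ( εM ) = 100, δ = 1), a winner who performs exactly at expectation (P = 0.25) moves from 25 to 31.25, while a loser at expectation drops to 18.75, so the pool is conserved.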


Scale

The rank displacement scale determines the percentage of points actually distributed after a match. The exact value is to be determined and tuned during implementation.


Player Impact

Inherently, a player’s expected consequent rank is equal to their antecedent rank since the value of their points is relative to match rank. Their expected impact value is their current rank divided by the match rank. Their team’s expected success value is the sum of their team members' ranks divided by the match rank.

Impact points are distributed by multiplying the player’s impact score by the total amount of match rank points to be awarded on the basis of that factor. The interesting part of impact point distribution is calculating a player’s impact score, the accuracy of which comes down to implementation. A number of statistical categories, which may be either single-factor (e.g. kills) or multi-factor (e.g. kills-per-minute), receive category weights. Each player earns a portion of the weight in each category based on the amount they contributed to the total in that category. Their impact score is then the total weight they earned across all categories divided by the total weight distributed.

The more categories that are accounted for, the more accurate this process can be. To prevent player impact scores from skewing towards specific classes, there should be a wide range of categories, with the total weight distributed evenly amongst them, such that each class has roughly equal opportunity to earn the same weight. Ideally, statistics relating to objectives (dynamite planted, objectives transmitted, etc.) should be factored in. Determining category weights becomes more difficult with each added category, so there needs to be a balance between accuracy and complexity.

While player impact can be applied to OSP, the design heavily favours the development of Wolf PRO because of its potential for a well-rounded set of statistical categories. To account for time played, individual statistics need to be normalized by multiplying each player’s contributions by the match time and dividing by the player’s time played before impact scores can be calculated.
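
As a sketch of this weighting scheme in Python (the category names, weights, and dict-based interface here are illustrative assumptions, not from the article):

```python
def impact_scores(stats, weights, match_time, play_time):
    """Distribute category weights to players in proportion to their share
    of each category's total, after normalizing for time played.

    stats      -- {player: {category: value}}
    weights    -- {category: weight}
    match_time -- length of the match
    play_time  -- {player: time played}
    """
    # Normalize: scale each contribution by match_time / player_time
    norm = {p: {c: v * match_time / play_time[p] for c, v in cats.items()}
            for p, cats in stats.items()}

    earned = {p: 0.0 for p in stats}
    distributed = 0.0
    for cat, w in weights.items():
        cat_total = sum(norm[p].get(cat, 0.0) for p in norm)
        if cat_total == 0:
            continue  # nobody contributed; this category's weight is withheld
        distributed += w
        for p in norm:
            earned[p] += w * norm[p].get(cat, 0.0) / cat_total

    # Impact score = weight earned / total weight actually distributed
    return {p: e / distributed for p, e in earned.items()}
```

Because scores are shares of the distributed weight, they always sum to 1 across the players in the match.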


Team Success

A team’s success score T is determined by:

T = { Win = 1, Tie = 0.5, Loss = 0 }

The table below shows the per-player team point distribution based on the outcome of the match, where YOURS is the share each player receives from their own team’s contribution to the success pool and THEIRS is the share received from the opposing team’s contribution. The rows follow from evaluating T’ at T = 1, 0.5, and 0:

OUTCOME | YOURS                    | THEIRS
Win     | μ( T ) * β * Σ( εM ) / n | ( 1 - μ( T )) * β * Σ( εM ) / m
Tie     | 0                        | ( 1 - μ( T )) * β * Σ( εM ) / m
Loss    | 0                        | ( 1 - μ( T )) * β * Σ( εM ) / n

The formula for distributing success points might look complicated, but the logic is simple:

> The winning team gains the other team’s contribution to the team pool and splits it amongst themselves; their contribution is distributed evenly amongst all players.
> The losing team loses their own contribution, but gains an even portion of the winning team’s contribution.
> In the event of a tie, each team gains only the other’s contribution.
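
The three cases can be checked by evaluating T’ directly. A minimal Python sketch (the 3 v 3 numbers are illustrative):

```python
import math

def success_points(T, mu_T, n, m, match_points):
    """Per-player success points T' for outcome T (1 win, 0.5 tie, 0 loss)."""
    beta = 2 / n  # ratio of rank points contributed to success points
    return ((1 - mu_T) * beta * match_points
            / (math.ceil(T) * m + math.floor(1 - T) * n)
            + math.floor(T) * mu_T * beta * match_points / n)

# Evenly matched 3v3: n = 6, m = 3, μ(T) = 0.5, Σ(εM) = 120.
# Each team's contribution to the success pool is 0.5 * (2/6) * 120 = 20.
win  = success_points(1,   0.5, 6, 3, 120)  # other pool / m + own pool / n
tie  = success_points(0.5, 0.5, 6, 3, 120)  # other pool / m
loss = success_points(0,   0.5, 6, 3, 120)  # winner's pool / n
```

Note that 3 * win + 3 * loss equals 40, the full success pool β * Σ( εM ), so points are conserved across the two teams.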


Quality of Competition

An important factor implicit in the previous calculations is quality of competition. The expectations placed on each individual grow as their team rank increases. The wider the gap between team ranks, the more lopsided the contributions to the team success pool become. In essence, a favoured team has less to gain and more to lose. When the gap is wide enough, it’s possible to lose points even in a win.
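
To see this numerically, consider a lopsided 2 v 2 under the formulas above (all numbers here, including δ = 1, are illustrative; this is a sketch, not the article's implementation):

```python
import math

def consequent_rank(eps, delta, P, T, mu_T, n, m, pool):
    """ε' = ε + δ(P' + T' - ε), per the formulas above."""
    alpha, beta = 1 - 2 / n, 2 / n
    P_prime = alpha * pool * P
    T_prime = ((1 - mu_T) * beta * pool
               / (math.ceil(T) * m + math.floor(1 - T) * n)
               + math.floor(T) * mu_T * beta * pool / n)
    return eps + delta * (P_prime + T_prime - eps)

# 2v2: team A players ranked 40 each, team B players ranked 10 each.
# Σ(εM) = 100, and μ(T) for team A is 80 / 100 = 0.8.
# A team-A player performing exactly at expectation (P = 40 / 100)
# ends up below their antecedent rank despite the win:
new_rank = consequent_rank(40, 1.0, 0.4, 1, 0.8, 4, 2, 100)  # ≈ 35
```

Here P’ = 20 and T’ = 15, so λ = -5: even at expected performance, the heavily favoured player pays for the weak opposition.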


Strengths & Weaknesses

Along with solving the two introductory problems, this system has a few significant strengths:

There is little to no incentive for excessively strong teams to face excessively weak ones, and as a result there is incentive for teams to be balanced.

The value of a win is determined by the number of players in the match as well as their combined strength.

Individual impact is not skewed towards high-damage classes (i.e. lieutenants and panzers).

Both the player impact and the team success portions of APM could be factored out to be their own complete individual ranking systems.

Setting the scale value higher initially and then lowering it over time could help normalize the rankings without the need for seeding.

The main weakness is that the system can’t be applied to aggregate data; it only works when applied to individual rounds. The same is true for any system based on the relative strengths of teams.


More testing should be done to find the most desirable value for the displacement scale, as well as the default rank for a previously unranked player. Additionally, the value of β could be increased to enhance the importance of team success; I chose 2 / n because it consistently produced reasonable results and because it scales nicely such that when n = 2, β = 1, meaning that in a 1 v 1 game, winning is all that matters. This is reasonable because this system is designed for use with objective-based games, although it would be just as easy to modify it to work with strictly score-based games.




For examples of it working in 6v6 matches, see Aggregate Performance Metric, Pt. III - Examples.
