Elo
One important factor I will look at is Elo. Named after Arpad Elo and originally designed for chess, the Elo system ranks relative team strengths in zero-sum games (i.e. games you either win or lose). Elo assigns every team a rating based on past performance, adjusting those ratings up or down after each game depending on the opponent's strength and the result. Win against a strong team and your rating goes up; lose to a bottom-feeder and it goes down. The best part is that the Elo system is self-correcting: teams rise and fall as the season progresses, and the ratings can be used to predict the likely winner of any future matchup.
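To make the update rule concrete, here is a minimal sketch of the standard Elo formulas. This is my own illustration rather than the exact model described below; the K-factor of 32 and the 400-point scale are the conventional chess values.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that team A beats team B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return the post-match ratings for teams A and B.

    The winner gains exactly what the loser drops, scaled by how
    surprising the result was (an upset moves the ratings more).
    """
    exp_a = expected_score(rating_a, rating_b)
    actual_a = 1.0 if a_won else 0.0
    delta = k * (actual_a - exp_a)
    return rating_a + delta, rating_b - delta
```

For two evenly rated teams, `expected_score` is 0.5, so a win shifts each rating by half the K-factor.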
The question is: how well does it work for footy? To test this, I ran a basic Elo algorithm over every AFL result since 2010. After each match, the winning team's rating increased and the loser's dropped, with the size of the adjustment based on the margin and the opponent's current rating. At the start of each season I applied a soft reset, regressing every team's rating part-way towards the league average (but not all the way). For a detailed breakdown of the model, see the GitHub repo.
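The two tweaks described above can be sketched as follows. The log-margin multiplier and the 0.5 season carry-over factor are illustrative assumptions on my part, not the repo's exact parameters.

```python
import math

LEAGUE_MEAN = 1500.0  # assumed baseline rating

def margin_multiplier(margin: int) -> float:
    """Scale the update by the (logged) winning margin in points."""
    return math.log(abs(margin) + 1)

def update_with_margin(winner: float, loser: float, margin: int, k: float = 20.0):
    """Margin-adjusted Elo update: bigger wins move the ratings more."""
    exp_w = 1.0 / (1.0 + 10 ** ((loser - winner) / 400))
    delta = k * margin_multiplier(margin) * (1.0 - exp_w)
    return winner + delta, loser - delta

def soft_reset(ratings: dict, carry: float = 0.5) -> dict:
    """Between seasons, regress each team part-way toward the mean."""
    return {team: LEAGUE_MEAN + carry * (r - LEAGUE_MEAN)
            for team, r in ratings.items()}
```

The soft reset keeps some signal from the previous season while acknowledging list turnover; with `carry=0.5`, a team that finished 100 points above average starts the next season 50 points above.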
So how did Elo perform?
| Model | Accuracy (%) |
| --- | --- |
| Home team only | 56.6 |
| Older team only | 63.0 |
| Player-based sum (Model v2a) | 63.7 |
| Player-based sum (Model v2b) | 65.8 |
| Elo (basic, margin-adjusted) | 67.4 |
Elo tipped the winner in 67.4% of games since 2011 (I used 2010 to seed the ratings for 2011 onwards). So it performs a little better than the bottom-up model, though not significantly.
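For completeness, the accuracy figure above is just the fraction of matches where the higher-rated team won. A small sketch of that scoring, using made-up team names and results:

```python
def tip_accuracy(matches, ratings):
    """Fraction of correct tips when always picking the higher-rated team.

    matches: list of (home, away, home_won) tuples.
    """
    correct = 0
    for home, away, home_won in matches:
        tip_home = ratings[home] >= ratings[away]
        correct += tip_home == home_won
    return correct / len(matches)

# Illustrative data only.
ratings = {"Cats": 1580, "Crows": 1470, "Swans": 1520}
matches = [("Cats", "Crows", True),
           ("Swans", "Cats", False),
           ("Crows", "Swans", True)]
print(tip_accuracy(matches, ratings))  # 2 of 3 tips correct
```

In the real evaluation the ratings would of course be the pre-match Elo values, updated game by game as the season runs.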
I am now almost at the stage where I can start putting all the pieces together into one (hopefully) super predictive model. Before that, I will do some work to see if there is anything else worth adding.