Euro 2020 – The Final Modelling Results


With a disappointing end to the tournament (for us England fans anyway!), here are the final conclusions on how the model performed.

Unfortunately, due to a mixture of work commitments and games coming thick and fast, I was unable to keep this website up to date with my predictions for each fixture, but I did carry on making them offline.

(As a reminder: the model description is here and the other models we compared against are here.)

The Round of 16 gave us some truly surprising results, with every single model (including the bookies!) doing far worse than our Lazy prediction of 33% each for Home, Draw and Away. This turned out to be a sign of things to come: the semi-finals and the final were also worse than Lazy for most of the tested models. It made for an exciting tournament but a disaster for predictions!
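As a quick sanity check on why Lazy is such a stubborn baseline, here is a small sketch of the multiclass Brier score as I use it in this post (the function name is my own). A uniform 1/3 prediction scores the same 0.667 no matter what actually happens:

```python
def brier_score(probs, outcome_index):
    """Sum of squared differences between the predicted probabilities
    and the one-hot encoding of the actual outcome."""
    return sum((p - (1.0 if i == outcome_index else 0.0)) ** 2
               for i, p in enumerate(probs))

# Lazy always predicts 1/3 for each of Home, Draw, Away.
lazy = [1 / 3, 1 / 3, 1 / 3]

# Whichever outcome occurs, Lazy's penalty is identical:
scores = [round(brier_score(lazy, outcome), 3) for outcome in range(3)]
print(scores)  # [0.667, 0.667, 0.667]
```

This is exactly the 0.667 that Lazy posts in the final table below, which is why beating it is the minimum bar for any model.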

The final performance analysis of the models was:

Model      Brier Score
HAL 9000   0.586
Bookies    0.588
Average    0.609
BDC        0.655
Lazy       0.667
Final model results at the end of the tournament

The immediate thing to note is that HAL won (just)! Clearly that is a great result. It is interesting, though, that SSC (the small "smart" crowd) ended up doing much worse than Lazy, which is terrible! I would have expected it to do much better.

The poor performance likely comes from the fact that there were a number of "perfect" wrong answers. By that I mean that the model (my friends) predicted something would definitely happen (e.g. England to beat Denmark after 90 minutes). That gave the outcome a probability of 100%, and when it didn't happen, the maximum Brier penalty of 2.0 was assigned.
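To see where the 2.0 comes from, here is the same Brier score sketch applied to a maximally confident wrong prediction (the outcome ordering Home/Draw/Away is my assumption):

```python
def brier_score(probs, outcome_index):
    """Sum of squared differences between the predicted probabilities
    and the one-hot encoding of the actual outcome."""
    return sum((p - (1.0 if i == outcome_index else 0.0)) ** 2
               for i, p in enumerate(probs))

# A "perfect" wrong answer: 100% on England (Home) to win in 90 minutes.
ssc_prediction = [1.0, 0.0, 0.0]  # Home, Draw, Away

# The match was a draw after 90 minutes (outcome index 1):
print(brier_score(ssc_prediction, 1))  # 2.0
```

The squared error is 1.0 for the outcome that was wrongly given certainty, plus 1.0 for the outcome that happened but was given zero probability: the worst score possible.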

Nothing in football is ever going to be 100% certain (the most one-sided result the bookies predicted was a Germany win against Hungary in the group stage [80.7%] – it ended up as a draw!). If we set a rule that SSC can never predict more than 80% for any outcome, its Brier Score drops to 0.627 and it ends up beating BDC and Lazy – more like I would have expected! Capping SSC's maximum prediction at 60% drops the Brier Score further to 0.577 – making it the winning model! However, that is likely only because there was a large number of surprising results in this tournament, so the 60% cap is really a post-tournament model and not a fair assessment. Setting the maximum to 80% (matching the bookies' maximum value) seems like a fair compromise, though!
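For the curious, the capping rule can be sketched like this. This is my own interpretation of it (redistributing the excess probability over the other outcomes so everything still sums to 1), not necessarily the exact adjustment used for the table below:

```python
def cap_prediction(probs, cap=0.8):
    """Cap any probability at `cap` and redistribute the excess
    across the remaining outcomes, proportionally where possible."""
    capped = list(probs)
    excess = 0.0
    for i, p in enumerate(capped):
        if p > cap:
            excess += p - cap
            capped[i] = cap
    rest = sum(p for p in capped if p < cap)
    if rest > 0:
        # Share the excess in proportion to the other probabilities.
        capped = [p + excess * p / rest if p < cap else p for p in capped]
    elif excess > 0:
        # e.g. [1, 0, 0]: split the excess evenly over the zero outcomes.
        n = sum(1 for p in capped if p < cap)
        capped = [p + excess / n if p < cap else p for p in capped]
    return capped

# The "perfect" 100% prediction becomes a capped 80% prediction:
print([round(p, 3) for p in cap_prediction([1.0, 0.0, 0.0])])  # [0.8, 0.1, 0.1]
```

With the cap applied, the worst-case penalty for a wrong certainty falls well below 2.0, which is what rescues SSC's overall score.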

Model      Brier Score
HAL 9000   0.586
Bookies    0.588
Average    0.609
SSC        0.627
BDC        0.655
Lazy       0.667
Updated final model results using a maximum 80% prediction for SSC

Anyway, a great result for HAL, and I look forward to getting the model back out for the next unprecedented international tournament: a December World Cup!
