Earlier this week, we discussed data at a meta level (metadata?) and talked about the impact that the introduction of player-tracking data to the space could have on the preparation methods of college football programs. Today, let’s talk about some of the work that’s already been done in the space that programs can use for schematic optimization.
To start our discussion, I want to go back to a thought I ended Wednesday’s piece with:
Every percentage point, every fraction, every significant digit: all of these matter when 1) the talent margins are tight in a competitive league and 2) a ball bouncing the wrong way at the wrong time results in someone (or multiple someones) losing their job. Every scrap of data is vitally important to optimizing for team success.
Given this philosophy as a guidepost, it should come as no surprise that the main benefactor in the player-tracking data space is the National Football League. The Shield has been at the forefront of making that data public (albeit at a drip) via its Big Data Bowl, an annual data analysis competition it has organized since 2019. So far, each iteration has focused on a different phase of the game:
- 2019: the inaugural competition centered around passing concepts and route identification.
- 2020: the Shield asked contestants to predict the outcome of running plays based on player-tracking data at the time of the handoff.
- 2021: contestants worked with tracking data to find “unique and impactful approaches to measure defensive performance.”
- 2022: the most recent competition keyed on special teams performance and provided contestants with Pro Football Focus (PFF) metrics along with the usual tracking data.
Let’s take a look at some projects from each year of the competition that I found extremely interesting, to get a glimpse of the sort of insights coaches can now work with. The winning projects for each year are also linked below in case you want to check those out.
2019: Passing Concepts
Winners:
- College Entry: Matthew Reyers, Dani Chu, Lucas Wu, and James Thomson (Simon Fraser University) – Routes to Success
- Open Entry: Nathan Sterken – RouteNet: a convolutional neural network for classifying routes
Spotlights:
- Kyle Burris (Duke - College Entry finalist):
Burris’ work builds on the idea of “pitch control” from soccer, but accounts for the more instantaneous changes in direction in the American discipline. Burris summarizes the value of his work very succinctly: “The fundamental idea behind our approach is that a space is owned by the player who can beat every other player to that space. This has broad applications to the evaluation of quarterbacks and receivers, since this model can identify open receivers well in advance of other models that have been previously proposed.” We’ve seen variations of this concept before: “speed in space” has been a common refrain amongst coaches running spread offenses at tempo, but Burris’ tools give users the ability to see where open space is in practice. Take a look at this chart from the paper, built using player-tracking data:
[Chart from Burris’ paper: field control computed from player-tracking data, showing which player owns each area of the field on a given frame]
You’re able to see (very clearly) the open spaces on the field where quarterbacks can lead throws to hit targets in stride for big gains in each frame, and if you let the player-tracking data roll like tape, you’d be able to see those spaces develop in real time based on the tactics of both units on the field.
How does this field-control model improve pre-game prep? Well, coaches can now better evaluate quarterback decision-making based on the space available to receiving matchups across the field. The schematic reverse of this is also true: defensive coordinators can now analyze the space controlled by their defensive backs to evaluate and optimize coverages against specific offensive formations or route combinations (Burris, 2019).
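To make the “a space is owned by the player who can beat every other player to that space” idea concrete, here’s a minimal sketch of a naive field-control grid in Python. To be clear, this is not Burris’ actual model (his accounts for acceleration and those instantaneous direction changes); the players, positions, and speeds below are made up for illustration.

```python
import numpy as np

# Made-up snapshot of four players; real tracking data supplies x/y position
# and speed for all 22 players on every frame.
players = {
    "WR1": {"pos": np.array([30.0, 25.0]), "speed": 9.0, "team": "offense"},
    "TE1": {"pos": np.array([22.0, 18.0]), "speed": 7.5, "team": "offense"},
    "CB1": {"pos": np.array([33.0, 27.0]), "speed": 8.8, "team": "defense"},
    "S1":  {"pos": np.array([38.0, 15.0]), "speed": 8.5, "team": "defense"},
}

def arrival_time(player, point):
    # Naive arrival time: straight-line distance divided by top speed.
    return np.linalg.norm(point - player["pos"]) / player["speed"]

def controlling_team(point):
    # The spot is "owned" by whoever can beat everyone else to it.
    first = min(players.values(), key=lambda p: arrival_time(p, point))
    return first["team"]

# Color a coarse grid over a patch of the field by controlling team.
control = {
    (x, y): controlling_team(np.array([float(x), float(y)]))
    for x in range(15, 45, 2)
    for y in range(5, 35, 2)
}
print(control[(31, 25)])  # who owns the space just ahead of WR1?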
- Sameer Deshpande, Katherine Evans (Open Entry finalist):
Deshpande and Evans take a two-pronged approach to solving for counterfactuals in the passing game — that is, what might have happened if the quarterback had thrown to a different receiver on a play? We touched on this idea briefly in Wednesday’s piece: how do we evaluate conditions that we didn’t directly observe? Well, Deshpande and Evans directly account for the uncertainty inherent to those unobserved conditions by randomly sampling the values of a specific unknown condition that do appear in the full player-tracking dataset and using those “simulated” conditions to model the hypothetical outcomes. They’re able to do this at every frame of tracking data on a play, too, which lets them trace how these hypothetical outcomes evolve over the course of said play:
[Chart from Deshpande and Evans’ paper: hypothetical outcomes modeled frame-by-frame over the course of a play]
Much like the field-control model, evaluating hypothetical completions helps coaches better evaluate quarterback decision-making, providing important context on when and where optimal throws could have been made on a specific play. It’s possible that this data could be aggregated at scale, allowing coaches to analyze optimal throws and throw timing against different coverage patterns, pressure packages, and route trees.
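As a rough illustration of the sampling idea (and only that; Deshpande and Evans’ actual model is considerably richer), here’s how one might average a hypothetical completion-probability model over randomly sampled values of an unobserved condition. The “throw time” distribution and the logistic outcome model below are stand-ins, not the paper’s.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for an unobserved condition (say, time from snap to throw) whose
# values DO appear elsewhere in the full tracking dataset.
observed_throw_times = rng.normal(2.6, 0.4, size=500)

def catch_prob(separation, throw_time):
    # Stand-in outcome model; any fitted completion-probability model
    # could be plugged in here.
    return 1.0 / (1.0 + np.exp(-(1.2 * separation - 0.8 * throw_time)))

def counterfactual_completion(separation, n_sims=2000):
    # Sample the unknown condition from its observed values and average the
    # modeled outcome: the "simulated conditions" step described above.
    sampled = rng.choice(observed_throw_times, size=n_sims)
    return catch_prob(separation, sampled).mean()

# Hypothetical completion probability had the QB targeted a receiver
# with 3 yards of separation on this frame.
print(round(counterfactual_completion(separation=3.0), 3))
```

Run at every frame of a play, a loop like this is what produces the evolving hypothetical-outcome curves in the chart above.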
2020: Rushing Performance
Winners: Philipp Singer and Dmitry Gordeev (Open Entry) — “The Zoo” submission
To break Very Serious Data Science Lit Reviewer™️ character for a second: this project supercharged the Running Backs Don’t Matter analytics Twitter meme. The meme is (intentionally) very light on actual substantive detail, but this project (along with previous work by PFF) provides more evidence for the concept. Let me weave you a short tale of intrigue to explain.
Singer and Gordeev make up “The Zoo,” a veritable dynasty in Kaggle data science competitions. Both come from strong data science backgrounds, but were based in Austria and educated in Europe, meaning they “had very little background knowledge on football prior to [2020’s Big Data Bowl],” as they explained to The Athletic and on the “Chilling with Charlie” data science podcast. Unbound by domain knowledge, the duo worked through the Big Data Bowl’s central problem of predicting rushing play yardage by simplifying each play into five sets of vectors (a rough code sketch follows the list):
- the velocity of each defender on the field
- the position of each defender relative to the rusher
- the velocity of each defender relative to the rusher’s velocity
- the position of each blocker relative to their nearest defender
- the velocity of each blocker relative to the nearest defender’s velocity
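Here’s a hedged sketch of what that feature construction might look like in code, following the list above rather than The Zoo’s exact implementation (their winning model fed relative vectors like these into a convolutional network); the array shapes and the nearest-defender pairing are assumptions on my part.

```python
import numpy as np

def zoo_style_features(rusher_pos, rusher_vel, def_pos, def_vel, blk_pos, blk_vel):
    # All arrays are (n, 2): x/y values per player at the moment of handoff.
    # Sets 1-3: one row per defender, relative to the rusher.
    defender_rows = np.hstack([
        def_vel,                 # 1) defender velocity
        def_pos - rusher_pos,    # 2) defender position relative to rusher
        def_vel - rusher_vel,    # 3) defender velocity relative to rusher
    ])
    # Sets 4-5: one row per blocker, relative to their nearest defender.
    nearest = np.argmin(
        np.linalg.norm(blk_pos[:, None, :] - def_pos[None, :, :], axis=2), axis=1
    )
    blocker_rows = np.hstack([
        blk_pos - def_pos[nearest],  # 4) blocker position vs. nearest defender
        blk_vel - def_vel[nearest],  # 5) blocker velocity vs. nearest defender
    ])
    return defender_rows, blocker_rows

# Toy usage with random stand-in tracking data for one play.
rng = np.random.default_rng(0)
d_rows, b_rows = zoo_style_features(
    rusher_pos=rng.uniform(0, 53, 2), rusher_vel=rng.normal(0, 3, 2),
    def_pos=rng.uniform(0, 53, (11, 2)), def_vel=rng.normal(0, 3, (11, 2)),
    blk_pos=rng.uniform(0, 53, (10, 2)), blk_vel=rng.normal(0, 3, (10, 2)),
)
print(d_rows.shape, b_rows.shape)  # (11, 6) (10, 4)
```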
But you’ll notice that these are just numerical representations of the players’ positions and speeds — there’s no notion of the specific players that are on-field for each play. Singer and Gordeev note this in their writeup: “[n]othing else [on the play] is really important, not even wind direction or birthday of a player”, and Gordeev later expanded on this idea on Twitter:
My conclusion here is that judging at the moment of handoff, it is not statistically important who is the ball carrier. It is important what is the situation on the field, driven by starting formations and movements of the players prior to handoff, including rusher’s movement.
— Dmitry Gordeev (@dott1718) January 24, 2020
Once again, for emphasis: “[A]t the moment of handoff, it is not statistically important who is the ball carrier” — that is, the convolutional neural network that he and Singer designed did not find meaningful predictive signal in the individual skill or profile of a specific running back. To the model, the situation that the running back is in, rather than their individual skill, is paramount. If you turn your head and squint a little bit, you can connect the dots from that broad statistical notion back to the refrain “Running Backs Don’t Matter.” It’s still a meme, but it’s now (well, now even more so) a data-driven meme.
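One way to sanity-check a claim like that yourself: fit a model on situational features plus a ball-carrier identifier, then measure how much shuffling the identifier hurts predictions. The sketch below uses synthetic data where yardage depends only on the “geometry” features by construction, so it illustrates the test rather than re-proving The Zoo’s result; the feature names and model choice are mine, not theirs.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)
n = 2000
geometry = rng.normal(0.0, 1.0, size=(n, 4))   # stand-in spatial features
carrier_id = rng.integers(0, 32, size=n)       # stand-in ball-carrier ID
X = np.column_stack([geometry, carrier_id])
# Synthetic yardage: depends only on the geometry, by construction.
y = geometry @ np.array([1.5, -0.8, 0.6, 0.3]) + rng.normal(0.0, 1.0, n)

model = GradientBoostingRegressor(random_state=7).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=7)
# The last feature (carrier ID) should show near-zero importance.
print(result.importances_mean.round(3))
```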
2021: Defensive Performance
Winners: Wei Peng, Marc Richards, Sam Walczak, and Jack Werner — A Defensive Player Coverage Evaluation Framework
Spotlights:
- Asmae Toumi, Marschall Furman, Sydney Robinson, Tony ElHabr (Open Entry finalists): Weighted Assessment of Defender Effectiveness
- Jill Reiner (Denison University - College Entry finalist): Evaluating and Clustering Coverage Skill
Reiner establishes the need for better defensive evaluation metrics, touching on something we covered in Wednesday’s piece:
Sure, there’s completion percentage allowed, or receiving yards allowed, and passer rating allowed. What all of these stats have in common is that they are all dependent on the outcome of a passing play, completion or incompletion. There’s also interceptions or pass breakups, but for the most part, those are pretty rare. For all of these stats, everything that occurs from the moment the ball is snapped to the eventual completion or incompletion isn’t taken into account and the factors that actually cause these outcomes to occur are diminished.
Like we discussed Wednesday, outcome-based evaluation ignores the performance and effectiveness of players that didn’t touch or weren’t near the ball, and it makes it very, very difficult to key in on how good these players actually are. This struggle becomes consequential in a salary-capped league like the NFL, where roster spend has to be efficient: how are you supposed to pay players for proven performance if you can’t properly evaluate those performances? And for defenders, especially: how do you evaluate a defender’s skill and ability to take proactive approaches to deny space and prevent successful passing plays if all of your evaluation metrics are fundamentally reactive?
Reiner and Toumi et al. both found that tracking data and the application of modern machine learning techniques allow for the development of an array of defensive metrics that isolate defender skill. Both groups developed metrics that account for defender performance during different stages of play development.
Toumi et al.’s WADE (Weighted Assessment of Defender Effectiveness) concept, named after longtime NFL defensive coordinator Wade Phillips, combined two underlying models — one for target probability and one for catch probability — to evaluate defenders’ coverage ability in space off the snap and their “contest skill” against a particular receiver as the ball nears its intended target. The group accounted for multiple defenders in the same coverage area by allocating target and catch shares for each receiver to each defender on the field in a standardized process.
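As a rough sketch of that structure (not the group’s actual fitted models or allocation scheme), the key idea is multiplying the two probabilities and splitting the credit, or blame, across nearby defenders. The inverse-distance weighting below is my assumption for the share-allocation step.

```python
import numpy as np

def defender_shares(receiver_pos, defender_pos):
    # Assumed allocation rule: split credit by inverse distance to receiver.
    dists = np.linalg.norm(defender_pos - receiver_pos, axis=1)
    weights = 1.0 / np.maximum(dists, 0.5)  # floor avoids divide-by-zero
    return weights / weights.sum()

def expected_completion_blame(target_prob, catch_prob, receiver_pos, defender_pos):
    # Each defender's share of (target prob x catch prob) for this receiver;
    # lower accumulated blame across plays would mean better coverage.
    return defender_shares(receiver_pos, defender_pos) * target_prob * catch_prob

receiver = np.array([40.0, 30.0])
defenders = np.array([[41.0, 31.0], [45.0, 22.0]])  # made-up positions
print(expected_completion_blame(0.35, 0.7, receiver, defenders).round(3))
```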
Reiner developed similar metrics to measure target and catch probability, but added an extra layer of analysis: defenders were grouped into clusters based on their ability to prevent targets (measured via Reiner’s Targets Averted metric), close out on receivers (Closeout Skill metric), and break up passes (Passes Defended metric). This work revealed three strata of NFL defenders:
Cluster 1
1. Average in averting targets but better than Cluster 3
2. Not great at closing out on receivers
3. Not great at defending passes thrown their way
Cluster 2
1. All around very good at all stages of a passing play
Cluster 3
1. Not good at averting targets
2. Average at closing out on receivers but better than Cluster 1
3. Average at defending passes thrown their way
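A minimal sketch of that clustering step, assuming scikit-learn and made-up metric values (Reiner’s actual inputs are the three metrics computed from tracking data):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Rows: defenders; columns: [targets averted, closeout skill, passes defended].
# Values here are synthetic placeholders for the real per-defender metrics.
metrics = rng.normal(0.0, 1.0, size=(60, 3))

scaled = StandardScaler().fit_transform(metrics)
labels = KMeans(n_clusters=3, n_init=10, random_state=3).fit_predict(scaled)
print(np.bincount(labels))  # defenders per cluster
```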
A coaching staff taking note of both of these projects may be able to put their derived metrics together to better evaluate the mix of defender profiles they have on hand and devise schemes that let each defender play to their strengths without exposing their weaknesses. Based on these metrics, a coach could feasibly sub in defenders with better close-out skill and pass-breakup ability to more optimally jam receivers at the line of scrimmage in late/close-game situations, preventing an opposing offense from throwing — oh, I don’t know — eight straight bubble screens en route to a game-winning scoring drive? I’m sure that situation sounds a little unrealistic to some, but such a fate might be avoidable if the work of Reiner and Toumi et al. were applied at scale.
2022: Special Teams Performance
Winners: Robyn Ritchie, Brendan Kumagai, Ryker Moreau, & Elijah Cavan (Simon Fraser University) - Punt Returns: Using the Math to Find the Path
Spotlights:
- Zac Rogers (Open Entry honorable mention): PlayerTV
- John Miller & Uri Smashnov (Open Entry finalist): Augmented Reality for Kickoffs and Punts
Let’s end on a fun one and break Very Serious Data Science Lit Reviewer™️ character one more time: all three of these projects come together to make a very specific childhood video-game memory of mine a reality (err, a virtual reality, to be specific):
At the confluence of these projects might sit the holy grail of practice methods (and importantly, football video gaming): imagine putting on a virtual-reality headset and practicing punt return snaps not just against AI, but against actual humans from a previously recorded play (PlayerTV). Not only do you have the ability to see how these defenders see the field and control space (Augmented Reality for Kickoffs and Punts), you also have a “golden path” continuously drawn in front of you as you cut forward through the crowd of blockers and defenders (Punt Returns: Using the Math to Find the Path). And not only can you act as the returner in this simulation, you can take the reins of a defender and try to make the tackle yourself, seeing the returner’s constantly adjusting golden path.
The benefits of a composite VR system built around these projects are obvious: instead of preparing for games in live-contact practices against a scout team that may not be able to replicate the technique and athletic ability of the week’s opponent, a player could practice against more accurate simulations of players and situations. They could even run plays straight out of their opponent’s tape to find weaknesses as plays progress AND have preferred outcomes drawn on-screen in order to emphasize specific concepts. Players could accomplish all of these goals to prepare more effectively for opponents while in the comfort of their own homes or a specialized training area, with drastically reduced injury risk. Injured players could even use these systems to get back up to game speed. Playing what’s effectively a super-charged VR version of Madden powered by real-world situational data rather than modeled simulations seems a true game-changer (pun not intended) when it comes to individual player prep for a game, and I hope it’s something teams invest more into in the future (and look, some already are).
Did any other Big Data Bowl projects catch your eye? Sound off in the comments below!