The Matchup Zone Blog

Friday, March 3, 2017

Introducing Résumé REPORT

TAPE is a predictive ratings system. If you want to know how a team is likely to perform against any other team in Division I, it can do a pretty good job of telling you the chances of winning that matchup. By extension, it's also pretty good at sorting teams relative to how they'd be likely to perform against a hypothetical benchmark team.

Against that benchmark team, TAPE thinks only 33 teams in the whole country would be more likely to win a game than Clemson. On a neutral floor, TAPE thinks that there are only 32 teams which would be favored to beat the Tigers.

There are 42 teams which TAPE thinks would be more likely to beat that same hypothetical TAPE index team than Maryland would, and 42 teams which TAPE would favor in a neutral-site matchup with the Terps.

If Maryland and Clemson were to play each other on a neutral floor, TAPE predicts that Clemson would have about a 54% chance of winning the game, and would be favored by about 1.4 points. Simply put, according to the parameters by which TAPE evaluates basketball teams, the Clemson Tigers are a better basketball team than the Maryland Terrapins.

That's great in theory, but here in the real world of the 2017 college basketball season, Maryland fans are having a whole lot more fun than Clemson fans. Even with a tough February, the Terps have lost only 6 times all season, are tied for second place in their conference, and are pretty close to a lock for the NCAA Tournament. The Tigers, on the other hand, are a game over .500 overall, 5-12 in league play, sit in 12th place in the ACC standings, and are, I think, generously listed with a 34% chance of earning a bid to the NCAA Tournament by the STAPLE algorithm.

It doesn't take a computer program to tell you that Maryland, even if they might not be as "good" as Clemson, has sure had a better season. And you wouldn't find many folks who would argue that Clemson would be more deserving of a bid to the NCAA Tournament than Maryland.



Louisville, Notre Dame, Duke, Florida State, West Virginia, Minnesota, VCU, and Valparaiso all have identical 23-7 records against Division I opponents. We know intuitively that not all of those 23-7 records mean the same thing, that the first six of those are probably more impressive than the other two. But how do we quantify the differences? How do we scale those records in order to make a true apples-to-apples comparison among those teams?

For nearly four decades the NCAA Tournament Selection Committee has used the RPI as a blunt instrument to compare teams. Long since discredited as a tool for rating or ranking teams precisely, its continued use has been justified as a "sorting tool" by which the teams themselves are not judged so much as their schedules are judged against one another. Yet the RPI is inadequate even for this purpose, and its days, finally, appear to be numbered.

While the RPI's design is simplistic and deeply flawed, I believe that the concept of the RPI--the thing that it set out to accomplish--is essential for its successor to strive for. Namely, in a sport where upwards of 350 teams of vastly differing levels of quality each play a short season of 25 to 30 contests, we need a way to translate every team's on-court wins and losses into something approximating one standard. We need to be able to say that Minnesota's 23-7 is better than Valpo's 23-7 (but maybe not quite as good as Duke's), and to be able to say, with confidence, just how much better it is.



The good news is that the proliferation of predictive ratings systems for college basketball has made that possible. The concept of Wins Above Bubble (WAB), I think, gets us most of the way there. WAB is the difference between the number of games a team has won and the number of games that a bubble-quality team would have been expected to win against that team's schedule. It's simple to understand, relatively simple to compute, and a pretty elegant solution to the problem.
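
To make that concrete, here's a minimal sketch of the WAB calculation in Python. The data is hypothetical; in practice, the per-game probabilities would come from asking a predictive system like TAPE how often a bubble-quality team would win each game on the schedule.

    # Minimal sketch of Wins Above Bubble (WAB). Each game is a pair
    # (won, p_bubble), where p_bubble is the probability that a
    # bubble-quality team would have won that same game.
    def wins_above_bubble(games):
        actual_wins = sum(1 for won, _ in games if won)
        expected_bubble_wins = sum(p for _, p in games)
        return actual_wins - expected_bubble_wins

    # A hypothetical 2-1 team whose schedule a bubble team would have
    # been expected to win about 1.9 games against:
    schedule = [(True, 0.85), (True, 0.55), (False, 0.50)]
    print(round(wins_above_bubble(schedule), 2))  # 0.1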

WAB has a couple of key shortcomings, though. The first is one of quantity. Since WAB is a "counting" stat, it's possible for a team's net wins relative to a bubble team to be, at least partly, a function of the number of games that team has played. Second, like the RPI, WAB is agnostic as to which opponents the wins and losses actually came against; its inputs are simply a team's record and its schedule strength.



To that end, I've added a résumé feature to the site: Results Expressed as a Percentage Outcome Relative to TAPE index (or REPORT for short). Each team has been assigned a Schedule Factor--actually three different schedule factors, one each for home, neutral, and road games--which is the probability that said team would beat a TAPE index team in a given game. The Schedule Factor for neutral-site games is just a team's TAPE rating with the weighting for recent games stripped out; the home and away factors are the same rating with each team's home court advantage or road disadvantage applied.

For each game a team plays, a win is weighted by the opponent's Schedule Factor. So a win against a team with a schedule factor of 600--in other words, a team which a TAPE index team would lose to 60% of the time--is worth 0.6 wins. A loss to that same team would be worth 0.4 losses.
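
In code, that weighting is about as simple as it sounds. This is just an illustrative sketch (the function name and data layout are mine, not the site's):

    # Weight a single game by the opponent's Schedule Factor, which is on
    # a 0-1000 scale: the chance (times 1000) that the opponent would
    # beat a TAPE index team.
    def game_quality(won, factor):
        p = factor / 1000.0
        if won:
            return p, 0.0        # (win quality, loss quality)
        return 0.0, 1.0 - p

    print(game_quality(True, 600))   # (0.6, 0.0): a win worth 0.6 wins
    print(game_quality(False, 600))  # (0.0, 0.4): a loss worth 0.4 losses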

Each team's schedule page now features this information for each game. Wins are highlighted in green and losses in red, with darker colors indicating more unlikely results--better wins and worse losses. Games against teams with a schedule factor greater than 500 (in other words, games against NCAA Tournament quality teams) are highlighted in green on the REPORT side of the ledger.

By way of explanation, consider Indiana's first nine games. Note the dark green for the Kansas and UNC wins. Those were games that a TAPE index team would have lost 78.9% and 76.2% of the time, respectively, and so they're great wins for the Hoosiers. Since a TAPE index team would have won at Fort Wayne 62.6% of the time, that's a fairly dark red loss. The remaining games, all wins, are nearly white since they're all games that a TAPE index team would have won more than 90% of the time.

There's also a REPORT page, where all 351 teams are listed with their three Schedule Factors, total win quality, total loss quality, and net wins (WAB). In order to resolve the two shortcomings identified above, teams are ranked and sorted by the REPORT column, which is total win quality divided by the sum of win quality and loss quality.

Expressing the records in this manner sidesteps the issue of the number of games played: every team has a rating between zero (all winless teams will be 0) and one (all undefeated teams will have a rating of 1).

Furthermore, expressing the REPORT as a percentage has the advantage of giving teams extra credit for big wins, as well as penalizing them for bad losses. Think of two teams that played identically difficult schedules. Each played 5 games against teams with 200 schedule factors, and one against a team with a 500 schedule factor. Team A won the first five games and lost the sixth, while Team B lost the first game and won the next five. Each would have 0.5 net wins, but Team A would have a REPORT of 0.667 (1.0 win quality divided by the sum of 1.0 win quality and 0.5 loss quality), while Team B's REPORT would be 0.619 (1.3 win quality divided by the sum of 1.3 win quality and 0.8 loss quality).
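
Here's that same comparison run through a bare-bones REPORT calculation, as a sanity check on the arithmetic (again, a sketch, not the site's actual code):

    # REPORT = total win quality / (total win quality + total loss quality).
    # Games are (won, opponent_schedule_factor) pairs on the 0-1000 scale.
    def report(games):
        win_q = sum(f / 1000.0 for won, f in games if won)
        loss_q = sum(1.0 - f / 1000.0 for won, f in games if not won)
        return win_q / (win_q + loss_q)

    team_a = [(True, 200)] * 5 + [(False, 500)]                  # won 5, lost the coin flip
    team_b = [(False, 200)] + [(True, 200)] * 4 + [(True, 500)]  # bad loss, won the coin flip

    print(round(report(team_a), 3))  # 0.667
    print(round(report(team_b), 3))  # 0.619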

Team A won all the games it should have won and then lost a coin-flip game, while Team B lost a game it really shouldn't have and then won a coin-flip game. It makes sense to reward Team A for that.

Wednesday, November 9, 2016

Ready To Go

The first game between Division I teams tips off a little over 50 hours from now, when UMass-Lowell and UMass proper get after it in Amherst. The last projections run has been completed, and everything here on the site is (I think) ready for action.

I've made a few improvements on the back end of things which should make the ratings update much faster than before. The program which does all of the heavy lifting for the ratings and adjustments has been completely rewritten and its efficiency is much improved. What used to take about 45 minutes now takes a little over 2 minutes to complete (yeah, I was really bad at coding when I first started this thing), and it will kick off automatically every time the database is updated with a new box score.

The next item on the punch list will be to re-design and expand the game pages to include a more extensive and easier to understand preview page and expanded box scores for completed games. After that, I'll work on making the team pages easier to read and add some functionality and visualizations there. Then it's on to the individual player cards.

Enjoy the season!

Tuesday, April 12, 2016

New & Improved Projection Model

The projections for the 2017 season are now live. My goal is to keep up as best I can with all the transfer and early-entry decisions, and run an update at least 2 or 3 times a week through the end of the academic year, after which everything will calm down until the start of practice in September.

For now, I've made the decision to remove any player from the rosters who has declared himself eligible for the NBA draft, whether or not he's hired an agent. As the inevitable returning-to-school announcements are made, I'll add them back to the rosters, and try to flag any big movers on Twitter.

This year's projections should be better than ever.

For the past three seasons, I've used a similar-player model to find comparable players for every returning player, and then used those players' collective year-over-year changes to make predictions for how the returnees would improve or decline. Those individual projections were then run through a program which combined them with all the other players on each team to generate a team projection.

This method worked pretty well, especially for the teams in power conferences and other top-100 type teams, but it had a nasty habit of overestimating everyone else's prospects. And while it did an okay job of predicting conference wins, the fact that I was missing so badly on the mid- and low-major teams' predicted TAPE ratings, and that other systems were able to consistently do better at predicting conference records was enough to send me back to the drawing board.

The result is a system which is similar to the old method in that it uses individual projections to build team predictions, but differs in a couple of key areas. The big difference is that the comparable-player model has been scrapped. My hypothesis when building this model was that it would be a good one for identifying potential breakout candidates. If Player X looks a lot like these other players, the thinking went, and a lot of them broke out, then this guy should, too.

Three years later, it's clear that this just didn't work. With short seasons, small sample sizes, and the unevenness of development inherent in 19- to 22-year-old basketball players, there was just too much noise, and it wound up being a model that said, in essence, everybody's probably going to get a little bit better. The problem is, it did a pretty lousy job of even predicting the extent of that improvement, especially among the bottom two-thirds of Division I players.

In its place now is a simpler regression model in which the year-zero performance of a similar cohort of players--"high-major sophomore big men who played starter minutes," for example--forms the basis of every returnee's projection. But whereas the old system merely had a sanity check at the end which would, for example, nudge up Shaka Smart's players' steal rate if it wasn't sufficiently high based on historical norms, the new model uses team and coach development history as a much bigger factor at the front end of the process. Likewise, incoming players have a projection built on how, say, other true freshman consensus top-30 shooting guards coming into high-major programs have performed in the past, with a further refinement based on the numbers that others coming into that same program have posted.
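
As a rough illustration of that front-end blending--and only an illustration, since the cohort baseline, program average, and blend weight below are all made-up numbers--a single-stat projection might look something like this:

    # Sketch of a cohort-based projection with a program adjustment.
    # All values and the 0.25 blend weight are hypothetical.
    def project_stat(last_year, cohort_mean_change, program_mean, weight=0.25):
        baseline = last_year + cohort_mean_change   # typical cohort improvement
        # Regress the baseline toward what this program's players have posted.
        return (1 - weight) * baseline + weight * program_mean

    # e.g. a sophomore big man's rebounding rate: 14.0 last year, his cohort
    # typically improves by 1.0, and his program's bigs have averaged 16.0.
    print(project_stat(14.0, 1.0, 16.0))  # 15.25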

The end result is a system that's far more influenced by a program's own history than the previous one was, while still, I think, accurately reflecting roster quality. Most importantly, it's one that will simply do a better job of predicting how teams will play in the upcoming season. The average error for predicted 2015-2016 conference wins was 2.20 under the old system. Re-projecting the season under the new method (using only information that was in the database as of October 2015, of course) yielded an average error of 2.04 conference victories.

Monday, March 14, 2016

2016 NCAA Advancement Odds

The 2016 NCAA Tournament Advancement Odds are up and running. Kansas is the favorite, with a 1-in-6 chance to cut down the nets. The long shot of all long shots is Fairleigh Dickinson, who would win the championship in 1 out of every 5.2 billion parallel universes.

Tuesday, January 19, 2016

(Re-)Introducing STAPLE

No college basketball-focused website is complete without a periodic prediction of the NCAA Tournament bracket. And this site is no exception. For the past several years I've done bracket watch posts whenever my schedule has allowed.

Unfortunately, as much as I love compiling those posts--and I really do; looking at who's trending up and down on a weekly basis is a great way to stay engaged with the totality of what's going on in college basketball--with a wife, two kids, and a day job, I'm just not able to write them up with the regularity I'd like. So I did what any good nerd would do: I plowed the time I would've spent working on those posts over the past few months into writing a program which would do the work for me.

The result is the new-and-improved STAPLE (Simple Tournament Algorithm for Predicting the Likelihood of Election--because I loves me some backronyms). Now, with each run of the TAPE ratings, there will be an updated predicted NCAA Tournament bracket and a sortable leaderboard of who's likely to be in or out.

Unlike most other brackets out there, this one is not based on an if-the-season-ended-today model. It takes the long view of the season, incorporating future schedules and projected future results. As such, it's a lot less volatile than most bracket predictions tend to be. It reacts to results; it just doesn't over-react to them.

At the top of the STAPLE page is the most recently updated bracket prediction. It's assembled programmatically and follows all bracketing principles set forth by the Selection Committee. Below that is a sortable table which includes all teams with non-zero at-large chances.

As the name implies, STAPLE is pretty simple. Its inputs are few: RPI, TAPE, auto-bid status, and scaled bonuses and penalties for good wins and bad losses, respectively. These inputs are weighted into a single number--listed in the table below the bracket simply as "Points"--by a model which attempts to match the always-moving target of the Selection Committee's at-large selection and seeding criteria.
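
The shape of that combination is nothing more exotic than a weighted sum. Here's a stripped-down sketch; the weights and input names are placeholders I've invented for illustration, not STAPLE's actual values:

    # Placeholder sketch of collapsing scaled inputs into one "Points" value.
    # Real weights would be fit to the Selection Committee's past behavior.
    WEIGHTS = {"rpi": 0.3, "tape": 0.4, "wins_bonus": 0.2, "losses_penalty": 0.1}

    def staple_points(scaled_inputs):
        return sum(WEIGHTS[name] * value for name, value in scaled_inputs.items())

    print(round(staple_points({"rpi": 0.80, "tape": 0.85,
                               "wins_bonus": 0.6, "losses_penalty": -0.2}), 2))  # 0.68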

The second column unique to the STAPLE listings, labeled "STAPLE," is each team's chances, as of the most current system run, of qualifying for an at-large bid.

As always, if I'm missing anything, or you see a glaring hole in my program's interpretation of the bracketing principles, let me know. 

Tuesday, November 10, 2015

The Calendar Is Turned

I've turned on all of the lights for the 2015-16 season, and everything should default to this season without incident when navigating the site now. 

The final projection run has been completed, and the 2016 Projections are now locked. I'm still working on adding some new display features--some visualizations to go on each team's projection page are in the works. One addition that's now working, and that I think is pretty cool, is the lineup projections.

In the process of running the player and team projections, I've also run every combination of five-on-the-floor for each team. They're included under each team's projections page, and can be accessed by clicking "Lineups" in the sub-navigation bar. (Villanova's can be found here, for example. You can see there why Daniel Ochefu is probably one of the most important players in the country.) Each lineup has its own TAPE rating, as well as adjusted offensive and defensive efficiencies and normalized four factors. They're all sortable, too, so if you're curious about which of your team's lineups is going to force the most turnovers or be best on the boards, there you go.
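
Enumerating those combinations is the easy part; the basic idea looks like the sketch below, where the roster and the rating function are stand-ins (the real work is in the model that rates each lineup):

    from itertools import combinations

    # Sketch of enumerating and ranking every five-man lineup. rate_lineup
    # stands in for the model that assigns each lineup its own TAPE rating.
    def all_lineups(roster, rate_lineup):
        scored = ((combo, rate_lineup(combo)) for combo in combinations(roster, 5))
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

    roster = [f"Player {i}" for i in range(1, 11)]               # a 10-man roster
    ranked = all_lineups(roster, rate_lineup=lambda combo: 0.0)  # dummy flat rating
    print(len(ranked))  # C(10, 5) = 252 candidate lineups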

Once we get into the season and the PAPER ratings are available (usually sometime in mid-December), I'm planning on using the same model to build on-the-floor ratings that will use the actual season data as well. 

Monday, April 13, 2015

2016 Projections Are Live

The 2016 Season Projections page is now live. Unless a player has formally announced that he will make himself eligible for the NBA draft, he is still included in his team's 2016 projection; Duke's projection, for example, does not include Jahlil Okafor but includes Justise Winslow and Tyus Jones. As players are added to and removed from next year's expected rosters, I'll re-run the projections and try to keep them as up-to-date as possible. With several high-profile players (both incoming and outgoing) expected to make their eligibility decisions known over the next couple weeks, there should be some shifting at the top before the roster news slows to a trickle around the end of the academic year.

These projections use historical player comparisons to generate a statistical projection for each returning and incoming player on every team in Division I. The system then combines all of those player projections--based on the strengths of the team's players and how their profiles tend to interact with each other--into a single team projection.
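
At its core, that combination step is a weighting problem. The sketch below shows only the simplest version--a minutes-weighted average, with made-up numbers--and leaves out the profile-interaction effects described above:

    # Illustrative only: blend player projections into a team number by
    # weighting each player's projected efficiency by expected minutes.
    # This ignores the interaction effects the full model accounts for.
    def team_projection(players):
        total_minutes = sum(p["minutes"] for p in players)
        return sum(p["rating"] * p["minutes"] for p in players) / total_minutes

    players = [
        {"rating": 110.0, "minutes": 32},   # projected points per 100 possessions
        {"rating": 104.0, "minutes": 28},
        {"rating": 98.0, "minutes": 20},
    ]
    print(round(team_projection(players), 1))  # 104.9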

If there's a player missing from a team's roster, or someone there who doesn't belong, please let me know (by email, twitter, or the contact form) so that I can make the necessary changes. If the player is there but you think his numbers look out of whack, feel free to inquire about that as well. The answer is probably as simple as, "The computer thinks your favorite player isn't as good at basketball as you think he is," but it never hurts to ask.

Sunday, March 15, 2015

Final Bracket Projection

The outcome of the Wisconsin/Michigan State game will not affect this bracket. The Duke pod in the Midwest region would be in Charlotte, not Portland.

Locks (51):

Kentucky, Villanova, Wisconsin, Arizona, Duke, Kansas, Iowa St., Virginia, Gonzaga, Notre Dame, North Carolina, Baylor, Utah, Oklahoma, SMU, Michigan St., Arkansas, Northern Iowa, Louisville, Maryland, Wichita St., Georgetown, Providence, West Virginia, Xavier, VCU, Butler, Ohio St., Iowa, Stephen F. Austin, Buffalo, Georgia State, Valparaiso, Harvard, Wofford, Wyoming, UC Irvine, Northeastern, New Mexico State, Eastern Washington, North Dakota State, Albany, Belmont, Coastal Carolina, Lafayette, Texas Southern, North Florida, Manhattan, Robert Morris, Hampton

Near-Locks (9):

N.C. State (99%), Davidson (99%), Georgia (99%), Oregon (98%), San Diego St. (98%), Texas (98%), Cincinnati (97%), St. John's (95%), BYU (93%)

Probably In (3):

Dayton (82%)
Indiana (82%)


On The Bubble (9 teams for 5 spots):

Boise St. (78%)
Temple (72%)
Purdue (59%)
UCLA (57%)
Stanford (43%)
LSU (28%)
Tulsa (26%)


Long Shots:

Florida (8%)
Richmond (8%)