FloRatings 2013

Reintroducing FloRatings for cross country

Sep 30, 2013 by Mitch Kastoff
About a year ago, Flotrack wondered if we could be more pragmatic with our cross country predictions. You see, a few weeks earlier at the London Olympics, trying to guess the outcome of each race was easy (getting it right was the hard part): if athlete X had run a faster time than athlete A, then X should beat A. It's pretty systematic.

That sort of logic falls short for cross country. The difficulty of each course varies, as does the quality of the field, the weather, and basically anything else Murphy's Law can touch.

How do we compare cross country races? The solution is FloRatings. We’ll use an example to elaborate.

Last fall, Georgia’s Matt Cleaver (athlete X) ran 24:00 at the Wisconsin adidas Invitational. That same weekend, Louisville held their Pre-National Invitational. Across the country in Kentucky, Colorado’s Aric Van Halen ran 23:52.2. Which of the two performances was better?

If you're going by track logic (i.e., a faster time means a faster runner), then it was Van Halen. But according to FloRatings, they were equal efforts.

FloRatings converts a runner's time and place in a given race to a point value. For those two races, Cleaver's and Van Halen's efforts were both worth 205.6 points.

Note: We also like that the two are both 3000m steeplechasers. Last spring, Cleaver ran a personal best of 8:31.28 and Van Halen ran a seasonal best of 8:38.23. It's not like they're too far off from one another anyway.
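This piece doesn't spell out the exact math, but the basic idea is simple enough to sketch. Below is a minimal Python illustration, assuming a simplified linear model: each race gets a course adjustment for how fast or slow it ran that day, and the adjusted time is mapped onto a shared point scale. Every constant in it (the reference time, the seconds-per-point scale, the course adjustments) is invented for the example and won't reproduce the actual 205.6-point figure; it only shows how two different times on two different courses can land on the same rating.

# A toy illustration of the idea behind FloRatings -- NOT the actual formula.
# Assumption: each race gets a "course adjustment" (in seconds) capturing how
# slow or fast the course/conditions were, and points scale linearly with the
# adjusted time. All constants here are invented for the example.

def to_points(time_seconds, course_adjustment_seconds,
              reference_seconds=1560.0, seconds_per_point=3.0):
    """Convert a raw 8k time into a rating on a shared scale.

    time_seconds              -- the runner's finishing time in seconds
    course_adjustment_seconds -- how many seconds slower than a "neutral"
                                 course this race ran (positive = slow course)
    reference_seconds         -- arbitrary baseline (26:00 for an 8k here)
    seconds_per_point         -- arbitrary scale: 3 adjusted seconds = 1 point
    """
    adjusted = time_seconds - course_adjustment_seconds
    return (reference_seconds - adjusted) / seconds_per_point


# Hypothetical course adjustments: pretend Wisconsin ran about 23 seconds slow
# and Pre-Nats about 15 seconds slow that weekend. Those numbers are chosen
# only so the two efforts come out equal, matching the comparison above.
cleaver = to_points(24 * 60 + 0.0, course_adjustment_seconds=23.0)    # Wisconsin
van_halen = to_points(23 * 60 + 52.2, course_adjustment_seconds=15.2)  # Pre-Nats
print(round(cleaver, 1), round(van_halen, 1))  # equal ratings, different courses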

Flotrack did not develop the model; we simply adapted it to the 6k, 8k, and 10k. The creator of the system is Bill Meylan over at tullyrunners.com. I remember Meylan telling me the story of how it started: he wanted to see how Tully High School (hence the site's name) would do at its Sectional and State Championships. A year later, Meylan launched his website, continued the project, and found incredible success with his Speed Ratings. Even coaches of some very elite NCAA programs have mentioned that they use his ratings for recruiting purposes. That's not a bad reputation for what he regarded as a Sunday hobby.

We use FloRatings for the same reason. It's hard to debate who will beat whom each week, especially when everyone races on a different cross country course. FloRatings is also something fun: when I told my seventh-grade math teacher that I now enjoy regression analysis, I assume she burst out laughing and showed my email to her whole class, because math apparently is cool.

The model isn’t perfect, but it is accurate. Sometimes. For example, here are the projections and results for the 2012 Men's South Central Regional Championship.

Team            Projected Score   Actual Score
Texas                        27             38
Arkansas                     53             53
McNeese State                83            157
Texas A&M                    92             93

For the most part, the model was correct. We rated Texas, Arkansas, and A&M about as accurately as we could hope for without manually predicting every runner's place (which some people do).

The downfall here is obviously our projection for McNeese State. Why were we off by so much? The answer lies in the individual projections.

Name                    Projected Finish   Actual Finish
David Rooney                           4               2
Alex Bruce-Littlewood                 11              24
Jarrett Leblanc                       13              85
Alex Kiptoo                           27              47
Daniel Cliffe                         28              29
Alex Evans                            30              60
Stephen Kerr                          53             N/A
Nicholas Kiprono                      62             N/A
Lewi Manirumva                       N/A              70

If not for Jarrett Leblanc’s off race, then the projections would have been close.

Some may ask, “Why did you then rate him so high?”

Two weeks prior, Leblanc had finished third at the Southland Conference Championship. The trio of David Rooney, Alex Bruce-Littlewood, and Leblanc had run as a pack and finished 1-2-3, respectively. Easy running.

But that wasn't all. Earlier that season at the Pre-National Invitational, Leblanc finished 46th overall in 24:00.6. Rooney had run 23:20.7 (seventh overall) and Bruce-Littlewood took 108th in 24:33.9.

If Leblanc had finished in between the two at Regionals, he would have placed around 15th. That would have given McNeese State a total of 117, which is a lot closer to their projected score of 87.
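For anyone who wants to check the arithmetic, cross country team scoring is just the sum of a team's top five finishers' places (official results re-score places after dropping incomplete teams and unattached runners, so the raw overall places above won't sum exactly to the official 157). Here's a quick sketch of that what-if in Python; the helper name and the stand-in 15th-place finish for Leblanc are ours, not Flotrack's.

# Cross country team scoring: sum the places of a team's top five finishers.
# Note: official results re-score places after dropping incomplete teams and
# unattached runners, so summing raw overall places is only an approximation.

def team_score(places):
    """Return the sum of the five best (lowest) finishing places."""
    finishers = sorted(p for p in places if p is not None)
    if len(finishers) < 5:
        raise ValueError("A team needs five finishers to score.")
    return sum(finishers[:5])

# McNeese State's overall places from the 2012 South Central Regional
# (None = did not race / did not finish).
actual = [2, 24, 85, 47, 29, 60, 70]
print(team_score(actual))      # approximation of the real team score

# The what-if: swap Leblanc's 85th for a hypothetical ~15th-place finish.
what_if = [2, 24, 15, 47, 29, 60, 70]
print(team_score(what_if))     # roughly the 117 figure cited above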

Maybe Leblanc just had a bad race (most likely). Maybe most of the team had a bad day. Maybe we overrated the Southland Conference (also likely). Maybe the projections were just plain wrong. It's still fun to speculate.

If you skipped all of the logic and just want to know what the numbers mean, here's our simplest explanation: FloRatings creates a universal scale for comparing performances across cross country courses.

If you're still confused, feel free to take a look here.