Leaderboard Details


The cool kids are saving energy

The leaderboard available here -- which shows schools and their consumption and savings relative to baseline -- is based on advanced analytics that attempt to take into account weather and occupancy anomalies. The information below is intended to shed some light on these analytics and the methodology used to develop each school's baseline.

Overview

We’ve worked hard to ensure the schools’ baselines for the Sprint to Savings are as fair and accurate as possible by: incorporating individual regression analysis of each building and its response to weather, creating different baselines for occupied and unoccupied days, using the largest possible data set, and more. Though anomalies may still exist, we hope that by giving some of the context for how baselines typically work, and explaining what we’ve done differently (and, we hope, more accurately), our schools will have a better understanding of how their reductions are being measured. 

Traditional Baselines

The traditional method of creating baselines for such competitions is simple: use the average consumption for the period prior to the competition as the baseline. For example, if the contest were to run from February 10 – February 28 (19 days), the baseline would be the average consumption during the previous period of equal length (the 19 days from January 20 – February 7). Many of the best-known and most successful school-based competitions use this methodology.
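To make the traditional approach concrete, here is a minimal sketch. The daily readings are hypothetical numbers invented for illustration; the `percent_saved` helper is an assumed name, not something from the competition's actual tooling.

```python
# Hypothetical daily consumption (kWh) for a 19-day pre-competition period.
pre_period_kwh = [520, 498, 505, 530, 515, 490, 470,
                  525, 510, 500, 495, 540, 520, 505,
                  480, 475, 515, 500, 510]

# Traditional baseline: a single number, the simple average of the prior period.
baseline_kwh = sum(pre_period_kwh) / len(pre_period_kwh)

def percent_saved(actual_kwh: float) -> float:
    """Savings on a competition day, measured against the fixed average."""
    return 100 * (baseline_kwh - actual_kwh) / baseline_kwh
```

Note that every competition day is compared against the same fixed average, regardless of that day's weather or whether school was in session; that limitation is what motivates the approach described next.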

Sprint to Savings Baselines

There’s often huge value in keeping things simple, but we felt traditional baselines were too simple. If the weather during the competition period varied substantially from the baseline period, or snow days created an imbalance between occupied and unoccupied days, the entire competition could be flawed.

So, we sought to make things more accurate — and ultimately fairer — by normalizing for various factors that can affect consumption. Three key actions were taken:

1. Large sample size (at least 100 non-summer days when data existed)

2. Building-specific regression analysis against heating degree days to determine each building’s response to cold weather (some buildings have a greater response with electric use than others)

3. Building-specific identification of use-patterns for occupied and non-occupied days (weekday, weekend, snow days, holidays, etc.) 

This information is then applied to the various conditions during the competition — with specific baselines created daily that account for weather, each school’s occupancy, etc.
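The steps above can be sketched in code. This is a simplified illustration, not the competition's actual model: the history data is invented, and fitting one ordinary-least-squares line of kWh against heating degree days for occupied days and another for unoccupied days is an assumption about the general shape of the analysis, chosen to show how a daily, weather- and occupancy-aware baseline differs from a single fixed average.

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b * x; returns (a, b)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    b = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
    return mean_y - b * mean_x, b

# Hypothetical building history: (heating degree days, kWh, occupied?).
history = [
    (10, 420, True), (20, 540, True), (30, 660, True), (15, 480, True),
    (10, 170, False), (25, 245, False), (30, 270, False),
]

# Separate models for occupied and unoccupied days, since use patterns differ.
models = {}
for occupied in (True, False):
    days = [(hdd, kwh) for hdd, kwh, occ in history if occ is occupied]
    models[occupied] = fit_line([h for h, _ in days], [k for _, k in days])

def daily_baseline(hdd: float, occupied: bool) -> float:
    """Expected kWh for one competition day, given its weather and occupancy."""
    a, b = models[occupied]
    return a + b * hdd
```

Each competition day then gets its own baseline from `daily_baseline`, so a cold occupied Tuesday is judged against what the building historically uses on cold occupied days, not against a one-size-fits-all average.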


There’s no doubt we have room for improvement (and we’ll learn from this year and improve next year), but we’re also hopeful that we’ve created a more accurate, equitable model for such competitions.

Anomalies in Initial Days of Competition / Leaderboard

Finally, it’s worth noting that, in the initial days of the competition, the sample size was very small. Anomalous things can happen from day to day: a heating malfunction, kitchen equipment left running after an extended lunch, a once-weekly late meeting, and the like. With a larger sample size of days, day-to-day variation is expected to balance out.