
Metrics can tell us three things: What Is, What Will Be, and What Could Be

Broadly speaking, all metric dashboards are trying to do one of three things. The first and most basic is simply describing the current state of the world. The sum of daily sales last month, and their average price, are both examples of “just the facts” reporting. These numbers can be extremely valuable, because (as we saw with the estimation game last week) people can be remarkably bad at guessing them off the top of their heads. This is particularly true of intermediate metrics: everything between the initial inputs and final outputs of a system. Very frequently, particularly when you first start exposing these metrics, at least one catches people by surprise. They may even realize that some aspect of the system has been broken for a long time, and be able to fix it.

Good thing you were measuring the ratio of water out to water in!

Just as in computer programming, many real-world systems have bugs that seem obvious when you’re looking straight at them, but whose location, or even existence, may be difficult to discern by merely monitoring the system’s final output. For example, John and Jenny have always looked at their monthly bank statements, so they have some numeric sense of how the farm is doing. But they’ve never broken it down to look at income from specific cows, barns, or other intermediate variables.
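A minimal sketch of what that breakdown might look like, using pandas with an invented ledger (the cow names, barns, and income figures are all hypothetical):

```python
import pandas as pd

# Hypothetical ledger: one row per cow per month (all values invented).
ledger = pd.DataFrame({
    "cow":    ["Bessie", "Bessie", "Daisy", "Daisy", "Clover", "Clover"],
    "barn":   ["north",  "north",  "north", "north", "south",  "south"],
    "income": [410.0,    395.0,    402.0,   398.0,   120.0,    115.0],
})

# The bank statement only ever shows the final output...
print("total income:", ledger["income"].sum())

# ...but the intermediate metrics are where the surprises hide.
print(ledger.groupby("barn")["income"].sum())   # is one barn underperforming?
print(ledger.groupby("cow")["income"].mean())   # is one cow underperforming?
```
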
The second is trying to predict the future, assuming the status quo. Organizations that set yearly goals will often accompany their retrospective metrics with simple projections of whether they’re on track to hit those goals. This can be valuable when it reveals that progress that feels very impressive is actually going to be insufficient to hit your long-term goals. In the same way that operational metrics can make you realize you need to dive in and debug some aspect of your operations, predictive metrics can make you realize you need to put in the effort to improve your plan. Detailed planning and design of complex systems is hard work, so people will often stop generating options as soon as they have one that feels sufficient. The value of goals for motivation during the execution phase is widely accepted, but they also serve a critical role in ensuring sufficient time is spent generating and researching options during the planning stage. Just as many software projects go through iterations of design and execution, businesses regularly update their plans as progress becomes clearer. Predictive dashboards can be great for kicking users in the pants when an aspect of their plan needs updating.
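The simplest version of such a projection is a straight-line extrapolation of progress so far. A sketch, assuming made-up monthly revenue figures and an invented yearly goal:

```python
import numpy as np

# Invented example: revenue (in thousands) for the first five months.
months = np.array([1, 2, 3, 4, 5])
revenue = np.array([80.0, 85.0, 88.0, 94.0, 97.0])
yearly_goal = 1300.0  # also invented

# Fit a linear trend and project the remaining months under the status quo.
slope, intercept = np.polyfit(months, revenue, deg=1)
projected = [slope * m + intercept for m in range(6, 13)]
year_end_total = revenue.sum() + sum(projected)

print(f"projected year-end total: {year_end_total:.0f}")
print("on track" if year_end_total >= yearly_goal else "time to revisit the plan")
```

The growth here feels impressive month to month, yet the projection says the goal will be missed; that is exactly the kind of kick in the pants a predictive dashboard provides.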


The third and final use of data dashboards is trying to infer causality. Predicting the future assumed that the plan wasn’t going to change: some variables might randomly jitter within the parameters you statistically identified from historical data, but there would be no big discontinuities in either the inputs you control or the ones from the outside world. Causality asks “what if?” What if you switched from raising cows to raising goats? What if you used twice as much fertilizer on your pasture? What if the region suffered a drought?

In the “hard sciences”, the gold standard method for answering these questions is a randomized controlled trial. It’s what you think of when you hear the word “experiment”: take a bunch of similar things, divide them randomly into multiple groups, treat them exactly the same way except for one variable, and see how the outcomes differ. That’s fantastic when you have hundreds of independent, fully controlled experimental subjects (whether they’re molecules, lab rats, or wildly unrepresentative psychology undergraduates), but most businesses can’t chop up their processes in this manner. Even Facebook, which runs thousands of A/B tests per year, can’t do fully independent tests of large functionality changes, because its users talk to each other. So if it tested 10 different interface colors, each on 10% of its user base, people wouldn’t just start logging on a bit more or less frequently; they would start posting screenshots and writing TechCrunch articles about which one the company should choose. It’s like your lab rats offering their opinions on how you should build their next maze. In our farm example, there’s simply a resource constraint: there are only so many cows, and it’s logistically impossible to force a consistent subset of the herd to follow a different schedule than the rest.
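For contrast, here is what the textbook version looks like when you can randomize. This is a toy sketch with entirely synthetic milk yields and an invented treatment effect; the farm’s problem is precisely that it can’t split the herd this way:

```python
import random

random.seed(0)

# Synthetic herd: each cow's baseline daily milk yield in liters (invented).
herd = {f"cow_{i}": random.gauss(25, 3) for i in range(40)}

# Randomly divide the herd into a control group and a treatment group.
cows = list(herd)
random.shuffle(cows)
control, treatment = cows[:20], cows[20:]

# Pretend the treatment (say, a new feeding schedule) adds a small effect,
# and that every measurement carries some day-to-day noise.
def observed_yield(cow, treated):
    return herd[cow] + (1.5 if treated else 0.0) + random.gauss(0, 1)

control_mean = sum(observed_yield(c, False) for c in control) / len(control)
treatment_mean = sum(observed_yield(c, True) for c in treatment) / len(treatment)
print(f"estimated effect: {treatment_mean - control_mean:+.2f} liters/day")
```
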
Needless to say, causality is by far the hardest thing to prove outside of a lab. Fortunately, there are some situations where reasonable assumptions allow you to make causal statements without having to bend over backwards. The stock market is a classic example. Imagine I predicted, using trends in historical data, that the price of Apple stock is going to increase by 10% in the next week. Because the prediction is insensitive to my small purchases, I can extend it to the statement: “If I bought $10,000 of Apple stock today, that would result in $1,000 of profit next week.” In contrast, if I wanted to make the statement “If I bought $10,000 of Apple stock today, and spread a rumor that they were about to release a levitating iPhone, that would result in $2,000 of profit next week,” I would need to figure out how much that rumor would cause the stock price to rise. Since the historical data doesn’t include that rumor, I would have to infer its impact from the jumps in the stock after previous rumors. But, of course, there were dozens of other factors pushing the stock both up and down during any period in the company’s history, so teasing out the impact of a specific action is exceptionally difficult. People write whole PhD theses trying to make such inferences in real-world data sets, so in this course we won’t dive too deeply into untangling such tricky situations.
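The defensible half of that example is just linear scaling under the price-insensitivity assumption, which a couple of lines make explicit:

```python
# Price-insensitivity assumption: my small purchase doesn't move the market,
# so the predicted return scales linearly with the position size.
position = 10_000.0      # dollars invested, from the example above
predicted_return = 0.10  # the hypothetical 10% one-week forecast

print(f"expected profit: ${position * predicted_return:,.0f}")  # $1,000
```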


Turkey weight before Thanksgiving

It’s easy to slip into taking a dashboard which is designed for one thing, such as basic reporting, and relying too heavily on it for something else, such as predicting the future. As your eyes follow the graph to the left, it’s easy to imagine that you can project it forward into the future, but the final data point reminds us that a deeper understanding of the system’s dynamics needs to be explicitly considered when converting a visualization from reporting to prediction.
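That trap is easy to reproduce with a toy version of the turkey chart (all numbers synthetic): a trend fit to the history looks completely trustworthy right up until the discontinuity.

```python
import numpy as np

# Synthetic turkey weights: steady growth for 100 days, then Thanksgiving.
days = np.arange(1, 101)
weights = 2.0 + 0.15 * days + np.random.default_rng(1).normal(0, 0.2, 100)

# Reporting: the history is smooth, so extrapolating feels safe...
slope, intercept = np.polyfit(days, weights, deg=1)
print(f"day 101 forecast: {slope * 101 + intercept:.1f} lbs")

# ...but the event that matters was never in the historical data.
print("day 101 actual: 0.0 lbs (the dashboard never saw Thanksgiving coming)")
```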