While there are many examples of “ideal” dashboards built on refined data sets (for a fictional company, say), the truth is that projects don’t usually start with a blank sheet of paper. More often you have to improve an existing system, such as condensing an existing report into a single dashboard. This example is one such case.
A common problem with dashboards occurs when people place a higher value on presentation than on content. Project managers often blame a lack of eye candy for people’s disinterest in a report. While this is sometimes the case, a much more common reason is that the data is irrelevant or is being presented to the wrong audience. People are interested in things that make their job easier, or tell them something they don’t already know; eye candy is no substitute for good content.
In this example we demonstrate the process of taking an existing 30+ page report pack and condensing it into a single visualization. The aim is to let viewers make comparisons and monitor progress much more easily than with their previous reporting pack.
A Real World Example
This example was implemented at a major global bank, and involves Operations data, which is reasonably generic, making it a good example for many different organizations. The data has been randomised.
When faced with a data visualization project, the first questions implementers typically ask are “what is the best chart for this situation?” or “what colour should I use for emphasis?” These questions are important, but they should not come first. Instead, start with: who will see this dashboard, and how? Is it on a printed sheet in a boardroom, or on a plasma screen across a trading floor? Are the consumers domain experts?
This example features data about a bank’s operations processing. The audience is composed of clients of the Operations department and the respective Operations managers. The goal is to see how the Operations department is performing over time: are things getting better or worse, and which areas are causing problems?
The project began as an effort to record operational problems on a daily basis across different product lines. A reporting system was built and various generic reports were produced, showing disruption and detailed accounts of each incident. These reports were duplicated across all product lines.
Unfortunately the reports didn’t contain data at a granular enough level, and it was difficult for the product managers to see where the issues were occurring across the entire department and what the trends were. In many ways it was simply a record of events, rather than something which offered any insight. The report showed what the major problems had been – but this was already known. When something major goes wrong you remember getting shouted at!
Everybody knew when the major problems occurred and when systems weren’t working. What they needed was an overall picture to show patterns. Because the report was split over 30+ pages, making comparisons of this nature was very difficult.
What was requested
Our clients wanted a report showing where problems occurred across business lines (rather than operational units), as well as some history to track patterns. It had to fit into a single page for inclusion in another weekly MIS pack. As a first pass they manually entered the data into an Excel worksheet and produced the following report.
We felt this solution lacked clarity: it was very difficult to spot trends across different business lines and products.
What we proposed
We designed a solution using in-cell charting to show both headline totals and small multiples of sparkline charts to show the detail:
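The small-multiples idea behind this layout is straightforward to prototype outside Excel. Below is a minimal sketch in Python/matplotlib, using synthetic incident counts (the real data, categories, and product names are not from the article): one tiny axis-free line per Root Cause × Product cell, with the latest value highlighted.

```python
# Minimal small-multiples sparkline grid. All data here is synthetic;
# root causes and products are illustrative placeholders, not the bank's.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
root_causes = ["Application failure", "Manual error", "Late feed"]
products = ["FX", "Equities", "Rates", "Credit"]
weeks = 26

fig, axes = plt.subplots(len(root_causes), len(products),
                         figsize=(8, 3), sharex=True)
for i, cause in enumerate(root_causes):
    for j, product in enumerate(products):
        series = rng.poisson(3, weeks)              # weekly incident counts
        ax = axes[i, j]
        ax.plot(series, color="grey", linewidth=0.8)
        ax.plot(weeks - 1, series[-1], "r.")        # colour marks the key datum
        ax.axis("off")                              # sparklines drop all chrome
fig.savefig("sparkline_grid.png", dpi=150)
```

Dropping the axes and gridlines is what makes the grid scan as a single picture rather than twelve separate charts, mirroring the “secondary detail in grey” approach described below.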
This solution allowed the user to view the number of problems by Product (columns) or by Root Cause (rows) and to look deeper into historical trends. For example, you will notice spikes in some of the historical data, but generally the overall trend has improved over time. By ranking the Products and Root Causes you immediately give some sense of scale to the data. For example, you can see that there are many more application failures than any other type of problem, but the majority of root causes are otherwise fairly evenly distributed.
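The ranking step described above is a simple data-shaping operation. As a hedged sketch (with made-up incident records, not the bank’s figures), here is how the Root Cause × Product matrix can be built and sorted so the largest totals lead the rows and columns:

```python
# Illustrative only: pivot incident records into a Root Cause x Product
# matrix and rank both axes by their totals, largest first.
import pandas as pd

incidents = pd.DataFrame({
    "root_cause": ["App failure", "App failure", "Manual error", "Late feed",
                   "App failure", "Manual error"],
    "product":    ["FX", "Equities", "FX", "Rates", "FX", "Equities"],
    "count":      [9, 7, 3, 2, 5, 1],
})

matrix = incidents.pivot_table(index="root_cause", columns="product",
                               values="count", aggfunc="sum", fill_value=0)

# Reorder rows and columns by their totals so the biggest problem
# areas sort to the top-left of the report.
matrix = matrix.loc[matrix.sum(axis=1).sort_values(ascending=False).index,
                    matrix.sum(axis=0).sort_values(ascending=False).index]
```

With this ordering, the dominant category (here the synthetic “App failure”) lands in the first row, which is what gives the reader an immediate sense of scale before they dig into any individual sparkline.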
Although it initially requires some thought to interpret the data, the fundamentals are obvious. Rather than becoming fixated on a particular aspect, we have laid the report out so that a top-level summary sits on the outside, and you can dig deeper if you want. We’ve also used colour to indicate the key data, leaving secondary detail in grey.
Another point worth noting is that the original colour scheme was much more muted, but the client wanted this changed because it looked like a competitor’s corporate colours and they wanted it to be “louder”.
They were ecstatic! One page replaced 34, and they could see at a glance how the entire organisation was working, while still being able to quickly locate details for a particular area to identify trends.
Creating dashboards is inherently about compromise. You have to make choices about what information is displayed, both from a business and visual point of view. It’s no good having all the information crammed in so that nobody can tell what is happening. But it is equally pointless having six beautiful dials that only give you six numbers. Make sure that your dashboard designs reflect the concerns of your audience and provide details or pointers to make further investigation possible.
About the author
Neil Scanlon is a senior consultant at XLCubed, with a background in design, and a focus on data visualisation.
XLCubed provides client tools to extend Microsoft’s Business Intelligence products in terms of reporting, analytics, and business-focused dashboards, both in Excel and on the web. For more information, visit their website at www.xlcubed.com, their blog at blog.xlcubed.com, and follow them on Twitter @xlcubed.