Engaging Your Dashboard Users through User Testing

User testing is not common practice in internal KPI dashboard projects. Yet it can yield valuable insights and help create a better user experience. On two occasions, I applied simple task-based user tests to dashboard development as part of an iterative design cycle. In this article, I explain how I implemented them, the choices I made, and how I find the practice beneficial.

Last year, I left my job in digital communication at an NGO and started working as a data visualization consultant. My work now consists of building interactive dashboards in Tableau for clients across different industries and fields of work. The dashboards are used by teams or management to monitor their activity’s key performance indicators (KPIs).

Twice in my previous work experience, I participated in website redesign projects and ran user tests, both moderated and unmoderated. I discovered how much valuable insight usability testing can bring to your work, and how useful it is for prioritizing next steps.

Given the complexity, the density of information, and the potentially high level of interactivity of a dataviz dashboard, I thought user tests could be a good companion in this type of project, too.

But I soon realized that, more often than not, the only people from the client company I interacted with were the project managers or the people from the business intelligence (BI) and data department. No end users in sight…

These are a few thoughts on what I’ve observed so far. Other dashboard designers — perhaps in other countries (I work in France) or in other sectors — may have experienced something different, and practices in the field are also changing all the time. In this article, I share my own experience, and hopefully that’ll contribute to further exchanges on this fascinating topic!

End user, where art thou?

Unlike most websites and apps, most dashboards are meant for internal use only. They are normally intended for the team of a specific department (Sales, HR…) or only for top management — which sometimes means a user base of as few as 10 people.

This looks like the perfect situation for user testing. You don’t have to worry about having a “representative” sample of testers. You can just include all the actual users!

The problem is that you typically won’t have access to those end users. Either they run on a hectic schedule, especially management and top management, or their participation was not factored into the plan beforehand and now the timeline won’t accommodate this additional step. As an external consultant working with big organisations (which tend to be hierarchical), it will not be easy for you to change that plan.

Why isn’t user participation taken into account from the project’s inception? Sometimes stakeholders just don’t imagine the possibility of actively engaging users in the process of creating their dashboard. They may think it’s too complicated, they don’t see the potential benefits, or they’re afraid of missing their timeline.

In my experience, quite often, the Tableau designer is a sort of all-round player, taking charge of data prep, dataviz, UX/UI, and project management. When the project has a short deadline, you just won’t have the time to involve users — or else you’ll need extra time, which means extra budget.

Looking for end users. Illustration by author.

My step-by-step approach

On two occasions, I implemented user testing. In both cases, the client seemed quite open to a user-centered approach, and of course, I jumped at the chance! In one case, the client himself asked me to give regular demos to the end users to gather their feedback, so I suggested adding user testing on top of that. In the second case, the client mentioned a general interest in UX and the project was not time-boxed: the door was open to suggest a methodology.

KPI dashboards are often called exploratory dataviz because users explore them to examine the data and draw their own conclusions. To do this, they’ll look at several charts, click, filter, move to more or less aggregated views and back, compare periods, etc. The user flow is not linear (as it is in explanatory dataviz) but branching, as Susie Lu brilliantly explained in this blog post.

The user’s analytical process can easily be sliced into tasks, which each user may then assemble in different ways to carry out their own analysis.

That’s why task-based usability testing seemed particularly appropriate in my case. In a nutshell, users are given tasks to complete on the dashboard (find the value of a KPI for a particular item or date, find out if a value is higher or lower than another, etc.). The test results will show the success rate in finding the correct answers. If a notable proportion of users were unable to find the right answer, this could point to an opportunity for improvement in the design.

I’m fond of moderated tests (where a moderator guides the user through the test) as opposed to unmoderated tests (which participants complete on their own), but in both of my use cases that would have taken too much time. Instead, I opted for remote, unmoderated tests — though I took the time to recontact users whenever I felt I needed more information.

In practice, I sliced the project into sections and worked through iterative design cycles, each composed of the following five steps:

Step 1: Develop section one of the dashboard, with the help of the client

Step 2: Demo, briefly, section one for the end users

Step 3: Launch a user testing and feedback phase on section one
After the demo, I asked the users to complete an online form within a reasonable time frame. The form included:

A task-based usability test with five questions that the user can only answer by exploring the dashboard (for example, “How many clients used your product in January last year?” or “What is the product with the lowest production costs since the start of the year?”).

A small feedback survey with three questions:

  1. Does the dashboard so far provide all the information you need on the topic? If not, what is missing?
  2. Was the navigation smooth and clear? If not, why?
  3. Generally speaking, do the figures seem correct to you? If not, please list the potential errors you have spotted.

The core of the form is of course the usability test, but it also provides a nice opportunity to gather feedback in an efficient way through the three-question survey.
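To make the scoring concrete, here is a minimal sketch, in Python, of how per-task success rates could be tallied from exported form responses. This is not part of my actual workflow (the form tool collected the answers); all task keys, questions, and expected answers below are hypothetical examples.

```python
# Minimal sketch: computing per-task success rates from exported form
# responses. Task keys and expected answers are hypothetical examples.

# Each task maps a form field to the answer the dashboard should yield.
TASKS = {
    "clients_last_january": "142",       # "How many clients used your product in January last year?"
    "lowest_cost_product": "Product B",  # "What is the product with the lowest production costs?"
}

def normalize(answer: str) -> str:
    """Tolerate case differences and stray whitespace in free-text answers."""
    return answer.strip().lower()

def success_rates(responses: list[dict]) -> dict[str, float]:
    """Return, per task, the share of users who found the correct answer."""
    rates = {}
    for task, expected in TASKS.items():
        correct = sum(
            1 for r in responses if normalize(r.get(task, "")) == normalize(expected)
        )
        rates[task] = correct / len(responses) if responses else 0.0
    return rates

# Example: three users filled in the form.
responses = [
    {"clients_last_january": "142", "lowest_cost_product": "Product B"},
    {"clients_last_january": "137", "lowest_cost_product": "product b"},
    {"clients_last_january": "142 ", "lowest_cost_product": "Product B"},
]
print(success_rates(responses))
# {'clients_last_january': 0.666..., 'lowest_cost_product': 1.0}
```

A task that a notable share of users fails is a direct pointer to a part of the design worth investigating.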

It should be noted here that user tests are fundamentally different from surveys and focus groups. The latter provide information on people’s feelings and opinions, whereas usability tests are about learning how people actually use a product. Steve Krug explained this in his seminal book, Don’t Make Me Think, in 2000 (and, also in a quirky and funny video entitled, “You say ‘potato’, I say ‘focus group’”).

A few user-profiling questions could also be included in the online form, to gather information on users’ dashboard usage habits, context, and abilities. Just be careful to keep your form short: it will help you keep the participation rate high throughout the project!

Users’ profile and context of usage should be taken into account when applying user testing to internal dashboards. Illustration by author.

Sometimes prototypes are preferred for testing purposes. But for internal dashboards, my preference is to use the real product at its current stage of development, with the real data in it.

Why?

A chart may look very different depending on the data injected into it. Imagine a heat map with sparse data: you’ll get more holes than coloured squares. Or a bar chart with one category that always behaves as an outlier compared to the others: its bar will flatten all the others, making them barely visible.

In a wireframe or prototype you can draw the perfect chart, but real data are often quirky and don’t fit an ideal format. What users will see in the final version is real data, so it’s better to use them in the test phase, too.

Internal dashboards are intended for experts in the field. They know the data and its context; the dashboard is a mirror of the job they do on a daily basis. They can therefore deduce a lot of information from just a few clues. Not everything has to be explained in the interface, as a good portion of the knowledge is already in users’ heads, and the design can leverage that (for example, a detailed definition of every indicator won’t always be necessary).

This also means that their behaviour will be shaped by the knowledge they bring when they compare it to the data they see in the charts. For example, if two labels are unclear, users may interpret them (and maybe mix them up) based on which data seems more plausible under each label, given their knowledge of the field.

In short, for internal dashboards intended for expert users, using real data in the test phase allows you to take into account the knowledge in the users’ heads.

An internal dashboard is not a one-time product. Once it’s finished, users are supposed to use it again and again on a regular basis. Using the real dashboard in the test will help them get used to it, which could mean time saved in training, documentation, and support later on. It will also be interesting to see how their behaviour on the interface evolves from one test to the next.

How many people should join the test? The rule of thumb is that you need no more than five users to generate valuable insight. This applies all the more to internal dashboards, where the user base is fairly small.
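This rule of thumb traces back to Nielsen and Landauer’s well-known model, which estimates the share of usability problems found by n testers as 1 − (1 − L)^n, where L is the share a single tester uncovers on average (about 31% in their data). A quick sketch of the arithmetic:

```python
# Nielsen & Landauer's model: share of usability problems found by n
# testers, assuming each tester uncovers L = 31% of them on average.
L = 0.31

for n in (1, 3, 5, 10):
    found = 1 - (1 - L) ** n
    print(f"{n:>2} testers -> ~{found:.0%} of problems found")

# Output:
#  1 testers -> ~31% of problems found
#  3 testers -> ~67% of problems found
#  5 testers -> ~84% of problems found
# 10 testers -> ~98% of problems found
```

Past five testers, each additional one mostly re-discovers problems you already know about, which is why iterating with small groups beats one big test round.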

Finally, it is important to test early and often, cutting your work into small batches of development and quickly testing each of them in iterative cycles from the start of the project. If you start testing too late, you risk realizing you’ve taken the wrong direction, or developed useless features, only after you’ve already spent a considerable amount of time and energy on them.

Step 4: Feedback and findings report
With all the test findings and feedback flowing in, a bit of tidying up is necessary to make the information actionable. To do so, I set up a report listing all the findings and remarks, including for each one a severity rating and the actions needed to fix it. To determine the severity rating, I followed the recommendations of the Nielsen Norman Group and based the rating scale on three factors: frequency, impact, and persistence of the issue.
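NN/g names the three factors but does not prescribe a formula for combining them, so here is one possible, deliberately simple way to turn them into a sortable score. The plain average is my own simplification, and the example findings are hypothetical.

```python
# One possible way to combine the NN/g factors (frequency, impact,
# persistence) into a single severity score. The plain average below is
# my own simplification, not an NN/g formula; findings are hypothetical.
from dataclasses import dataclass

@dataclass
class Finding:
    description: str
    frequency: int    # 0-4: how many users hit the issue
    impact: int       # 0-4: how badly it blocks their task
    persistence: int  # 0-4: one-time annoyance vs. recurring blocker
    action: str = ""  # the fix agreed on with the client

    @property
    def severity(self) -> float:
        """Average of the three factors, on the same 0-4 scale."""
        return (self.frequency + self.impact + self.persistence) / 3

findings = [
    Finding("Filters reset when switching tabs", 4, 3, 4, "Persist filter state"),
    Finding("KPI label ambiguous on first read", 3, 2, 1, "Rename, add tooltip"),
]

# Sort the report so the most severe issues come first.
for f in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[severity {f.severity:.1f}] {f.description} -> {f.action}")
```

However you compute it, the point of the score is simply to give you and the client a shared, explicit basis for deciding what to fix first.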

This report is also a great opportunity to work hand-in-hand with the client and to check regularly that you are on the same page. Together, review the report’s list, assign severity ratings, and choose the next steps.

Step 5: New demo to the end users, structured in two parts

  • First, a report of the actions taken following the users’ feedback and the test results on section one,
  • Second, a brief demo of section two…
  • And from there, you start another cycle!

Additionally, I set up an open discussion channel with all users and the client, to gather and reply to feedback in between tests.

All in all, the method described is fairly easy to put in place. No costly software is required. The process is not too time-consuming for users nor for the client, and it integrates well into a more general iterative project management approach.

Illustration by author.

A lesson I’ve learned: next time, I would run the usability tests in person instead of remotely.

Directly observing users interacting with the dashboard and hearing them think aloud allows you to see not only whether they manage to perform a task, but also why.

You can actually see people hesitating over a button, moving their mouse in search of the right place to click, going back and forth in their navigation. I find moderated user tests much more valuable, but of course they require a time investment from the dashboard designer, or an additional resource, to reduce the impact on the project timeline. As a first approach with a client, an unmoderated, remote test can be a good start.

Of course, you could also choose to apply other testing methods. For example, tree testing and card sorting (both feasible in person and remotely) may be particularly appropriate when the volume of information to be shown is considerable and the content is complex.

Main benefits

The main goal of user testing is, of course, to produce a better user experience. The general benefits that it offers also apply to dashboard design. Usability testing allows you to identify bugs and problem areas early on in the process, saving development time later. The end product should then better fit the needs and context of usage of the target users, which should reduce the effort in training, documentation, and support to help them use the dashboard.

On top of these well-known general benefits, user testing can bring other interesting advantages in the specific context of internal dashboard development:

First of all, user testing can inspire a sense of community. As I said, a dashboard is often meant to be used by a small group of people with a similar role in the company. Being involved throughout the process will help users embrace the dashboard as their own. They will feel heard, and if difficult choices have to be made — due to technical constraints, for example — they will understand them more easily.

If they were not used to charts and graphs before, this process will also be a chance for end users to gradually improve their visual and data literacy. Eventually, they will feel more comfortable with the dashboard once it’s finished — a tool they’re supposed to use on a daily basis.

Bringing users onboard through user testing, listening, and exchanging creates the basis for the everyday adoption of the dashboard.

Finally, from a dashboard developer’s point of view, this process helps me structure and prioritize my work. Thanks to the test results, users’ and the client’s opinions gain concrete value, and each iteration produces a clear, prioritized to-do list pointing directly to the next steps.


A few tips

If you think user testing would be appropriate for your dashboard design project, the first condition for implementing it is having access to the end users — or convincing the client to involve them. The client should support this approach: they are the link between you and the final users, at least at the beginning.

You need to consider the situation (deadlines, project timeline, the client’s inclination toward a user-centered approach, etc.) when choosing a user testing method. For a first project with a client, you could start with something that is not too time-consuming and see how it goes; start small, then grow.

Start testing early in the process. The longer you wait, the more difficult it will be to redesign the work you’ve done, should it be needed.

Last but not least, dashboard user testing has great value, but it reveals its full potential when integrated into a broader user-centered design approach: regular demos and exchanges, open chat rooms with all users, and user research if you have the opportunity to implement it. All this contributes to a group dynamic and a sense of belonging to the project, bringing useful insights and reducing the risk of the dashboard going unused.

Silvia Romanelli

Silvia Romanelli is an Italian data visualization designer based in France.

Previously, she worked in journalism and in digital communication in the non-profit sector. She’s interested in data journalism, information design in general, UX/UI, and the use of dataviz for social good.