My three-year-old son once told me, “I know how computers work: when you press a button on the keyboard, it pulls a string inside the computer. The string makes the images appear on the screen. If the string breaks, the computer doesn’t work anymore.”
The chances are that if my son owned a computer and it broke, whatever the problem appeared to be (a blue screen error, the audio system not working properly…), he would try to take the machine apart and look for that mysterious string. Unable to find it, he would probably revise his understanding of how computers work.
In other words, his mental model of computers would evolve to correct a mismatch between what he thought to be true and reality. The process would probably be accompanied by a spectrum of feelings, from suspicion, mistrust, and puzzlement to, hopefully, a satisfying sense of understanding.
As the Nielsen Norman Group, a leading resource for UX concepts and research, explains:
“A mental model is what the user believes about the system at hand…A mental model is based on belief, not facts: that is, it’s a model of what users know (or think they know) about a system. […] Individual users each have their own mental model.”
In this case, the system can be any user interface, be it physical (e.g., a coffee maker or a dishwasher) or digital (e.g., a website, a mobile app, a data visualization chart, or a dashboard).
Users build mental models through their interactions with systems, but they also base them on their previous knowledge of similar systems, on exchanges with other users, on tutorials, or on documentation. Because of this, mental models are naturally dynamic.
“Understanding the concept of mental models can help you make sense of usability problems in your design.”
Nielsen Norman Group
In dashboard design, and data visualization in general, mental models can be applied to three areas:
- The user interface (UI)
- The data behind the UI
- The system behind the data
Mental models at the user interface level
Anyone working in data visualization has probably seen people struggling to understand how to read a chart or how to use a dashboard. Users select something on a map and expect to see a filtered set of results, when in fact nothing happens. Or users click on a filter and don’t realize that it is actually applied much more broadly than they expected. I’ve also seen users press a button to navigate to a different view and, when the dashboard does not signal in some way that data is loading, give up, thinking that the dashboard is unavailable.
In all these cases, the UI design probably doesn’t provide clear enough cues to help users form a sufficient idea of how it works. For users, the result is high cognitive load, confusion, and a mismatch between their mental models and reality. In his bestseller Thinking, Fast and Slow, psychologist and Nobel laureate Daniel Kahneman explains that when experiencing cognitive strain, people tend to be more vigilant and suspicious.
In the case of a dashboard, a confusing interface may undermine trust in the visualization (and sometimes in the organization), even when the data and calculations are accurate and relevant. The negative experience may also discourage users from visiting the dashboard in the future. Conversely, a clear and user-friendly data visualization can cement a feeling of coherence for users, confirming their mental model at each interaction. This fosters cognitive ease and an impression of familiarity that builds trust. How can data visualizers help dashboard users construct more coherent mental models through UI design?
Of course, relying on common chart types and standard practices increases the chances of matching most users’ mental models. But standardization alone is not enough, because each person builds their own model. There’s no one-size-fits-all solution; you can’t just say, “I’ll use only bar charts everywhere and everybody will understand.” Sometimes you need to display information that is specific to the topic or out of the ordinary, without losing users along the way. You can drive alignment with users’ mental models by paying attention to structure, interactivity, and the arrangement of the data visualization elements, as well as by using a user-centered methodology. Here are some tips I’ve found useful in my own experience; many more can be found in the resources cited at the end of this article.
How can we help users build coherent mental models?
Be aware of your dataviz structure
The authors of this article on mental models in information design define two types of displays: author-driven displays and reader-driven (or exploration-driven) displays. Author-driven displays are most often used in data journalism and visualizations meant for the general public, whereas corporate key performance indicator (KPI) dashboards are normally reader-driven.
Visualizations built on a sequential narrative and storytelling do a great job of guiding readers step by step into a topic. In this type of format, readers are often shown a single view or example first, then further elements are added, letting them gradually form a coherent mental model of how the visualization works. Reading follows a linear path imposed by the author.
“As opposed to such a linear narrative structure, interactive media already pose a challenge for this continuous construction and integration process [of a mental model]: The recipient is not led through the information, but navigates a multitude of decision points in his or her own way and processes the pieces of information in no specific order.”
– “A Mental Models Perspective on Designing Information Visualizations for Political Communication”
As a consequence, in dashboards, which “do not prescribe any specific order or linearity but allow free interaction with the data,” a coherent design is extremely important to help users build their mental model throughout their navigation.
Our brains naturally seek coherence. Dataviz designers can provide it through consistent design: a considered use of colour, size, shape, mark types, element arrangement, and interactivity across all the views.
“By offering multiple views in parallel, complex topics or datasets could be presented from complementary analytical perspectives for synchronous contemplation. In contrast to narrative techniques the ‘connecting tissue’ between these views has to be presented in parallel to the observer’s eye. One basic technique to do so is the use of consistent visual variables or design choices across different views (e.g., the consistent use of color for the same data dimensions in different perspectives). Interaction methods subsequently allow the user-driven enrichment of between-view-connections with further coherence cues. The most common of these synchronous integration techniques are coordinated selecting and highlighting or linking and brushing, as well as synchronized panning, scrolling or zooming.”
– “A Mental Models Perspective on Designing Information Visualizations for Political Communication”
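To make these coherence cues a little more tangible, here is a minimal sketch of my own (not from the cited paper), assuming the Altair library (version 5) and a small made-up sales dataset: one colour scale is declared once and reused across two views, and an interval selection (a “brush”) in the scatter plot drives the bar chart beside it.

```python
# A sketch of two coherence cues: a shared colour scale across views,
# and linked brushing so a selection in one view is reflected in the other.
# Assumes Altair 5 and an illustrative DataFrame (all figures made up).
import altair as alt
import pandas as pd

sales = pd.DataFrame({
    "region":  ["North", "South", "East", "West"] * 3,
    "month":   ["Jan"] * 4 + ["Feb"] * 4 + ["Mar"] * 4,
    "revenue": [120, 95, 130, 80, 140, 90, 150, 85, 160, 100, 155, 90],
    "margin":  [0.21, 0.18, 0.25, 0.15, 0.22, 0.17,
                0.26, 0.16, 0.23, 0.19, 0.27, 0.14],
})

# One colour scale, declared once and reused in both views,
# so "region" always looks the same wherever it appears.
region_color = alt.Color("region:N", scale=alt.Scale(scheme="tableau10"))

# An interval selection ("brush") defined on the scatter plot.
brush = alt.selection_interval()

scatter = (
    alt.Chart(sales)
    .mark_circle(size=120)
    .encode(x="revenue:Q", y="margin:Q", color=region_color)
    .add_params(brush)
)

# The bar chart is linked to the brush: only the selected points are summed.
bars = (
    alt.Chart(sales)
    .mark_bar()
    .encode(x="region:N", y="sum(revenue):Q", color=region_color)
    .transform_filter(brush)
)

# Writes a self-contained interactive HTML file with the two linked views.
(scatter | bars).save("linked_views.html")
```

The point is not the specific library but the pattern: the same dimension keeps the same colour everywhere, and a selection in one view is echoed in the other, giving users continuous confirmation that both views describe the same data.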
In some cases, mixed models (a hybrid of author-driven and reader-driven approaches) can be applied. A common structure in KPI dashboards is the drill-down: the visualization focuses on the main topics only, but users can click on them to explore sub-topics.
This visualization on COVID-19 by The Guardian uses a “martini glass structure.” It “starts solely author-driven, but opens up to free exploration after the user has been led through the narrative.”
Use natural mapping
Natural mapping, as described by Don Norman in his seminal book, The Design of Everyday Things, is an arrangement of elements where “the relationship between the controls and the objects to be controlled is obvious.”
He details an example involving the arrangement of the burner controls on a stove: it is apparently common for users to turn the wrong burner on or off because the position of each control relative to its burner is not obvious from the design.
In the same way, in a data visualization, interactive components such as buttons and filters should be placed next to the elements they act on. I’ve often seen dashboards where a filter applies to every chart except one or two, but the layout offers no clue that this behaviour should be expected.
Generally speaking, I try to use interactivity in moderation: more interaction means a more complex conceptual model for users to understand.
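As a rough sketch of what scoped, moderate interactivity can look like (my own illustration, again assuming Altair 5 and made-up data), the dropdown below filters only the chart it is attached to, and the neighbouring overview states in its title that it is not affected, so the layout itself hints at the filter’s scope.

```python
# A sketch of scoping a control to the view it actually drives.
# Assumes Altair 5; data and chart titles are made up for illustration.
import altair as alt
import pandas as pd

sales = pd.DataFrame({
    "region":  ["North", "South", "East", "West"] * 2,
    "month":   ["Jan"] * 4 + ["Feb"] * 4,
    "revenue": [120, 95, 130, 80, 140, 90, 150, 85],
})

# A point selection bound to a dropdown widget.
region_filter = alt.selection_point(
    fields=["region"],
    bind=alt.binding_select(options=["North", "South", "East", "West"],
                            name="Region "),
)

# Only this view responds to the dropdown...
detail = (
    alt.Chart(sales)
    .mark_line(point=True)
    .encode(x="month:N", y="sum(revenue):Q")
    .add_params(region_filter)
    .transform_filter(region_filter)
    .properties(title="Revenue for the selected region")
)

# ...while the overview is explicitly labelled as unaffected by it.
overview = (
    alt.Chart(sales)
    .mark_bar()
    .encode(x="region:N", y="sum(revenue):Q")
    .properties(title="All regions (not filtered)")
)

(detail | overview).save("scoped_filter.html")
```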
Avoid pollution
Data visualization software may include default features that are not always necessary: automatic highlighting whenever an element is selected, default controls, the ability to click on every data point, and so on. When these features don’t fit coherently into the chosen design, they end up creating “noise” for users trying to understand how the visualization works. Sometimes, as dataviz designers, we are so used to them that we don’t see them anymore. We may even add unnecessary elements without realizing we’re blurring the user experience.
“Polluting features” should be avoided or neutralized through appropriate constraints for that specific software or medium. These constraints help channel users’ attention to only what matters for understanding the visualization.
Use a meaningful structure rather than instruction sheets
When designing dashboards for clients, I’m often asked to create an “instructions” page containing everything users should know about what the charts show, where the data come from, how KPIs are calculated, and how to interact with the visuals. Users are supposed to read this page on their first visit and possibly read it again if they forget or get stuck.
Unfortunately, people don’t read pages on screen, but rather scan them. Very few people read online instructions, and those who do will probably forget what they read on their second or third visit. In fact, user experience research has found that users read only about 20 percent of the text on the average page. This finding is particularly important when designing corporate KPI dashboards (for other types of data visualization, such as data journalism or printed posters, other considerations come into play).
Instructions and documentation are not completely without merit, but the first instinct of a dataviz designer should be to use a meaningful structure in the interface and, where needed, drop small bits of instruction directly into the visualization (for example, as brief subtitles or tooltips).
Users shouldn’t be asked to remember how the UI works, or as Don Norman puts it:
“The most effective way of helping people remember is to make it unnecessary.”
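As a small, hypothetical example of such in-context instruction (again using the Altair library, with made-up figures), a subtitle can state the scope of the data and tooltips can carry the explanation to the exact point where the question arises.

```python
# A sketch of dropping small bits of instruction directly into a chart
# instead of a separate help page. Assumes Altair; figures are made up.
import altair as alt
import pandas as pd

revenue = pd.DataFrame({
    "month":   ["Jan", "Feb", "Mar"],
    "revenue": [345, 365, 405],
})

chart = (
    alt.Chart(revenue)
    .mark_bar()
    .encode(
        x="month:N",
        y="revenue:Q",
        # The explanation travels with the data point itself.
        tooltip=[
            alt.Tooltip("month:N", title="Month"),
            alt.Tooltip("revenue:Q", title="Revenue (k€), all regions, excl. returns"),
        ],
    )
    .properties(
        # The subtitle answers "what am I looking at?" without an instructions page.
        title=alt.TitleParams(
            text="Monthly revenue",
            subtitle="Sum over all sales regions; returns are excluded.",
        )
    )
)

chart.save("monthly_revenue.html")
```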
Apply UX methods
A data visualization does not live in a vacuum: it’s always used by specific people in specific contexts, with specific purposes and expectations. That’s why user research, user interviews and testing, heuristic evaluation, or simple presentations to users are useful methods to make sure a data visualization supports the construction of an appropriate conceptual model, in other words, that it is intuitive for its target users.
“Whether an information visualization system supports or hinders the construction of a mental model depends on many different factors. In addition to the design, the complexity of the dataset (its dimensionality and size) and user characteristics (prior knowledge, tasks or goals, visual literacy) also play a decisive role.”
– “A Mental Models Perspective on Designing Information Visualizations for Political Communication”
Mental models at the data level
Every data analyst has probably heard this phrase at least once: “Where do those numbers in the dashboard come from? I don’t have the same numbers in my files. The dashboard must be wrong.”
Often, the perceived misalignment stems from the visualized data not covering the same scope as other internal reports; the scope (categories, timeframe, etc.) should always be clear to the user. The disconnect may also come from the way a KPI has been calculated: even inside the same company or department, a given KPI may be calculated following different methods.
In both cases, a mismatch between the data a user expects (i.e., their mental model) and the data they see can undermine their trust in the dashboard.
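A tiny, hypothetical illustration of how such a mismatch can arise: two perfectly legitimate ways of computing an “average conversion rate” over the same figures produce noticeably different numbers.

```python
# A sketch (all figures made up) of why the "same" KPI can differ:
# two legitimate calculation methods over identical data disagree.

campaigns = [
    {"name": "A", "visits": 10_000, "orders": 300},
    {"name": "B", "visits": 500,    "orders": 50},
]

# Method 1: ratio of totals (each visit weighs the same).
total_rate = sum(c["orders"] for c in campaigns) / sum(c["visits"] for c in campaigns)

# Method 2: mean of per-campaign rates (each campaign weighs the same).
mean_rate = sum(c["orders"] / c["visits"] for c in campaigns) / len(campaigns)

print(f"Conversion rate, ratio of totals:   {total_rate:.1%}")  # 3.3%
print(f"Conversion rate, mean of campaigns: {mean_rate:.1%}")   # 6.5%
```

A user whose own files contain the second number will understandably distrust a dashboard showing the first, unless the calculation method is stated somewhere they can find it.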
If several calculation methods are used, the dataviz designer has two alternatives:
- Use one method as the default, while giving the user the option to switch to another calculation method. This is a flexible solution, but it risks overcomplicating the dashboard.
- Choose only one method, but make sure users are aware of the calculation applied. This approach may also contribute to creating a shared vision of the KPI in the company.
Corporate KPI dashboard users are normally experts in their field. When they spot unexpected values in a visualization, they will probably shift to a more analytical reading of the dashboard, looking for the reason behind those numbers.
At this point, it’s important to give them the ability to find the answer directly in the interface (and not in a separate instruction manual), for example as a tooltip or an annotation. This will enable them to adjust their mental model smoothly as they navigate the visualization.
Once, in a restaurant, I was looking for the toilet. I walked towards the back, down a corridor, and went straight to a door that looked exactly like a toilet door. It wasn’t. But on the door I found a Post-it note that read, “The toilet is right behind you.” The tooltip or annotation that the user should be able to find while navigating the dataviz has exactly the same function as this Post-it. Imagine if the information about the toilet had been written on the menu instead of the door!
One last thought: even before reaching the final stage of the design process, collaborative and user-centered methodologies allow you to co-construct the solution with users, which can help them gradually adopt the dashboard and its calculation rules right from the development stage.
Mental models at the system level
Where do the visualized data come from? Have they been blended, cleaned, and transformed? If so, how? Some users feel the need to answer these questions; others don’t.
Whatever their profile, do users need to know all the details of the data modeling process? Probably not. They certainly don’t need to become experts in databases and data preparation (unless they wish to, of course!). But they do need the relevant information to correctly grasp the scope, the calculation rules, the sources, and the possible limitations of the data they’re consuming, and thus build their trust in the visualization.
Once a foundation of trust has been established, it doesn’t matter whether their mental model of the data processing is incomplete or even partially incorrect.
“Conceptual models are powerful explanatory devices, useful in a variety of circumstances. They do not have to be accurate as long as they lead to the correct behaviour in the desired situation.”
Don Norman
In conclusion, mental models are a rich and versatile concept that can be applied to several aspects of data visualization, from the UI to the data and the system behind the data. Thinking in terms of mental models can bring dataviz designers closer to users’ perspectives and help them create more enjoyable visualization experiences. For organizations that pride themselves on living their values, these practices can reinforce concepts like transparency and self-determination across the organization.
A coherent and meaningful structure reduces cognitive load for users, and helps them more efficiently find answers and insights in data. The smoother this process, the more likely they’ll develop trust in the dataviz. Data visualizations can be complex products – and the audience is not always familiar with their visual language. Capturing readers’ attention, engaging them, and giving them valuable information can be difficult tasks, in which trust plays a key role – be it in internal reports, external communication, or data journalism.
Resources and works cited
- Schreder, Günther, Florian Windhager, Michael Smuc, and Eva Mayr. “A Mental Models Perspective on Designing Information Visualizations for Political Communication.” eJournal of eDemocracy and Open Government (JeDEM), 2016, pp. 80-99.
- Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.
- Nielsen Norman Group, a UX research and consulting firm founded by user experience pioneers Jakob Nielsen and Don Norman.
- Norman, Donald A. The Design of Everyday Things. New York: Doubleday, 1990.
Silvia Romanelli is an Italian data visualization designer based in France.
She previously worked in journalism and in digital communication in the non-profit sector. She’s interested in data journalism, information design in general, UX/UI, and the use of dataviz for social good.