Through a Partisan Lens: How Politics Overrides Information

AI-generated image of a brown dog wearing a suit with sunglasses. The glasses have lenses that are shaded blue and red.

As information designers, we don’t typically think of our work as political. Our first loyalty is to the data. We help viewers understand the world around them by wresting big, complex ideas out of the platonic ether and squeezing them into two or three dimensions. Normally, that just means solving the usual challenges like information architecture, dimensionality reduction, or weaving seemingly disparate facts into a cohesive narrative.

But for some of the most important issues of our day, politics is a crucial lens through which people see the world, and this can impact how they see data. 

For example, consider an influential study from researchers at Yale, looking at how political alignment can create blind spots, even for the most analytically savvy people. Participants were presented with two different data stories: one on the efficacy of a skin cream for curing a rash, the other on the efficacy of gun control policies for stemming gun violence. The trick: Both stories were based on the exact same underlying data. So if participants read the data to say the skin cream was effective, they should rationally also conclude that the gun control policies were effective. But that’s not what happened. Instead, even for this highly numerate crowd, when participants saw the politically charged topic, their responses became polarized along party lines. Instead of objectively following the data, they couldn’t help but interpret it as evidence supporting their prior political positions. 
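
To see why “same underlying data” matters, here’s a minimal sketch of the kind of 2×2 table the study used. The counts and labels below are illustrative, not the study’s actual figures; the point is that the correct reading requires comparing proportions rather than raw counts, and that arithmetic is identical whether the rows describe a skin cream or a gun control policy.

```python
# Illustrative 2x2 outcome table in the spirit of the Yale study.
# These counts are made up for demonstration; they are not the study's data.
outcomes = {
    "treatment": {"improved": 223, "worsened": 75},   # e.g. patients who used the cream
    "control":   {"improved": 107, "worsened": 21},   # e.g. patients who did not
}

def improvement_rate(group: str) -> float:
    """Share of the group whose outcome improved."""
    counts = outcomes[group]
    return counts["improved"] / (counts["improved"] + counts["worsened"])

treated, untreated = improvement_rate("treatment"), improvement_rate("control")

# The raw count (223 improvements) favors the treatment, but the proportions don't:
print(f"Improved with treatment:    {treated:.0%}")    # ~75%
print(f"Improved without treatment: {untreated:.0%}")  # ~84%
print("Treatment looks effective?", treated > untreated)  # False
```

Swap the row labels for a gun control policy and the computation doesn’t change at all; only the politics do.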

To design effectively, it’s important to understand not just how to construct a clear chart, but how people will actually interpret it. Since politics can be so distorting, it’s worth understanding how it shapes our interpretations. To do this, we’ll unpack the social and political psychology that drives our attitudes and beliefs about big political issues.

Why should data designers care about political partisanship?

Effective dataviz means designing for more than just the data on the page. The context that viewers bring to a visualization can shape how they respond to it. In our politically charged culture, the topics that need the most explaining are also often the most political. Whether we like it or not, the information that we present will be consumed through a partisan lens. By understanding these biases, we can at least address them consciously. This can help in a few ways:

  • Adapting to a fact-free universe. Information design is premised on information being helpful. But sometimes information can’t help. Understanding cases like these, when information isn’t actually useful because attitudes and prior beliefs cloud reality, can help us better pick our battles, prioritize our visualization efforts, and adapt our storytelling. 
  • Persuading people with people. When reasoning fails, people look to others for guidance. For political issues, we’re heavily influenced by the people around us. Understanding how attitudes can spread through dataviz can help us produce more persuasive visualizations. 
  • Minimizing the harmful side effects of well-intended dataviz. Information can do more than just inform. As we’ll see, partisan issue polling charts can increase political polarization. Understanding these unexpected risks can help us mitigate them.

In a partisan environment, if our ideas and decisions aren’t strictly based on information, where do they come from? To understand this we’ll dive into social and political psychology. 

Political information psychology

Understanding social and political psychology can help clarify the boundaries of information’s influence. As we’ve already suggested, the facts aren’t always as persuasive as they should be. 

On the other hand, some types of information can be influential in ways that they shouldn’t be.

Social Influences

Some city dwellers looking up, reenacting Stanley Milgram’s famous “Drawing Power of Crowds” experiment. Image made with Midjourney.

If you look up, I look up.

It’s almost a cliche to say that humans are social creatures, but that doesn’t make it less true. We are comically suggestible. For example, in a famous social psychology experiment from the 1960s, psychologist Stanley Milgram sent his research team out onto the busy streets of New York City. He instructed his team to find a crowded part of town, stop in the middle of the sidewalk, and just look straight up at the sky.

The busy New Yorkers not only noticed the researchers’ upward gaze, they stopped to join them, following the researchers’ example to find out what was so interesting. Other silly experiments show similar social conformity effects.

Why are MBAs conservative, and social scientists liberal?

Our social surroundings also influence our theories about how the world works, what we believe in, and what we value. For example, one 1996 study followed 91 students (34 business majors and 57 social science majors) throughout their college careers. The researchers wanted to understand how the students’ majors influenced their beliefs, particularly whether they thought poverty and unemployment were caused by personal failings (e.g. laziness) or larger systemic factors (e.g. inequality).

During their first year, students’ majors were uncorrelated with their beliefs. But by the third year, business school students disproportionately blamed poverty on the impoverished, while social science students pointed to external, systemic factors. The embedded cultural values of their coursework and their environments influenced their beliefs about this fundamental question of social justice.

Group Influences

An expressionistic interpretation of youthful tribalism, inspired by Henri Tajfel’s classic study. Image made with Midjourney.

Expressionism’s divisive influence on our impressionable youths

The silliness continues when considering the special privilege we give to people who are like us. The classic 1971 experiment highlighting tribalism showed how a group of adolescent boys, with common histories as classmates, were transformed into opposing factions when researchers assigned them to different taste groups based on their self-reported reactions to the works of Paul Klee or Wassily Kandinsky.  Despite the boys’ shared history, when they were given a small pile of cash to divide amongst their classmates, suddenly their prior friendships meant very little.

Instead, the boys shifted their allocations dramatically toward their newfound brothers-in-art. This is, emphatically, not because the nuances of Kleesian vs. Kandinskian expressionism were a hot topic for these high schoolers (behind the scenes, the researchers assigned the groups arbitrarily). Rather, it demonstrates how even the most arbitrarily constructed social groups can produce in-group favoritism and out-group discrimination. In fact, other experiments showed similar results when the groups were based on nothing more than a coin toss.

We like people who are like us, even if all we have in common is mutual disdain for some other group of people. 

Common ground beyond politics

These social group effects are presumably stronger for political groups, where party members actually have real things in common. Political psychology research suggests that we share some very primal psychological traits and needs with our fellow partisans. 

Political psychologists suggest that conservatives place great value on feelings of security and certainty, while liberals are more comfortable with uncertainty, ambiguity and risk. Conservatives also value uniformity in their social groups, while liberals value differentiating themselves. Perhaps because of these low-level psychological needs, members of today’s political parties have a lot in common with their fellow partisans. This is particularly true for U.S. Republicans, who also skew white, Christian, and rural.

This is the basis for the “identity stacking” theory of polarization. This theory observes that more and more of our identity traits have lined up with our political identity. For example, if you know that someone is a Democrat, then you’ve also got better odds of guessing their views on climate change, which part of the country they live in, how long they spent in school, how confident they feel about the economy, and whether or not they’re armed.

If we have more and more in common with the people in our political party, then we’d expect our fellow partisans to be particularly influential.

Political Attitude Formation

An AI-generated image of cats on the right wearing red and dogs on the left wearing blue. Between them is an image of a pug in front of the Canadian flag. The dogs have hearts above their heads, while the cats have frowny-face icons above theirs.
Do different parties have different attitudes on Canadian imports? Image made with Midjourney.

One thing we all have in common: We’re busy. And we’re tired. (So so tired.) Even if we have the interest, very few people have the time or energy to dive into the guts of tax policies, environmental regulations, or the extended implications of Citizens United. These are big, complex and multifaceted policies.

So, for very practical reasons, people form their attitudes and judgments by listening to other people that they trust. In particular, we look to our political parties to tell us which policies we should support and which ones we should oppose.

Do we choose our parties based on policies, or our policies based on parties?

One interesting study from 2003 told participants about one of two proposed welfare programs, either a severely “stringent” program that offers far less support than existing policies, or a “generous” program that’s almost shockingly extensive compared to any U.S. welfare programs to date. 

From an ideological perspective, you’d expect conservatives to favor the former and dislike the latter, and liberals the opposite. However, researchers found that the content of the policy itself didn’t matter nearly as much as who endorsed it. For example, conservatives were willing to support either program as long as they were told it was supported by “95% of Republicans and 10% of Democrats.” 

Instead of choosing political parties that match our ideas, the process seemingly happens in reverse. We’re flexible on our policies as long as they’re supported by our people. 

How can attitudes spread through dataviz?

As we’ve seen, our attitudes are influenced by the people around us. This is especially true for political judgments that are difficult to learn experientially. It turns out that this same influence can happen through charts. For example, public opinion polling is a popular topic in political data journalism. What influence might we expect from charts like these?

A chart showing 67% of U.S. adults believe it should be illegal to manufacture or distribute camouflage-pattern Crocs.
This chart shows 67% of Americans oppose camo-crocs.
This is fake data, but seems like a reasonable guess?

Consider the chart above. This shows pretend results from a hypothetical public opinion poll seeking Americans’ views on camo-Crocs. Specifically, it shows that 67% of people would support a policy to ban these abominations of footwear. Since this shows that the policy is generally popular, we might expect viewers who see this chart to identify with their fellow citizens and adjust their own attitudes to match the social norm shown in the chart. 

  • For people who were previously opposed to the policy, social psychology suggests that they’d increase their support. 
  • On the other hand, people who were already very strong supporters might actually decrease their support, since they see that others are relatively more ambivalent.

This highlights an important concept: By showing that an idea is popular, charts can make the idea more popular. And vice versa. 
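
As a toy illustration of that conformity effect (an assumed model for intuition, not the mechanism from the research itself), imagine each viewer nudging their own level of support partway toward the level of support displayed in the chart:

```python
def updated_support(own_support: float, displayed_norm: float,
                    conformity: float = 0.3) -> float:
    """Toy conformity model: shift a viewer's support partway toward the
    norm shown in the chart. The `conformity` weight is an assumption for
    illustration, not an empirically estimated parameter."""
    return own_support + conformity * (displayed_norm - own_support)

norm = 0.67  # the chart: 67% of Americans support the camo-Crocs ban

# A previous opponent (20% support) moves up toward the norm...
print(updated_support(0.20, norm))  # 0.341
# ...while a very strong supporter (95%) drifts back down toward it.
print(updated_support(0.95, norm))  # 0.866
```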

This chart shows that 84% of Democrats and 45% of Republicans support camo-crocs.
This chart shows that Democrats strongly support and Republicans slightly oppose camo-crocs.
This is totally fake data. Republicans surely also agree that camo-Crocs are ridiculous.

This chart shakes things up a bit. Now it shows the results from our hypothetical opinion poll split by political party. We can see the camo-Crocs ban is very popular with Democrats and less popular with Republicans. These are effectively party endorsements; they’re just quantified and visualized. In the last section, we covered several experiments where highlighting a party’s endorsement of a policy changed viewers’ attitudes toward the policy, so we’d expect charts like these to have similar effects.

  • If a moderate Democrat sees this chart, we’d expect them to increase their support. 
  • If a moderate Republican sees the chart, we’d expect them to decrease their support. 
  • If a bunch of moderate Democrats and Republicans all see this chart, we’d expect their attitudes to diverge away from each other. 

This example shows one of the potential consequences of social conformity from polling results. For partisan-split polling charts like these, we might expect people’s attitudes to become more polarized. To the extent that polarization is bad, this implies that charts like these have an inherent social cost. These charts may be informationally valuable — or at least mildly entertaining — but they’re not without risk.
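
The divergence prediction falls out of the same kind of toy update rule if viewers anchor on their own party’s number instead of the overall one. Again, the rule and all of the numbers here are illustrative assumptions, not the model from our study:

```python
def updated_support(own_support: float, displayed_norm: float,
                    conformity: float = 0.3) -> float:
    """Same toy update rule as above: shift partway toward the salient norm."""
    return own_support + conformity * (displayed_norm - own_support)

# From the hypothetical split chart: 84% of Democrats, 45% of Republicans support the ban.
dem_norm, rep_norm = 0.84, 0.45

# Two moderates who start in exactly the same place (60% support)...
moderate_dem = updated_support(0.60, dem_norm)  # 0.672 -> moves up
moderate_rep = updated_support(0.60, rep_norm)  # 0.555 -> moves down

# ...walk away from the same chart further apart than they started.
print(round(moderate_dem - moderate_rep, 3))  # 0.117
```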

Our research shows that both of these effects are very real. Political polling charts can very much influence viewers’ political attitudes. When viewers see a chart showing that a policy is popular, that chart can make the policy more popular. When viewers see a chart showing that attitudes are polarized across party lines, that chart can make viewers more polarized.

Great, so what? What should designers do differently?

Alberto Cairo offers a useful maxim for ethical data journalism: “The purpose of journalism is to increase knowledge among the public while minimizing the side effects that making that knowledge available might have.” He summarizes the goal as: “Increasing understanding while minimizing harm.”

As we’ve seen, attitudes can spread from person to person, regardless of their actual content. This means that visualizing attitudes from survey results can have the unexpected side effect of promoting those attitudes. This can be risky in the context of political polarization, as visualizing polarized attitudes can increase polarization.

The social conformity effect can also be harmful in and of itself. 

For example, imagine an interest group called “Dirty Handed Doctors of America.” Let’s say they survey their unhygienic-but-medically-credentialed members. Their main finding: “94% of MDs in our esteemed organization strongly agree we should stop washing our hands before treating patients.” That finding may in fact be totally accurate. Their opinion is wrong, but it could be true that 94% of them support it. Our research suggests that visualizing extreme attitudes like these might help them spread further (like the germs on their filthy, filthy hands). So even though their survey results might be technically true, publicizing them may reduce support for hand-washing among other sympathetic physicians.

This means that we can’t just assume, by default, that visualizing polling results is a civic good simply because the results are accurate and informative. As Cairo suggests, we have a stronger duty of care than simply conveying technically accurate information. Since visualizing attitudes comes with an implied risk, we need to consciously weigh those risks against whatever benefits we expect from publicizing them.

What should we do?

Before publishing polling results, especially for political issues, designers, data journalists, and editors should ask: “If more people agreed with the attitudes in the chart, would that be good for the world?”

To be sure, this won’t always be an easy question to answer. Attitudes supporting “physicians shouldn’t wash their hands” are obviously silly. But political topics typically cover grayer areas, involving subjective values and morals. Deciding to publish political attitudes, then, is a subjective judgment call. The question above can only help you frame that decision. By at least attempting to answer the question, you’re forced to weigh the risks of spreading potentially silly ideas versus the benefits of sharing the information. Even if you decide that the information is worth the risk, you’ve at least made the judgment call consciously and thoughtfully, rather than taking its value for granted.

Takeaways

Viewers’ politics can influence how they see the world. This, in turn, influences how they take in new information. This has a few important implications for anyone visualizing social issues or otherwise politically-charged information.

  • Facts aren’t always as influential as they should be. If all of our attitudes and decisions were purely rational and information-based, the silly effects we highlight above wouldn’t exist. But in the real world, judgments about identical datasets can flip based on a person’s politics. And attitudes toward public policies are more influenced by endorsements than by the policies themselves. Information is still influential, but the surrounding social context should be considered as well.
  • Survey results can be influential in ways they shouldn’t be. Information about others’ political attitudes (e.g. polling results) can unreasonably influence our own political attitudes. This influence can happen through simple partisan cues, like whether a party supports or opposes a policy, or by visualizing survey results. This also means that popular political data journalism, such as election forecasts or issue polling, can have toxic side effects like increased political polarization.
  • Before visualizing political attitudes, weigh the risks versus benefits. Information designers should take these risks of attitude contagion into account when deciding whether to visualize and how to frame polling results. We can’t always objectively answer the guiding question (“If more people agreed with the attitudes in the chart, would that be good for the world?”) but by raising the question in the first place we can ensure judgment calls like these are made consciously and thoughtfully.

Dive deeper!

This writeup is meant as a primer for 3iap’s latest peer-reviewed visualization research, which we presented at this year’s IEEE VIS conference, in collaboration with Georgia Tech’s Cindy Xiong-Bearfield. If you’d like to better understand the pathway from polling charts to polarization — or see our 9-minute talk on the politics of cats and dogs — please check out our deep dive on the research project.

Dive Deeper: Polarizing Political Polls Design Research Project.


Eli Holder is the founder of 3iap. 3iap (3 is a pattern) is a data, design and analytics consulting firm, specializing in data visualization, product design and custom data product development.
