From descriptive to prescriptive: How AI in data visualization is evolving from plotting the “What” to telling the “Why”.

Today’s AI systems can draw charts. Some can even summarize trends. But what they still can’t do well, and what’s becoming increasingly important, is explain. If you ask, “Why did revenue in the Northern region drop in Q2?” you are not asking for a bar chart. You are asking for a diagnosis.
You expect not just data, but reasoning: a system that not only visualizes what happened but, more importantly, helps uncover why it happened. This is the shift from descriptive AI to prescriptive visual intelligence. And building that kind of system is not just about better models. It’s about rethinking how language, data, and visualization are connected in a layered reasoning pipeline.
Let’s walk through what happens under the hood and what it takes to move from displaying numbers to constructing visual explanations.
From prompt to structured intention
When a user types, “Can you show why revenue dropped in the Northern region during Q2?” they’re not issuing a fixed instruction. They’re initiating a layered analytical request.
From a prescriptive analytical perspective, this sentence has multiple layers of semantic structure, including metric inference, temporal framing, comparative logic, and implied causality.
For example:
- “Why” triggers diagnostic mode. The system must look for explanatory factors.
- “Revenue” isn’t always a physical field. It’s often derived from price and quantity, and may be subject to discounts.
- “Northern region” may map to a dimensional hierarchy or a filter that spans several locations.
- “Q2” may refer to a calendar or fiscal quarter, which must be resolved contextually.
- “Dropped” implies change over time, but the reference point, e.g. Q1, is unstated. It must be inferred.
These linguistic cues must be resolved into a structured analytical intent—a model of inquiry that determines what data is needed, how it should be interpreted, and how its meaning can be most effectively communicated through visualization.
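To make this concrete, here is a minimal sketch of what such a structured intent might look like once the prompt is parsed. The schema and field names are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class AnalyticalIntent:
    """One possible shape for a parsed analytical request (illustrative)."""
    mode: str       # "diagnostic" -- triggered by the word "why"
    metric: str     # "revenue" -- may resolve to a derived calculation
    segment: dict   # {"region": "Northern"} -- may expand to a hierarchy
    period: str     # "Q2" -- calendar vs. fiscal resolved later
    baseline: str   # inferred reference point, e.g. the previous quarter
    direction: str  # "drop" -- implies change over time

intent = AnalyticalIntent(
    mode="diagnostic",
    metric="revenue",
    segment={"region": "Northern"},
    period="2024-Q2",
    baseline="2024-Q1",
    direction="drop",
)
```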

Semantic layers
No meaningful analysis can begin without knowing what the user’s words refer to. In traditional BI platforms, the kind used for reporting, querying, and analysing data, metrics and dimensions are often just labels in a schema. They exist, but the system doesn’t “understand” them. In more advanced, prescriptive systems, those terms must be interpreted through a semantic layer: a metadata-driven model that defines what each concept means, how it relates to other fields, and how business logic should be applied.
When the system sees “revenue,” it consults this layer to check:
- Is it a standalone field or a derived calculation?
- Does it exist in raw transactional data or only in summary views?
- Is it affected by other variables, like return date or discount policy?
For the “Northern region,” the system needs to resolve whether it corresponds to a field in a table, a label in a dimension hierarchy, or a mapping that requires joining multiple datasets. For “Q2,” it checks whether the organization operates on a fiscal calendar and, if so, what date range Q2 actually covers.
These checks are not optional. Without them, the system might generate a syntactically valid query that answers a completely different question than the one the user intended.
The semantic layer acts as the system’s vocabulary of meaning, connecting human concepts to technical structure. This is what makes the intent computationally actionable and sets the exploratory analysis in motion.
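As a rough illustration, a semantic layer can be thought of as a registry that maps business terms to definitions, sources, and joins. Everything below, the table names, expressions, and date ranges, is a hypothetical example rather than a real schema:

```python
SEMANTIC_LAYER = {
    "revenue": {
        "expression": "SUM(price * quantity * (1 - discount))",  # derived, not stored
        "source": "fact_sales",
        "depends_on": ["price", "quantity", "discount"],
    },
    "northern_region": {
        "filter": "dim_geo.region_group = 'North'",  # spans several locations
        "requires_join": ["fact_sales", "dim_geo"],
    },
    "q2": {
        "calendar": "fiscal",                        # resolved from org settings
        "date_range": ("2024-04-01", "2024-06-30"),
    },
}

def resolve(term: str) -> dict:
    """Look up a business concept; fail loudly if the vocabulary lacks it."""
    if term not in SEMANTIC_LAYER:
        raise ValueError(f"'{term}' is not defined in the semantic layer")
    return SEMANTIC_LAYER[term]

print(resolve("revenue")["expression"])
```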

How the AI plans its queries
Once the question has been resolved into structured intent, the system must plan its data extraction. But here, explanation requires more than a single aggregation. It requires context, comparison, and a hypothesis space.
First, the system retrieves the primary metric—revenue across the relevant time periods, Q1 and Q2, for the target segment, Northern region. This validates that the drop is real and quantifies it.
Then it begins to gather possible explanatory data: channel spend, conversion rate, discount frequency, product category mix, etc.—anything that may have shifted between those quarters. Each of these drivers usually exists in a different table. Each query needs to be scoped, filtered, and time aligned.
For example, one query may compare marketing channel allocations between Q1 and Q2. Another may retrieve average conversion rates by month. A third may analyze shifts in sales volume by product type.
Each result is a piece of potential evidence. The system is not drawing conclusions yet. It is preparing a “diagnostic map”—a time-aligned and scope-matched set of metric slices that could help explain the observed change. This is where prescriptive systems diverge from traditional BI. They don’t just retrieve facts, they retrieve candidate causes.
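A sketch of what that query planning might look like: each candidate driver gets the same segment scope and the same time alignment. The table and column names here are assumptions for illustration:

```python
CANDIDATE_DRIVERS = [
    "channel_spend", "conversion_rate", "discount_frequency", "category_mix",
]

def plan_queries(segment_filter: str, periods: tuple) -> list:
    """Build one scoped, time-aligned query per driver and period."""
    queries = []
    for driver in CANDIDATE_DRIVERS:
        for period in periods:
            queries.append(
                f"SELECT '{driver}' AS driver, '{period}' AS period, "
                f"AVG({driver}) AS value "
                f"FROM driver_metrics "
                f"WHERE {segment_filter} AND quarter = '{period}'"
            )
    return queries

for q in plan_queries("region = 'Northern'", ("2024-Q1", "2024-Q2")):
    print(q)
```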

How the system identifies what matters
Once this diagnostic map is assembled, the system faces the most non-trivial challenge—identifying which of these shifts are plausible contributors to the revenue drop, and which are incidental.
To begin, it performs a layer of statistical screening: calculating deltas, highlighting drivers that show significant change between quarters, and filtering out noise. These are first-pass candidates: variables that changed when revenue changed.
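In code, that first pass can be as simple as a delta calculation with a noise threshold. The numbers below are invented to show the mechanics:

```python
observations = {  # hypothetical Q1 -> Q2 values per driver
    "channel_spend":      (100_000, 78_000),
    "conversion_rate":    (0.041, 0.032),
    "discount_frequency": (0.18, 0.19),
    "category_mix_share": (0.55, 0.54),
}

NOISE_THRESHOLD = 0.10  # a real system would calibrate this
                        # against historical variance

candidates = {
    driver: (q2 - q1) / q1
    for driver, (q1, q2) in observations.items()
    if abs((q2 - q1) / q1) >= NOISE_THRESHOLD
}

print(candidates)
# {'channel_spend': -0.22, 'conversion_rate': -0.219...}
```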
But correlation is not an explanation. To go further, the system needs to assess causal relevance. It uses several methods in combination:
- Domain heuristics, embedded in the semantic layer (e.g. “conversion rate often drives revenue”)
- Trained models, such as XGBoost or other classifiers, which learn from historical data which combinations of changes have historically preceded metric shifts
- Causal pattern models, often built on transformer architectures, which generate hypotheses in natural language and evaluate them against temporal and structural data patterns
Each of these methods outputs a confidence score—a measure of how strongly a given variable’s change is likely connected to the revenue drop. The system then ranks its findings, forming a stack of “most likely drivers.”
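A toy version of that scoring step, with invented weights and scores, might combine the three methods into one ranked list:

```python
evidence = {
    # driver: (heuristic, trained_model, causal_pattern) confidence scores
    "conversion_rate": (0.90, 0.81, 0.75),
    "channel_spend":   (0.60, 0.72, 0.68),
    "discount_policy": (0.40, 0.35, 0.20),
}

WEIGHTS = (0.3, 0.4, 0.3)  # how much each method contributes (assumed)

ranked = sorted(
    ((driver, sum(w * s for w, s in zip(WEIGHTS, scores)))
     for driver, scores in evidence.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for driver, confidence in ranked:
    print(f"{driver}: {confidence:.2f}")
# conversion_rate: 0.82, channel_spend: 0.67, discount_policy: 0.32
```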
The goal is not to prove causality in the strict sense that would require controlled experiments. The goal is to build a narrative scaffold from statistically and semantically defensible signals. This scaffold is what the system will soon express—through both visuals and text—as its explanation.

Designing the visual explanation
Once the system has identified the likely drivers behind the change, and scored them by relevance, it must decide how to present that insight. This is not a matter of generating a bar chart or plotting a trend line. In prescriptive systems, visualization is not decoration; it’s a rhetorical structure. The chart is part of the argument.
The system begins by selecting a visual template that aligns with the structure of the explanation. If the insight involves temporal shifts, it will likely favor line charts with layered annotations. If the cause is distributional, say, a change in product mix, it may use a stacked area chart or a small-multiple layout. If the explanation depends on relationships between variables (for example, spend → conversion → revenue), it might choose a flow diagram or funnel.
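A simplified version of that selection logic, with illustrative categories:

```python
def select_template(explanation_kind: str) -> str:
    """Map the structure of an explanation to a visual form (illustrative)."""
    templates = {
        "temporal":       "line chart with layered annotations",
        "distributional": "stacked area chart or small multiples",
        "relational":     "flow diagram or funnel",
    }
    return templates.get(explanation_kind, "annotated bar chart")  # fallback

print(select_template("temporal"))  # line chart with layered annotations
```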
But the most important decision is not the chart type. It’s the narrative logic the chart is meant to carry. An explanation chart must do at least three things:
- Present the main metric and its change over time
- Overlay or juxtapose the candidate driver
- Provide visual emphasis (e.g. annotations, highlights) to guide the user’s interpretation
This process is guided by a visual grammar engine, usually built on top of systems like a custom D3 abstraction layer. The system then expresses the insight in natural language, something like: “Revenue in the Northern region dropped by 17% in Q2. This coincided with a 22% drop in conversion rate beginning in April, following a shift in marketing channel allocation. The trend is visible in the chart below.”
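Putting those pieces together, the output of such an engine might be a declarative spec that pairs the chart with its narrative. The structure below is a hypothetical illustration, not any specific library’s format:

```python
explanation_spec = {
    "template": "line_with_annotations",
    "primary": {"metric": "revenue", "segment": "Northern region"},
    "overlay": {"metric": "conversion_rate", "axis": "secondary"},
    "emphasis": [
        {"type": "annotation", "x": "2024-04", "text": "Channel mix shifted"},
        {"type": "highlight", "range": ("2024-04", "2024-06")},
    ],
    "narrative": (
        "Revenue in the Northern region dropped by 17% in Q2. This coincided "
        "with a 22% drop in conversion rate beginning in April, following a "
        "shift in marketing channel allocation."
    ),
}
```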
Together, the visual and the narrative constitute the system’s answer. Not just a chart. Not just a sentence. A multi-modal explanation, assembled to help the user not just see the data, but understand what happened.

Why this changes the role of visualization…and the analyst
The implications of this shift are deep. In traditional BI workflows, charts are static outputs—visualized summaries of queries written by humans. But in prescriptive AI, charts become expressive artifacts—generated as part of the system’s attempt to construct meaning.
This changes both how we build systems and how we interpret them. Analysts will no longer spend most of their time choosing chart types or filtering dashboards. Instead, they will work as editors of algorithmic reasoning: validating what the system inferred, refining its narrative framing, and questioning its assumptions.
In turn, visual literacy itself evolves. It’s no longer just the ability to interpret axes and legends. It’s the ability to interrogate machine-generated visual arguments: “What did the system decide to highlight?” “What’s being compared?” “What was left out?”
In this new reality, charts are no longer the answers. They are hypotheses: drawn in real time, constructed by models, and subjected to judgment.

Conclusion: From charting to cognition
The future of AI is not about building dashboards faster. It is about building systems that can synthesise patterns, identify causes, and communicate explanations in ways humans can readily grasp.
Prescriptive visual intelligence doesn’t mean better chart automation. It means that when we ask a question like “Why did revenue fall?”, we don’t just get a number or a trend line.
We get a visual theory.
We get a structured answer.
We get a system that tries to explain the reasons, what led to what.
And in that moment data visualization becomes more than visual design. It becomes a language for computational thought.


Daria Voronova
With over seven years of experience in data visualization and analytics, I focus on transforming how organizations approach business—shifting from tactical execution to a forward-thinking, strategic approach. Collaborating with C-level teams and Fortune 500 clients, I design strategies to address critical challenges, enabling self-service analytics and data-driven solutions. Valuing knowledge sharing, I’ve mentored professionals at all levels, taught the principles and thinking behind data visualization, and emphasized solving real-life business problems. My work has reached over 85,000 learners globally, using data as a language to uncover clarity and achieve measurable results.