Article · April 8, 2026

The dashboard nobody opens: why 68% of compliance dashboards fail

Mohammad Ahmad
— Principal, Aeyth

I want to tell you about a dashboard I built early in my career that nobody used. I'm not sharing this because it's a fun story. I'm sharing it because the failure taught me something about analytics design that I now consider the most important principle in the field — and it's a principle that most data teams have never been taught.

The dashboard was technically excellent. It tracked 14 KPIs across 6 processing stages for a compliance program managing case flow across 8 business units. It refreshed daily. It had drill-downs. It had filters. It had conditional formatting that turned cells red when metrics crossed thresholds. I was proud of it.

Within three weeks, usage dropped to near zero. The program director — the person it was built for — told me, politely, that she found it "overwhelming" and had gone back to asking her managers for verbal updates in the Monday meeting.

I was frustrated. She had asked for a dashboard. I built her a dashboard. She wasn't using the dashboard. The problem, I assumed, was adoption — she needed training, or the data wasn't accurate, or the team was resistant to change.

I was wrong about all of it. The problem was me. I had built a dashboard for an analyst. She was a director. Those are fundamentally different users with fundamentally different needs, and I had designed for the wrong one.

The Design Failure

The mistake is so common in compliance analytics that I've given it a name: the Analyst's Mirror. It means building a dashboard that reflects how the builder thinks about data rather than how the user thinks about decisions.

An analyst thinks in datasets, dimensions, filters, and drill-downs. They want to explore. They want granularity. They want the ability to slice data seventeen different ways and find patterns.

A director thinks in questions. Specifically: What needs my attention? Is it getting better or worse? What should I do about it? They don't want to explore. They want to be told.

A dashboard built for an analyst shows everything and lets the user find what matters. A dashboard built for a director shows only what matters and lets the user investigate further if they choose. The first feels comprehensive. The second feels useful. They are opposite design philosophies, and the vast majority of compliance dashboards are built with the first because the people building them are analysts.

Research from multiple sources consistently estimates that only 30–35% of deployed business intelligence dashboards are accessed regularly by their intended audience. In compliance operations specifically, my experience suggests the number is even lower, because compliance dashboards are disproportionately built by technical teams for non-technical decision-makers, which amplifies the Analyst's Mirror problem.

The Three-Question Fix

Every dashboard should be designed by answering three questions in order. If you can't answer all three before building, don't build.

Question 1: Who opens this dashboard, and when?

Not "who could use it" or "who should look at it." Who will actually open it, and on what day, at what time, in what context? "The program director opens it Monday at 8:45am before the 9am leadership meeting" is a real answer. "Leadership can use it for visibility" is not.

The answer to this question determines everything — the level of granularity, the amount of data shown by default, the refresh cadence, and the physical layout. A dashboard opened by a director on a laptop before a meeting is designed differently than one opened by an analyst on a dual-monitor setup during deep work.

Question 2: What decision will this dashboard change?

Not "what information does it provide." What decision. "If the director sees that Program X's cycle time exceeded 4 days, she will reassign two reviewers from Program Y" is a real answer. "It gives leadership visibility into cycle times" is not.

The answer determines which metrics appear on the primary view and what thresholds trigger visual alerts. If you can't identify a specific decision the dashboard changes, the dashboard is decorative. Build something else.
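To make that concrete, here is a minimal sketch of what a decision-bound alert could look like if you wrote it down as code. The metric name, threshold, and action below are hypothetical, borrowed from the cycle-time example above; the point is that the alert carries the decision, not just the number.

```python
from dataclasses import dataclass

@dataclass
class DecisionRule:
    """Binds a metric threshold to the decision it should trigger."""
    metric: str       # hypothetical metric name, e.g. "cycle_time_days"
    threshold: float  # the value the decision-maker agreed on
    action: str       # what the director does when the threshold is crossed

def evaluate(rule: DecisionRule, value: float) -> str | None:
    """Return the recommended action if the threshold is crossed, else None."""
    if value > rule.threshold:
        return (f"{rule.metric} is {value} "
                f"(threshold {rule.threshold}): {rule.action}")
    return None

# Hypothetical example from the article: cycle time above 4 days
rule = DecisionRule(
    metric="program_x_cycle_time_days",
    threshold=4.0,
    action="reassign two reviewers from Program Y",
)
print(evaluate(rule, 4.6))
```

The output is an instruction, not a chart. Any metric that cannot be phrased this way probably belongs one click down, not on the primary view.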

Question 3: What is the smallest amount of data that supports that decision?

This is the counterintuitive one. The instinct is to show more — more metrics, more programs, more history, more drill-downs. The discipline is to show less. Specifically: what is the minimum data that the decision-maker needs to see to make the correct decision with confidence?

For most compliance operations decisions, the answer is shockingly small. Three to five KPIs. One comparison dimension (this week vs. target, or this program vs. average). One time horizon (trailing 4 weeks or trailing quarter). One alert system (green/yellow/red or above/below threshold).

That's it. That's the dashboard. Everything else is a drill-down that exists one click away for investigation — but does not appear on the primary view.
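As an illustration only, the entire primary view can be described in a handful of lines. The sketch below assumes made-up KPI names, values, and thresholds; what matters is the shape: a few KPIs, one comparison dimension, one traffic-light alert system.

```python
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    value: float              # current reading, e.g. trailing 4-week average
    target: float             # the single comparison dimension
    higher_is_worse: bool = True

def status(kpi: KPI, tolerance: float = 0.10) -> str:
    """Traffic light: red past target, yellow within 10% of it, else green."""
    gap = (kpi.value - kpi.target) / kpi.target
    if not kpi.higher_is_worse:
        gap = -gap
    if gap > 0:
        return "red"
    return "yellow" if gap > -tolerance else "green"

# Hypothetical primary view: three KPIs, one target comparison, one alert system
primary_view = [
    KPI("cycle_time_days", value=4.6, target=4.0),
    KPI("backlog_cases", value=210, target=250),
    KPI("on_time_rate", value=0.93, target=0.95, higher_is_worse=False),
]
for kpi in primary_view:
    print(f"{kpi.name}: {kpi.value} vs target {kpi.target} -> {status(kpi)}")
```

Rendering this in any BI tool is the easy part. The discipline is refusing to add a fourth comparison dimension or a tenth KPI to the list.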

Why This Matters

The reason 68% of dashboards fail is not technical. The data is usually accurate. The tools are usually capable. The refresh is usually reliable. The failure is in the design philosophy: the dashboard was built to display data comprehensively rather than to support decisions precisely.

The fix is not a better tool or a better analyst. It's a better design process — one that starts with the decision-maker, works backward to the decision, and builds only the visualization that decision requires.

I've rebuilt dozens of dashboards using the three-question framework. The pattern is consistent: the rebuild has fewer metrics, fewer views, and fewer features than the original. It is also used 3–5x more frequently. Because the director who opens a dashboard showing three numbers she needs to act on will open it every day. The director who opens a dashboard showing forty numbers she might find interesting will open it once and never return.

Less data. More decisions. That's the whole framework.
