Using AI to expose design demand

Exposing End-to-End Design Demand: From Figma → AEM → Web Analytics, at the most granular level

Enterprise design systems are often evaluated by what they contain: components, patterns, templates, tokens. Less often are they evaluated by what they reveal.

What do designers actually reach for when they’re solving real problems?
What do business units ultimately ship?
And what do users meaningfully engage with once experiences are live?

This post outlines a long-view initiative I’m working on to expose end-to-end design demand—connecting signals from Figma, Adobe Experience Manager (AEM), and web analytics into a single feedback loop. The goal isn’t just better reporting. It’s a clearer understanding of how design intent, organizational execution, and user behavior intersect inside a large enterprise design system—and how AI can help surface those insights at scale.

A long-view project at the intersection of AI and UX

This is not a quick win or a dashboard-for-dashboard’s-sake exercise. It’s a foundational effort to understand design demand across the entire lifecycle of an experience.

At a high level, we’re consolidating three disparate data sources:

  • Figma – what designers propose, explore, and iterate on

  • Adobe Experience Manager – what business units actually author and publish

  • Web analytics – how users interact with those experiences in the real world

By mapping usage analytics across these platforms, we can begin to trace a “through-line” from design exploration → production reality → user response, down to the level of themes, components, and layout patterns.

This kind of visibility is increasingly important as enterprise design systems grow in size, influence, and complexity—and as AI-driven analysis makes it feasible to reason over large, messy datasets that were previously siloed.
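To make the through-line concrete, here is a minimal sketch of what consolidating the three signals could look like once each source is reduced to per-component numbers. All component names and values below are invented for illustration; the real pipeline would derive them from the Figma, AEM, and analytics extractions described later.

```python
# Sketch: merging per-component signals from three sources into one
# "demand" view. All names and numbers are illustrative stand-ins.

figma_usage = {"hero-banner": 42, "card-grid": 31, "accordion": 5}      # designer demand
aem_published = {"hero-banner": 18, "card-grid": 29, "legacy-tabs": 7}  # business demand
engagement = {"hero-banner": 0.61, "card-grid": 0.74}                   # user demand

def demand_view(figma, aem, analytics):
    """Merge the three signals on component id; missing signals become None."""
    all_ids = sorted(set(figma) | set(aem) | set(analytics))
    return [
        {
            "component": cid,
            "designed": figma.get(cid),
            "published": aem.get(cid),
            "engagement": analytics.get(cid),
        }
        for cid in all_ids
    ]

for row in demand_view(figma_usage, aem_published, engagement):
    print(row)
```

Gaps in the merged rows are themselves signal: a component designed often but never published, or published often but never engaged with, is exactly the kind of divergence this initiative is meant to expose.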

Understanding design library demand from designers

The first signal in the system is designer demand.

Figma gives us a rich view into how the design system is actually used day to day—not how it’s documented or intended to be used, but how it shows up on real artboards.

Some of the questions we’re exploring:

  • Which components consistently appear in designs across teams?

  • What component combinations are designers gravitating toward?

  • How do those combinations map to specific layout templates?

  • Where is there high design activity—new pages, major updates, frequent iteration?

This helps us understand what designers want to use when solving problems, which components feel flexible or expressive, and which patterns are becoming de facto standards—sometimes before they’re formally recognized as such.

Just as importantly, it can reveal areas of friction: components that exist but are rarely touched, or patterns designers routinely rebuild because the system doesn’t quite meet their needs.
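The combination question in particular lends itself to a simple co-occurrence tally. A sketch, assuming component names have already been extracted per frame (the frame data here is invented):

```python
from collections import Counter
from itertools import combinations

# Illustrative per-frame component lists, as they might be extracted
# from Figma files; names and contents are made up for the sketch.
frames = [
    ["hero-banner", "card-grid", "cta-button"],
    ["hero-banner", "card-grid"],
    ["accordion", "cta-button"],
    ["hero-banner", "card-grid", "cta-button"],
]

def pair_counts(frames):
    """Count how often each pair of components appears together in a frame."""
    counts = Counter()
    for frame in frames:
        for pair in combinations(sorted(set(frame)), 2):
            counts[pair] += 1
    return counts

print(pair_counts(frames).most_common(3))
```

Pairs that rise to the top across many teams are candidates for formal recognition as patterns; pairs designers assemble repeatedly by hand may point at a missing composite component.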

Understanding design library demand from the business

The second signal comes from production reality.

AEM tells a different story—one shaped by business priorities, timelines, regulatory constraints, and content operations. It shows us what actually makes it into market-facing experiences.

Here, we’re asking questions like:

  • Which components and layouts are most commonly authored and published?

  • Where do we see divergence between what designers propose and what gets shipped?

  • Do high-performing components or layouts fall off during production?

  • How do authored experiences align with stated business unit goals and objectives?

This layer is critical because it exposes the translation gap between design and delivery. Sometimes that gap is healthy. Sometimes it signals constraints or tradeoffs worth addressing. And sometimes it highlights missed opportunities—patterns that perform well but aren’t consistently adopted across the organization.

Understanding design library demand from users

The final—and arguably most important—signal comes from users.

Web analytics allow us to move beyond page-level performance and begin understanding experiences as compositions of components and layouts.

Instead of asking “Which pages perform best?”, we can start asking:

  • Which component combinations consistently drive engagement?

  • Which layouts support specific user intents more effectively?

  • Where do users respond positively—or disengage—within shared patterns?

By isolating high-performing combinations and permutations, we can turn user behavior into actionable design insight.
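One way to sketch that isolation step: group observed page compositions by their component set and average an engagement metric over each group. The observations and scores below are invented; the real metric would come from the analytics layer.

```python
from collections import defaultdict

# Illustrative observations: each page's component composition plus a
# normalized engagement score (e.g., conversion or scroll depth).
observations = [
    ({"hero-banner", "card-grid"}, 0.72),
    ({"hero-banner", "card-grid"}, 0.68),
    ({"hero-banner", "accordion"}, 0.41),
]

def score_combinations(observations):
    """Average engagement per distinct component combination."""
    grouped = defaultdict(list)
    for components, score in observations:
        grouped[frozenset(components)].append(score)
    return {
        combo: sum(scores) / len(scores)
        for combo, scores in grouped.items()
    }

ranked = sorted(score_combinations(observations).items(),
                key=lambda kv: kv[1], reverse=True)
```

Ranking combinations rather than pages is the shift that makes the insight reusable: a high-performing page is one data point, but a high-performing combination can inform every future page that uses it.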

And critically, those insights don’t live in isolation. They can be fed back to:

  • Design teams, to inform future exploration and standards

  • Business units, to guide content and experience strategy

  • The design system itself, to prioritize evolution based on evidence, not assumption

Over time, this creates a feedback loop where user behavior actively shapes how the system grows.

The proposed solution

To make this kind of end-to-end visibility real, we’re intentionally prototyping the solution in a lightweight, flexible way—one that favors speed, learning, and iteration over premature platform decisions.

At the moment, the approach looks something like this.

Vibe-Coded Dashboards with Cursor and Claude Code

At the core of the solution are vibe-coded analytics dashboards, built using Cursor with Claude Code. The idea is to use AI not just to write code faster, but to help reason about the shape of the data itself.

The current prototype includes:

  • Python scripts for data ingestion and transformation

  • A mix of lightweight visualization frameworks

  • A Streamlit server running locally (for now) to explore and iterate on insights quickly

This setup allows us to move fast, test assumptions, and continuously refine what “useful” analytics actually look like for designers and product teams.
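As a flavor of the ingestion step feeding that local Streamlit view, here is a minimal sketch assuming a CSV usage export with illustrative columns (the column names and values are assumptions, not the real export schema):

```python
import csv
import io

# Illustrative CSV export; in the real pipeline this would come from
# the Figma or AEM extraction scripts, not an inline string.
raw = """component,source,count
hero-banner,figma,42
hero-banner,aem,18
card-grid,figma,31
"""

def ingest(csv_text):
    """Parse a usage export into typed records ready for visualization."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {"component": r["component"], "source": r["source"], "count": int(r["count"])}
        for r in reader
    ]

records = ingest(raw)
# A local Streamlit app could then render these records, e.g. with a
# bar chart grouped by component and source.
```

Keeping the transform step as plain Python functions means the Streamlit layer stays thin, and the same records can later feed whatever visualization stack replaces the prototype.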

Connecting to Figma

Figma will be connected via the Figma API to extract usage data across our design library.

Claude is used here in a very specific way: analyzing our design system documentation and helping propose which datasets we actually need to answer meaningful questions—rather than pulling everything just because it’s available.

This includes understanding:

  • Component usage frequency

  • Common component groupings

  • Layout-level patterns emerging across files and teams

The goal isn’t surveillance; it’s signal.
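For a sense of what the extraction could look like: the Figma REST file endpoint returns a node tree in which placed library components appear as `INSTANCE` nodes, so usage frequency reduces to a tree walk. The sample tree below is a trimmed, invented stand-in for a real response.

```python
from collections import Counter

# Trimmed, invented stand-in for a Figma file response
# (GET https://api.figma.com/v1/files/{file_key}, authenticated with an
# X-Figma-Token header). Real responses are far larger and deeper.
file_json = {
    "document": {
        "type": "DOCUMENT",
        "children": [
            {"type": "CANVAS", "children": [
                {"type": "FRAME", "children": [
                    {"type": "INSTANCE", "name": "hero-banner", "componentId": "1:2"},
                    {"type": "INSTANCE", "name": "card-grid", "componentId": "1:3"},
                ]},
                {"type": "INSTANCE", "name": "hero-banner", "componentId": "1:2"},
            ]},
        ],
    }
}

def count_instances(node, counts=None):
    """Walk the node tree, tallying component instances by name."""
    if counts is None:
        counts = Counter()
    if node.get("type") == "INSTANCE":
        counts[node.get("name")] += 1
    for child in node.get("children", []):
        count_instances(child, counts)
    return counts

usage = count_instances(file_json["document"])
print(usage.most_common())
```

Running this across a library's consuming files, keyed by `componentId` rather than name in practice, yields the usage-frequency and grouping data the questions above depend on.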

Connecting to Adobe Experience Manager

On the production side, Adobe Experience Manager data is being sourced through a combination of:

  • AEM scripts

  • CSV exports of relevant content and component “tables”

  • Structured inputs that describe authored pages, updates, and templates

Again, Claude plays a role in helping identify which data is necessary for analysis—effectively shaping the inputs required to understand business demand without over-collecting noise.

This allows us to connect what was designed to what was actually authored and published.
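Once the AEM exports are reduced to per-component counts, the divergence question from the list above becomes a simple comparison. All numbers here are illustrative:

```python
# Illustrative counts: how often a component appears in design files
# versus how often it is authored and published in AEM.
designed = {"hero-banner": 42, "card-grid": 31, "accordion": 5}
published = {"hero-banner": 18, "card-grid": 29}

def divergence(designed, published):
    """Shipped-to-designed ratio per component (None = never shipped)."""
    out = {}
    for cid, d in designed.items():
        p = published.get(cid)
        out[cid] = None if p is None else round(p / d, 2)
    return out

print(divergence(designed, published))
```

A ratio near 1.0 suggests design intent survives production largely intact; a low ratio or a `None` flags exactly the translation gap this layer is meant to expose.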

Closing the Loop with Web Analytics

The final step is user demand, captured through web analytics.

Google Analytics will be used to close the loop, allowing us to correlate component and layout usage with real user behavior. This is the third phase of the project, and we’re still actively evaluating options for analytics data retrieval and integration into the broader Claude Code workflow.

Some of the possibilities under consideration include:

  • Direct GA data exports into analysis-ready datasets

  • Scheduled data pulls feeding the same Python-based pipeline

  • Normalizing analytics events to map cleanly back to components and layouts
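That normalization option might be sketched as a lookup from raw event identifiers back to design-system component ids, assuming the authored markup carries a stable identifier (such as a data attribute) that the analytics events capture. Everything below, including the `cmp-` naming, is an assumption for illustration:

```python
# Illustrative mapping from analytics event identifiers (e.g., data
# attributes captured by GA events) to design-system component ids.
SELECTOR_TO_COMPONENT = {
    "cmp-herobanner": "hero-banner",
    "cmp-cardgrid": "card-grid",
}

raw_events = [
    {"event": "click", "selector": "cmp-herobanner", "page": "/home"},
    {"event": "click", "selector": "cmp-cardgrid", "page": "/home"},
    {"event": "click", "selector": "cmp-unknown", "page": "/about"},
]

def normalize(events, lookup):
    """Attach a component id to each event; unmapped events are flagged."""
    return [
        {**e, "component": lookup.get(e["selector"], "UNMAPPED")}
        for e in events
    ]

normalized = normalize(raw_events, SELECTOR_TO_COMPONENT)
```

Flagging unmapped events rather than dropping them matters: a growing `UNMAPPED` bucket would indicate components shipping outside the system's naming conventions, which is itself a demand signal.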

For now, our focus is firmly on steps one and two—designer and business demand—so that when user data is layered in, the system is already structured to support meaningful insight rather than surface-level metrics.

Why this matters for UX practitioners

At enterprise scale, design systems can easily become static artifacts—governed, documented, and enforced, but not deeply understood.

Exposing end-to-end design demand reframes the system as a living product:

  • One informed by designer behavior

  • Constrained and shaped by business reality

  • Validated by real user outcomes

For UX practitioners and digital product owners, this approach offers a way to:

  • Prioritize design system investment with confidence

  • Identify misalignment early between intent and impact

  • Reinforce patterns that demonstrably work

  • Use data as a partner in design judgment—not a replacement for it

This is where AI becomes especially promising: not as a generator of designs, but as an amplifier of insight—helping us see patterns across thousands of decisions that no single team could reasonably track on its own.

Bringing it all together

The real value of this work isn’t in any single metric or dashboard. It’s in the ability to close the loop between what we design, what we ship, and what users experience.

By making design demand visible end to end, we give ourselves a chance to build systems that are not just consistent—but responsive, evidence-informed, and continuously improving.

That’s the intent. And that’s the promise.

Next

Cursor + AI to unlock Figma analytics [WIP]