
Designing the First AI Brand Perception Tool

Role: UX Designer

Company: Jellyfish / BrandTech Group

Period: 2023-2024

TL;DR

I led design for Share of Model (SOM), a pioneering AI brand analytics tool that monitors how large language models like ChatGPT and Gemini perceive brands -- a product category that did not exist before we built it. I orchestrated discovery research with C-suite stakeholders across 4 countries, designed complex data visualization dashboards where no UX precedent existed, and conducted 5+ hours of structured usability testing that produced 350+ insights. The POC launched in under 3 weeks and generated 1,500 visits, 300 leads, and 12 demos within 48 hours.

Context

Jellyfish is a global digital media company within BrandTech Group, serving enterprise brands through data-driven marketing services. As AI language models began reshaping how consumers access information, a strategic question emerged: how do ChatGPT, Gemini, and other LLMs actually perceive and represent brands? Brand strategists had tools for share of voice, share of search, and traditional brand tracking -- but nothing existed for measuring brand presence inside AI models.

The concept for Share of Model originated from a hackathon. It needed to become a real product -- one that could serve both internal Jellyfish consultants running client strategy and external enterprise clients making brand investment decisions. The data was multidimensional: brand awareness across multiple LLMs, sentiment analysis, attribute perception, competitive positioning, and spontaneous awareness trends over time. The stakeholders were senior -- VPs, Managing Directors, and Chief Strategy Officers across four countries -- and they each had different mental models of what brand measurement meant.

This was a zero-to-one design challenge in the truest sense. No established UX patterns existed for AI brand perception dashboards. I was designing a product where the very concepts users would engage with -- how an AI "perceives" a brand -- had no precedent in existing tools.

The Challenge

The difficulty of SOM was not any single design problem. It was the compound ambiguity.

The data itself was novel. We were not visualizing web analytics or campaign performance -- we were visualizing how large language models respond to prompts about brands. A user might want to understand whether ChatGPT associates their brand with "premium quality" or "affordability," how that perception compares across Gemini and Llama, how it changed over the last quarter, and how it stacks up against three competitors. This is multidimensional data that resists simple visualization.
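To make that dimensionality concrete, here is an illustrative sketch of the shape of the data -- a single observation spans brand, model, attribute, and time. All names, fields, and numbers below are invented for illustration; this is not SOM's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PerceptionScore:
    brand: str        # e.g. "Acme"
    model: str        # e.g. "ChatGPT", "Gemini", "Llama"
    attribute: str    # e.g. "premium quality", "affordability"
    quarter: str      # e.g. "2024-Q1"
    score: float      # normalized association strength, 0..1

def compare_across_models(scores, brand, attribute, quarter):
    """Slice one brand/attribute/quarter across every model."""
    return {
        s.model: s.score
        for s in scores
        if (s.brand, s.attribute, s.quarter) == (brand, attribute, quarter)
    }

# Invented example data: one attribute, one quarter, three models.
scores = [
    PerceptionScore("Acme", "ChatGPT", "premium quality", "2024-Q1", 0.72),
    PerceptionScore("Acme", "Gemini", "premium quality", "2024-Q1", 0.55),
    PerceptionScore("Acme", "Llama", "premium quality", "2024-Q1", 0.61),
]
print(compare_across_models(scores, "Acme", "premium quality", "2024-Q1"))
# → {'ChatGPT': 0.72, 'Gemini': 0.55, 'Llama': 0.61}
```

Even this minimal model yields four axes to slice on -- and the dashboard had to let users pivot across all of them, plus competitors, without a data analyst in the loop.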

The users were senior and skeptical. Brand strategists relied on established tools -- YouGov surveys, Google Analytics, share of search metrics -- and needed to trust AI-generated data alongside those validated sources. SOM could not be a novelty; it needed to earn its place in a sophisticated analytical workflow.

The timeline was aggressive. The business needed a proof of concept in under three weeks to capitalize on market momentum. Speed could not come at the expense of rigor -- the POC needed to demonstrate enough depth to convert interest into demos.

And there was the fundamental design challenge: how do you make AI-generated brand perception data comprehensible, trustworthy, and actionable for non-technical brand strategists who have never seen this kind of data before?

My Role

Title: UX Designer

Actual scope: Lead product designer -- end-to-end ownership from discovery through user testing, design strategy, stakeholder research, data visualization design, and UX recommendation roadmapping

Team: Cross-functional team including product owners, tech leads, and developers; I was the design authority

Stakeholder engagement: Direct research engagement with 6 C-suite and VP-level stakeholders across America, England, France, and the Netherlands

Despite holding a UX Designer title, I was conducting strategic discovery research with C-suite stakeholders, making product-level decisions about visualization approaches and A/B testing modules, producing prioritized UX recommendation roadmaps, and driving the design of a novel AI-powered product from concept to production. This was Lead or Senior-level scope in both ambiguity and stakeholder seniority.

Research & Discovery

Mapping the Brand Measurement Ecosystem: Before designing a single screen, I needed to understand how brand measurement actually works in practice -- and where SOM would fit. I orchestrated discovery interviews with six senior stakeholders across four countries: a VP of Brand Strategy, Managing Directors, a Market Intelligence Director, a VP of Partnerships, a Senior Market Intelligence Director, and a CSO for AI, Planning & Insights.

These were not surface-level conversations. I mapped the complete brand measurement ecosystem: how SOM would relate to share of voice metrics, share of search data, YouGov surveys, brand trackers, and audience data. I needed to understand not just what these stakeholders wanted from SOM, but how it would integrate into workflows they already trusted.

The discovery produced critical strategic insights. Stakeholders did not want SOM to replace their existing tools -- they wanted it as a diagnostic complement. One stakeholder described it as a way to "start an analysis 3 months in advance" and make client conversations "more productive." Another noted that "good brand data tends to be survey based and expensive... clients would love essentially free data that isn't survey-based." The naming itself was strategic: "part of the success of share of model as a label is that it sounds like share of mind, share of voice -- things that clients are familiar with."

These insights fundamentally shaped the product direction. SOM would not be a standalone analytics platform. It would be a diagnostic tool for opportunity spotting, time savings, and competitive positioning -- positioned to complement, not compete with, established data sources.

Structured Usability Testing with C-Suite Stakeholders: Once the initial designs were developed, I conducted 5+ hours of structured usability testing with the same 6 senior stakeholders. I designed a 4-part testing methodology:

  1. User Sentiment: First impressions and overall reactions to the dashboard
  2. Task Completion: 9 specific tasks with quantified completion rates -- from reading share of voice percentages to identifying competitive outliers and interpreting brand attribute clusters
  3. A/B Testing: Systematic comparison of specific design alternatives (chart types, layout approaches, filter placements)
  4. Structured Feedback: Ease of use scoring and qualitative feedback on design direction

This methodology produced 350+ quantified insights. I did not rely on subjective preference -- every design decision was backed by data. Straightforward data reading tasks (share of voice percentages, mentions comparison, outlier identification) achieved 100% completion. More complex interpretive tasks surfaced where the interface needed refinement.
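The quantified side of this methodology is simple to mechanize. The sketch below shows the kind of aggregation involved -- turning raw pass/fail observations into per-task completion rates. Task names and results here are invented examples, not the actual study data.

```python
def completion_rate(results):
    """results: list of (task, passed) tuples from testing sessions.
    Returns each task's completion rate as a fraction 0..1."""
    by_task = {}
    for task, passed in results:
        done, total = by_task.get(task, (0, 0))
        by_task[task] = (done + (1 if passed else 0), total + 1)
    return {task: done / total for task, (done, total) in by_task.items()}

# Invented observations: two participants per task.
session = [
    ("read share-of-voice %", True), ("read share-of-voice %", True),
    ("identify competitive outlier", True), ("identify competitive outlier", True),
    ("interpret attribute cluster", True), ("interpret attribute cluster", False),
]
rates = completion_rate(session)
print(rates)
# Straightforward reading tasks score 1.0; interpretive tasks below 1.0
# flag exactly where the interface needs refinement.
```

Scoring every task this way is what converted subjective debates ("I prefer the scatter plot") into comparable evidence.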

Terminology Workshop: I facilitated a dedicated Terminology Workshop to resolve naming confusion around concepts like "Spontaneous Awareness" -- a concept that was clear to brand researchers but opaque to other stakeholders. Aligning on shared language was essential before the product could scale beyond its initial users.

Discovery/Define FigJam board showing stakeholder mapping, HMW statements, and data availability assessment

Design Process

Inventing the Visualization Language: With no UX precedent for AI brand perception dashboards, every visualization decision required testing. I iterated through three major design versions (V1 through V3), systematically evolving the dashboard based on research findings.

Color schemes were reviewed by 3 designers across 5 separate tests to ensure accessibility during extended viewing of data-dense screens. Chart types were not chosen by preference -- they were tested with users.

Each finding translated directly into a design decision. Redundant charts were removed. Scatter plots were replaced with radar charts. Information density was reduced based on feedback that early versions were "overcrowded by information." A global filter system replaced section-level filters.

Building Trust Through Transparency: One of the most consequential design decisions was showing AI prompts directly in the dashboard. When users see AI-generated data, their first question is "how was this generated?" Rather than hiding the methodology, I designed the interface to surface the exact prompts used to query each LLM. This transparency was a direct response to stakeholder feedback about needing to trust AI data alongside their validated sources.
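The underlying pattern is simple: every datapoint carries the exact prompt that produced it, so the UI can surface methodology on demand. The sketch below illustrates the idea; `query_llm` is a hypothetical stand-in for a real model API call, and the prompt wording is invented.

```python
def query_llm(model: str, prompt: str) -> str:
    """Placeholder for a real LLM API call -- returns a canned string here."""
    return f"[{model} response to: {prompt}]"

def collect_perception(models, brand):
    """Query each model with the same prompt, keeping the prompt attached
    to every record so the dashboard can display it alongside the data."""
    prompt = f"Which attributes do you associate with the brand {brand}?"
    return [
        {"model": m, "prompt": prompt, "response": query_llm(m, prompt)}
        for m in models
    ]

records = collect_perception(["ChatGPT", "Gemini"], "Acme")
# The UI renders records[i]["prompt"] next to each visualization,
# answering "how was this generated?" without leaving the screen.
```

Keeping the prompt in the record, rather than reconstructing it later, means the displayed methodology can never drift out of sync with the data it explains.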

From Insights to Roadmap: After testing, I produced a 40-page UX recommendation report with an impact prioritization matrix (High/Medium/Low) and specific ticket-level implementation guidance. This was not a list of suggestions -- it was a development-ready roadmap that translated research findings into actionable engineering tasks.

Chart comparison artifact showing systematic A/B testing of visualization approaches -- radar vs. scatter, bar vs. stacked bar

Dash_V3 showing the polished dashboard with brand awareness trends, AI prompt transparency, and recommendation cards

Solution

The delivered product was a comprehensive AI brand perception analytics dashboard that enables brand strategists to track brand awareness across multiple LLMs, compare sentiment and attribute perception, and monitor competitive positioning over time.

The dashboard design balances information density with accessibility. Senior brand strategists praised it as "visually appealing, sophisticated, and well-balanced between simplicity and clarity." The progressive disclosure approach -- summary cards with key call-outs, expandable detail sections, and configurable views -- allows users to scan quickly or explore deeply depending on their context.

Full SOM dashboard showing Brand Awareness section with LLM comparison, trend charts, and summary recommendations

Prompts interface showing how users create AI monitoring prompts for brand tracking

Homepage/Templates showing the analysis workspace with management and configuration

Results & Impact

<3 weeks -- POC launch timeline

1,500 -- visits within 48 hours

300 -- leads within 48 hours

12 -- demos within 48 hours

78% -- ease of use score

100% -- task completion (core tasks)

350+ -- insights from testing

40 pages -- UX recommendation report


Reflections

SOM was the first time I designed a product where the category itself was being invented. There were no competitor dashboards to benchmark against, no established patterns for visualizing LLM brand perception, and no user mental models to anchor to. Every design decision required first-principles thinking grounded in research.

What I took away from this project is that the research methodology matters even more in novel product spaces. When there is no precedent, you cannot rely on convention -- you have to build evidence for every design choice. The 4-part testing methodology I developed (sentiment, task completion, A/B testing, structured feedback) produced the kind of quantified, actionable insights that converted subjective debates about chart types into clear design decisions. That methodology is now part of my standard toolkit.

I also learned the value of framing a product within an existing ecosystem rather than positioning it as a standalone disruption. The discovery insight -- that stakeholders wanted a diagnostic complement, not a replacement for their existing tools -- saved the product from a positioning mistake that would have created adoption resistance. Research does not just improve interfaces; it shapes product strategy.

Core patterns this project demonstrates

SOM is my bridge to AI. It demonstrates that I can design novel products in high-ambiguity environments where the product category itself is being invented -- and do it with the research rigor and stakeholder credibility that complex data products demand. Paired with my Center for Humane Technology certification and my Master's thesis on dark patterns, it reflects an ongoing commitment to designing at the intersection of AI and responsible technology.

Key Artifacts

Dash_V3 Dashboard

Dash_V3 hero image showing the complete dashboard experience

V1 User Flow Diagram

V1 user flow showing complete product information architecture

Chart Comparison Board

Chart comparison artifact showing multiple visualization alternatives tested side by side

User Test Report

Discovery/Define FigJam Board

FigJam board showing research synthesis and HMW statements

Design Update Presentations

Let's Connect

I am looking for a player-coach role -- Staff, Lead, or Senior Product Designer -- where I can combine hands-on design with team leadership and research practice development.