
From 72 Hours to 15 Minutes

Role: UX Designer
Company: Jellyfish / BrandTech Group
Period: 2022-2024

TL;DR

Two products, one repeatable pattern: I took complex, manual analytical processes and designed interfaces that compressed them from hours to minutes. J+IQ reduced competitive analysis report generation from 72 hours to 15 minutes, saving 1,000+ human hours in its first five months, with 130 users trained across five cities. J+Bidding reduced ad spend bid calculations from 5 hours to 1 minute, supporting $800K+ in revenue goals. Both achieved these results through the same approach -- systematic multi-country discovery research, followed by design that eliminates complexity rather than hiding it.

Context

Jellyfish, a digital marketing agency within the BrandTech Group, operated a global network of advertising practitioners -- account managers, data planners, media strategists, and paid media specialists working across dozens of countries. Their internal tooling needed to automate and simplify the manual analytical work that consumed hours of practitioner time every day.

J+IQ was an automated competitive intelligence tool that generated analysis reports comparing a brand against up to five competitors. Before automation, this work required analysts to manually gather data from multiple sources, build comparisons, format presentations, and deliver them to clients -- a process that took approximately 72 hours of analyst time per report.

J+Bidding was an ad spend optimization tool spanning two platforms -- Meta (Facebook/Instagram) and Google's DV360. Paid media practitioners manually applied bid multipliers through Meta's Graph Explorer API -- a process that required technical scripting knowledge most team members did not have, took approximately 5 hours per calculation, and provided no visibility into which campaigns had multipliers applied or how those multipliers affected spend.

I led UX research and design for both products. J+IQ was my earlier project (mid-2022); J+Bidding came later (early 2024), and the methodology I developed on J+IQ directly informed how I approached it. Together, they demonstrate that dramatic workflow transformation is not a lucky accident. It is a repeatable capability.

The Challenge

J+IQ's challenge was adoption at scale. The tool had already automated the core task -- delivering competitive analysis reports in 15 minutes instead of 72 hours. But automation means nothing if users cannot actually use the interface. Jellyfish needed to validate that 130+ analysts ranging from complete beginners to power users across five global cities could navigate the tool effectively, identify pain points before broader rollout, and ensure the automated output met the quality standard that clients expected from hand-built reports.

J+Bidding's challenge was accessibility. The underlying optimization technique -- bid multipliers -- was powerful, but the process of applying them was so technically complex that most paid media practitioners simply could not do it. They lacked the scripting knowledge to make API calls, had no visibility into which campaigns had multipliers applied, did not understand the technical codes required for targeting (country and city codes, demographic parameters), and experienced overlapping campaigns causing inefficiencies they could not diagnose. The tool did not merely need to be faster. It needed to make a technical capability accessible to an entirely new population of users.

My Role

Title: UX Designer

Actual scope: Lead UX researcher and designer for both products. I owned the complete research and design lifecycle -- from study design and participant recruitment through execution, analysis, synthesis, and UI design. For J+IQ, I conducted the usability study that validated the tool for broad rollout. For J+Bidding, I led end-to-end discovery including multi-country interviews, persona development, journey mapping, workshop facilitation, shadow sessions, and high-fidelity interface design.

Team structure: For J+IQ, I worked alongside the product team responsible for the tool's development. For J+Bidding, I was credited as "UX Research -- Felicity" alongside Product Manager Alessandro, collaborating with data science operations, engineering, and paid media practitioners.

Research & Discovery

J+IQ: Rigorous Usability Validation Across Five Cities

I designed a multi-method usability study combining System Usability Scale (SUS) evaluation with task-based testing:

Participants: 7 users from 5 global cities -- Mexico City, São Paulo, Paris, Johannesburg, and Brighton. I deliberately recruited across three user profiles: 2 new users (never used J+IQ), 2 single-use users (used it once), and 3 frequent users. This distribution ensured the study captured both learnability and long-term usability.

Method: Each session consisted of two phases -- a 30-minute exploration and report creation task, followed by the standardized 10-item SUS questionnaire. I documented findings page-by-page, extracting user quotes, categorizing issues by type (ergonomics, legibility, hierarchy, wording), and providing specific, actionable improvement recommendations for every screen.
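The questionnaire was scored with the standard SUS formula, which is worth making explicit since the 83.9% figure comes straight from it. A minimal sketch -- the sample responses below are illustrative, not actual study data:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten 1-5
    Likert responses. Odd-numbered items are positively worded
    (contribution = response - 1); even-numbered items are negatively
    worded (contribution = 5 - response). The summed contributions
    (0-40) are scaled by 2.5 to map onto 0-100."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even index = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Illustrative respondent (not actual study data):
print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # → 90.0
```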

Key findings: The tool performed well. The SUS score came in at 83.9% -- rated "GOOD" on the SUS interpretation scale and above the industry average of 68. Individual scores ranged from 75 to 97.5, indicating consistent quality across user profiles. Task completion hit 100%. Users reported the tool was "very clear" and "easy to use."

But the study also surfaced specific improvement opportunities: the "Discover competitors" button was not immediately visible, URL copy-paste handling created friction, and the meaning of status indicator dots was ambiguous. These were the kind of targeted fixes that prevent small frustrations from compounding at scale across 130 users.

One user captured the tool's value simply: "100%. I am a beginner so it does more than I expected." Another said: "I love that I have a deck I can present to the client! That is my favorite thing."

J+IQ dashboard showing analysis list, competitor analysis detail, and generation progress tracker

J+Bidding: Multi-Country Discovery for a Zero-to-One Product

J+Bidding required a fundamentally different research approach. There was no existing interface to test -- users were making raw API calls. I needed to map the entire workflow landscape before a single pixel could be designed.

Discovery interviews across three countries: I designed a bilingual interview guide (English and French) with distinct question protocols -- 9 categories for stakeholders and 7 for users. I conducted interviews with practitioners across Spain, the United Kingdom, and the United States, covering both strategic directors who wanted to scale bid optimization across their teams and hands-on campaign managers who needed simplified, guided workflows.

Persona development: From the interviews, I developed two detailed personas representing distinct user archetypes: Justin the Strategist, a director-level user focused on scaling bid optimization across teams and reading performance trends, and Alexandra the Campaign Optimizer, a hands-on campaign manager who needed simplified, guided configuration workflows.

User journey mapping with emotional tracking: I mapped the current five-stage workflow (Review, Initial Extraction, Modification & Implementation, Sending the Bid, Post-Implementation) with emotional sentiment at each stage. This revealed that frustration peaked during the modification and implementation phase -- the point where practitioners had to manually write API calls and guess at technical codes.

Shadow sessions: In February 2024, I conducted shadow sessions observing actual bid management workflows. Watching practitioners struggle through the Graph Explorer API in real time provided insights that interviews alone could not surface -- the hesitation before each API call, the multiple browser tabs open for reference documentation, the copy-paste errors that introduced risk.

Workshops: I facilitated collaborative brainstorming workshops that produced a categorized feature roadmap across four domains: bid management, dashboard/overview, simulation/prediction, and documentation/tooltips. The workshop outputs included workstream prioritization with duration estimates and a five-level user type responsibility matrix mapping how C-Level executives, Paid Media Heads, Client Partners, Paid Media Specialists, and Data Science Operations each needed to interact with the tool differently.

Stakeholder ecosystem mapping: I mapped the complete organizational context, spanning C-Level executives, Paid Media Heads, Client Partners, Paid Media Specialists, and Data Science Operations.

This mapping ensured the design served both strategic oversight and tactical execution -- not just the most vocal user group.

Design Process

J+IQ: From Validation to Targeted Refinement

The SUS study confirmed that J+IQ's core design was sound -- the challenge was refinement, not reinvention. I documented findings page-by-page with specific, actionable recommendations that preserved the tool's strengths (clear step progression, useful slide deck output, simple configuration process) while addressing targeted usability issues.

My recommendations focused on three areas:

  1. Discoverability: Making key actions more visually prominent
  2. Error prevention: Improving how the tool handled URL inputs and edge cases
  3. Status communication: Clarifying the meaning of visual indicators during the generation process

The approach was deliberately conservative. When a tool is already scoring 83.9% SUS and users are saying "it does more than I expected," the design strategy is to protect what works while surgically addressing what does not.

J+IQ configuration flow showing brand, region, category, competitor, and attribute selection

J+Bidding: Designing Intuitive Interfaces for a Technical Domain

For J+Bidding, the design challenge was translation -- making a process that required API scripting knowledge accessible to practitioners who had never written a line of code.

The discovery synthesis informed both the product strategy and the interface design:

Audience set configuration: I designed an interface where practitioners could build bid adjustments through intuitive audience segments -- selecting age ranges, locations, and gender categories through nested dropdown menus with default values. The category system was designed to "respond to 90% of needs" based on discovery research, reducing the infinite flexibility of raw API calls to a manageable set of meaningful choices.
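The audience-set model described above can be sketched as a small data structure. The field names, defaults, and validation rules below are illustrative assumptions, not the production schema, but they show how nested defaults and overlap checks replace hand-written API payloads:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AudienceSegment:
    """One row of the bid-set configuration: the nested dropdown choices
    a practitioner makes instead of hand-writing country/city codes and
    demographic parameters. Defaults mirror the 'respond to 90% of
    needs' idea -- sensible values unless deliberately narrowed."""
    age_range: str = "18-65+"
    location: str = "ALL"
    gender: str = "ALL"

def build_bid_set(rows):
    """Turn (segment, multiplier) rows into a single adjustment map,
    rejecting duplicate segments -- the overlap that previously caused
    inefficiencies practitioners could not diagnose."""
    bid_set = {}
    for segment, multiplier in rows:
        if segment in bid_set:
            raise ValueError(f"Overlapping segment: {segment}")
        if not 0.0 < multiplier <= 10.0:
            raise ValueError(f"Multiplier out of range: {multiplier}")
        bid_set[segment] = multiplier
    return bid_set

rows = [
    (AudienceSegment(age_range="18-24", location="GB"), 1.3),
    (AudienceSegment(age_range="25-34", location="GB"), 0.8),
]
print(len(build_bid_set(rows)))  # → 2
```

The design choice the sketch illustrates: constraining infinite API flexibility to a validated, enumerable set of choices is what makes the workflow self-service.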

Spend visualization: The critical insight from journey mapping was that practitioners had zero visibility into bid multiplier impact. I designed a daily delivery chart showing three overlapping data series: base bid amounts, actual spend with multipliers applied, and projected optimal spend. This gave practitioners at-a-glance understanding of whether their bid strategies were performing -- information that previously required additional API calls to retrieve.

Automated script generation: For the DV360 workstream, I designed an auto-generate flow where practitioners could configure algorithm parameters (name, type, insertion order, floodlights) and the system would generate the necessary script -- eliminating the manual coding that was the primary barrier to adoption.
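The auto-generate flow follows a familiar pattern: a short configuration form is substituted into a script template so practitioners never touch code. The sketch below is purely illustrative -- the real DV360 custom-bidding script body is not reproduced here, and the function and field names are assumptions:

```python
from string import Template

# Placeholder template; a production version would contain the actual
# script body with these parameters interpolated where needed.
SCRIPT_TEMPLATE = Template(
    "# Algorithm: $name ($algo_type)\n"
    "# Insertion order: $insertion_order\n"
    "# Floodlight activities: $floodlights\n"
)

def generate_script(name, algo_type, insertion_order, floodlights):
    """Map the form fields (name, type, insertion order, floodlights)
    onto the script template -- the auto-generate step that removes
    manual coding from the workflow."""
    return SCRIPT_TEMPLATE.substitute(
        name=name,
        algo_type=algo_type,
        insertion_order=insertion_order,
        floodlights=", ".join(floodlights),
    )

script = generate_script("Q2 ROAS", "custom bidding", "IO-1234", ["purchase", "signup"])
print(script)
```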

Progressive disclosure: I structured the interface with the strategic overview (spend chart) above the tactical controls (bid configuration), giving different user levels appropriate entry points. Justin the Strategist sees performance trends first. Alexandra the Campaign Optimizer sees configuration options first. Both can access the full interface.

J+Bidding Bid Multiplier UI showing audience set configuration with nested demographic categories and the Send to Meta action

J+Bidding future-state design with daily delivery chart showing actual spend vs. projected optimal spend alongside bid set configuration

Solution

J+IQ -- Validated and Scaled

The usability study validated J+IQ for organization-wide deployment. The tool automated competitive analysis report generation from 72 hours to 15 minutes -- a 99.65% reduction in time.
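The headline reductions are easy to sanity-check from the before/after durations:

```python
# Sanity check of the headline efficiency figures.
jiq_before, jiq_after = 72 * 60, 15    # J+IQ: 72 hours vs. 15 minutes
jbid_before, jbid_after = 5 * 60, 1    # J+Bidding: 5 hours vs. 1 minute

jiq_reduction = (jiq_before - jiq_after) / jiq_before
jbid_reduction = (jbid_before - jbid_after) / jbid_before

print(f"J+IQ: {jiq_reduction:.2%}")        # → J+IQ: 99.65%
print(f"J+Bidding: {jbid_reduction:.2%}")  # → J+Bidding: 99.67%
```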

J+Bidding -- From API to Interface

The new interface transformed bid management from a technical process accessible only to API-literate specialists into a self-service tool usable by the entire paid media team.

J+Bidding DV360 script setup interface showing auto-generate and manual creation paths

Results & Impact

J+IQ -- The Numbers

  - Report generation: 72 hours to 15 minutes
  - 1,000+ human hours saved in the first five months
  - SUS score: 83.9% (above the industry average of 68)
  - 100% task completion rate
  - 130 users trained across 5 cities

J+Bidding -- The Numbers

  - Bid calculation: 5 hours to 1 minute
  - $800K+ in revenue goals supported
  - Discovery research across 3 countries

Combined Impact

Both projects delivered 99%+ efficiency gains through the same fundamental approach: understand the workflow deeply through multi-method research, identify the specific points where complexity creates barriers, and design interfaces that eliminate those barriers rather than simply wrapping them in a nicer skin. The combined time savings across both tools -- thousands of practitioner hours redirected from manual processes to strategic work -- represent measurable business value at the organizational level.

Reflections

These two projects, separated by roughly eighteen months, confirmed something important about my practice: the approach works across domains, user types, and complexity levels. J+IQ served data planners building competitive analyses. J+Bidding served paid media specialists optimizing ad spend. Different users, different technical contexts, different business constraints -- but the same methodology produced the same category of result.

What I learned about workflow transformation: The biggest efficiency gains do not come from making existing steps faster. They come from eliminating steps entirely. J+IQ did not make the 72-hour manual process faster -- it replaced it. J+Bidding did not make API calls easier -- it removed the need for them. The research investment that enables this kind of transformation is in understanding the workflow deeply enough to know which steps are essential and which are artifacts of technical limitations.

What the discovery research enabled: For J+Bidding, the multi-country discovery research was not optional preparation -- it was the foundation that made the design possible. Without understanding that practitioners across three countries shared the same fundamental pain points (lack of visibility, technical complexity, overlapping campaigns), the design could easily have optimized for one market's workflows while creating new problems for others. Research at this scale is not a luxury. It is how you design tools that work globally.

What I would do differently: For J+IQ, I would have expanded the study to include more beginner users. The 83.9% SUS score was strong, but the scores ranged from 75 to 97.5 -- suggesting that the lower end of the range (newer users) had a meaningfully different experience that deserved deeper investigation. For J+Bidding, I would have built a longitudinal measurement plan into the design brief to track adoption and efficiency gains over time.

Core patterns demonstrated

  1. Multi-method, multi-country research before design
  2. Eliminating workflow steps rather than accelerating them
  3. Making technical capabilities accessible to non-technical users
  4. Protecting what works while surgically fixing what does not

Key Artifacts

J+IQ Dashboard

J+IQ dashboard with analysis list showing completed reports and the 15-minute generation process

J+IQ Configuration Flow

Configure analysis screens showing the step-by-step setup and the 15 minutes processing message

J+IQ SUS Study Results

J+Bidding Discovery Presentation

J+Bidding Bid Multiplier UI

Bid Multiplier interface with age, location, and gender configuration

J+Bidding Future-State Design

Future-state UI showing spend visualization alongside bid set configuration

J+Bidding DV360 Script Setup

DV360 Set-up Script interface with algorithm configuration

Workshop and Research Outputs

Let's Connect

I am looking for a player-coach role -- Staff, Lead, or Senior Product Designer -- where I can combine hands-on design with team leadership and research practice development.