GLOBAL PAYMENTS OPTIMIZATION RESEARCH

Project Overview

Role

Lead UX Researcher

Timeline

1 Month (Jan 2026)

Methodology

In-depth Interviews, Journey Mapping, Systems Thinking

Domain

Payments & International Experiences

Business Context

  • Global Scale & Investment: A major global payments network invested heavily in a portfolio optimization program to deliver actionable insights for issuer clients across five regions (Asia Pacific, EEMEA, Latin America, Europe, and North America).

  • High-Stakes Impact: These high-quality insights were designed to help clients optimize interchange revenue, reduce fraud, or increase authorization rates: critical metrics where inaction leaves millions on the table.

  • The Scaling Problem: Despite strong data infrastructure, regional analytics teams, and well-received data, the program failed to scale, suffering from inconsistent activation rates across regions and accounts.

  • Variable Outcomes: Equally powerful insights yielded drastically different commercial results depending purely on who was involved and when.

  • The Core Challenge: Leadership needed to diagnose exactly why a best-in-class insights program was failing to drive consistent, reliable commercial action.

The Challenge

The Surface Assumption: The problem initially appeared to be a standard tooling or user adoption issue.

The Underlying Reality: Early stakeholder conversations revealed a structural anomaly: the teams producing the insights were highly credible, experienced, and genuinely valued the program.

The Variable Outcome: The drastic inconsistency in commercial outcomes could not be explained by a lack of skill, effort, or motivation.

The Core Question: If the insights are strong and the teams are capable, why does activation succeed in some accounts and stall in others?

The Strategic Approach: I actively resisted the easy explanation (poor tool engagement) to investigate the deeper, system-level forces that shaped outcomes regardless of individual effort or intent.

Research Approach

I designed a qualitative study focused not on evaluating tools or measuring adoption metrics, but on mapping how optimization insights actually moved through real workflows from the moment of data to the moment of decision.

Study Design:

  • 24 interviews across all 5 global regions 

  • 21 individual in-depth interviews (60 minutes each)

  • 2 dyadic interviews (90 minutes) with paired Account Manager and Service Business Lead teams

  • 1 triadic interview (90 minutes) with AM, SBL, and Customer Solutions Center team together

Key Research Questions:

  • How and when do optimization insights enter real workflows?

  • What happens between key conversations and formal moments like quarterly business reviews (QBRs)?

  • Where is responsibility for follow-through clear versus implicit?

  • How are tools, dashboards, and decks actually used in practice?

  • What enables activation in some accounts and causes it to stall in others?

Methodological Decision: Rather than evaluating individual performance, I focused on the activation journey itself: specifically, how insights move from generation to commercial action across different roles, moments, and regions. I was looking for structural patterns, not user failures.

Key Findings

The activation gap was not caused by poor insight quality, inadequate tools, or low motivation. It was caused by the absence of three structural mechanisms that reliable activation requires.

Finding 1: Ownership Dissolves After the Meeting

Across every region and role, a consistent pattern emerged. In the room, there was alignment. Direction was agreed. People left feeling like progress had been made. But the conversation was treated as the endpoint rather than the starting point.

No single role was structurally accountable for carrying decisions forward. Without explicit ownership, follow-through defaulted to whoever had bandwidth or initiative.

"Everyone contributes their piece, but there isn't a single owner to carry it from insight to execution."
— Account Manager, North America

This wasn't a motivation problem. It was a design problem: the system had no mechanism for assigning and sustaining ownership beyond the moment of decision.

Finding 2: Optimization Is Episodic, Not Embedded

Engagement with the optimization program was almost entirely reactive. Teams engaged during formal triggers (QBRs, planning cycles, escalations, leadership requests) and disengaged in between. The aspiration of "always-on optimization" existed in name only.

This meant decisions made in one QBR rarely informed the next. Teams re-diagnosed the same issues, re-debated the same opportunities, and rebuilt the same narratives from scratch, not because of poor memory, but because the system had no mechanism for carrying prior decisions forward.

"Each time it feels like a new exercise—the questions come back, but nothing is really built on from the previous work."
— Customer Solutions Center, Latin America

Finding 3: Self-Service Assumes Conditions That Don't Exist

The program's tooling was designed around a self-service model: give teams access to dashboards and they will engage proactively. In practice, this assumption failed under real working conditions.

Three structural barriers blocked self-service adoption:

  • Time: Interpretation required bandwidth that frontline teams rarely had

  • Credibility: Centrally produced insights needed to be re-validated locally before they could be used in client conversations

  • Intermediaries: Activation flowed through people who could translate data into narrative, not through tools directly

"If I can't explain where the number comes from, I won't use it, even if it's easy to access."
— Customer Solutions Center, Asia Pacific

Finding 4: Learning Does Not Accumulate

Without structured outcome capture, the program's institutional knowledge lived entirely in individuals' heads. When those individuals changed roles, went on leave, or weren't involved in the next cycle, the prior context was lost.

Outcomes were inferred rather than tracked, and tools remained static regardless of what teams had learned from prior use.

"A lot of the learning sits with the person who worked on it before. If they're not involved, we basically start again."
— Account Manager, North America

The Strategic Reframe

The most important output of this research was changing the strategic conversation.

OLD FRAME

The insights aren't landing. Improve delivery, tool adoption, and user behavior.

NEW FRAME

The insights are landing. But activation isn't designed into the operating model. Build the system.

This reframe was the result of resisting the easy explanation. The easy explanation was user behavior: people weren't engaging with the tools. The structural explanation required sitting with the data across all 24 interviews and identifying the consistent pattern underneath.

Activation consistently failed at the same point: the transition from conversation to owned commitment. That transition was never explicitly designed anywhere in the operating model.

Strategic Recommendations

Based on the research, I recommended redesigning the program around four activation principles that would need to be true for consistent outcomes at scale. These were framed as design requirements, not specific features.

01  OWNERSHIP: Must Be Explicit and Persistent
A single accountable owner must be named at the moment of decision and remain accountable until outcomes are reviewed or closed. Ownership should not shift implicitly or depend on individual goodwill.

02  INTEGRATION: Must Be Explicit, Not Assumed
Decisions must remain active inputs beyond the meeting. Work should resume from prior state, not restart from raw data. Each moment should build on the last, not replace it.

03  SIGNAL: Must Remove Guesswork
Explicit triggers must tell the system when work should move forward, be reviewed, or be re-engaged. Signals should be defined at the instant decisions are made and should not rely on individual memory.

04  LEARNING: Must Accumulate and Feed Forward
Outcomes must be captured in reusable form and used to inform future prioritization. The system should get smarter with each cycle, not reset. Institutional knowledge must live in the system, not in people.

The recommendation was deliberately not a tool or product spec. It was a set of activation design requirements that the program could use to evaluate any proposed solution. This ensured the findings informed architecture decisions rather than feature requests.

Impact & Outcomes

The research shifted the program's leadership team from diagnosing a tool adoption problem to designing an activation operating model. 

Specific outcomes:

  • Strategic reframe adopted across global GPO leadership team, shifting investment priorities from dashboard development to activation design

  • The moment-led activation model was piloted in the LAC region using QBR preparation cycles as the primary activation moment

  • The four-principle framework became the evaluative lens for assessing future product and process decisions across all five regions

  • The research design (24 multi-regional qualitative interviews conducted in 15 days) was cited internally as a model for rapid global discovery

Research Reflection: What Made This Study Work

Resisted the Easy Answer: Pushed past initial stakeholder expectations of simple "tool usability" or "training gaps" when the data pointed to deeper issues.

Held the Ambiguity: Maintained the discipline needed to look past surface-level symptoms to uncover the true, underlying structural patterns.

Built an Analytical Anchor: Created a current-state activation journey map spanning 5 stages, 3 roles, and 5 regions to make a complex systemic argument easily legible to a non-research audience.

Executed a Strategic Reframe: Translated messy qualitative signals into a defensible, specific, and actionable claim, shifting the focus from an "insight problem" to a "system design problem."

Drove Decision-Making: Demonstrated the most valuable research skill—not just discovering the pattern, but knowing exactly how to frame it to change how a team operates.

bottom of page