GLOBAL PAYMENTS OPTIMIZATION RESEARCH
Project Overview
Role
Lead UX Researcher
Timeline
2 months (Nov 2025)
Methodology
Qualitative In-depth Interviews, Decision-Making Analysis
SCOPE
End-to-end study framing, research design, moderation, synthesis, executive reporting
Business Context
App promotion tools have rapidly shifted toward automation. While these systems promise efficiency and scale, adoption and trust remain inconsistent even among experienced advertisers who understand the technology and have budgets to invest.
Product stakeholders framed the problem as:
- Education gaps
- Feature discoverability issues
- Insufficient onboarding
Research risk: If this framing was incorrect, any solution built on it would fail to address the real barrier to adoption.
The Challenge & Thought Process
Quantitative data already existed: spend allocation, performance metrics, adoption rates. What it couldn't explain was why "successful" tools were still being avoided, why advertisers reverted to less efficient workflows, and why confidence broke down at scale.
Core question reframed:
Is the challenge really about how tools work, or about how safe they feel to choose?
This shifted the research objective from evaluating product features to understanding how advertisers make high-stakes decisions in automated environments, specifically:
- How automation fits into their mental models of control and accountability
- What drives confidence versus hesitation in scaling spend
- How internal pressures (finance, leadership, performance reviews) shape product usage
- Why certain tools are trusted, tolerated, or avoided, independent of performance
This was not an evaluative usability study. It was a decision-making and trust study.
Research Approach
I designed a qualitative study focused not on evaluating tools or measuring adoption metrics, but on mapping how optimization insights actually moved through real workflows from the moment of data to the moment of decision.
Study Design:
Method: 1:1 semi-structured interviews with flexible moderation based on role seniority and domain depth.
Design rationale: Participants varied widely across business maturity, funnel ownership, industry constraints, and regional/regulatory context. A rigid interview guide would have flattened insight. The study needed to follow decision logic, not feature flows.
Participants represented:
- In-house and agency roles
- B2C, B2B, and hybrid app models
- Early-stage to enterprise-scale organizations
- Varying technical and analytical sophistication
To preserve confidentiality:
- No company names disclosed
- No participant counts shared
- No attributed quotes
- Insights synthesized at the pattern level, not the individual level
Key Research Questions
- How and when do optimization insights enter real workflows?
- What happens between key conversations and formal moments like QBRs?
- Where is responsibility for follow-through clear versus implicit?
- How are tools, dashboards, and decks actually used in practice?
- What enables activation in some accounts and causes it to stall in others?

Methodological Decision: Rather than evaluating individual performance, I focused on the activation journey itself: how insights move from generation to commercial action across different roles, moments, and regions. I was looking for structural patterns, not user failures.
Using Hypotheses to Guide Listening
Rather than validating assumptions directly with participants, I used hypotheses internally to guide listening and synthesis.
Key hypotheses included:
- Adoption decisions are driven by defensibility, not preference
- Automation fails when users can't explain outcomes upstream
- "Lack of control" often masks lack of visibility
- Upper-funnel value is understood but operationally risky
- Product usage reflects org incentives, not just UX quality
These hypotheses shaped the research lens, not the interview script. They helped me know what to listen for without leading participants toward predetermined answers.
Key Findings
Finding 1: Automation Is Valued Until It Becomes Indefensible
Participants were not anti-automation. They were anti:
- Unexplainable performance shifts
- Black-box recommendations
- Inability to answer "why" when challenged
Key insight: Automation fails when it breaks narrative ownership.
When users couldn't explain outcomes internally, trust collapsed even if results were positive. The issue wasn't performance. It was the inability to defend the decision to use automation when questioned by leadership or finance.
Finding 2: Adoption Is a Risk Decision, Not a Preference Decision
What appeared externally as low adoption often reflected rational self-protection.
Participants optimized for:
- Career safety
- Predictability
- Internal credibility
Translation: Adoption behavior mirrored organizational incentives, not product quality.
Using automation meant accepting personal accountability for outcomes you couldn't fully control or explain. In high-stakes environments, this was a rational reason to avoid adoption regardless of how well the tool performed.
Finding 3: Upper-Funnel Value Is Accepted Conceptually, Rejected Operationally
Participants articulated strong belief in awareness and consideration. However:
- Measurement ambiguity
- Attribution gaps
- Long feedback loops
made these investments hard to defend under budget scrutiny.
Result: Upper-funnel initiatives were often deprioritized not due to disbelief, but due to risk. The problem wasn't understanding value; it was proving value in a system that rewarded short-term, attributable conversions.
Finding 4: Control vs. Automation Is the Wrong Framing
Participants did not ask for:
- More levers
- More configuration options
They asked for:
- Clearer signals
- Better visibility
- Stronger causal explanations
Insight: Trust was driven by understanding, not control.
The product narrative positioned automation as "set it and forget it." But users didn't want to forget it; they wanted to understand it well enough to defend it. The framing of control versus automation was a false dichotomy.
Synthesis: The Core Insight
The primary UX problem is not usability. It is decision confidence under uncertainty.
Advertisers don't optimize for performance alone. They optimize for explainability, defensibility, and internal alignment.
Any product that ignores this will struggle to scale trust regardless of performance.
Implications for Product & UX
This research suggests that effective automated systems should:
- Treat explainability as a first-class UX concern
- Surface system logic, not just outcomes
- Support internal storytelling and reporting needs
- Reduce organizational risk, not just operational effort
Reframe of success: A system succeeds when users feel confident choosing it again under scrutiny.
Impact & Value
This study:
- Reframed the problem from "education" to "decision safety"
- Provided a clear lens for evaluating future product directions
- Created shared language across product, research, and design teams
- Informed how automation should be positioned, not just built
Research Reflection: What Made This Study Work
This project reinforced a core research principle:
UX research is not about validating ideas. It is about reducing uncertainty so better decisions can happen.
The most meaningful insights were not about features; they were about how people navigate pressure, accountability, and ambiguity. That's where real product decisions live.
The discipline of holding hypotheses internally rather than testing them directly with users created space for unexpected patterns to emerge. The insight about defensibility didn't come from asking "do you feel like you can defend this?" It came from listening to how people described their decision-making process when stakes were high.