
Increasing developer CSAT

Increasing developer satisfaction through tactical, data-driven UX.

JPMorganChase · June 2025
Role: Lead designer
Team: 4 designers, 30 developers, 3 PMs
Methods: Service blueprinting, Usability testing, Longitudinal research, Customer workshops
Tools: Figma, Lucid, Adobe Analytics


Delivering a 42% CSAT improvement in six weeks

From 2024 to 2026, I worked as a UX designer on a team building an internal developer platform for JPMorganChase's software engineers. Before general release, usability testing and user interviews showed developer satisfaction with the tool was critically low, with consistent themes around confusing navigation, unclear terminology, and difficulty understanding how different parts of the product related to each other. With only a few weeks until launch, the question became: how do we remove as many barriers to adoption as possible, as quickly as possible?

Approach

This project ran over a period of two months, with my contributions starting in early May. Working back from a hard deadline of late July, we had about a month and a half to ideate, design, test, validate, implement and release. We followed the approach below:

Interviews

We interviewed 36 software engineers, engineering managers and architects to build a clear picture of the barriers to adoption.

Ideation

Concepts were produced to address the most critical adoption barriers. Once validated with internal stakeholders, they were worked into a high-fidelity prototype to be tested with engineers.

Moderated usability testing

17 developers, engineering managers and architects performed a series of tasks in the new prototype. Customer satisfaction and ease-of-use scores were captured and compared against the baseline.

Terminology survey

Updated terminology, which elicited a promising reaction from usability testing participants, was then surveyed with a wider audience to ensure the new language resonated.

AI-assisted research

Early thematic analysis was AI-assisted to surface patterns across interview transcripts. A parallel human extraction exercise validated and challenged those outputs — reframing several themes before sign-off. AI was then used in a second pass to pull verbatims from transcripts, anchoring each validated theme in the research playback with participant voice.
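As a rough illustration of the shape of that second pass, here is a minimal sketch, assuming a simple keyword-matching stand-in for the AI step: it pulls candidate verbatims from transcripts for each validated theme so a human can review them. The theme names, cue words, participant IDs and sample lines are placeholders for illustration; the real pass used an LLM rather than keyword matching.

```python
# Illustrative stand-in for the second AI pass: group candidate verbatims
# from interview transcripts under each validated theme for human review.
# The real pipeline used an LLM; simple keyword matching is used here only
# to show the shape of the output. All names, cues and quotes are placeholders.

from collections import defaultdict

THEME_CUES = {
    "Disorientation": ["lost", "don't know how", "how to use this"],
    "Fuzzy contextual relationships": ["project", "repository", "application"],
    "Cumbersome navigation": ["navigate", "context selector", "dashboard"],
}

def pull_verbatims(transcripts: dict[str, str]) -> dict[str, list[tuple[str, str]]]:
    """Return (participant, quote) pairs per theme, ready for human review."""
    verbatims = defaultdict(list)
    for participant, transcript in transcripts.items():
        for line in transcript.splitlines():
            lowered = line.lower()
            for theme, cues in THEME_CUES.items():
                if any(cue in lowered for cue in cues):
                    verbatims[theme].append((participant, line.strip()))
    return dict(verbatims)

# Placeholder transcripts, each a newline-separated set of participant quotes.
sample_transcripts = {
    "P1": "When people first come in, the reaction is 'I don't know how to use this right'.",
    "P2": "Finding my projects is difficult.\nThe arrow next to the context selector wasn't intuitive.",
}

for theme, quotes in pull_verbatims(sample_transcripts).items():
    print(theme)
    for participant, quote in quotes:
        print(f"  {participant}: {quote}")
```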

Adoption barriers

Key barriers to adopting the developer portal fell into three categories.

Disorientation

Internal communications were encouraging developers to adopt the portal, but most had no idea what it actually offered. When they landed in it for the first time, they felt immediately lost — their questions went unanswered.

When people first come in, the reaction is 'I don't know how to use this right'.

What I don't see is the projects I have in my application. We jump straight to repositories.

Fuzzy contextual relationships

The relationship between repositories, projects and applications was communicated inconsistently across the product. This hierarchy mattered to engineers, who performed different tasks depending on which level they were working at — and the portal made it hard to keep track.

Cumbersome navigation

User interviews showed that the existing landing page — a dashboard intended to surface personalised activity data — was falling flat. It was often empty, sometimes unhelpful, and generally not worth visiting. On top of that, the majority of beta testers failed tasks that involved navigating across projects.

Finding my projects is difficult. The arrow next to the context selector wasn't intuitive. I didn't get that right away.

Solutions

Introducing the product with a tour

We rolled out a short product tour to help developers get their bearings. It covered key concepts: contexts, search, self-service, and the help centre. We started with around 13 stops, then refined it through testing — asking participants what felt essential, what could be grouped, and what could be skipped. The sweet spot turned out to be 7 steps. While some participants found the tour really valuable, others said they'd normally skip it but might want to revisit it later. So we made it optional, with a clear entry point to replay it at any time.
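As a sketch of how the final tour behaviour might hang together, the example below models the stops as data with the optional and replayable behaviour testing pointed us towards. It assumes a simple step-config approach; the stop names, descriptions and settings are hypothetical rather than the portal's actual implementation, and only the concepts named above are listed.

```python
# Illustrative sketch: the tour modelled as data, with the optional and
# replayable behaviour described above. Stop names, descriptions and settings
# are hypothetical; only the concepts named in the case study are listed.

from dataclasses import dataclass

@dataclass(frozen=True)
class TourStop:
    key: str          # stable identifier, useful for analytics and replay
    title: str        # heading shown on the tour step
    description: str  # one-line explanation of the concept

TOUR_STOPS = [
    TourStop("contexts", "Contexts", "How applications, projects and repositories relate."),
    TourStop("search", "Search", "Finding content across the contexts you work in."),
    TourStop("self-service", "Self-service", "Common tasks you can run yourself."),
    TourStop("help-centre", "Help centre", "Where to go when you get stuck."),
    # ...the production tour settled on seven stops; the rest are omitted here.
]

TOUR_SETTINGS = {
    "skippable": True,    # the tour is optional on first load
    "replayable": True,   # a persistent entry point replays it at any time
}
```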

Outcomes

94% of participants strongly agreed that the welcome screen and tour helped them get oriented. Participants who completed the tour navigated the prototype more effectively.

Switching terminology to match user mental models

We enhanced the breadcrumbs to surface the application → project → repository hierarchy, making structural relationships clearer and navigation across contexts easier. Through testing, we also replaced proprietary terminology with more familiar language — swapping in terms like 'project' and 'maintenance' that better matched how developers already thought about their work.

Outcomes

81% of participants felt the new terminology resonated better with their mental model and preferred it to the terminology used in the current production version of the developer portal.

Landing first-time users in a more meaningful location

The dashboard landing page wasn't working for most developers — it often had too little data to be useful. Our primary persona, the code-committing developer, lived mostly in a project context anyway. So we swapped the default homepage to a 'My Projects' view, putting the most relevant content front and centre from the moment they logged in.

Outcomes

All participants were more easily able to navigate to critical tasks, such as viewing CI/CD pipelines.

Tradeoffs

Our initial research showed that the context selector — the main tool for navigating between applications and projects — had low discoverability but high learnability. In other words, developers struggled to find it on their first visit, but once they'd used it, they were comfortable with it. I explored a first-time experience that would walk users through selecting their initial context with an empty UI prompt. But given our timeline, and the fact that the product owner had flagged the context selector as something to be reconsidered in future, we decided not to build a solution that depended heavily on it.

Select a context
Welcome to the developer portal! To get started, select a context.

Context selector
All content is context-dependent. This is the first action every user must take — so we made it the most prominent element on first load.

Visual emphasis without copy
Radiating tick marks draw the eye to the selector. We explored tooltip copy here but found the visual treatment alone was enough in testing.

Empty state annotation
A hand-drawn arc connects the blank content area back to the selector — reinforcing where to go without a wall of instructional text.

With more time, I would have liked to explore a more thorough onboarding experience, which could allow users of the developer portal to personalise their experience while introducing them to key concepts such as their projects and applications. An onboarding journey which allowed users to configure their active projects, define their roles and select 'top tasks' would ensure a more relevant dashboard and navigation menu from the outset.

Decisions

Reducing the onboarding tour from 13 to 7 steps

We showed the UI to first-time users and asked two things: what they considered their key tasks, and which information felt critical on a first visit. After completing the tour, we asked which steps felt redundant. Once thematic consensus emerged, we cut or condensed accordingly. Removal addressed steps that were genuinely low-value; condensation grouped related utility items — previously spread across multiple stops — into single ones.

One assumption that didn't survive testing: 'Preferences' had been included as a dedicated stop on the assumption users would need guidance. In practice, users consistently flagged it as redundant, citing it as self-explanatory. That finding is only available if you ask users directly rather than designing the tour from the inside out.

'My Projects' as a context selector mitigation

The context selector was inherited IA — designed approximately a year prior, modelled on a similar navigational pattern in the Harness developer portal. It served a genuine purpose: JPMC's ecosystem operates across both applications and projects, with different personas working at different levels. But two problems were clear.

Discoverability

First-time users landing on a populated dashboard had no immediate understanding of what the selector was or why it mattered.

Interaction cost

Switching context required opening the selector, navigating to Projects, filtering or searching, then selecting — too many steps for what should be a quick action.

A full redesign was out of scope given time constraints. Instead, the team introduced a 'My Projects' landing page that gave users a direct, low-friction route to project navigation — reducing dependency on the selector without touching it. Post-launch, 95% of participants navigated to their projects via the landing page rather than the context selector, validating both the utility of the new page and the discoverability problems with the existing pattern.

Iterating mid-study

Where emerging themes were strong enough to carry consensus, we iterated mid-study rather than waiting for a full round to complete. Each iteration was documented — what changed and when — so that shifts in participant sentiment could be correlated to specific design decisions rather than attributed to general familiarity or external factors.

Results

Both metrics were measured through usability testing, where participants completed a set of tasks in the redesigned prototype. Satisfaction was rated 1–5 ('very dissatisfied' to 'very satisfied'), with 4 or 5 counting as satisfied. Ease of use followed the same scale, from 'very difficult' to 'very easy'. Following the redesign, developer satisfaction increased by over 40%. Ease of use rose by 8%, a meaningful gain that suggested the changes had made the product genuinely easier to use, not just better liked.

+42%
Developer satisfaction

+8%
Ease of use
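To make the headline figures concrete, here is a minimal sketch of the top-box calculation described above: the share of 4 and 5 ratings on the 1 to 5 scale, compared before and after the redesign. The ratings are made-up placeholders, not the study data.

```python
# Illustrative only: the top-box CSAT calculation described above, where a
# rating of 4 or 5 on the 1-5 scale counts as 'satisfied'. The ratings below
# are made-up placeholders, not the study data.

def csat(ratings: list[int]) -> float:
    """Share of responses rating 4 or 5 ('satisfied' or 'very satisfied')."""
    return sum(r >= 4 for r in ratings) / len(ratings)

baseline_ratings = [2, 3, 3, 2, 4, 3, 2, 3, 4, 2]   # hypothetical pre-redesign scores
redesign_ratings = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]   # hypothetical post-redesign scores

before, after = csat(baseline_ratings), csat(redesign_ratings)
print(f"Baseline CSAT: {before:.0%}")                   # 20%
print(f"Redesign CSAT: {after:.0%}")                    # 80%
print(f"Change: {(after - before) * 100:+.0f} points")  # +60 points
```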

Lessons learned

Iterating mid-study

Working within a tight delivery window taught me that usability testing doesn't always have to be a linear, sequential process. Once thematic consensus began to emerge across participants, we were able to iterate on the prototype mid-study — allowing us to validate improvements faster and make better use of the time we had.

Prioritising delivery based on impact

Testing a large number of changes simultaneously also sharpened how we interpreted findings. With engineering capacity limited, we needed to know which changes were moving the needle most. Tracking task completion rates, observing navigation behaviour, and noting which improvements participants surfaced unprompted gave us clear, evidence-based grounds for prioritising what got built first.