JPMorganChase's cloud platform had grown organically since 2018 into something that was powerful but still somewhat fragmented. Developers adopting public cloud touched multiple disconnected products, raised requests across separate systems, and had limited visibility into where they were in the journey. The second iteration of the platform, and its integration into the internal developer portal, created a genuine opportunity to change this. I was brought in as experience lead to define what consolidated, end-to-end cloud onboarding journeys should look like — and to make sure that opportunity wasn't wasted.
Approach
Service design was the means through which the problem was understood and communicated across organisational boundaries.
Service blueprinting
Six core journeys were mapped end-to-end: account creation and mutation, OU and VPC preparation and mutation, infrastructure provisioning, and SDLC tool provisioning. Each blueprint captured the frontstage steps engineers experienced, the backstage approval chains and processes around them, and the handoffs between products and teams. Quantitative baselines were captured as part of this process from the outset: products touched, requests raised, and approvals required per journey. Setting a measurable "before" state was a deliberate choice, ensuring impact could later be demonstrated in concrete terms rather than through qualitative claims.
Cross-functional workshops
Blueprint outputs were synthesised in structured cross-functional workshops. Because the cloud platform and the developer portal sat in different lines of business, building shared understanding of the problem required a deliberate forum — a document alone wouldn't cross that boundary. A standing steering group of portfolio architects — convened by LOB leads and Product Management — provided a regular validation channel with the users of the most complex journeys.
Shadowing
I observed early 2.0 adopters navigating the platform live, and sat with support teams to understand the questions they routinely fielded. Support teams absorb friction that users don't report directly — shadowing them gave me a ground-level view of where the product was falling short that usability testing couldn't replicate.
Across all six journeys, the picture was consistent: long approval chains, unclear ownership across teams and products, and no in-context guidance for engineers navigating unfamiliar territory. Cloud Chaos Resiliency (CCR) onboarding was the most telling example — a single journey required touching multiple disconnected products and raising several requests, each needing approval from the same person. Because the cloud platform and the developer portal sat in different lines of business, this friction couldn't be owned by any single product team. Findings had to be framed for portfolio architects and cross-LOB leadership — people with the authority to act across that boundary.
Redundant input and cognitive overhead
Across existing journeys, engineers were repeatedly asked to provide the same information — account selections, certificate-to-certificate credentials, functional IDs — in every new onboarding flow, regardless of whether those inputs had already been defined upstream. Engineers had to remember or manually locate the IDs of artifacts they'd created in earlier flows — a significant cognitive burden in an environment where individual steps were already technically demanding.
Expertise assumed, not scaffolded
Engineers were expected to arrive at each journey already knowing their full cloud architecture and network setup — entering account configurations, VPC parameters, and network topology directly, with no system guidance. The platform made no use of what it already knew. Simple inputs like region, environment type, and network size could have been used to surface suggested configurations, reducing the knowledge burden on developers without deep infrastructure expertise. Instead, every user was treated as an expert at the point of entry.
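The "suggested configuration" idea above can be sketched as a simple derivation rule. This is a hypothetical illustration, not the platform's actual API — the input shape, field names, and sizing table are all assumptions made for the example:

```typescript
// Hypothetical sketch: derive a suggested network configuration from three
// simple inputs, instead of asking engineers for raw VPC parameters.
// The sizing table and field names are illustrative assumptions.

type EnvironmentType = "dev" | "test" | "prod";
type NetworkSize = "small" | "medium" | "large";

interface SimpleInputs {
  region: string;
  environment: EnvironmentType;
  networkSize: NetworkSize;
}

interface SuggestedVpcConfig {
  cidrPrefix: number; // e.g. 24 → a /24 block, 256 addresses
  availabilityZones: number;
  multiRegionBackup: boolean;
}

// Illustrative mapping from a coarse size choice to a CIDR prefix length.
const PREFIX_BY_SIZE: Record<NetworkSize, number> = {
  small: 26,
  medium: 24,
  large: 20,
};

function suggestVpcConfig(input: SimpleInputs): SuggestedVpcConfig {
  return {
    cidrPrefix: PREFIX_BY_SIZE[input.networkSize],
    // Production defaults to more availability zones; still user-overridable.
    availabilityZones: input.environment === "prod" ? 3 : 2,
    multiRegionBackup: input.environment === "prod",
  };
}
```

The point of the pattern is that the suggestion is a default, not a decision taken away from the user: an expert can still override every field, while a less experienced developer gets a sensible starting point.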
Outcomes
The service blueprints gave me a clear foundation for redesign. Across Cloud Chaos Resiliency onboarding, account creation, SDLC tool provisioning and infrastructure provisioning, the core intervention was consolidation — bringing what had been multiple disconnected products and redundant approval requests into single, unified journeys. This reduced the approval workload for application owners as much as it simplified the experience for engineers.

A consistent challenge across all six journeys was designing for a wide spectrum of users — from experienced cloud architects comfortable creating OUs and VPCs, to developers with limited infrastructure knowledge attempting the same tasks. Rather than cluttering the interface with persistent instructional content, I defined a reusable in-context help pattern that governed not just the components — tooltips, popovers, in-context drawers, and links to longer-form documentation — but when each should be used. Experienced users got a clean interface; less experienced developers were never left without the information they needed.
Decisions
Flexible, modular wizard architecture
Research showed that users often didn't have all the information they'd need for infrastructure provisioning or tool onboarding at the point of account creation. Some developers created accounts before knowing what infrastructure they'd need to deploy; others couldn't proceed with tool onboarding until their team's tooling decisions were made. Rather than enforce a single linear path through all setup tasks, I designed the wizard to support both approaches: completing everything in a single flow, or deferring infrastructure provisioning and tool onboarding to complete separately. A dashboard checklist was designed to surface outstanding deferred tasks — a natural re-entry point that would have prompted users toward completion. Post-creation notifications were also designed to carry the same prompt. Both were cut due to development constraints; in the shipped experience, deferred tasks lived in the modular entry points with no structured prompting toward completion. The gap is documented, and the designs exist.
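The deferred-task model described above can be sketched as a small state structure. Task names and statuses are illustrative assumptions, not the shipped code:

```typescript
// Sketch of the modular wizard's deferred-task model: each setup task can
// be completed inline or deferred, and a dashboard checklist can later be
// derived from the same state. All names here are illustrative.

type TaskId = "accountCreation" | "infraProvisioning" | "toolOnboarding";
type TaskStatus = "complete" | "deferred" | "notStarted";

type OnboardingState = Record<TaskId, TaskStatus>;

// What the (cut) dashboard checklist would have surfaced: every task the
// user has not yet completed, as a re-entry point back into the wizard.
function outstandingTasks(state: OnboardingState): TaskId[] {
  return (Object.keys(state) as TaskId[]).filter(
    (id) => state[id] !== "complete"
  );
}
```

Deriving the checklist from the same state that drives the wizard is what makes the two entry paths (single flow versus deferred completion) equivalent rather than parallel implementations.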
Progressive disclosure over external documentation
Research showed users were cross-referencing external documentation while completing wizard steps — navigating away to find information needed to make selections, then returning to the form. I opted for in-context help over documentation links to remove that context switch entirely. A three-tier progressive disclosure hierarchy — tooltips, popovers, drawers — meant the interaction cost of accessing help scaled with how much help was actually needed. In practice, content always mapped clearly to one tier without debate, a sign the hierarchy was well-calibrated to the real range of content across these journeys.
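The three-tier hierarchy can be expressed as a selection rule: interaction cost scales with content weight. The thresholds below are invented for illustration — the actual pattern documentation governed tier choice by content type, not character counts:

```typescript
// Sketch of the progressive disclosure hierarchy as a selection rule:
// one-line definitions render as tooltips, multi-sentence guidance as
// popovers, and long-form or media-rich help in an in-context drawer.
// The length thresholds are illustrative assumptions.

type HelpTier = "tooltip" | "popover" | "drawer";

interface HelpContent {
  body: string;
  hasRichMedia: boolean; // diagrams, tables, embedded examples
}

function tierFor(content: HelpContent): HelpTier {
  if (content.hasRichMedia || content.body.length > 600) return "drawer";
  if (content.body.length > 140) return "popover";
  return "tooltip"; // shortest content, cheapest interaction cost
}
```

The value of making the rule explicit is that content authors don't debate tier choice case by case — which matches the observation that content always mapped cleanly to one tier.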
Pre-population over redundant entry
Mapping the resiliency tooling journey alongside other onboarding flows revealed inputs that engineers were being asked to provide that had already been defined elsewhere — in the project creation flow, for example. Rather than treat each journey as a discrete form, I identified datapoints that could be pre-populated from upstream flows, reducing redundant input and cognitive load, and shortening the journey without removing any meaningful decision from the user.
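The merge behaviour behind pre-population can be sketched in a few lines. Field names here are illustrative assumptions; the essential design choice is that upstream values only fill gaps and never overwrite what the user has entered:

```typescript
// Sketch of pre-population: merge values captured upstream (e.g. during
// project creation) into a downstream form's defaults. User-entered
// values always win. Field names are illustrative assumptions.

interface FormValues {
  accountId?: string;
  functionalId?: string;
  environment?: string;
}

function withUpstreamDefaults(
  upstream: FormValues,
  current: FormValues
): FormValues {
  // Drop undefined entries from the current form so they don't mask
  // upstream defaults, then let remaining user entries take precedence.
  const entered = Object.fromEntries(
    Object.entries(current).filter(([, v]) => v !== undefined)
  );
  return { ...upstream, ...entered };
}
```

Keeping precedence in one place means every downstream journey can adopt pre-population without reimplementing the "don't clobber user input" rule.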
CCR onboarding was the most friction-heavy of the six journeys — the clearest benchmark for what redesign could achieve. Redesigned journeys delivered measurable reductions across the board in products touched, requests raised, and approvals required.
Cloud infrastructure is genuinely complex, and pretending otherwise doesn't serve engineers. The challenge was exposing that complexity in a way that felt manageable — breaking long journeys into clear stages, surfacing only the information relevant to each step, and providing in-context support for unfamiliar concepts. Engineers didn't need the complexity removed; they needed it structured.
Shadowing early 2.0 adopters and documenting their experience across documentation, UX and support channels gave me a ground-level view of where the product was falling short that no amount of usability testing could replicate. Watching what questions support teams fielded, and how often, pointed me towards the highest-impact improvements and gave me a way to communicate those opportunities credibly to leadership.
In large engineering organisations, UX often has to justify its presence. Qualitative findings — however well-evidenced — rarely move stakeholders the way numbers do. Building metric capture into the service blueprinting process from the start meant I could tell a before-and-after story in concrete terms: fewer products, fewer requests, fewer approvals. That quantifiable story gave UX a seat at the table it might not otherwise have had.
