Developer Experience 

Objective

Early user research revealed that development managers spent excessive time piecing together disparate data sources to understand team health, bottlenecks, and long-running pull requests.

To address these pain points, the developer experience dashboard consolidates essential metrics into a single interface. Our primary objective is to boost returning team weekly active users (WAU) from around 20% to 25%, with a further stretch goal of reaching 30%.

In one of our interviews, a development manager described how time-consuming it was to identify stalled pull requests: “I have to jump between tools and dashboards just to get a complete picture,” she said. By integrating the four core metrics (deployment success rate, PR cycle time, review time, and waiting time for review), the solution provides a holistic view, enabling teams to ship software faster and more reliably.
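To make the four metrics concrete, the sketch below shows one way they could be derived from pull-request and deployment records. It is illustrative only: the record shapes, field names, and hour-based units are assumptions, not the product’s actual data model.

```typescript
// Illustrative sketch only: record shapes and field names are assumptions,
// not the dashboard's actual data model.

interface PullRequest {
  openedAt: Date;        // when the PR was opened
  firstReviewAt?: Date;  // when the first review was submitted
  mergedAt?: Date;       // when the PR was merged
}

interface Deployment {
  succeeded: boolean;
}

const hoursBetween = (from: Date, to: Date) =>
  (to.getTime() - from.getTime()) / 3_600_000;

const average = (xs: number[]) =>
  xs.length ? xs.reduce((a, b) => a + b, 0) / xs.length : 0;

// Deployment success rate: successful deployments / total deployments.
const deploymentSuccessRate = (deploys: Deployment[]) =>
  deploys.length ? deploys.filter(d => d.succeeded).length / deploys.length : 0;

// PR cycle time: opened -> merged, averaged over merged PRs.
const avgCycleTimeHours = (prs: PullRequest[]) =>
  average(prs.filter(pr => pr.mergedAt)
             .map(pr => hoursBetween(pr.openedAt, pr.mergedAt!)));

// Waiting time for review: opened -> first review.
const avgWaitForReviewHours = (prs: PullRequest[]) =>
  average(prs.filter(pr => pr.firstReviewAt)
             .map(pr => hoursBetween(pr.openedAt, pr.firstReviewAt!)));

// Review time: first review -> merged.
const avgReviewTimeHours = (prs: PullRequest[]) =>
  average(prs.filter(pr => pr.firstReviewAt && pr.mergedAt)
             .map(pr => hoursBetween(pr.firstReviewAt!, pr.mergedAt!)));
```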

Target audience

Primary – Engineering and development managers looking for quick insights into team productivity, operational health, and trends over time.

Secondary – Individual contributors benefiting from clearer metrics and prompts to expedite code reviews and deployments.

Role & responsibilities

Role

Lead Product Designer

Responsibilities

  • Conducted user research to pinpoint managerial pain points and goals for productivity tracking

  • Led cross-functional discussions with product managers and developers to define core metrics

  • Created user flows, wireframes, and interactive prototypes for the MVP dashboard

  • Organised and analysed usability testing sessions, integrating feedback into iterative design refinements

  • Ensured design consistency between the existing team dashboard and the newly introduced developer experience dashboard

Challenges & constraints

Technical complexity
Aggregating data from multiple repository and deployment systems to deliver real-time insights (a rough normalisation sketch follows this list).

Stakeholder alignment
Balancing the needs of engineering managers, product managers, and end-user developers.

Resource limitations
With parallel projects in flight, our team had to operate under tight timelines for prototyping, testing, and iterating.
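To give a flavour of the aggregation problem, one common approach is to normalise each source behind a shared interface and merge the results. This is a minimal sketch under assumed names and signatures, not the system’s actual integration layer.

```typescript
// Hypothetical adapter sketch: each repository or deployment system is
// wrapped behind one interface so the dashboard can aggregate them.
// All names and method signatures here are illustrative assumptions.

interface MetricsSource {
  name: string;
  fetchOpenPRAgesHours(): Promise<number[]>;  // ages of open PRs, in hours
  fetchDeployOutcomes(): Promise<boolean[]>;  // true = successful deployment
}

async function aggregateSources(sources: MetricsSource[]) {
  // Query every source concurrently rather than one at a time.
  const perSource = await Promise.all(
    sources.map(async s => ({
      name: s.name,
      prAges: await s.fetchOpenPRAgesHours(),
      deploys: await s.fetchDeployOutcomes(),
    })),
  );
  // Flatten per-source results into one dashboard-wide view.
  return {
    openPRAges: perSource.flatMap(r => r.prAges),
    deployOutcomes: perSource.flatMap(r => r.deploys),
  };
}
```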

Research

Methods

User Interviews
Engaged with over a dozen engineering managers to understand their specific pain points, from identifying long-running PRs to calculating deployment success rates

Analytics
Mined data from existing dashboards and tools, confirming that lack of integrated insights was a major frustration

Stakeholder workshops
Brought together product owners, design, and engineering to brainstorm which metrics would deliver the most immediate value

Key findings

Need for consolidated data
Many managers had cobbled together homemade dashboards, pointing to a significant gap in available tooling.

Desire for actionable insights
Merely presenting metrics wasn’t enough—managers wanted in-dashboard prompts to help them take the next step (e.g., addressing an overdue PR).

Focus on trends
Manager-level users want insights on how to remove bottlenecks and improve developer well-being, not just raw metrics.

Ideation

Conducted brainstorming workshops to prioritise the features that would deliver the most MVP customer value, letting us ship quickly without compromising quality.

Aligned the new feature with the existing team dashboard, ensuring synergy for both managers and developers.

Designed initial wireframes to visualise how the four key metrics would be displayed and how managers could drill down from high-level stats to detailed views.

User flow

Design

Challenges

Mixed granularity
Some managers wanted an at-a-glance overview, while others needed a detailed breakdown for each team member.

Prompting action
Moving from passive reporting to an interactive experience that highlights potential bottlenecks (e.g. 3 pull requests stuck for over 7 days).

Data integration
Ensuring the new feature could ingest and interpret data efficiently for real-time metrics.

Solutions

Layered information architecture
The feature uses a tiered view: top-level metrics at a glance, plus drill-down reports for deeper insights (e.g. PR review time).

Action cards
Each metric tile includes prompts that link directly to the relevant section or tool, reducing friction and accelerating intervention (a sketch of one such action card follows this list).

Consistent design language
Reused patterns from the team dashboard to keep the user experience cohesive and intuitive.
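To show how an action card could move the dashboard from passive reporting to prompting action, here is a small sketch that flags pull requests waiting on review past a threshold and produces a deep link into the drill-down view. The threshold, record shapes, and route are invented for illustration.

```typescript
// Illustrative action-card generator: flag PRs waiting on review past a
// threshold and produce a prompt that links into the drill-down view.
// The threshold, record shape, and route are assumptions.

interface OpenPR {
  id: number;
  title: string;
  daysWaitingForReview: number;
}

interface ActionCard {
  message: string;
  linkTo: string; // route into the detailed drill-down view
}

const STALLED_AFTER_DAYS = 7; // assumed threshold for "stuck" PRs

function stalledPRCard(openPRs: OpenPR[]): ActionCard | null {
  const stalled = openPRs.filter(pr => pr.daysWaitingForReview > STALLED_AFTER_DAYS);
  if (stalled.length === 0) return null; // nothing to prompt about
  return {
    message: `${stalled.length} pull request(s) stuck for over ${STALLED_AFTER_DAYS} days`,
    linkTo: '/dashboard/pull-requests?filter=stalled', // hypothetical route
  };
}
```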

High-fidelity design

High-fidelity design demonstrating the unified look and feel.

User testing

Testing methodology

Usability tests
Conducted remote testing sessions with development managers to gauge dashboard clarity and discoverability of insights.

Feedback integration

Expanded filtering options after managers asked for more granular filters by team, repository, and date range (a sketch of the filter model follows below).

Incorporated additional textual descriptions to clarify each metric’s calculation, helping users trust and act upon the displayed data.
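As a rough sketch of how that expanded filtering could be modelled, the filters managers asked for can be expressed as one query object applied over dashboard rows. The field names are illustrative assumptions, not the shipped API.

```typescript
// Illustrative filter model for the granular filters managers asked for.
// Field names are assumptions, not the shipped API.

interface DashboardFilter {
  teamIds?: string[];                   // filter by team
  repositories?: string[];              // filter by repository
  dateRange?: { from: Date; to: Date }; // filter by date range
}

function applyFilter<T extends { teamId: string; repo: string; at: Date }>(
  rows: T[],
  f: DashboardFilter,
): T[] {
  return rows.filter(r =>
    (!f.teamIds || f.teamIds.includes(r.teamId)) &&
    (!f.repositories || f.repositories.includes(r.repo)) &&
    (!f.dateRange ||
      (r.at.getTime() >= f.dateRange.from.getTime() &&
       r.at.getTime() <= f.dateRange.to.getTime())),
  );
}
```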

Impact

Quantitative results

Returning team weekly active usage (WAU) rose from ~20% to ~25%, meeting our initial goal, and early adopters showed signs of pushing toward the 30% stretch goal. Metrics around review bottlenecks indicated quicker resolutions, with some teams halving their average waiting time for review.
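For clarity on how a returning-WAU figure might be computed, here is a simple sketch. The definition used (a team counts as returning if it was active both this week and the previous week) is an assumption about the metric, not a documented formula.

```typescript
// Sketch of a returning-WAU calculation. The "returning" definition
// (active this week AND last week) is an assumption, not a documented
// formula.

function returningWAUShare(
  activeThisWeek: Set<string>, // team ids active this week
  activeLastWeek: Set<string>, // team ids active the previous week
  totalTeams: number,          // all teams with access to the dashboard
): number {
  let returning = 0;
  for (const id of activeThisWeek) {
    if (activeLastWeek.has(id)) returning++;
  }
  return totalTeams ? returning / totalTeams : 0;
}
```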

Qualitative feedback

Managers praised the dashboard for offering clear, actionable insights. Users noted that being able to see “how we’re trending over time” gave them confidence to proactively address productivity issues before they became systemic problems.

Customer value

The four initial metrics (deployment success rate, PR cycle time, review time, and waiting time for review) provide the core insights managers need to enhance team productivity. Clear visual cues and action prompts transform previously static data into a springboard for continuous improvement.

Lessons learned

Early and frequent testing is crucial
Rapid feedback loops ensured design decisions stayed aligned with real-world workflows.

Context and action are key
Merely listing metrics doesn’t drive engagement; pairing them with practical recommendations does.

Iterate in MVP stages
By shipping the feature iteratively, we balanced speed-to-market with the need for a robust, user-validated solution.

Conclusion

The Developer Experience Dashboard demonstrates how targeted, actionable metrics can meaningfully enhance team productivity and overall operational health. By focusing on the four MVP metrics and integrating user feedback, we not only achieved a measurable uptick in returning WAU among development managers but also empowered teams to identify and address bottlenecks proactively.

This feature underscores the principle that design elevates data into insight, transforming metrics into meaningful actions that drive continuous improvement. Through consistent testing, stakeholder alignment, and a user-centric lens, the Developer Experience Dashboard set the foundation for an evolving suite of features that will further improve how teams ship software.
