Zendesk

Advanced Scorecards

2024

B2B

Admin UX

Information Architecture

Product Design

Visual Design

Designing a flexible, scalable scorecard builder for Zendesk QA.

  • Delivered a flexible scorecard builder used by large enterprise teams

  • Cut scope from 4–5 development cycles to 2 without compromising on core needs

  • Reduced customer complexity: one enterprise customer went from 10,000 categories to just 30–40

  • On-time delivery and strong internal and customer reception despite mid-project transitions

Role

Design Lead

Collaborators

Product Manager (new joiner)
Engineers
Designers (previous contributors, rotating in/out)

Tool stack

Figma
FigJam

I took over a highly complex QA initiative mid-project and drove it through to successful delivery:

  • Defined vision and scope after PM and designer turnover

  • Synthesised existing discovery and led new research with PMs and stakeholders

  • Designed and prototyped all core UX flows

  • Supported onboarding of a new PM and coached junior designers

  • Ran prioritisation and design reviews to protect core outcomes from scope creep

Background

Zendesk QA’s legacy scorecard system was functional but rigid, cluttered, and increasingly unscalable for enterprise users. Teams struggled with massive category lists, inconsistent workflows, and a lack of clarity.

Customer complaints — and internal requests — made it clear: the scorecard experience had to evolve.

We set out to build an advanced scorecard builder: flexible, accessible, and ready for the future of AI-assisted evaluations.

Challenge

This was one of the most complex projects in the product’s roadmap:

  • Initially scoped for 4–5 shape ups (6-week development cycles)

  • Inherited unclear vision, scattered discovery, and shifting PM/design ownership

  • Needed to support power users managing hundreds or thousands of QA categories

  • Had to work across teams with different use cases and preferences

  • Required accessibility and long-term design system alignment

Approach

Rapid re-discovery

I started by gathering what was already known — feedback, customer data, and technical constraints — and revalidating it with key stakeholders, to pin down which problems we were really solving and which requirements were must-haves versus nice-to-haves.

There was a lot of existing data — FigJam notes, Confluence pages, and interview recordings — but it was scattered across too many places and needed consolidating before it could inform decisions.

Clear scope and iteration

I reframed the problems around core user needs: clarity, manageability, and flexibility. This helped focus the scope and trim delivery from 4–5 shape ups to just 2, cutting delivery time by over 50%.

As the lead for our design team, I also knew we had other priorities, and that other teams could potentially pick up some of the work we were choosing to leave undone. Finding ways to collectively improve the product was the key.

I wrote a brief spec of what we were trying to achieve to get all stakeholders on the same wavelength, stepping in for transitioning PMs to bring clarity to the team and scope the work effectively.

Close collaboration

I worked side-by-side with engineering and PMs to ensure feasibility, viability, and mutual understanding. Constant async Slack messages, impromptu Zoom calls, and weekly syncs kept the project moving at high velocity.

I believe in sharing often and early, no matter how uncomfortable that may feel.


Key design solutions

  • Flexible category builder: scalable for large orgs but simple enough for small teams

  • Clean, focused layout: supports dense input while avoiding clutter

  • Accessible by design: follows keyboard, screen reader and contrast best practices

  • Customisation without chaos: teams can tailor scorecards without breaking standardisation

Results

  • Enterprise teams went from 10,000 categories to just 30–40

  • Significant time savings and usability gains across QA teams

  • Internal teams cited it as a model for how to run large redesigns

  • Positioned QA to better support future AI-powered evaluation tools

Learnings

  • A clear vision unlocks momentum, especially in inherited or high-ambiguity projects

  • Complex admin tools need strong guardrails and flexibility in balance

  • Effective leadership means knowing when to scope down and where to push for better outcomes

  • Complex re-works should always ship in beta first. We made the mistake of shipping straight to production, which surfaced technical bugs in the first month and caused some dissatisfaction.

© 2025 · Estonia
