An Engineering Leader's Guide to "Measuring What Matters"

Imagine your team burning the midnight oil, fueled by coding passion. But hold on: the roadmap is still delayed, and tasks keep bleeding into the next sprint. Sound familiar? Missed deadlines lead to frustrated customers, lost revenue, and a dip in team morale.

This article is designed for engineering leaders like you who want to optimize their software delivery process, sprint by sprint. It walks through a data-driven approach to identifying hidden bottlenecks and inefficiencies. Let’s ensure your team consistently hits its targets and thrives.

Success — smoothly delivered work on time, every time

It’s essential to establish why you should spend the next five minutes reading this article. The outcomes of this exercise are:

  • predictable delivery of your team’s work,
  • better planning of your roadmap, organizational resources, and staffing, and
  • more time for strategy, freed up from unnecessary operational hassle.

In short, your team will have:

  • High say-do ratio
  • Work flowing like water

First speed bump — what to measure?

You’re already knee-deep in standups, planning, and burndown charts. But to uncover hidden friction, we need a broader view. Here’s what matters:

  • Committed vs. Delivered: How much of what you planned actually ships? Aim high (100%!), but it’s okay to miss; analyze the “why” and relentlessly improve next time. (Refer to the image below, which divides the sprint’s tickets by their genesis (planned or ad-hoc) and their delivery (successful or unsuccessful).)
Retrospective grid. Source: MiddlewareHQ

Friction often eats into this target:

  • Ad-hoc Tasks: Buffer time and reject out-of-scope requests.
  • Underestimates: Work with developers to minimize surprises.

Analyzing multiple sprints over time surfaces these hidden issues, empowering you to continuously improve your team’s delivery.
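The committed-vs.-delivered ratio and the ad-hoc share above are simple to compute once tickets carry a size, a "planned" flag, and a delivery status. Here is a minimal sketch; the ticket fields (`points`, `planned`, `delivered`) are illustrative, not any particular tracker's schema:

```python
# Hypothetical ticket records; adapt the field names to your own tracker's export.
def say_do_ratio(tickets):
    """Fraction of committed (planned) story points that actually shipped."""
    committed = sum(t["points"] for t in tickets if t["planned"])
    delivered = sum(t["points"] for t in tickets if t["planned"] and t["delivered"])
    return delivered / committed if committed else 0.0

def adhoc_share(tickets):
    """Fraction of delivered points that came from unplanned (ad-hoc) work."""
    total = sum(t["points"] for t in tickets if t["delivered"])
    adhoc = sum(t["points"] for t in tickets if t["delivered"] and not t["planned"])
    return adhoc / total if total else 0.0

sprint = [
    {"points": 5, "planned": True,  "delivered": True},
    {"points": 3, "planned": True,  "delivered": False},  # slipped to next sprint
    {"points": 2, "planned": False, "delivered": True},   # ad-hoc request
]
print(say_do_ratio(sprint))  # 5 of 8 committed points shipped -> 0.625
print(adhoc_share(sprint))   # 2 of 7 delivered points were ad-hoc
```

Running this over several sprints (rather than one) is what exposes the recurring friction, such as a consistently high ad-hoc share.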

The trick — Make it uniform

Imagine trying to compare apples and oranges. That’s what happens when you analyze work with different units. To avoid this, we need a standardized unit of measurement.

Choose your unit: While I recommend “story points” or “T-shirt sizes” (S, M, L, XL), any consistent system works. Remember, it’s about effort, not actual time spent.

T-Shirt Sizing Example:

  • Small (S): 1 day of effort
  • Medium (M): 2 Small tasks (roughly 2 days)
  • Large (L): 2 Medium tasks (roughly 4 days)
  • Extra Large (XL): 2 Large tasks (roughly 8 days)

The Estimation Trap: Avoid overly optimistic “large” estimates. Break down complex tasks into smaller, “S” sized sub-tasks for more accurate estimation.

By the end, all your tasks will be measured in the same unit, allowing for meaningful comparisons and data analysis.
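The t-shirt scale above doubles at each step, so converting every task to a single unit is a one-line lookup. A minimal sketch, assuming the S = 1 day baseline from the example (the baseline itself is arbitrary; only consistency matters):

```python
# Doubling scale from the t-shirt sizing example; "S" = 1 day is an assumption.
SIZE_TO_DAYS = {"S": 1, "M": 2, "L": 4, "XL": 8}

def sprint_effort(sizes):
    """Total effort for a sprint's tasks, normalized to uniform 'S-days'."""
    return sum(SIZE_TO_DAYS[s] for s in sizes)

print(sprint_effort(["S", "M", "M", "L"]))  # 1 + 2 + 2 + 4 = 9
```

With every task expressed in the same unit, sprint-over-sprint comparisons (and the capacity math in the next section) become straightforward arithmetic.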

Magic — Finding the bottleneck

Imagine your software delivery process as a conveyor belt. To identify bottlenecks, we need to map it out.

Let’s break it down: We’ll use a simple sequence: Plan -> Develop -> Test -> Release

For each stage, consider these factors:

  • Input: The amount of work entering the stage (e.g., story points).
  • Capacity: The team’s ability to process that work (e.g., story points per sprint).
  • Output: The amount of work completed and delivered to the next stage.
Work flowing through stage

By analyzing these metrics, we can identify potential bottlenecks:

  • Input > Capacity: More work is coming in than the stage can handle. This creates a bottleneck on the left.
  • Output < Input: Less work leaves the stage than entered it, suggesting underestimation or rework inside the stage.
  • Input < Capacity: The stage isn’t working at full capacity, which could indicate a bottleneck in a previous stage or excess capacity in this one.

Example: Testing Stage Bottleneck

Let’s say your QA can handle 20 story points per sprint (2 points per day, 10 working days). Here are three scenarios:

Scenario 1: Bottleneck Detected

  • Input: 25 story points
  • Output: 20 story points
  • Analysis: Input exceeds capacity, creating a bottleneck. The Test stage can’t handle the workload, impacting delivery. Consider adding more capacity to this stage to unlock your team’s delivery potential!

Scenario 2: Underestimation

  • Input: 20 story points
  • Output: 15 story points
  • Analysis: Input equals capacity, but output falls short. This suggests underestimation during testing.

Scenario 3: Bottleneck or Excess Capacity

  • Input: 15 story points
  • Output: 15 story points
  • Analysis: Input and output match, but capacity isn’t fully utilized. There might be a bottleneck before testing or potentially excess capacity in this stage.

This analysis helps identify where your process is slowing down, allowing you to focus improvement efforts on the most impactful areas.
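The three checks above can be sketched as a small classifier. This is a hedged illustration of the input/capacity/output rules, not a tool's real API; all names are made up:

```python
def diagnose_stage(input_pts, capacity_pts, output_pts):
    """Classify one pipeline stage using input, capacity, and output (story points)."""
    if input_pts > capacity_pts:
        return "bottleneck here: input exceeds capacity"
    if output_pts < input_pts:
        return "underestimation or rework: output falls short of input"
    if input_pts < capacity_pts:
        return "upstream bottleneck or excess capacity here"
    return "healthy: stage is fully utilized"

# The three testing-stage scenarios, with QA capacity of 20 points per sprint:
print(diagnose_stage(25, 20, 20))  # Scenario 1: bottleneck detected
print(diagnose_stage(20, 20, 15))  # Scenario 2: underestimation
print(diagnose_stage(15, 20, 15))  # Scenario 3: upstream bottleneck or excess capacity
```

Running this across every stage (Plan, Develop, Test, Release) for each sprint points you at the stage most worth investigating first.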

Smart work — Use a tool to assist you in this process

As a leader, your time is precious. Focus on the high-value activities that drive your team forward:

  • Technical Roadmap Decisions: Guide the direction of your software development.
  • Coaching and Mentoring: Empower your team to excel.

Delegate the Reporting: Don’t get bogged down in manual data analysis. Tools like MiddlewareHQ can streamline the process, helping you:

  • Effortlessly Spot Bottlenecks: It automatically pulls your sprint data and presents actionable insights, such as sprint flow, so you can identify issues quickly and easily.
Sample Sprint Flow. Source: MiddlewareHQ
  • Close the Feedback Loop: As a leader, once you spot any friction in your process, you’ll likely do experiments to improve productivity. MiddlewareHQ helps you track progress and measure the impact of your improvements.
Improving trend of completed tickets for a team. Source: MiddlewareHQ

Next steps?

What does your gut say about the bottleneck in your delivery?

Wait, that was a trick question. Don’t rely on your gut 😄 — use data to make the right decisions!

Do share your learnings in the comments when you do this exercise for your team.

PS: Getting started with MiddlewareHQ is as easy as plugging it into your sprints in a few clicks! So measure, and focus on, what matters: leading your team to success.
