The Truth, Misconceptions & FAQs About DORA Metrics


What’s different about 2024 is that everyone is talking about DORA metrics.

Everyone's tracking them, shouting about their improvements, or stressing about hitting targets.

But as with everything else amidst the hype, there's confusion, misconceptions, and some questions nobody seems to have a straight answer for.

Well, we’ve got your back.

We’re going to look at a few common misconceptions and then answer some frequently asked questions about DORA.

We also have an intro guide to DORA metrics that you might want to go through before this one!

Myth #1: DORA Metrics are the Ultimate Measure of Success

Hold your horses, engineering managers!

While DORA (Deployment Frequency, Lead Time for Changes, Change Failure Rate, Mean Time to Restore) offers valuable insights, it's not a magical scoreboard determining if your team is winning.  

What about code quality, user experience, or one of the most overlooked factors – team morale?

Focusing solely on DORA can lead to risky shortcuts and burnout.

Myth #2: Higher Deployment Frequency = Better

The "ship it faster, ship it often" mantra is tempting, but it's a recipe for disaster if quality suffers.

Frequent deployments filled with bugs or half-baked features frustrate users more than slightly slower rollouts that deliver real value.  

DORA metrics need to be considered in the context of customer impact, not just velocity for its own sake.

Here’s an example: suppose we cut back the test suite to boost deployment frequency. Deployments ship faster, but more bugs slip through to users. You see how that can be dangerous, right?

Well, I hope you do!

Myth #3: There's a "Perfect" DORA Benchmark

There are DORA benchmarks that were recently published and we’ve seen them being referred to everywhere. 

In fact, you can bet we’ll cover them on our own blog too.

BUT…

What works for one company might be disastrous for another.

Industry, product maturity, team size, product roadmap & budget all play a role.

Instead of chasing someone else's number, focus on continuous improvement tailored to your own context.

Myth #4: Change Failure Rate Should Always Be Zero

We are all competitive people in this industry, so striving for zero failures is admirable, but unrealistic.

In complex systems, some failures are inevitable.

Obsessing over this metric can stifle innovation and lead to a fear of experimentation.

Focus on reducing the impact of failures through a great delivery process and strong incident response.

Becoming the terminator of all feature/product failures won’t work; it didn’t work for Arnold Schwarzenegger, and it won’t work for us.

Myth #5: DORA is Just for Engineering Teams

DORA's true power lies in alignment across the development pipeline. 

Metrics like lead time show us bottlenecks throughout the entire development process, not just within the coding phase.

Bringing in product, design, ops, and even leadership aligns everyone on what "delivery" truly means, leading to substantially broader impact than siloed efforts.

FAQs: The Questions about DORA metrics Everyone's Asking (and Some Real Answers)

  1. "My DORA metrics are awful.  Help!"  Don't panic.  Start by identifying the root cause.  Are your processes overly complex?  Do you lack automated testing?  Is the team spread too thin? Targeting the source of the problem is the solution.
  2. "DORA seems to conflict with reliability.  What’s up?"  Balance is important.  Speed and stability pull against each other, so expect trade-offs. Can you accept slightly slower lead times for a lower change failure rate? This builds a culture of thoughtful development, not just blind speed.
  3. "We're improving DORA, but feeling burnt out.  What's wrong?"  DORA metrics won't fix a broken team culture. If constant pressure is the usual, you're not building sustainably.
  4. "How do we track DORA if our tools are a mess?"  You don't need fancy tools, but some form of consistent data collection is essential. Middleware can help you with that! Even a well-maintained spreadsheet can provide baseline insights. Focus on what you can measure before overhauling everything.
  5. "Leadership only cares about deployment frequency. How do I change that?"  Show them the cost of bad DORA. Tie frequent production fires to customer churn, delayed feature launches to lost business. Make it about business outcomes, not just tech stats. Every company builds software to run a business, and that framing is what matters to leadership.
  6. "DORA feels too high-level for day-to-day work."  That's true! Break it down. Is your team's change failure rate the main bottleneck? Then let the team focus on quality initiatives.
  7. "When is DORA just not the right focus?"  Small internal tools teams, or those heavily focused on experimentation, may be better off skipping DORA in the short term. DORA works great when delivering reliable software for the market is the priority.

TL;DR

DORA metrics are a powerful tool, but like any tool, they can be misused.

We must pair them with common sense, a focus on impact, and, most importantly, the human element.
