Engineering managers are drowning in metrics. Story points, cycle time, lead time, deployment frequency, change failure rate, and the rest of the DORA suite. Somewhere in that list are 3 or 4 numbers that would genuinely help you run a better team. The rest is noise.
Here's how to tell the difference.
Start with the question, not the metric
Most teams pick metrics first and then try to figure out what they mean. That's backwards.
Start with the questions you actually need to answer:
- Is my team getting faster or slower over time?
- Are we making realistic sprint commitments?
- Who is blocked and why?
- Are we on track to hit this sprint goal?
- Is our velocity data reliable enough to base hiring decisions on?
Once you have those questions, the metrics become obvious. You don't need 15 numbers. You need 4 or 5 that directly answer the questions you're asking.
Metrics worth tracking
Velocity. The amount of work your team ships per sprint, measured consistently over time. The key word is "over time." One sprint's velocity is meaningless. Ten sprints' worth of velocity tells you whether your team is growing, plateauing, or struggling.
Track it per person, not just per team. An aggregate velocity that looks healthy can hide one person delivering at 150% of their usual pace and another who's blocked and delivering at 30%.
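If you want to eyeball this yourself, per-person velocity is a small script away. Here's a rough sketch in Python; the export format (sprint, assignee, points) is made up for illustration, so adapt it to whatever your tracker gives you.

```python
from collections import defaultdict

# Hypothetical export: one record per completed ticket.
tickets = [
    {"sprint": "2024-S18", "assignee": "alice", "points": 5},
    {"sprint": "2024-S18", "assignee": "bob", "points": 2},
    {"sprint": "2024-S19", "assignee": "alice", "points": 3},
]

def velocity_by_person(tickets):
    """Sum completed points per (sprint, assignee) so per-person trends stay visible."""
    totals = defaultdict(int)
    for t in tickets:
        totals[(t["sprint"], t["assignee"])] += t["points"]
    return totals

for (sprint, person), points in sorted(velocity_by_person(tickets).items()):
    print(f"{sprint}  {person:<8} {points} pts")
```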
Cycle time. How long a ticket takes from "started" to "done." Short cycle times usually mean work is well-scoped. Long cycle times usually mean the opposite: unclear requirements, too many dependencies, or tickets that are too large.
Aim to understand the shape of your cycle time distribution. A median of 2 days is good. A median of 2 days with a tail out to 14 days tells you something different.
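Getting at the distribution rather than a single average is also cheap. A minimal sketch, assuming you have started/completed timestamps for each ticket:

```python
from datetime import datetime
from statistics import median, quantiles

# Hypothetical export: ISO 8601 started/completed timestamps per ticket.
tickets = [
    {"started": "2024-05-02T09:00:00", "completed": "2024-05-03T16:00:00"},
    {"started": "2024-05-06T10:00:00", "completed": "2024-05-08T12:00:00"},
    {"started": "2024-05-01T10:00:00", "completed": "2024-05-15T11:00:00"},
]

def cycle_time_days(ticket):
    started = datetime.fromisoformat(ticket["started"])
    completed = datetime.fromisoformat(ticket["completed"])
    return (completed - started).total_seconds() / 86400

days = sorted(cycle_time_days(t) for t in tickets)
p90 = quantiles(days, n=10)[-1]  # rough 90th percentile; needs at least a few tickets
print(f"median: {median(days):.1f} days, p90: {p90:.1f} days, max: {days[-1]:.1f} days")
```

The median tells you what a typical ticket looks like; the p90 and max expose the tail.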
Carryover rate. What percentage of tickets planned in a sprint carry over to the next one? Consistently above 20% means your planning is broken: estimates are off, scope is creeping in mid-sprint, or you're over-committing.
This is one of the most actionable metrics because it's directly tied to your planning process.
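The calculation itself is trivial: compare the tickets planned at sprint start with the ones done by sprint end. The ticket IDs below are invented for illustration.

```python
def carryover_rate(planned_ids, completed_ids):
    """Fraction of tickets planned at sprint start that were not done by sprint end."""
    planned = set(planned_ids)
    if not planned:
        return 0.0
    carried = planned - set(completed_ids)
    return len(carried) / len(planned)

# 8 tickets planned, 6 finished -> 25% carryover, above the ~20% warning line.
planned = ["ENG-101", "ENG-102", "ENG-103", "ENG-104",
           "ENG-105", "ENG-106", "ENG-107", "ENG-108"]
done = ["ENG-101", "ENG-102", "ENG-104", "ENG-105", "ENG-107", "ENG-108"]
print(f"carryover: {carryover_rate(planned, done):.0%}")
```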
Blocker time. How long does it take from when a ticket gets blocked to when it gets unblocked? Long blocker times point to systemic issues: unclear ownership of dependencies, slow code reviews, external teams that aren't responsive.
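If your tracker records when a ticket entered and left a blocked state, measuring this is another short script. The export format here is hypothetical:

```python
from datetime import datetime

# Hypothetical export: one row per blocked interval on a ticket (ISO 8601 timestamps).
blocked_intervals = [
    {"ticket": "ENG-210", "blocked_at": "2024-05-06T09:00:00", "unblocked_at": "2024-05-08T14:00:00"},
    {"ticket": "ENG-214", "blocked_at": "2024-05-07T11:00:00", "unblocked_at": "2024-05-07T15:30:00"},
]

def blocked_hours(interval):
    start = datetime.fromisoformat(interval["blocked_at"])
    end = datetime.fromisoformat(interval["unblocked_at"])
    return (end - start).total_seconds() / 3600

worst = max(blocked_intervals, key=blocked_hours)
print(f"longest blocker: {worst['ticket']} at {blocked_hours(worst):.1f} hours")
```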
Metrics that look useful but usually aren't
Raw story points. If different teams use points differently, or if your team's definition of a "3-pointer" has drifted over time, the numbers aren't comparable. Story points are useful as a relative measure within a team, not as an absolute benchmark across teams.
Commit frequency. The number of commits per day tells you almost nothing about team health. Some engineers make 20 tiny commits. Others make 3 large, careful ones. Both can be excellent engineers.
Lines of code. Don't track this. A 10-line bug fix can be more valuable than a 500-line feature. Measuring lines of code creates incentives to write more code, not better code.
DORA metrics for small teams. DORA metrics (deployment frequency, lead time for changes, change failure rate, time to restore service) are valuable for large engineering organizations. For a 5-person team doing 2-week sprints, they add overhead without much signal.
How to present metrics to stakeholders
Engineering managers often face pressure to report metrics upward. A few principles:
Show trends, not snapshots. A single sprint's velocity means nothing. Velocity over 10 sprints tells a story.
Add context. If velocity dropped two sprints ago, explain why. A major refactor, a new engineer ramping up, a critical bug that required all hands. Numbers without context lead to wrong conclusions.
Use metrics to start conversations, not end them. "Cycle time increased 40% this month" should open a discussion, not close one. The metric tells you something happened. It doesn't tell you what to do about it.
Never use metrics to evaluate individual performance in isolation. A low velocity sprint from a senior engineer might mean they spent the week unblocking 3 junior engineers and reviewing 15 PRs. That's not visible in the velocity number.
Building an agile metrics practice for Linear teams
If your team uses Linear, you have the raw data for all of these metrics. The challenge is that Linear doesn't surface them in a way that's easy to use for ongoing decision-making.
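You can pull that raw data yourself through Linear's GraphQL API (https://api.linear.app/graphql) and feed it into scripts like the ones above. Here's a rough sketch; the filter syntax and field names (startedAt, completedAt, estimate) are from memory, so check them against the current API docs before relying on this:

```python
import requests

LINEAR_API = "https://api.linear.app/graphql"
API_KEY = "lin_api_..."  # personal API key; an OAuth token would use "Authorization: Bearer <token>" instead

# Field and filter names below are from memory -- verify against the current
# Linear GraphQL schema before relying on this.
QUERY = """
query CompletedIssues($after: String) {
  issues(first: 100, after: $after, filter: { completedAt: { null: false } }) {
    nodes { identifier estimate startedAt completedAt assignee { name } }
    pageInfo { hasNextPage endCursor }
  }
}
"""

def fetch_completed_issues():
    """Page through all completed issues in the workspace."""
    issues, cursor = [], None
    while True:
        resp = requests.post(
            LINEAR_API,
            json={"query": QUERY, "variables": {"after": cursor}},
            headers={"Authorization": API_KEY},
        )
        resp.raise_for_status()
        page = resp.json()["data"]["issues"]
        issues.extend(page["nodes"])
        if not page["pageInfo"]["hasNextPage"]:
            return issues
        cursor = page["pageInfo"]["endCursor"]
```

That works, but you own the pagination, the edge cases, and keeping the scripts current every sprint.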
SprintIQ is built specifically for this. It connects to your Linear workspace (30 seconds, read-only OAuth) and computes:
- Velocity per team member across all past cycles, with trend charts
- Blocker detection: tickets that have been In Progress too long, surfaced automatically
- Sprint forecast: will you hit your cycle goal based on current pace?
- Carryover tracking across sprints
It's designed for engineering managers who need real answers without spending hours in spreadsheets.
Free to start. No credit card required.