Velocity Dashboard

The Velocity Dashboard provides a single view into your software development lifecycle, helping you and your teams understand bottlenecks in your process. Get a clear view of PR cycle time to spot inefficiencies, compare pull request size against commit activity, and visualize your team’s success rate across the entire review process, all in one place.

Using the Velocity Dashboard

To view the dashboard, click Eng Intelligence > Velocity Dashboard in the main nav.

Adjust the time range

By default, the dashboard displays data from the last 7 days. To change the date range, click Last 7 days and select a new range.

Apply a filter option

You can apply filters for team, owner, repository, user label, and more to narrow down the data you are reviewing. To select filters, click Filter in the upper right corner of a graph.

Drill down

For each metric, click the card to drill down into the data for further analysis. From there, you can segment the data and drill down further to view a list of individual data points.

Cycle time breakdown

Cycle time measures the complete journey of a pull request from first commit to merge into your main branch. Understanding this end-to-end process is crucial for identifying bottlenecks and optimizing your development workflow.

Note: This metric is not supported for Azure DevOps.

The four stages of cycle time:

  • Time to open (development phase): first commit to PR creation

  • Time to first review (waiting period): PR opened to initial reviewer engagement

  • Time to approve (review process): first review to final approval

  • Time to merge (merge process): approval to successful merge
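
If you want to reason about these stages outside the dashboard, the sketch below is a minimal illustration of the same breakdown, assuming you can export five timestamps per PR (first commit, opened, first review, approved, merged) from your own tooling; the field names and example values are hypothetical, not part of the dashboard.

```python
from datetime import datetime, timedelta

def cycle_time_stages(pr: dict) -> dict:
    """Split one PR's cycle time into the four stages described above.

    `pr` is assumed to hold five timestamps exported from your own tooling:
    first_commit, opened, first_review, approved, merged (hypothetical names).
    """
    return {
        "time_to_open": pr["opened"] - pr["first_commit"],
        "time_to_first_review": pr["first_review"] - pr["opened"],
        "time_to_approve": pr["approved"] - pr["first_review"],
        "time_to_merge": pr["merged"] - pr["approved"],
    }

# Example PR that took roughly two days end to end.
example = {
    "first_commit": datetime(2024, 5, 1, 9, 0),
    "opened": datetime(2024, 5, 1, 16, 0),
    "first_review": datetime(2024, 5, 2, 10, 0),
    "approved": datetime(2024, 5, 2, 15, 0),
    "merged": datetime(2024, 5, 3, 9, 30),
}
stages = cycle_time_stages(example)
total = sum(stages.values(), timedelta())
for stage, duration in stages.items():
    # Shows each stage's share of the overall cycle time.
    print(f"{stage}: {duration} ({duration / total:.0%})")
```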

Key insight

Most teams discover that one of these four stages accounts for 50-60% of their overall cycle time, giving them a clear focus area for improving overall velocity.

Cycle time best practices

  • Set SLA targets for each stage (e.g., first review within 1 business day)

  • Implement automated code review assignments to eliminate reviewer uncertainty

  • Use draft PRs for work-in-progress to separate development time from review time

  • Track P95 metrics to identify and address outlier PRs that skew team performance

Visualization options

Choose from multiple statistical views (average, sum, max, min, P95, median) to analyze your data. The default average view provides a balanced perspective, while P95 highlights outliers that might indicate systemic issues.
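
To see why the P95 view surfaces outliers that the average softens, here is a rough illustration over a made-up set of cycle times in hours; the numbers are invented and the percentile method (linear interpolation) is an assumption, not necessarily the one the dashboard uses.

```python
import statistics

# Hypothetical cycle times (hours) for ten merged PRs; the last value is an
# outlier that drags the average up but barely moves the median.
cycle_times = [4, 6, 7, 8, 9, 10, 12, 14, 18, 96]

def p95(values):
    """95th percentile using linear interpolation between closest ranks."""
    ordered = sorted(values)
    rank = 0.95 * (len(ordered) - 1)
    lower = int(rank)
    if lower + 1 >= len(ordered):
        return float(ordered[-1])
    frac = rank - lower
    return ordered[lower] + frac * (ordered[lower + 1] - ordered[lower])

print(f"average: {statistics.mean(cycle_times):.1f} h")   # 18.4 h
print(f"median:  {statistics.median(cycle_times):.1f} h") # 9.5 h
print(f"P95:     {p95(cycle_times):.1f} h")               # 60.9 h
```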

Cycle time x PR size

Pull request size directly impacts development velocity. Larger PRs consistently correlate with longer cycle times across every stage of the development process - from initial review to final merge. This visualization helps you understand how PR size affects efficiency throughout your development lifecycle.

Compare your pull request size to:

  • Time to open

  • Time to first review

  • Time to approve

  • Time to merge

  • Overall cycle time
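
If you want to reproduce this comparison from your own data export, one rough approach is to bucket PRs by lines changed and look at a cycle-time statistic per bucket, as in the sketch below; the data shape, bucket edges, and labels are illustrative assumptions.

```python
from statistics import median

# Hypothetical export: (lines_changed, overall_cycle_time_hours) per merged PR.
prs = [(40, 6), (120, 10), (90, 8), (300, 20), (450, 30),
       (700, 52), (35, 5), (220, 18), (1200, 90), (80, 7)]

# Illustrative size buckets in lines changed.
buckets = [(0, 100, "under 100"), (100, 400, "100-400"),
           (400, 1000, "400-1000"), (1000, float("inf"), "1000+")]

for low, high, label in buckets:
    times = [hours for lines, hours in prs if low <= lines < high]
    if times:
        print(f"{label}: {len(times)} PRs, median cycle time {median(times):.1f} h")
```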

Key insight

Smaller PRs move faster. They're easier to review, less likely to introduce conflicts, and require fewer revision cycles. Teams that maintain smaller, focused PRs typically see 2-3x faster cycle times and higher code quality.

PR size best practices

Aim for PRs under 400 lines of code. Break larger features into smaller, logical chunks that can be reviewed and merged independently.
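
One way to nudge authors toward that limit is a local check that counts the lines changed on a branch before opening a PR; the sketch below uses plain git diff --numstat, and while the 400-line threshold comes from the guidance above, everything else (script structure, base branch name) is an assumption.

```python
import subprocess

# Illustrative guardrail: warn when the current branch's diff against the base
# branch exceeds ~400 changed lines (the threshold suggested above).
MAX_LINES = 400

def changed_lines(base: str = "main") -> int:
    """Count added plus deleted lines between `base` and the current branch."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":  # binary files report "-" for both counts
            total += int(added) + int(deleted)
    return total

if __name__ == "__main__":
    lines = changed_lines()
    if lines > MAX_LINES:
        print(f"This change touches {lines} lines; consider splitting it.")
```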

Visualization options

Outliers are excluded by default. To include them, toggle on the Show outliers option.

PR success rate

PR success rate measures the percentage of pull requests that successfully make it from creation to merge, providing critical insight into development efficiency and process waste. A high success rate indicates focused development work, while a low rate suggests potential issues with planning, scope creep, or unclear requirements.
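
One plausible reading of this metric is merged PRs divided by all PRs that reached a terminal state (merged plus closed without merging); the sketch below assumes that reading and uses made-up counts, and the exact denominator the dashboard uses may differ.

```python
def pr_success_rate(merged: int, closed_without_merge: int) -> float:
    """Share of terminal-state PRs that were merged rather than abandoned."""
    terminal = merged + closed_without_merge
    return merged / terminal if terminal else 0.0

# Hypothetical counts for the selected time range: 42 merged, 8 closed unmerged.
print(f"{pr_success_rate(42, 8):.0%}")  # 84%
```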

Key insights

A sudden drop in success rate often correlates with rushed feature development, unclear requirements, or experimental work that should happen in separate branches. High-performing teams maintain consistent success rates even during periods of increased PR volume.

PR success best practices

  • Target a PR success rate above 80% for optimal efficiency

  • Review closed (unmerged) PRs weekly to identify patterns and root causes

  • Implement clearer definition-of-done criteria before starting development

  • Use draft PRs for experimental work to separate exploration from production-ready code

  • Track success rate alongside cycle time - both metrics together reveal process health

  • Consider feature flags for risky changes instead of abandoning PRs

Visualization options

Switch between viewing open, closed, or merged PRs against success rate.
