“Velocity” tells you last sprint’s weather when you need tomorrow’s forecast. We share the six leading indicators our Micro-GCC squads feed into SteadCAST to flag scope creep, burnout, tech-debt drift, and pipeline drag up to two sprints before they explode. Copy-paste JQL / SQL queries included, plus a Grafana dashboard JSON you can import in five minutes.
Velocity answers one question: “How many story-points did we ship?”
Great for history, useless for the future, because velocity can be high while the team is unhealthy: points inflate, risky work piles up, and rework hides in the next sprint.
Predictability demands forward-looking signals—leading indicators that move before deadlines slip.
| # | Indicator | Why It Predicts Failure | Target |
|---|-----------|-------------------------|--------|
| 1 | Risk-High WIP % | Too many high-risk stories → re-work spike | ≤ 25 % of sprint cards |
| 2 | Time-to-First-Review (TTFR) | Review wait ⟹ merge pile-up, QA squeeze | < 2 h median |
| 3 | Build Time Delta | CI drifting ⟹ less iteration, dev frustration | ≤ 10 % MoM increase |
| 4 | SBOM Size Delta | Growing dep tree → bigger attack surface, perf hit | ≤ 5 % per sprint |
| 5 | Code Re-open Rate | Stories re-opened in sprint ⟹ spec or quality gap | < 2 re-opens/sprint |
| 6 | Unplanned Work % | Prod bugs & urgent asks hijack sprint | ≤ 10 % of capacity |
SteadCAST ingests these via Jira API, GitHub Actions, and CI logs—alerting amber/red in Slack every Friday.
Process
JQL:

```jql
project = ABC AND sprint in openSprints() AND labels = risk-high
```
Formula = (# risk-high cards in sprint) ÷ (total cards).
SteadCAST threshold: amber at 25 %, red at 35 %.
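As a minimal sketch, the threshold check could look like this (the function name is illustrative; the two counts are assumed to come from the JQL query above):

```python
# Sketch: classify Risk-High WIP % as green/amber/red using the
# amber-at-25% / red-at-35% thresholds stated above.
def risk_high_wip_status(risk_high_cards: int, total_cards: int) -> tuple[float, str]:
    """Return (percentage, status) for the sprint's risk-high card share."""
    if total_cards == 0:
        return 0.0, "green"
    pct = 100.0 * risk_high_cards / total_cards
    if pct >= 35:
        return pct, "red"
    if pct >= 25:
        return pct, "amber"
    return pct, "green"
```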
GitHub GraphQL:
```graphql
{
  repository(name: "app", owner: "org") {
    pullRequests(last: 20) {
      edges {
        node {
          createdAt
          reviews(first: 1) { edges { node { createdAt } } }
        }
      }
    }
  }
}
```
Compute the median Δ between PR creation and first review. Poll every hour; ping #dev-ops if the median exceeds 2 h.
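A minimal sketch of the median computation, assuming the GraphQL response above has been flattened into (createdAt, firstReviewAt) ISO-8601 pairs, with unreviewed PRs already excluded:

```python
from datetime import datetime
from statistics import median

# Sketch: median time-to-first-review in hours from (created, reviewed)
# ISO-8601 timestamp pairs. The flattening of the GraphQL response into
# these pairs is assumed to happen upstream.
def median_ttfr_hours(prs: list[tuple[str, str]]) -> float:
    deltas = []
    for created, reviewed in prs:
        t0 = datetime.fromisoformat(created.replace("Z", "+00:00"))
        t1 = datetime.fromisoformat(reviewed.replace("Z", "+00:00"))
        deltas.append((t1 - t0).total_seconds() / 3600)
    return median(deltas)
```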
CI build logs are uploaded to S3, and an Athena query compares the week-over-week average duration:
```sql
WITH w1 AS (SELECT avg(duration) d FROM builds WHERE week = current_week - 1),
     w2 AS (SELECT avg(duration) d FROM builds WHERE week = current_week)
SELECT (w2.d - w1.d) / w1.d AS delta
FROM w1, w2
```
Amber at 10 %, red at 20 %.
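A sketch of the downstream check on the query's `delta`, using the amber-10 % / red-20 % thresholds (function name is illustrative):

```python
# Sketch: classify the week-over-week build-time delta returned by the
# Athena query above against the amber-10% / red-20% thresholds.
def build_time_status(prev_avg: float, curr_avg: float) -> str:
    delta = (curr_avg - prev_avg) / prev_avg
    if delta >= 0.20:
        return "red"
    if delta >= 0.10:
        return "amber"
    return "green"
```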
SBOMs are stored as CycloneDX JSON keyed by build hash. A Lambda compares the component count against the last green build and pushes the metric to CloudWatch.
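The comparison step of that Lambda might be sketched as follows (the S3 fetch and CloudWatch push are omitted; the `components` key is the standard CycloneDX array):

```python
# Sketch of the Lambda comparison step: percentage change in CycloneDX
# component count between the current build and the last green build.
# Fetching both SBOMs from S3 by build hash is assumed to happen upstream.
def sbom_component_delta(current_sbom: dict, baseline_sbom: dict) -> float:
    cur = len(current_sbom.get("components", []))
    base = len(baseline_sbom.get("components", []))
    if base == 0:
        return 0.0
    return 100.0 * (cur - base) / base
```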
The Jira workflow includes a “Re-opened” status. Query last sprint:
JQL:

```jql
status CHANGED TO "Re-opened" DURING (startOfSprint(), endOfSprint())
```
Tag hot-fixes and BAU tasks as “unplanned.” Capacity = story points committed.
SteadCAST shows bar chart: planned vs. unplanned burndown.
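A minimal sketch of the percentage feeding that chart, assuming each issue dict carries its story points and labels (field names are illustrative):

```python
# Sketch: unplanned-work share of committed sprint capacity.
# Issues tagged "unplanned" (hot-fixes, BAU) count against the ≤10% target.
def unplanned_pct(issues: list[dict], committed_sp: float) -> float:
    unplanned = sum(i["sp"] for i in issues if "unplanned" in i.get("labels", []))
    return 100.0 * unplanned / committed_sp if committed_sp else 0.0
```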
(Truncated for brevity; the full file is available in the resource center.)
```json
{
  "panels": [
    {"title": "Risk High WIP %", "type": "gauge", "targets": [{"expr": "risk_high_wip_percent"}]},
    {"title": "TTFR Median (h)", "type": "timeseries", "targets": [{"expr": "ttfr_seconds/3600"}]},
    {"title": "Build Time Δ", "type": "stat", "targets": [{"expr": "build_time_delta"}]}
  ]
}
```
Import → set Prometheus data-source → watch live metrics.
Before the indicators: 34 % sprint rollover and two hot-fix Fridays per month.
After 3 sprints:
| Metric | Baseline | 3 Sprints Later |
|--------|----------|-----------------|
| Risk-High WIP % | 40 % | 18 % |
| TTFR | 5.3 h | 1.7 h |
| Build Time | +18 % MoM | −2 % |
| Hot-fixes | 2 / mo | 0 |
Velocity dropped 5 % (fewer “easy points”), yet predictability soared: zero missed releases in the next six months.
| Pitfall | Fix |
|---------|-----|
| Noisy alerts | Amber → Slack only; red → PR block; silence alerts during retro. |
| Label drift (“risk-high” missing) | Jira automation: flag stories ≥ 8 SP as risk-high by default. |
| Developers gaming metrics | Make the dashboard public; add a dev-voted metric of the month. |
| Over-focusing on one indicator | Dashboard shows a composite, weighted “Predictability Score.” |
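A composite score like that could be computed along these lines; the weights and the 0-1 normalization (1 = on target) are purely illustrative, not SteadCAST's actual formula:

```python
# Sketch: weighted composite "Predictability Score" (0-100) over the six
# indicators. Each input is assumed pre-normalized to 0-1, where 1 means
# "fully on target". Weights below are illustrative and sum to 1.0.
WEIGHTS = {
    "risk_high_wip": 0.25, "ttfr": 0.20, "build_delta": 0.15,
    "sbom_delta": 0.10, "reopen_rate": 0.15, "unplanned": 0.15,
}

def predictability_score(normalized: dict[str, float]) -> float:
    return round(100 * sum(WEIGHTS[k] * normalized.get(k, 0.0) for k in WEIGHTS), 1)
```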
| Sprint | Add Indicator |
|--------|---------------|
| 1 | TTFR & Build Time Δ |
| 2 | Risk-High WIP % & Code Re-open Rate |
| 3 | SBOM Size Δ |
| 4 | Unplanned Work % & composite score |
Hold a “Predictability Retro” at month-end; aim to resolve one amber metric per retro.