In a previous article, I explored the dangers of unmanaged expectations in software businesses, along with suggestions and tips. One way software teams manage expectations is by providing estimates of engineering effort. Here I’ll examine a specific technique that helps teams estimate with confidence. This technique can be particularly helpful for teams that are new or struggling to get on track with expectations.
Predicting the Future
All software delivery teams are asked to predict the future at some point, typically through software development estimates. Estimates are professional opinions provided by engineering, quality assurance, and release teams to help plan the way forward. An estimate reflects how much work may be required to deliver a new feature, process change, bug fix, or other valuable update.
Typical estimation processes start with a Product Manager, Product Owner, or Business Analyst describing the deliverable and user context to a delivery team, followed by clarifying Q&A. Then the delivery team provides an estimate of the work effort necessary. The team estimate typically comes from group discussion with input from each software, quality assurance, or automation engineer responsible for the deliverable.
Common metrics for software estimates include:
- T-Shirt Sizes: S, M, L, or XL
- Story Points – sometimes restricted to Fibonacci Numbers
- Development Hours or Ideal Days
In practice, estimates are highly contextual and may be difficult to compare across teams. Also, S, M, L, or XL models may be used for rough planning early in a project, and then refined into Story Points later. Changing the form of estimates from T-shirt to points or hours typically merits a new team conversation, since by this point the team may have new information to consider in the revised estimate.
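As an illustration, the refinement from T-shirt sizes to Story Points can be sketched as a simple lookup. The size-to-points ranges below are hypothetical; each team should calibrate its own mapping from experience:

```python
# Hypothetical mapping from rough T-shirt sizes to story-point ranges.
# These ranges are illustrative only; calibrate them to your team's history.
TSHIRT_TO_POINTS = {
    "S": (1, 3),
    "M": (5, 8),
    "L": (13, 21),
    "XL": (34, None),  # XL often signals "break this down before planning"
}

def refine_to_points(tshirt_size: str) -> tuple:
    """Return the story-point range a T-shirt size maps to."""
    return TSHIRT_TO_POINTS[tshirt_size.upper()]
```

A lookup like this is only a starting point for the new team conversation, not a replacement for it; the range reminds everyone that the refined estimate still needs discussion.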
Since the purpose of estimates is to help teams and management prioritize and plan investments, value delivery, and release roadmaps, it is important to estimate well. And to do that, it helps to capture context.
Risks, Assumptions, and Dependencies
A good practice is to explore risks, assumptions, and dependencies (RAD) during the estimation process. These are often unexpressed unless you ask. And while risks, assumptions, and dependencies are often interrelated, using these three different words with your team may help you uncover hidden risks in different ways.
Common assumptions relate to team stability or composition. We might assume our team today will be the team delivering features in the future. However, if a team is likely to change people and skills, and our estimate assumes an unchanged team, then this assumption should be captured.
In software, dependencies can arise around other teams, technologies, or deliverables. For example, the level of work required to deliver a new capability might be lower if we assume the availability of an upcoming web service or use of an underlying platform. If our estimated level of effort assumes the service or platform is available, then we need to capture that dependency.
There also may be risks unrelated to assumptions or dependencies. Experienced managers and team leads keep an eye out for these. If the deliverable being estimated could change based on a likely factor or business decision, it is wise to document the risk. When working with estimates, if a concern is shadowing your mind or your team’s, capture it in context.
Of course, you can make yourself crazy by over-documenting risks. And a mature team working on consistent types of deliverables may find less value in capturing RAD for each deliverable. So keep it simple by focusing on what could reasonably diminish your ability to plan with confidence. For example, you might use a risk probability and impact matrix to help gauge which risks need mitigation. Or explore what works best for you and your team.
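A minimal probability-and-impact scoring sketch might look like the following. The 1–5 scales and the mitigation threshold are assumptions, not a standard; adjust them to your team’s risk appetite:

```python
# Sketch of a probability-impact risk matrix.
# Assumes 1-5 scales for both probability and impact; the threshold of 12
# for "needs mitigation" is an illustrative choice, not a standard.
def risk_score(probability: int, impact: int) -> int:
    """Score a risk as probability times impact (each on a 1-5 scale)."""
    return probability * impact

def needs_mitigation(probability: int, impact: int, threshold: int = 12) -> bool:
    """Flag risks whose score meets or exceeds the mitigation threshold."""
    return risk_score(probability, impact) >= threshold
```

The point of the matrix is triage: a low-probability, low-impact concern gets noted and set aside, while a high-score risk earns a mitigation plan before it can undermine the estimate.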
Can’t I Pad My Estimates to Mitigate Risk?
In organizations that struggle with predictability in their business and delivery, poor practices are often found. For example, people may adjust their estimates upward in order to dull expectations. They might go on record with a larger estimate than what they honestly believe is needed in order to manage personal or team safety.
This can be a hard habit to break. A team that delivers sooner than planned or under budget may be rewarded for beating expectations. By contrast, a team that is seen as underperforming may feel demoralized or unsafe. In these environments, contributors, teams, or managers may silently pad (increase) estimates to reduce their feeling of risk.
Full disclosure: the author, in his youth, padded an estimate here or there as a way to mitigate risks. And in truth, padding estimates or using undisclosed contingency buffers can work for some teams for a period. But if it is done silently, padding estimates becomes a slippery slope towards deception. This can devalue your professional opinion and create mistrust between teams.
For instance, managers or sponsors who believe teams are padding, or who have reporting to show that a team historically mis-estimates, may elect to discount or ignore estimates. They might even build an informal correction model to get by – such as “you double the estimate, then I cut it in half.” These behaviors are often personnel-dependent. So even if a working balance emerges for a time, the model eventually breaks down as staff changes, and everyone suffers. In the end, this bad habit diminishes rather than builds confidence.
Building Confidence in Your Estimates
Having captured risks, assumptions, and dependencies, and avoided silently padding our estimates, we can now estimate with confidence. As a general rule, all estimates come with uncertainty. Capturing that uncertainty provides helpful context later in planning, especially for teams that are struggling with counterproductive risk mitigation habits or who are new and still getting to know each other. Mature teams with strong scrum masters and product owners may not need this extra step.
When a person provides an estimate, they usually feel an internal confidence associated with their opinion. This is the inner dialogue that accompanies their choice of what to say. That inner dialogue might be affected by their feeling of safety in front of their peers and management, or might be due to specific knowledge or personal experience. It might also be affected by a person wanting to protect their team, knowing they will be held accountable to a plan built from their estimate. This gut feeling of confidence, or lack of confidence, is valuable and worth capturing. Too often, however, these insights go unrequested and unexpressed.
A good practice when teams are feeling uneasy, or are still maturing their estimation practices, is to ask team members to provide a confidence level with their estimate and risk context. Since we are asking colleagues to express gut feelings, which may be hard to quantify, simple labels like “high”, “medium”, or “low” confidence can help.
- High confidence suggests the estimate will prove accurate, as good as or better than the team’s average. This is typical for work that is well understood and follows a familiar engineering pattern.
- Medium confidence suggests the estimate will prove reasonably accurate when compared to their norms. This is the “normal” confidence level for most commercial software teams.
- Low confidence suggests the estimate could be significantly off, based on unexplored factors. This is common for work that requires new patterns or isn’t well understood yet. A research spike may be necessary before moving further.
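One lightweight way to keep an estimate together with its confidence label and RAD context is a simple record. The field names and structure here are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field

# Illustrative record pairing an estimate with its confidence level and
# RAD (risks, assumptions, dependencies) notes. Field names are assumptions,
# not a standard schema.
@dataclass
class Estimate:
    story: str
    points: int
    confidence: str           # "high", "medium", or "low"
    rad_notes: list = field(default_factory=list)

# Hypothetical example of a captured estimate with its context.
item = Estimate(
    story="Export report to CSV",
    points=5,
    confidence="medium",
    rad_notes=["Assumes the reporting service API is available"],
)
```

Whether this lives in a spreadsheet, a Jira field, or a data structure like the one above matters less than the habit: the confidence and context travel with the number.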
In organizations that use quantitative estimates to calculate how much scope fits into a given release or development cycle, confidence levels can improve plan accuracy. For instance, you might configure a field in Jira to store a High, Med, Low confidence factor alongside story point estimates. This keeps it handy when looking at quantitative measures, so you don’t infer too much confidence from estimates that don’t have it yet.
Confidence levels are a helpful qualitative tool when working with team leads and management to develop plans, especially if risks haven’t been managed well in the past. For instance, high confidence items might be safe to use in planning with considerations for RAD. These are often the “safe bets” in a plan or backlog.
Medium confidence items tend to be the most common. When there are lots of medium confidence items, the planning team might mitigate overall risk in some form. This could be as simple as including a visible contingency reserve. Or it may involve revisiting the delivery team to work the details, plan research spikes, break down scope, or carry the risk until the Cone of Uncertainty works its magic over time. If you are an experienced PM, you’ve worked with many medium confidence estimates.
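As a sketch, a visible contingency reserve could be sized directly from confidence labels. The reserve factors below are assumptions to be calibrated against your team’s delivery history, and the whole point is that the reserve is explicit rather than hidden inside padded estimates:

```python
# Sketch: size a visible contingency reserve from confidence labels.
# The factors are illustrative assumptions; calibrate them against how your
# team's past estimates at each confidence level actually turned out.
RESERVE_FACTORS = {"high": 0.0, "medium": 0.25, "low": 0.5}

def contingency_reserve(items):
    """items: list of (points, confidence) tuples.
    Returns extra points to reserve, visibly, on top of the raw estimates."""
    return sum(points * RESERVE_FACTORS[confidence]
               for points, confidence in items)
```

Because the reserve is computed and reported openly, the raw estimates stay honest and the planning conversation stays grounded in the team’s stated confidence.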
Low confidence items require additional exploration before they can be accurately planned for delivery. For example, one team may decide that no low confidence items can go into a sprint until they have been researched enough to raise confidence. Another team may be forced to work on a low confidence item ASAP by business needs, and so may choose to prototype or manage expectations and communicate risks proactively instead of reactively and defensively.
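The first team’s rule above could be sketched as a simple planning guard. The tuple structure and capacity model are illustrative assumptions:

```python
# Sketch: keep low-confidence items out of a sprint until they've been
# researched. "estimates" is a list of (name, points, confidence) tuples;
# the structure and the simple fill-to-capacity loop are illustrative.
def sprint_candidates(estimates, capacity):
    """Select items for a sprint, skipping low-confidence work."""
    selected, used = [], 0
    for name, points, confidence in estimates:
        if confidence == "low":
            continue  # needs a research spike before it can be planned
        if used + points <= capacity:
            selected.append(name)
            used += points
    return selected
```

A guard like this makes the team’s policy visible in the planning process itself: low-confidence work is routed to research first rather than quietly absorbed into the sprint.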
What is most important is to use the valuable insights that come with accurate, un-padded estimates and confidence levels. If your team trusts you with their honest candor, honor their trust and insights in your planning.
A word of caution from the school of hard knocks – it may be tempting to try to use confidence levels quantitatively, like a +/- precision measure. That might work for you, though personally, when I need more precise estimates, I’ve had more success by circling back and working the details with my teams. This might mean including research spikes or simply facilitating another round of planning poker with dialogue.
I do not recommend teams increase confidence levels by artificially padding estimates. At first, you may have to remind people to give you their best professional estimate, and not to “help” by padding estimates in order to boost confidence. In a lean software business, unpredictability and over-allocation of budget are bad news. And teams that fall into this trap become less trusted for their judgment over time.
By contrast, thoughtful use of confidence and RAD builds trust. Teams will give you their best input when they can safely communicate their estimate confidence level and assumptions, knowing you will put their insights to good use.
When you are asked to predict and plan for your software, especially with new teams or new initiatives, consider using the tools we’ve discussed. If teams struggle with expectations or safety, capturing estimates with confidence and RAD context into tools like Jira can help. When you use these techniques wisely, you honor your teams’ insights, reduce your risks, and build trust. Most importantly, you can predict the future better with confidence.