
Recognizing Bottlenecks

March 24th, 2009

The term “bottleneck” refers to a point where flow or velocity perceptibly drops. It is metaphorically derived from the flow of water through a narrow-mouthed bottle, where the flow is constrained by the neck. For drinking purposes this is a good thing: it regulates the flow of water from the bottle to the drinker, preventing wasteful spillage. Bottlenecks, though constraining, can be either good or bad, depending on what is desired from the system.

The Scrum framework contains a product backlog, which is essentially a queue (stack ranked, one after the other) of product backlog items (PBIs) that the Scrum team has to complete. Queues/backlogs don’t feel good. Think about the last time you were in a rush and had to go to a bank, where you stood in a queue behind fifteen people before the teller addressed your needs. Or the time you had to queue to board your daily commute bus, uncertain whether you would get on before the driver declared “it’s full” and rode on. There is the frustration of waiting in line and the uncertainty of whether you will make it. If PBIs could feel, I guess they would empathize with people stuck in queues.

IMO, an inherent assumption within complex software product development efforts is that

the business will always have more concepts or requirements than the team has capacity to transform into potentially shippable product increments.

If this assumption holds true for you, then your product backlog expresses the aggregate effect of the bottlenecks that exist downstream in your system. In other words, if there were no bottlenecks, or the team had infinite capacity, then all the PBIs would be transformed into potentially shippable product increments within a sprint. The velocity metric represents this constrained capacity of a Scrum team. In Scrum terms these bottlenecks and other blockers are commonly referred to as impediments. Impediments being a broad, generic term, I’m focusing on bottlenecks: impediments that specifically cause a reduction in flow at a systemic level over multiple sprints. (Nothing too revolutionary!)

Here are a few such bottlenecks:

1. Ill-defined product backlog items

Top-priority product backlog items are not defined well enough for the team to fill at least their next sprint and start working. Too often this results in lengthy sprint planning meetings and delays of days before a Scrum team makes a sprint commitment and gets going. In such cases the team spends the first few days of the sprint analyzing requirements, holding design sessions, etc. before making a sprint commitment. Sprints then go in fits and starts, with a significant gap in software development effort between the end of the previous sprint and the start of the new one. Here the bottleneck shows up as gaps between sprint end and start dates.

2. No product backlog items

In most cases, this situation does not invalidate the assumption that the business has more work than the team can do in a sprint. In fact, too often the business has a pressing need to get many features out the door. A flood of information, lack of agreement, and a “have to get it right the first time” mindset have a paralyzing effect, boxing the business into a state of limbo where PBIs are not defined and the Scrum team is left out to dry. The bottleneck in this case is usually upstream of the Scrum development team, causing either an abrupt end to the sprint rhythm or a false start with a frequently extended ‘sprint 0’.

3. Not-Done product backlog items within a sprint

It is common understanding that PBIs within a sprint are either Done or Not Done, as measured against their definition of done. PBIs that get Done count toward the team’s sprint velocity; the rest don’t. Teams that end their sprints with some or many Not-Done PBIs find that their ability to pull in new PBIs next sprint is bottlenecked. Following good Scrum mechanics, these teams present Not-Done PBIs to the Product Owner and estimate the remaining work for prioritization in the product backlog. In my observation, the most likely outcome is that the Product Owner asks for the Not-Done PBIs to be completed in the following sprint. Effectively, the team carries Not-Done PBIs forward from the last sprint into the next. In cases like these it may feel as though there is a smooth flow from concept to realization, but that isn’t true. Consider the worst case: no new PBIs are pulled from the product backlog, and the team spends the next sprint finishing Not-Done work from the previous one. In that extreme it is easy to call out the bottleneck; real teams fall along the continuum between getting all committed PBIs Done and getting none of them Done. It takes an experienced Scrum team and/or ScrumMaster to recognize this pattern while it is happening.

4. Insufficient infrastructure

Lack of staging environments, insufficient QA infrastructure, and/or missing production-ready environments stops the flow of developed features at some point before those features can be deemed production ready or ‘potentially shippable’. These features pile up and aggregate until a sprint or two before the release date. Then the teams make a major ‘push’ to get all of the developed functionality out the door.

These last sprints are often called ‘stabilizing sprints’. And I have to say, I will start liking the term ‘stabilizing sprint’ on the condition that all of the previous sprints are called destabilizing sprints. Each of the previous sprints was destabilizing the product increment unpredictably. Sadly for me, most people interpret the term ‘stabilizing sprints’ as a good thing.

The tricky thing with all the hard stuff that could not be done within regular sprints, like performance testing and regression testing, is that the hard stuff does not get simpler if it is left for the last sprint. It gets even harder, causing a snowball effect. Take regression testing, for example: if regression testing was not done in previous sprints, then in the last stabilizing sprint a lot of regression bugs can potentially show up. If these bugs cannot be fixed in that last stabilizing sprint, they will then fuel the addition of PBIs to the product backlog, leading to either a lower-quality product release or a delayed release date. In either case, there is a lack of objective visibility into both quality and a predictable release date. This bottleneck is obvious to everyone, for features pile up every sprint waiting to be production ready; its exponential negative impact, however, is still underappreciated.


Velocity

November 24th, 2008

Def: Velocity is the amount of product backlog that a team can fully implement, through Product Owner acceptance, within a given sprint.

The amount of product backlog in the definition above is often expressed in “story points” or “ideal days”. “Fully implement” implies that the product increment built during the sprint is accepted by the Product Owner and is potentially shippable, or at least meets the definition of done. Also, velocity measurements are made only at the end of every sprint.
Track Record
The purpose of taking velocity measurements is to capture the team-system’s track record at translating product backlog items into acceptable working software. This track record is typically expressed as a total number of story points (velocity). Traditionally, projects have been estimated prior to the start of the project. Estimates, at their best, are educated guesses. Guesses, nonetheless, based on assumptions that need validation from reality. In traditional project management, this aspect of validating the assumptions made at the start of the project is severely lacking during project execution. Project management is then reduced to protecting the planned estimates as opposed to achieving desired goals. Iterative delivery of software every sprint, with velocity measured every sprint, negates the need to make large inaccurate estimates for the entire project. Instead, small inaccurate estimates are made for each sprint. And that is a good thing! Velocity works with the fact that estimates are inherently inaccurate (the cone of uncertainty).
Velocity acts as a correction factor.
This is how: say a team estimates (makes an educated guess) that it can fully implement 40 story points of work in a sprint. If at the end of the sprint only 20 story points are fully implemented and accepted, then the team’s velocity for that sprint is 20. Velocity is informed by reality. Going forward, in the next sprint, if the team is consistent with its estimating technique, it can reasonably expect to complete about 20 points of work. Disciplined velocity measurement provides the corrective ability to re-estimate the amount of work that can reasonably be expected to be done in the next sprint, allowing for reliable commitments for a given sprint.
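The correction loop above can be sketched in a few lines of code. This is a minimal, hypothetical sketch (the function names and numbers are my own illustration, not part of any Scrum tool): velocity counts only accepted points, and the next sprint’s plan is capped by the last measured velocity.

```python
# Minimal sketch of velocity as a correction factor.
# All names and figures here are illustrative assumptions.

def measured_velocity(committed_points, accepted_points):
    """Velocity counts only PBIs fully accepted by the Product Owner."""
    return accepted_points  # committed-but-not-done points do not count

def next_sprint_plan(last_velocity, candidate_pbis):
    """Pull stack-ranked PBIs (top first) until last measured velocity is reached."""
    plan, total = [], 0
    for points in candidate_pbis:
        if total + points > last_velocity:
            break
        plan.append(points)
        total += points
    return plan, total

velocity = measured_velocity(committed_points=40, accepted_points=20)
plan, total = next_sprint_plan(velocity, [8, 5, 5, 3, 3, 2, 1])
print(velocity, plan, total)  # → 20 [8, 5, 5] 18
```

Note the team stops pulling at 18 points rather than squeezing in a 3-point story past the 20-point cap; the team’s own judgment (see the next section) decides whether to stretch.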
Velocity and commitment.
It is important to note that velocity does not imply commitment. Most of us understand this, yet we behave at odds with our understanding. Too often I have observed product owners and other managers demand that a team commit to 20 points of work this sprint because its velocity last sprint, or its average velocity, was 20 points. Velocity is a tool for making reliable commitments, not a substitute for the team’s judgment in making those commitments. Velocity does not imply commitment for the upcoming sprint, and it definitely does not imply commitment over the next several sprints.
Peering into the future:
The future cannot be predicted. However, one can argue that a project release date based on velocity measurements is many times more probable to hold than a date arrived at from purely educated guesswork. I’m not aware of a scientific study that proves my assertion, but I’m confident that the probability of my assertion being true is greater than the probability of it being wrong.
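A common way to peer into the future with velocity is to divide the remaining backlog by the average of recent measured velocities. A hedged sketch (the figures and function name are illustrative assumptions, not a prescribed formula):

```python
import math

# Sketch: forecast sprints remaining from measured velocity.
# Backlog size and velocity history are illustrative assumptions.

def sprints_remaining(backlog_points, recent_velocities):
    """Forecast using the average of recent measured velocities."""
    avg = sum(recent_velocities) / len(recent_velocities)
    return math.ceil(backlog_points / avg)

# 100 points of backlog; last three sprints measured 18, 22, 20 points.
print(sprints_remaining(100, [18, 22, 20]))  # → 5
```

The forecast is only as reliable as the velocity measurements feeding it, which is exactly the point of the next section.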
Velocity directly depends on:
Reliable velocity measurements are based on a consistent sprint length, the same team members, a similar product domain, similar product technology, and consistent relative estimating. These are the direct cause-and-effect links with velocity. Changes to any of these factors make velocity unreliable. There are numerous other, indirect factors that affect velocity; understanding them requires understanding the relation between velocity and the team.
Velocity and team:
The most common misconception is that velocity is an attribute of a team. This is understandable, since changes to team members directly impact velocity. This cause-and-effect link is frequently yanked: team members are changed, velocity is impacted, and our belief that velocity is an attribute of the team is reinforced. In fact, velocity is an attribute of the system, which includes both the team and the organizational environment surrounding it. One example of organizational environment impacting velocity is the dramatic increase in velocity observed in teams after they are collocated. There are various other organizational factors that affect the velocity of a team system. Organizational culture and management’s response to team impediments are the biggest contributors to velocity improvements.
Velocity and team productivity:

Velocity is often misused to express team productivity. There are two common ways of misusing velocity this way:

  • Misuse 1. Velocity used to comparatively express the productivity of one team over another. This is when Team A is deemed more productive than Team B because Team A’s velocity in story points is greater than Team B’s.
  • Misuse 2. Velocity from previous sprints used to express relative gains in productivity. In other words, if a team’s velocity in sprint 1 was 10 and in sprint 4 it is 20, it is incorrect to state that the team doubled its productivity in sprint 4. Let me share an example from a real team. This team had a consistent velocity in the range of 70-80 story points. In one of their sprints the team created an automation script that made testing data inconsistencies a breeze. Stories that were initially estimated at 8 were now being estimated at 2. The level of effort involved in these stories decreased dramatically, and so did their relative estimates. Their total velocity, however, remained at 70-80 story points. You will agree with me that the automation script improved team productivity; however, it did not change overall velocity. This is one example of why velocity is a bad measure of productivity. For an excellent article on productivity, see Martin Fowler’s article here.
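The re-estimation effect in Misuse 2 is easy to see with numbers. In this sketch (the story counts are my own illustrative assumptions, chosen to match the 70-80 point range above), the automation script shrinks each story’s estimate from 8 to 2, so the team clears four times as many stories per sprint, yet the story-point total (velocity) is unchanged:

```python
# Sketch of the re-estimation example: productivity rises, velocity doesn't.
# Story counts are illustrative assumptions.

before = [8] * 9    # 9 similar stories estimated at 8 points each
after = [2] * 36    # same sprint effort now clears 36 stories at 2 points

print(sum(before), sum(after))        # → 72 72  (velocity unchanged)
print(len(before), len(after))        # → 9 36   (4x the stories delivered)
```

Velocity measures the team-system’s calibrated capacity in its own relative units, not output per unit of effort, which is why it cannot serve as a productivity metric.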
