Training Schedule 2011

Happy New Year! We are looking forward to a great 2011. We have a strong lineup of training planned, including our CSM and CSPO offerings, AgileEVM training, and some new offerings.

New Training Offerings

In addition to our current Certified ScrumMaster (CSM) and Certified Scrum Product Owner (CSPO) courses, we will also be offering training for:

  • Agile Portfolio and Release Management using AgileEVM
  • Managing Software Debt Workshop (based on our book)
  • Online training and micro-consulting

Although we have been teaching this material unofficially for some time, these are new official offerings, and we are looking for companies and individuals who are interested in taking these courses at a discount. Please let us know if you are interested in test-driving any of these offerings.

CSM and CSPO Training

Let's not forget our CSM and CSPO course listings. You can find a full list of our classes on the Scrum Alliance site and on our web site. We will be holding CSM and CSPO classes back-to-back in San Francisco on the following dates:

  • February 15-18
  • March 22-25
  • April 26-29
  • June 14-17
  • August 23-26

Of course, our CSPO course will include introductory training on the use of Innovation Games® to gather data from your customers and stakeholders. We feel strongly that a healthy relationship with your user community and stakeholders is essential to delivering great products.

AgileEVM Training

We have also been working with our customers over the past six months on enhancing and improving our own product, AgileEVM. You can take a tour of AgileEVM on our training video page. Please register for a free account today if you don’t already have one. You can manage up to two active releases with a free account, so please try it out and let us know what you think via AgileEVM Support.

We are looking forward to a great year in 2011. Please let us know if you are interested in our training or consulting services at Sterling Barton.

Our Book is Available: Managing Software Debt - Building for Inevitable Change

I am quite happy that the book that took much of my time over the past couple of years has finally come out. Thank you, Addison-Wesley, for asking me to write a book. Also, I want to thank Jim Highsmith and Alistair Cockburn for accepting the book into their Agile Software Development Series. Finally, I have to thank all of those who have guided, influenced, and supported me over my career and life, with special thanks to my wife and kids, who put up with me during the book’s development. My family is truly amazing and I am very lucky to have them!

Automated Promotion through Server Environments

To move toward continuous deployment, or even continuously validated builds, a team must take care of its automated build, test, analysis, and deployment scripts. I always recommend that you have two scripts:

  • deploy
  • rollback

These scripts should be run many times per iteration to reduce the risk of surprises when deploying to downstream server environments, including production. The following diagram shows a generic flow that these deploy and rollback scripts might incorporate to increase teams' confidence in the continuous deployment of their software to downstream environments.

The incorporation of continuous integration, automated tests (unit, integration, acceptance, smoke), code analysis, and deployment into the deployment scripts provides increased confidence. Teams that I have worked on, coached, and consulted with have found that failing the build automatically when a particular static code analysis metric trends in a negative direction enhances their confidence in the changes they make to the software. The software changes may pass all of the automated tests, but the build is still not promoted to the next environment because, for instance, code coverage has gone down more than 0.5% in the past week. I am careful to suggest that teams should probably not set a specific metric bar like “90% code coverage or bust” but rather ensure that no important metric is trending in the wrong direction.
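As an illustration only, here is a minimal sketch of such a gate in Java. The MetricSnapshot shape, the one-week history window, and the 0.5% threshold are assumptions for the example, not part of any particular build tool:

import java.util.List;

// Hypothetical trend gate: refuse promotion when code coverage has dropped
// more than half a percentage point over the stored history window.
public class CoverageTrendGate {

    // A metric value captured after each successful build (assumed shape).
    public static class MetricSnapshot {
        final String metric;
        final double value;
        MetricSnapshot(String metric, double value) {
            this.metric = metric;
            this.value = value;
        }
    }

    private static final double MAX_COVERAGE_DROP = 0.5;

    // Returns true when the build may be promoted to the next environment.
    public static boolean mayPromote(List<MetricSnapshot> pastWeek, double currentCoverage) {
        if (pastWeek.isEmpty()) {
            return true; // no history yet, nothing to trend against
        }
        double weekAgo = pastWeek.get(0).value;
        return (weekAgo - currentCoverage) <= MAX_COVERAGE_DROP;
    }
}

A real deploy script would run something like this check right before promoting the artifact and treat a false result as a failed build.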

Please let us know in the comments section of this blog entry how your teams move toward continuous delivery, or at least continuous deployment to downstream environments, with confidence. Thanks.

Webinar Nov 19: Translating Points to Dollars

This fall we are launching a webinar series on the potential of Earned Value Management to transform Agile development organizations. You are cordially invited to join us for the first session:

Translating Points to Dollars: Agile EVM and Project Portfolio Management

Friday, November 19, 9:30-10:30 PST

Agile teams speak in points and iterations, but business managers think in terms of dates and dollars. This conceptual and language barrier makes strategic business planning and reporting a significant challenge for Agile teams.

In this webinar you will learn:

  • How to bridge the “Points vs. Dollars” language barrier to simplify project portfolio management, address the needs of strategic planners, and reduce the overhead of “managing up”.
  • How Agile development teams use AgileEVM to more quickly identify cost, schedule, scope and quality issues on Agile projects.

You will see real-life examples of AgileEVM in action, and you’ll learn why the AgileEVM application has galvanized Agile thought leaders and executives from leading IT and commercial product organizations. You’ll also learn from Barton’s and Sterling’s own experience as Agile development team leaders, consultants and instructors. An interactive Q&A session will further enrich the discussion.

Reserve your Webinar seat now at:
https://www3.gotomeeting.com/register/410356326

About the Presenter:
AgileEVM cofounders Brent Barton and Chris Sterling created the AgileEVM application to bridge the gap between Agile methodologies and traditional business metrics, so that organizations can better plan, manage, and realize the full value of IT investments.

Brent Barton’s executive management background and nearly two decades of experience in software technology gives him the ability to provide valuable guidance concerning engineering capability and organizational proficiency. Brent has used Agile practices for a decade to help small, medium and Fortune 100 organizations overcome seemingly intractable problems and successfully deliver mission-critical solutions.

Update: Slides are now available:

Boeing Webinar - Integrating Quality in Portfolio Management - Oct 2010

Brent Barton and Chris Sterling presented this at a webinar for Boeing.

New Release: AgileEVM has Portfolio Sharing

We are excited to present this significant new release of AgileEVM (http://www.agileevm.com)!

You can now share your portfolios with as many people as you like at no extra charge. Our unique value-based pricing on active releases helps you promote transparency because you can share information with as many people as desired.

You have the ability to choose Read-Only or Read-and-Write access for each person you share a portfolio with. You can also unshare your portfolio contents at any time. With the ability to integrate and share portfolios, you can aggregate many releases into a consolidated view of multiple portfolios.

Enhanced portfolio metrics provide clear financial information and forecasts. EVM metrics are more clearly labeled with both traditional EVM abbreviations and human-friendly labels.

You can still manage up to two releases at no charge. With our release report feature you can also prepare and send out reports to your colleagues for free!

If you don’t already have an account, register at http://www.agileevm.com today and help your organization become more value-driven than cost-managed!

AgileEVM Overview Shows the Connections Between Releases and Portfolios

AgileEVM (http://www.agileevm.com) is a powerful way to view the performance of your portfolio. The portfolio focuses on releases because a release is the only unit of work where costs and benefits can really be compared.

Using outcomes from each iteration, AgileEVM provides rich information. Take a look!

Movements of a Hypnotic Nature at AgilePalooza in Redmond, WA

We discussed the basis for high-performing teams and the principles of Scrum as a framework, an empirical process control mechanism, and a set of values. We then ran “Movements of a Hypnotic Nature”, an exercise that helps us understand emergent design, iterations (if you only touch it once, you are not iterating), teamwork, and operating from high-level requirements. It is based on the cross-functional, self-organizing teams and overlapping development phases that are the roots of Scrum.

Using Sonar Metrics to Assess Promotion of Builds to Downstream Environments

For those of you who don’t already know about Sonar, you are missing an important tool in your quality assessment arsenal. Sonar is an open source platform for managing your software’s quality. The image below shows one of the main dashboard views that teams can use to get insights into their software’s health.

The dashboard provides rollup metrics out of the box for:

  • Duplication (probably the biggest Design Debt in many software projects)
  • Code coverage (amount of code touched by automated unit tests)
  • Rules compliance (identifies potential issues in the code such as security concerns)
  • Code complexity (an indicator of how easily the software will adapt to meet new needs)
  • Size of codebase (lines of code [LOC])

Before going into how to use these metrics to assess whether to promote builds to downstream environments, I want to preface the conversation with the following note:

Code analysis metrics should NOT be used to assess teams, and they are most useful when considering how they trend over time.

Now that we have this important note out of the way and, of course, nobody will ever use these metrics for “evil”, let’s discuss pulling data from Sonar to automate assessments of builds for promotion to downstream environments. For those who are unfamiliar with automated promotion, here is a simple, happy-path example:

A development team makes some changes to the automated tests and implementation code of an application and checks their changes into source control. A continuous integration server detects that source control artifacts have changed since the last time it ran a build cycle and updates its local artifacts to incorporate the most recent changes. The continuous integration server then runs the build by compiling, executing automated tests, running Sonar code analysis, and deploying the resulting artifact to a waiting environment usually called something like “DEV”. Once deployed, a set of automated acceptance tests is executed against the DEV environment to validate that basic aspects of the application are still working from a user perspective. Sometime after all of the acceptance tests pass (this could be twice a day or on some other timeline that works for those using downstream environments), the continuous integration server promotes the build from the DEV environment to a TEST environment. Once deployed there, the application might run alongside other dependent or sibling applications, and integration tests are run to ensure successful deployment. There could be more downstream environments such as PERF (performance), STAGING, and finally PROD (production).
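To make that flow concrete, here is a rough sketch of the promotion logic in Java; every environment, method, and class name here is an illustrative placeholder rather than the API of any real continuous integration server:

// Illustrative promotion flow; all names are placeholders, not a real CI API.
public class PromotionPipeline {

    enum Environment { DEV, TEST, PERF, STAGING, PROD }

    // Triggered when the CI server notices new source control changes.
    public void onSourceControlChange() {
        if (!compileAndRunTests() || !runSonarAnalysis()) {
            return; // fail fast: nothing is deployed
        }
        deployTo(Environment.DEV);
        if (runAcceptanceTestsAgainst(Environment.DEV)) {
            // On whatever schedule suits the TEST environment's users,
            // promote the last known-good DEV build onward.
            deployTo(Environment.TEST);
        }
    }

    // Placeholders for calls into real build and deployment tooling.
    private boolean compileAndRunTests() { return true; }
    private boolean runSonarAnalysis() { return true; }
    private boolean runAcceptanceTestsAgainst(Environment target) { return true; }
    private void deployTo(Environment target) { }
}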

The tendency for many development teams and organizations is to assume that if the tests pass, the build is good enough to move into downstream environments. This is definitely an enormous improvement over the extensive manual testing and stabilization periods of traditional projects. An issue that I have still seen is the slow introduction of software debt as an application is developed. Highly disciplined technical practices such as Test-Driven Design (TDD) and Pair Programming can help stave off extreme software debt, but these practices are still not commonplace among software development organizations. This is not usually due to a lack of clarity about these practices, but rather to excessive schedule pressure, legacy code, and the initial hurdle of learning how to do these practices effectively. In the meantime, we need a way to assess the health of our software applications beyond passing tests, looking into the internals of the code and the tests themselves. Sonar can easily be added into your infrastructure to provide insights into the health of your code, but we can go even beyond that.

The Sonar Web Services API is quite simple to work with. The easiest way to pull information from Sonar is to call a URL:

http://nemo.sonarsource.org/api/resources?resource=248390&metrics=technical_debt_ratio

This will return an XML response like the following:

<resources>
  <resource>
    <id>248390</id>
    <key>com.adobe:as3corelib</key>
    <name>AS3 Core Lib</name>
    <lname>AS3 Core Lib</lname>
    <scope>PRJ</scope>
    <qualifier>TRK</qualifier>
    <lang>flex</lang>
    <version>1.0</version>
    <date>2010-09-19T01:55:06+0000</date>
    <msr>
      <key>technical_debt_ratio</key>
      <val>12.4</val>
      <frmt_val>12.4%</frmt_val>
    </msr>
  </resource>
</resources>

Within this XML, there is a section called <msr> that includes the value of the metric we requested in the URL, “technical_debt_ratio”. The ratio of technical debt in this Flex codebase is 12.4%. With this information we can look for increases over time to identify technical debt earlier in the software development cycle. So, if the ratio increased beyond 13% after being at 12.4% one month earlier, this could tell us that technical issues are creeping into the application.
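If you just want that number inside a build script, a few lines of standard-library Java are enough to fetch and parse the response. This is only a sketch, with error handling omitted and the resource id and metric taken from the URL above:

import java.io.InputStream;
import java.net.URL;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Sketch: call the Sonar web service and read the metric value out of the XML.
public class TechnicalDebtFetcher {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://nemo.sonarsource.org/api/resources"
                + "?resource=248390&metrics=technical_debt_ratio");
        try (InputStream in = url.openStream()) {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(in);
            // Grab the first <val> element from the <msr> section.
            String val = doc.getElementsByTagName("val").item(0).getTextContent();
            System.out.println("technical_debt_ratio = " + val); // e.g. 12.4
        }
    }
}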

Another way to use the Sonar API is through its Java client library. The following Java code will pull the same information through the Java API client:

import org.sonar.wsclient.Sonar;
import org.sonar.wsclient.services.Resource;
import org.sonar.wsclient.services.ResourceQuery;

// Connect to the public Sonar instance and ask for one metric on one resource.
Sonar sonar = Sonar.create("http://nemo.sonarsource.org");
Resource commons = sonar.find(ResourceQuery.createForMetrics("248390",
        "technical_debt_ratio"));
// Print the formatted value, e.g. "12.4%".
System.out.println("Technical Debt Ratio: " +
        commons.getMeasure("technical_debt_ratio").getFormattedValue());

This will print “Technical Debt Ratio: 12.4%” to the console from a Java application. Once we are able to capture these metrics, we can save them and trend the data in our automated promotion scripts that deploy builds to downstream environments. Some guidelines we have used in the past for these types of metrics are:

  • Small changes in a metric’s trend do not call for immediate action
  • No more than three metrics should be trended (the typical three I watch for Java projects are duplication, class complexity, and technical debt)
  • The development team should decide what reasonable guidelines are for indicating problems in the trends (such as technical debt +/- 0.5%)

In the automated deployment scripts, these trends can be used to stop deployment of the next build that passed all of its tests, and emails can be sent to the development team identifying the metric culprit. From there, teams can go into the Sonar dashboard and drill down into the metric to see where the software debt is creeping in. A source control diff can also be produced and included in the email, showing what files were changed between the successful builds that made the trend go haywire. This might be a listing per build along with the metric variations for each.
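Putting these pieces together, the promotion check might look something like the following sketch. MetricStore and Notifier are hypothetical seams into your own build infrastructure, and the metric keys and thresholds are just examples of the kinds of guidelines listed above:

import java.util.Map;

// Sketch of a trend-based promotion check. MetricStore and Notifier are
// hypothetical seams into your own build infrastructure, not a real API.
public class MetricTrendPromotionCheck {

    interface MetricStore { Double previousValue(String metricKey); } // e.g. a file or small DB
    interface Notifier   { void buildHalted(String metricKey, double delta); }

    // Allowed upward movement per trended metric (illustrative values only).
    private static final Map<String, Double> MAX_DELTA = Map.of(
            "technical_debt_ratio", 0.5,
            "duplicated_lines_density", 0.5,
            "class_complexity", 0.5);

    // Returns true when the build may be promoted to the next environment.
    public boolean mayPromote(Map<String, Double> current, MetricStore store, Notifier notifier) {
        for (Map.Entry<String, Double> entry : current.entrySet()) {
            Double previous = store.previousValue(entry.getKey());
            if (previous == null) {
                continue; // no history yet for this metric
            }
            double delta = entry.getValue() - previous;
            if (delta > MAX_DELTA.getOrDefault(entry.getKey(), Double.MAX_VALUE)) {
                notifier.buildHalted(entry.getKey(), delta); // email the team the culprit
                return false; // stop deployment of this build
            }
        }
        return true;
    }
}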

This is a deep topic that this post only begins to introduce. If your organization has a separate configuration management or operations group that manages environment promotions beyond the development environment, Sonar and its web services API can help you automate the early identification of software debt in your applications before it pollutes downstream environments.