AYE Conference - Day 2

Posted by Chris Sterling on 07 Nov 2007 | Tagged as: Leadership, Agile

I found myself in two more great sessions today at the Amplify Your Effectiveness (AYE) conference in Phoenix, AZ. The amount of knowledge I have witnessed within just two days here amazes me. What follows are the highlights and results of my experience today.

Reflect and Adapt presented by Elisabeth Hendrickson

This session started off a bit slow because there was a limited group of attendees for an amazing simulation. The simulation incorporated product management, developers, testers, interoffice mail, and computers, and it felt like real-world scenarios within software development organizations I had been part of or witnessed. I could not believe how writing instruction sets, test cases, and confusing requirements on index cards could feel so much like real life.

Two of the participants had actually run or been through the simulation before. One of the most incredible realizations for me and others in the room was that the product manager tried to stall the customer when they asked for a demo because he knew the product was still buggy. This may or may not have been what this person would have done in a real-life scenario, but many of the considerations were those I have heard from product managers I have worked with and coached in the past. I was a tester, and it was amazing to me how much I got into the role. When I ran test cases against the running system and they continued to surface the same or new bugs, I started to think “what is the developer doing?”. As a developer this was frightening to me, and it was a natural reaction to the way we started doing our work, which excluded face-to-face communication. Elisabeth did a great job of working with the group, and the entire group discussed topics which related to situations arising during the simulation. As we reflected on our work so far, we would modify our working agreement, which caused interesting changes to the environment, resulting in improvement over time.

Resistance as a Resource presented by Dale Emery

Dale Emery had a great laid-back style as he presented resistance as a resource through activities and discussion. This session had incredibly simple but powerful activities which helped people understand what constitutes resistance. Dale mentioned quite early that he did not know much about resistance but had found that he does know something about fielding responses. This statement may seem cryptic, and I believe you may have to be in one of Dale’s sessions to actually understand it.

Two things really stood out for me in this session:

  • If you are not willing to change yourself when encountering resistance then it may not be helpful to confront the resistance yourself at that time
  • When encountering resistance, if we are rigid we may become easy to knock over but if we center ourselves before confronting the resistance we can work through differences

The second item may be confusing, so I will do my best to explain. When we center ourselves, we are establishing our ability to move with the resistance to find a point of view from which both parties can move forward. One way to do this is by asking yourself “what do I want to get out of this?”. This can help you center yourself by understanding the intent of your suggestion. Once you are centered, or congruent, you can work with people who are resistant to the changes you are proposing by using techniques such as reflective listening, followed by curiosity about the situation, which may surface helpful information on how to move forward from each other’s positions.

During an exercise where I posed a change I wanted to make in an organization to another person, I was amazed at the responses we found as a pair. Upon being questioned by the other person about the feasibility of my plan, I began to ask whether they were able to meet my needs. They found it difficult to move forward with me, and at one point I found myself in a position of bullying power, which was uncomfortable. We both spoke more in depth about the situation, and each of us learned quite a bit from this short role-playing exercise.

AYE Conference - Day 1

Posted by Chris Sterling on 06 Nov 2007 | Tagged as: Leadership, Agile

I am currently at the Amplify Your Effectiveness (AYE) conference in Phoenix, AZ. Although I have only participated in one day of sessions, it has been an overwhelmingly worthwhile experience. When one gets around this many talented, passionate, and knowledgeable people, there seems to be a learning experience around every corner. I thought that I would type up some notes from today’s sessions. I am going to be careful not to discuss how the sessions were conducted, since I believe you have to be in the context of these sessions and simulations in order to fully understand anything I would attempt to convey through the written word. They are just that powerful in person.

Dynamics of Distributed Teams presented by Esther Derby

Although I work with many distributed teams in my consulting on Agile, there always seems to be a bit more to learn. The items below are a quick list of ideas:

  • Imposing collaboration across team boundaries may create problems. In the simulation, some folks from the main office, including myself, attempted to get together with an off-site team in another location after running separately for a while. The off-site team had developed a certain solidarity which made them feel quite effective in doing their work. The main office team saw an opportunity to include the off-site team in planning how we were going to implement the company vision. Taking this opportunity without giving the off-site team a choice about using their time this way or enough notice to discuss and plan, while imposing our main office culture onto them, was problematic.
  • Aligning on a vision which is understood across office locations helps create more collaboration on how to coordinate the work. We were given little information about the company vision and how the product lines supported that vision at the beginning. It became apparent at one point that we needed alignment, when one project asked a fundamental question whose answer had huge ramifications. Upon hearing the answer to that question and discussing with other groups their understanding of other information, it was clear that we had been running with different objectives in mind across the group.
  • Trying to get alignment before a separate culture fully sets in is a good preventative measure. If possible, get distributed teams together early on to create a sense of affiliation with the entire team. Also, continually work on keeping that sense of affiliation current throughout the life of the project.
  • Getting things done is not necessarily doing the right work. Without a coherent vision that was understood by the teams, we found ourselves not fulfilling the needs of the company to keep us in business, even though we were allocating ourselves as management had requested. Once the vision was finally acquired, the teams got together and decided how we would meet the needs of the company as well as our individual aspirations.
  • Be careful, even as a team member: external suggestions about “how” to do the work of another team can create distrust and resentment. A message was passed from the off-site team to the main office asking for more information on “what” should be built. A message was sent back from the main office telling the team an obvious strategy for “how” to start the work. Later in the session we debriefed each team, and they mentioned this message as a “duh” moment for them.
  • “Words can create worlds” - Diana Larsen. Diana participated in the session and made that statement in response to the use of “home office” and how that is perceived within the context of multiple site companies. Most of the group perceived this to mean that all decisions would be made at that location and correspondence from the “home office” was highly scrutinized.

Congruence is the Foundation for All Effectiveness presented by Jerry Weinberg and Dwayne Phillips

The context for having a conversation about congruence seemed to stem from this question posed by Jerry Weinberg:

Have you ever thought “I could have handled that better”?

Congruence is described as a balance between self, other, and context which embodies self-esteem. When we are in situations with other people we may be incongruent and therefore ineffective in handling the situation. Here are some ways that incongruence can manifest itself:

  • Leaving yourself out; placating - def. tending or intended to pacify by acceding to demands or granting concessions
  • Leaving others out; blaming - def. a reproach for some lapse or misdeed
  • Leaving self and others out; super-reasonable - excessive reasoning which does not fully take into account the context and actors of a situation. This behavior can be identified by listening for cues such as usage of “there is” or “it is” instead of “I” and “you”. Reasoning without the context of the people involved.

One interesting note that I took was the thought that asking too many questions about our customer’s requirements could cause the customer to feel blamed. Too many questions can seem to indicate dissatisfaction with the requirements and therefore with the person or people delivering them to your team.

Another great quote for me was “90% of the message is not in the words”.

The definitions of “self-esteem” and “self-confidence” were good in that they helped put these terms into context with each other. My interpretation of the definitions demonstrated was:

Self-esteem - how one values themselves along with other people

Self-confidence - how you assess your capabilities to take an action

I took away an important lesson overall, which is that helping others get congruent when we are working together involves bringing them into the “here and now”. We all have different backgrounds and understandings of the past which could be plaguing our ability to invest in our future. By bringing our awareness to the “here and now” we can be open to improving our capabilities for the future. But even after we seem to get congruent at the start of an interaction, we must continue to cultivate the congruence of the interaction. We can check in with the group and see if there are changes in posture, tone, and attention which could make the interaction become incongruent. For example, the group may start using food references as analogies, which may signal an incongruence: people are hungry and need to eat before moving forward.

I recommend that anybody who would like to become more self-aware and effective come to this conference. I know that I am saying that after only one day, but I can already see how much value there is in it.

Research, Spikes, Tracer Bullets, Oh My!

Posted by Chris Sterling on 22 Oct 2007 | Tagged as: Product Owner, XP, TDD, Scrum, Architecture, Agile

A couple of years ago, a team that I was working on as ScrumMaster had a discussion about what a spike is versus research versus a tracer bullet. The reason for the discussion was how loosely we used these terms with our current customer. It was confusing to both the customer and us. It also allowed too much room for working on things we had not committed to as a team within a Scrum sprint.

The team sat down and decided that spike, research, and tracer bullet all meant different things to them. Here is what we decided on along with their respective indicators for using them:

  • Spike - a quick and dirty implementation, designed to be thrown away, to gain knowledge - indicator: unable to estimate a user story effectively
  • Research - broad, foundational knowledge-gaining to decide what to spike or give the ability to estimate - indicator: don’t know a potential solution
  • Tracer Bullet - very narrow implementation in production quality of an epic/large user story - indicator: user story is too large in estimation

This clarification allowed the customer and team to identify when and how these activities should be used. We decided with the customer that a spike and research were timeboxed activities and were similar to an investment by the customer. These were done in one iteration and used to help define an upcoming user story’s estimate and a starting point for implementation in the next iteration. Although estimates were an indicator for spikes and research, they were not the end goal. The idea of a spike or research is also to:

  • Understand “how” to implement a piece of business value
  • Propose a solution to help the customer make business value decisions
  • Minimize risk hidden in the cost of implementing a piece of business value
  • Control the cost of R&D through the use of an “investment” model

A tracer bullet was used to break down an epic or large user story into smaller chunks and could have some effect on the customer’s backlog of features. If the team discussed a user story on the backlog which introduced a new architectural element, then a tracer bullet would be implemented to introduce it into the software without the overhead of a detailed user interface. For instance, if we were hooking into our Peoplesoft and Siebel instances and wanted to show a customer’s information combined from both systems, we may have a tracer bullet user story such as:

As a customer service rep I want to view the customer’s name and multiple system identifiers

After implementing this user story we may have many other backlog user stories which reference additional customer information that must be retrieved from either or both of the systems.
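
To make the idea more concrete, here is a minimal sketch of what such a tracer bullet might look like in code. All of the names (CustomerIdentifierService, the gateway interfaces, and so on) are hypothetical, invented for illustration; the point is the narrow, production-quality slice through every layer, deferring any rich user interface.

    // Hypothetical gateway interfaces wrapping the two back-end systems.
    interface PeopleSoftGateway { String findCustomerId(String customerName); }
    interface SiebelGateway { String findCustomerId(String customerName); }

    // Immutable value combining the identifiers from both systems.
    class CustomerIdentifiers {
        final String name;
        final String peopleSoftId;
        final String siebelId;
        CustomerIdentifiers(String name, String peopleSoftId, String siebelId) {
            this.name = name;
            this.peopleSoftId = peopleSoftId;
            this.siebelId = siebelId;
        }
    }

    // The tracer bullet: one thin, production-quality path that exercises
    // both integrations end to end so a bare-bones screen can display it.
    public class CustomerIdentifierService {
        private final PeopleSoftGateway peopleSoft;
        private final SiebelGateway siebel;

        public CustomerIdentifierService(PeopleSoftGateway peopleSoft, SiebelGateway siebel) {
            this.peopleSoft = peopleSoft;
            this.siebel = siebel;
        }

        public CustomerIdentifiers lookup(String customerName) {
            return new CustomerIdentifiers(customerName,
                    peopleSoft.findCustomerId(customerName),
                    siebel.findCustomerId(customerName));
        }
    }

Later stories that pull richer customer data from either system can then build on this slice rather than establish the integrations from scratch.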

The team put these definitions and indicators up on the wall as a big, beautiful information radiator that we could refer to. Other teams also started to use these descriptions in their own projects to help clarify essential work with their customers.

Building a Definition of Done

Posted by Chris Sterling on 05 Oct 2007 | Tagged as: Product Owner, XP, Leadership, Scrum, Architecture, Agile

Joe, the Developer, waltzed into work one sunny Tuesday morning and was approached by Kenny, the Project Manager, asking if the feature Joe was working on was done. Joe had checked in his code to their shared source code repository yesterday afternoon and had unit tested it before doing so. With an emphatic “yes” Joe confirmed the feature’s completion. Kenny sighed in relief and said “great, then we will go ahead and deploy it into the UAT environment for our customer advocates to view”. Joe quickly backtracked on his original answer and blurted out “but it has not been fully tested by QA, the documentation is not updated, and I still need to pass a code review before it is finished”.

Has this ever happened to you? Were you Joe or Kenny? How did you react in this situation? Did it feel like development was not being honest? Did it seem that the Project Manager was assuming too much? We’ve got just the tool for you: the “Definition of Done”. Following is a list of steps that I use when coaching a team on their own Definition of Done:

  1. Brainstorm - write down, one artifact per post-it note, all artifacts essential for delivering on a feature, iteration/sprint, and release
  2. Identify Non-Iteration/Sprint Artifacts - identify artifacts which can not currently be done every iteration/sprint
  3. Capture Impediments - reflect on each artifact not currently done every iteration/sprint and identify the obstacle to its inclusion in an iteration/sprint deliverable
  4. Commitment - get a consensus on the Definition of Done; those items which are able to be done for a feature and iteration/sprint

During the brainstorming portion of the exercise it is important to discuss whether or not each artifact is needed to deliver features for release. Some examples are:

  • Installation Build (golden bits)
  • Pass All Automated Tests in Staging Environment
  • Sign Off
  • Pass Audit
  • Installation Documentation Accepted by Operations
  • Release Notes Updated
  • Training Manuals Updated

It is important to note that these are not features of the application but rather the artifacts which are generated for a release. Some questions you may ask about each artifact are:

  • Who is the target audience for this artifact?
  • Is this a transitory artifact for the team or stakeholders?
  • Who would pay for this?
  • Is it practical to maintain this artifact?

When identifying non-iteration/sprint artifacts I usually ask the team to create a waterline mark below the brainstormed post-it notes. Look at each of the artifacts written on the post-it notes above the line and discuss whether or not it can be done every iteration/sprint for each feature, potentially incrementally. If it can, leave it above the waterline. If it cannot, move the artifact below the waterline.

In the next step of capturing impediments, the team will look at each of the artifacts below the waterline and discuss all of the obstacles which stop them from delivering it each iteration/sprint. This is a difficult task for some teams because we must not hold ourselves to the current status quo. I like to inform the team that answers such as “that is just the way it is” or “we can’t do anything about that” are not acceptable, since we cannot act on them. The obstacles, no matter how large the effort to remove them may seem, can inform management about how they can support the team in releasing with more predictability. I have found that many of the obstacles identified by teams in this step create issues such as an unpredictable release period after the last feature is added. The obstacle may be that QA must perform an independent verification. There could be many reasons behind this, usually derived from audit guidelines and governance policies, but there may be creative ways to conduct the verification incrementally, which increases the predictability of the release stabilization period. Over time these obstacles can be removed, and an artifact which was excluded from the Definition of Done for each iteration/sprint based on an obstacle can be promoted above the waterline.

Once you have your Definition of Done, identified artifacts which cannot be delivered each iteration/sprint, and captured the obstacles for those artifacts, it is time to gain consensus from the team. You can use any consensus-building technique you would like, but I tend to use the Fist of Five technique. If the team agrees with the Definition of Done then we are finished. If there are people on the team who are not on board yet, it is time to discuss their issues and work towards a consensus. It is important that all members of the team agree to the Definition of Done since they will all be accountable to each other for delivering on it for each feature. Once you have consensus I like to have the Definition of Done posted in the team’s co-located area as an information radiator informing them of their accountability to each other.

The Definition of Done exercise can have many ramifications:

  • Creation of an impediments list that management can work on to support the delivery of the team
  • Organizational awareness of problems stemming from organizational structure
  • Team better understanding expectations of their delivery objectives
  • Team awareness of other team member roles and their input to the delivery

If you do not already have a Definition of Done or it has not been formally posted, try this exercise out. I hope that building a Definition of Done in this manner helps your team get even better at their delivery. Below is an example of a real team’s Definition of Done:

Definition of Done example

Develop Architectural Needs through Abuse User Stories

Posted by Chris Sterling on 16 Sep 2007 | Tagged as: Product Owner, XP, Acceptance Testing, Scrum, Architecture, Agile

On many occasions I am asked the question “How do we incorporate architecture needs into Scrum?”. A few years ago, when I first started tackling this question on projects, my answer was to just put them on the Product Backlog. Over time I found this approach had issues in certain circumstances. Here are a few examples:

  • The Product Owner would not prioritize architecture-focused Product Backlog items
  • Value was difficult to communicate to the Product Owner, or one of their trusted advisors in architecture, without heavy documentation
  • The Product Owner did not understand the results of architecture implementation
  • Some architecture-focused Product Backlog items were too large to fit into a Sprint
  • Many architecture-focused Product Backlog items showed up right before they were “needed” and therefore decreased the predictability of releases

At Agile 2006 I was at a talk given by Mike Cohn on User Stories. A question came from the audience regarding security concerns in User Stories and Mike had an interesting response. He brought up the notion of an abuser perspective User Story. I do not remember his exact example but it had to do with a Hacker trying to take down your web site. Almost immediately this was a revelation to me. My familiarity with Bill Wake’s INVEST in Stories article enabled me to link the INVEST model, architecture needs, and abuser perspective User Stories. Here is an example:

As a Cracker I want to siphon credit card information so that I can use it for fraudulent purchases

This user story adds the user role of Cracker and their criminal intentions toward our system. If I were on a project that had features revolving around credit card information, I may not have enough time to deliver user interface functionality along with implementing a third-party software solution or configuring our systems to handle these situations effectively. This User Story would allow our team to focus on implementing architectural elements stopping a Cracker from siphoning credit card information out of our systems. Also, this User Story meets the INVEST model in the following ways:

  • Independent - a team could implement, in any order, this story or any other story involving user interface functionality with credit card information
  • Negotiable - the Product Owner could negotiate with the team on the best approach to take based on business and technical suggestions
  • Valuable - a Product Owner can see this as valuable since it would be highly costly to allow a Cracker access to sensitive data such as credit card information
  • Estimatable - based on the negotiation between the team and Product Owner, a solution could be estimated by the team focused on specific acceptance criteria
  • Small - the solution would only involve security of the credit card information rather than a generic security solution
  • Testable - multiple security cracking techniques could be attempted and foiled by the implementation of this user story

This User Story can now be prioritized by the Product Owner. Once it is high enough in priority we can take the User Story card and have a conversation about it and drive out the essential confirmation or acceptance criteria for this User Story as described by Ron Jeffries.
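
As a sketch of what one such confirmation might look like as an automated test, consider the following. This is purely illustrative and assumes invented names (CardVault and a toy masking scheme); a real team would negotiate the actual criteria with the Product Owner and likely lean on a vetted encryption library or third-party solution rather than this stand-in.

    import junit.framework.TestCase;

    // Hypothetical, minimal storage component so the test can run; the
    // masking scheme is a stand-in for whatever real protection the team
    // and Product Owner agree on.
    class CardVault {
        private final java.util.Map storage = new java.util.HashMap();

        void store(String customerId, String cardNumber) {
            // illustrative only: keep just the last four digits at rest
            String masked = "****-" + cardNumber.substring(cardNumber.length() - 4);
            storage.put(customerId, masked);
        }

        String rawRecordFor(String customerId) {
            return (String) storage.get(customerId);
        }
    }

    // One confirmation for the Cracker story: the record at rest must
    // never contain the full card number in the clear.
    public class CreditCardSiphonTest extends TestCase {
        public void testPersistedRecordNeverContainsFullCardNumber() {
            CardVault vault = new CardVault();
            vault.store("customer-1", "4111111111111111");
            String raw = vault.rawRecordFor("customer-1");
            assertEquals(-1, raw.indexOf("4111111111111111"));
        }
    }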

In conclusion, think about how you can use the abuser perspective to help describe User Stories that meet the INVEST model for architectural needs. Helping the Product Owner understand why architectural elements are valuable through a narrative approach such as User Stories will enhance trust between the team and Product Owner. I have also found that the process of deriving User Stories in this manner helps the team solidify their technical suggestions into smaller components which are more easily managed in an iterative and incremental approach such as in Scrum and Extreme Programming.

For reference, creating user roles from an abuser perspective was added to Mike Cohn’s SD West 2007 presentation on User Stories.

Inferior Tracks Lead to Superior Locomotives

Posted by Chris Sterling on 05 Aug 2007 | Tagged as: Leadership, TDD, XP, Scrum, DotNet, Architecture, Java, Agile

Larry L. Peterson, professor and Chair of Computer Science at Princeton University, gave a great talk, PlanetLab: Evolution vs. Intelligent Design, which I believe is interesting to people involved in emerging architecture. One of the principles behind the Agile Manifesto for Software Development is:

The best architectures, requirements, and designs emerge from self-organizing teams.

Remember this principle while listening to the talk. Larry points out many examples of when this is true and the issues that can occur as a result. These issues are all resolvable but the resolution may not be what you initially expect.

One of the great discussion points, in my opinion, was on how inferior tracks lead to superior locomotives. The story is that track standards across the American West were much more liberal than those previously defined on the East Coast and in Europe. Therefore, European trains could not run well on American tracks, and the locomotive industry in America had to cope with this issue. The American locomotive industry then created much more robust locomotives which dealt with the real-world issues of running trains on these tracks. Coming from the Jini community of yore, this reminds me of the 8 Fallacies of Distributed Computing and the question of how much abstraction can be designed in before the missing details become overhead in software development.

Value-Driven User Stories Exercise Paper

Posted by Chris Sterling on 05 Aug 2007 | Tagged as: XP, Product Owner, Scrum, Agile

In a past blog entry I discussed an exercise for generating user stories based on value statements. I have since written a paper that describes in more detail how the exercise works. If you are interested, please download the paper. If you try it out, let me know how well it works for you.

Comparing the Product Owner and Enterprise Architect Roles

Posted by Chris Sterling on 04 Aug 2007 | Tagged as: Product Owner, Leadership, Architecture, Agile

At an IASA meeting approximately 1 year ago, we came up with the following definition for what an Enterprise Architect does:

“What does an architect do?”

  • Understand the client’s world view and grasp their problems
  • Serving as the client advocate, envision solutions to the client’s problems
  • Serving as the client advocate, partition the problem into manageable units of work
  • Serving as the client advocate, communicate the solution to all stakeholders and participants
  • Serving as the client advocate, ensure the solutions are delivered as expected

With the exception of the “Partition” entry, these bullets are derived from Marc and Laura Sewell’s definition taken from their book, The Software Architect’s Profession: An Introduction (ISBN 0130607967). I added the “client advocate” element, and I believe it was Nick Malik who added the “Partition” element to the list. Many people who are titled Enterprise Architect in organizations do not have the above responsibilities. I find that EA tends to stand for information gatekeeper. They own items such as the data model and continually look for ways to increase business intelligence and minimize divergence in the solutions they manage throughout the organization. This is why I believe that many development teams and project management groups refer to EA as an “Ivory Tower”.

Since the Enterprise Architect role has become synonymous with the “Ivory Tower” approach I am looking for a new title. In Scrum, there is a role identified as the Product Owner which resembles many of the characteristics listed above. A Product Owner’s responsibilities include understanding the client’s world view and problems, envisioning solutions, partitioning work into manageable units, communicating the solution, and ensuring the solutions are delivered. The difference between an Enterprise Architect as we envisioned above and the Product Owner role was something that was unwritten but understood by the group:

How much of the “how” do we envision in the solution?

The Product Owner role revolves around the “what”, not the “how”. This is a simplistic way to describe it, but it captures the intent nicely in most situations where I have worked with Product Owners. It has been my experience that Enterprise Architects can support the Product Owner, mostly in partitioning work into manageable units. They can also be helpful mentors and coaches for the software development teams: introducing existing infrastructure, understanding team capabilities and issues with current tools, describing higher-level architecture roadmaps, taking feedback from teams, and managing vendor relationships.

These are just a few ideas on how Product Owners and Enterprise Architects, as described by our group, can work together for the betterment of software delivery. I am interested in hearing others’ thoughts on this subject. Is this a topic of interest to others in the Agile and software development community? If so, I would like to follow this up with more real-world examples of how I have seen this work. Thanks.

Perspective Based Architecture

Posted by Chris Sterling on 04 Aug 2007 | Tagged as: Leadership, Architecture, Agile

I happened to run into Lewis Curtis on a plane recently, and we started a discussion on Perspective Based Architecture, which he had been working on for a few years. Here is the mission statement from the main web site:

“At the end of the day, no matter what technologies, vendors or methodologies are utilized, all IT architects must address very difficult questions for a successful solution. Fifty years ago, today and fifty years from now, IT architects will still need to address these difficult questions. Therefore, The PBA Method focuses on capturing those questions from architects in a community model organized within a meta-model in an easy to use capability complimenting most methodologies and processes to promote more successful architectures.”

I thought this was a nifty way to communicate the questions we must answer in a business supported by technology and software applications. I believe that many of the topic areas in the meta-model are useful not only to IT Architects, as proposed, but also to entire IT and software product delivery organizations. Take a look, and I will post more ideas on this subject in the near future. I will be at Agile 2007 hosting a discovery session on “Architecture in an Agile Organization”. Please join me if you are heading to the conference. I am interested in meeting many new people in the technology and software development community.

Managing Unit and Acceptance Tests Effectively

Posted by Chris Sterling on 23 Jul 2007 | Tagged as: Acceptance Testing, TDD, Scrum, DotNet, Architecture, Java, Agile

In my experience, the use of Test-Driven Development (TDD) and automated acceptance testing on software projects makes for a powerful tool for flexible code and architectural management. When coaching teams on the use of TDD and acceptance testing, there are some foundational test management techniques which I believe are essential for successful adoption. As a project progresses, the number of unit and acceptance tests grows tremendously, and this can cause teams to become less effective with their test-first strategy. Two such techniques are categorizing unit tests to run at different intervals and in different environments, and structuring acceptance tests for isolated, iterative, and regression usage.

In 2003, I was working on a team developing features for a legacy J2EE application with extensive use of session and entity beans on IBM WebSphere. The existing code base lacked automated unit tests and had many performance issues. In order to tackle these performance issues, the underlying architecture had to be reworked. I had been doing TDD for some time and had become quite proficient with JUnit. The team discussed and agreed on JUnit as an effective means to incrementally migrate the application to our new architecture. After a few weeks, the unit tests started to take too long to run for some of the developers, including myself.

One night I decided to figure out whether there was a way to get these tests to run more quickly. I drew a picture of the current and proposed architecture on the whiteboard, and it hit me. We could separate concerns between interfacing layers by categorizing unit tests for business logic and integration points. Upon realizing this basic idea, I came up with the following naming convention that would be used by our Ant scripts for running different categories of unit tests:

  • *UnitTest.java - These would be fast-running tests that did not need the database, JNDI, EJB, J2EE container configuration, or any other external connectivity. In order to support this ideal, we would need to stub out foundational interfaces such as the session and entity bean implementations.
  • *PersistanceTest.java - These unit tests would need access to the database for testing the configuration of entity bean to schema mappings.
  • *ContainerTest.java - These unit tests would run inside the container, using a library called JUnitEE, and test the container mappings for controller access to session beans and JNDI.

In our development environments we could run all of the tests ending with UnitTest.java when saving a new component implementation. These tests would run fast, anywhere from 3 to 5 seconds for the entire project. The persistence and container unit tests were run on an individual basis in a team member’s environment, and the entire suite of these tests would be run by our Continuous Integration server each time we checked in code. These took a few minutes to run, and our build server had to be configured with an existing WebSphere application server instance and a DB2 relational database set up to work with the application.

In the “Psychology of Build Times”, Jeff Nielsen presented the maximum amounts of time that builds, unit tests, and integration tests should take for a project. If builds and tests take too long, a team will be less likely to continue the discipline of TDD and Continuous Integration best practices. At Digital Focus, where Jeff worked, they had a unit test naming convention similar to the one I described above:

  • *Test.java - unit tests
  • *TestDB.java - database integration tests
  • *TestSRV.java - container integration tests

Another good source of information, from Michael Feathers, sets out unit testing rules to help developers understand what unit tests are and are not. Here is his list of conditions under which “a test is not a unit test” (a sketch of a test that honors these rules follows the list):

  • It talks to the database
  • It communicates across the network
  • It touches the file system
  • It can’t run at the same time as any of your other unit tests
  • You have to do special things to your environment (such as editing config files) to run it
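
Here is a minimal sketch of a test that follows these rules, named to match the *UnitTest.java convention above. The CustomerService and CustomerRepository names are invented for illustration; the point is that the collaborator is stubbed in memory, so the test touches no database, network, or file system.

    import junit.framework.TestCase;

    // Hypothetical collaborator interface; the production implementation
    // would talk to the database, but the test never does.
    interface CustomerRepository {
        String findNameById(String customerId);
    }

    // Hypothetical business logic under test.
    class CustomerService {
        private final CustomerRepository repository;

        CustomerService(CustomerRepository repository) {
            this.repository = repository;
        }

        String greetingFor(String customerId) {
            String name = repository.findNameById(customerId);
            return name == null ? "Hello, guest" : "Hello, " + name;
        }
    }

    // Fast, isolated, and safe to run alongside every other unit test.
    public class CustomerServiceUnitTest extends TestCase {
        public void testGreetsKnownCustomerByName() {
            CustomerRepository stub = new CustomerRepository() {
                public String findNameById(String customerId) {
                    return "42".equals(customerId) ? "Ada" : null; // canned answer
                }
            };
            assertEquals("Hello, Ada", new CustomerService(stub).greetingFor("42"));
        }
    }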

Automated acceptance tests also have effective structures based upon the tools you are using to capture and run them. For those using Fit, I have found that structuring my tests into two categories, regression and iteration, supports near- and long-term development needs. The Fit tests which reside in the iteration directory can be run each time code is updated, to check acceptance criteria for functionality being developed in the iteration. The regression Fit tests are run in the Continuous Integration environment to give feedback to the team on any existing functionality which has broken with recent changes.
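
For readers who have not seen Fit, each tabular test is backed by a small fixture class. The following is a minimal sketch with an invented DiscountFixture; an HTML table whose header row names the fixture, an amount column, and a discount() column would drive it.

    import fit.ColumnFixture;

    // Hypothetical fixture: each row of the Fit table sets the public
    // "amount" field, then compares the value returned by discount()
    // against the expected cell.
    public class DiscountFixture extends ColumnFixture {
        public double amount;

        public double discount() {
            return amount > 100.0 ? amount * 0.05 : 0.0;
        }
    }

Whether the table page exercising such a fixture lives in the iteration or regression directory is then simply a matter of where it is filed.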

If you are using StoryTestIQ, a best practices structure for your automated acceptance tests has been defined by Paul Dupuy, creator of the tool. That structure looks like the following:

  • Integration Tests Tag Suite
  • Iteration Tests
    • Iteration 1 Tag Suite
    • Iteration 2 Tag Suite
  • Story Tests
    • User Story 1 Suite
    • User Story 2 Suite
  • Test Utilities
    • Login
      • Login As Steve
      • Login As Bob
    • Shopping Cart

You might have noticed a couple of descriptions above: Tag Suite and Suite. A tag suite is a collection of all test cases which have been tagged with a particular tag. This is helpful for allowing multiple views of the test cases for different environments such as development, QA, and Continuous Integration. A suite is usually used to collect a user story’s acceptance tests together. There are other acceptance testing tools, such as Selenium, Watir, and Canoo WebTest, each with their own best practices on structuring.

Teams can effectively use TDD and automated acceptance tests without accruing overhead as the implemented functionality grows. It takes high levels of discipline to arrange your tests into effective categories for daily development, feedback, and regression. Tests should be treated as first-class citizens along with the deliverable code. In order to do this they must be coded and refactored with care. The payoff for the effective use of automated unit, integration, and acceptance tests is tremendous in the quest for zero bugs and a flexible codebase. Happy coding.
