
Develop Architectural Needs through Abuse User Stories

On many occasions I am asked the question “How do we incorporate architecture needs into Scrum?” A few years ago, when I first started tackling this question on projects, my answer was simply to put them on the Product Backlog. Over time I found this approach had issues in certain circumstances. Here are a few examples:

  • The Product Owner would not prioritize architecture-focused Product Backlog items
  • Value was difficult to communicate to the Product Owner, or one of their trusted advisors in architecture, without heavy documentation
  • The Product Owner did not understand the results of the architecture implementation
  • Some architecture-focused Product Backlog items were too large to fit into a Sprint
  • Many architecture-focused Product Backlog items showed up right before they were “needed” and therefore decreased the predictability of releases

At Agile 2006 I attended a talk given by Mike Cohn on User Stories. A question came from the audience regarding security concerns in User Stories, and Mike had an interesting response: he brought up the notion of an abuser perspective User Story. I do not remember his exact example, but it had to do with a hacker trying to take down your web site. Almost immediately this was a revelation to me. My familiarity with Bill Wake’s INVEST in Stories article enabled me to link the INVEST model, architecture needs, and abuser perspective User Stories. Here is an example:

As a Malicious Hacker I want to siphon credit card information so that I can use it for fraudulent purchases

This user story has added the user role of Malicious Hacker and their criminal intentions for our system. If I were on a project that had features revolving around credit card information, I might not have enough time to deliver user interface functionality along with implementing a third-party software solution or configuring our systems to handle these situations effectively. This User Story would allow our team to focus on implementing architectural elements that stop a Malicious Hacker from siphoning credit card information out of our systems. Also, this User Story meets the INVEST model in the following ways:

  • Independent – a team could implement, in any order, this story or any other story involving user interface functionality with credit card information
  • Negotiable – the Product Owner could negotiate with the team on the best approach to take based on business and technical suggestions
  • Valuable – a Product Owner can see this as valuable since it would be highly costly to allow a Malicious Hacker access to sensitive data such as credit card information
  • Estimable – based on the negotiation between the team and Product Owner, the team could estimate a solution focused on specific acceptance criteria
  • Small – the solution would only involve security of the credit card information rather than a generic security solution
  • Testable – multiple security cracking techniques could be attempted and foiled by the implementation of this user story

This User Story can now be prioritized by the Product Owner. Once it is high enough in priority, we can take the User Story card, have a conversation about it, and drive out the essential confirmation, or acceptance criteria, for this User Story as described by Ron Jeffries.
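
As a hypothetical illustration of that confirmation step, one acceptance criterion for the Malicious Hacker story might be automated as a small JUnit check that card numbers are never persisted in the clear. The CreditCardStorageTest and CardVault names below are invented for the sketch, not taken from any particular project, and the masking logic simply stands in for real encryption.

    import junit.framework.TestCase;

    // A minimal sketch of one confirmation for the Malicious Hacker story, assuming a
    // hypothetical CardVault component; a real project would verify its actual
    // persistence and encryption mechanism instead.
    public class CreditCardStorageTest extends TestCase {

        // Stand-in for whatever component persists payment details.
        static class CardVault {
            private String stored;

            void store(String cardNumber) {
                // Production code would encrypt; masking keeps this sketch self-contained.
                stored = "****-****-****-" + cardNumber.substring(cardNumber.length() - 4);
            }

            String rawStorage() {
                return stored;
            }
        }

        public void testCardNumberIsNotPersistedInTheClear() {
            CardVault vault = new CardVault();
            vault.store("4111111111111111");
            assertFalse(vault.rawStorage().contains("4111111111111111"));
        }
    }

A criterion like this gives the team and Product Owner a concrete, testable definition of what it means to stop the hacker, without prescribing the entire security architecture up front.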

In conclusion, think about how you can use the abuser perspective to help describe User Stories that meet the INVEST model for architectural needs. Helping the Product Owner understand why architectural elements are valuable through a narrative approach such as User Stories will enhance trust between the team and the Product Owner. I have also found that the process of deriving User Stories in this manner helps the team solidify their technical suggestions into smaller components which are more easily managed in an iterative and incremental approach such as Scrum or Extreme Programming.

For reference, creating user roles from an abuser perspective was added to Mike Cohn’s SD West 2007 presentation on User Stories.

Managing Unit and Acceptance Tests Effectively

In my experience, the use of Test-Driven Development (TDD) and automated acceptance testing on software projects makes for a powerful tool for flexible code and architectural management. When coaching teams on the use of TDD and acceptance testing, there are some foundational test management techniques which I believe are essential for successful adoption. As a project progresses, the number of unit and acceptance tests grows tremendously, and this can cause teams to become less effective with their test-first strategy. Two of these test management techniques are categorizing unit tests so they can be run at different intervals and in different environments, and structuring acceptance tests for isolation, iteration, and regression usage.

In 2003, I was working on a team developing features for a legacy J2EE application with extensive use of session and entity beans on IBM WebSphere. The existing code base lacked automated unit tests and had many performance issues. In order to tackle these performance issues, the underlying architecture had to be reworked. I had been doing TDD for some time by then and had become quite proficient with JUnit. The team discussed and agreed to use JUnit as an effective way to incrementally migrate the application to our new architecture. After a few weeks the unit tests started to take too long to run for some of the developers, including me.

One night I decided to figure out whether there was a way to get these tests to run more quickly. I drew a picture of the current and proposed architecture on the whiteboard and it hit me: we could separate concerns between interfacing layers by categorizing unit tests for business logic and integration points. From this basic idea, I came up with the following naming convention, which our Ant scripts would use to run different categories of unit tests (a sketch of how test classes fall into these categories follows the list):

  • *UnitTest.java – These would be fast-running tests that did not need database, JNDI, EJB, J2EE container configuration, or any other external connectivity. In order to support this ideal we would need to stub out foundational interfaces such as the session and entity bean implementations.
  • *PersistenceTest.java – These unit tests would need access to the database for testing the configuration of entity bean to schema mappings.
  • *ContainerTest.java – These unit tests would run inside the container using a library called JUnitEE and test the container mappings for controller access to session beans and JNDI.
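
Here is a minimal sketch of what a test in the fast category might look like under this convention, with comments noting where the other two categories differ. OrderTotalUnitTest, TaxRates, and FlatTaxRates are names invented for illustration; they are not from the original project.

    import junit.framework.TestCase;

    // OrderTotalUnitTest.java matches the *UnitTest.java pattern, so it belongs to the fast
    // suite: no database, JNDI, or container access, and collaborators are stubbed out.
    public class OrderTotalUnitTest extends TestCase {

        // Hypothetical collaborator that a session or entity bean facade would implement in production.
        interface TaxRates {
            double rateFor(String state);
        }

        // In-memory stub standing in for the persistence-backed implementation.
        static class FlatTaxRates implements TaxRates {
            public double rateFor(String state) {
                return 0.05;
            }
        }

        public void testTotalIncludesTax() {
            TaxRates rates = new FlatTaxRates();
            double total = 100.00 * (1 + rates.rateFor("VA"));
            assertEquals(105.00, total, 0.001);
        }
    }

    // By contrast, a class named OrderMappingPersistenceTest (*PersistenceTest.java) would open
    // a real database connection to exercise entity bean mappings, and OrderFacadeContainerTest
    // (*ContainerTest.java) would run inside the application server via JUnitEE.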

In our development environments we could run all of the tests ending with UnitTest.java when saving a new component implementation. These tests would run fast, anywhere from 3 to 5 seconds for the entire project. The persistence and container unit tests were run on an individual basis in a team member’s environment, and the entire suite of these tests would be run by our Continuous Integration server each time we checked in code. These took a few minutes to run, and our build server had to be configured with an existing WebSphere application server instance and a DB2 relational database configured to work with the application.

In the “Psychology of Build Times” talk, Jeff Nielsen presented the maximum amount of time builds, unit tests, and integration tests should take on a project. If builds and tests take too long, then a team will be less likely to continue the discipline of TDD and Continuous Integration best practices. At Digital Focus, where Jeff worked, they had a unit test naming convention similar to the one I described above:

  • *Test.java – unit tests
  • *TestDB.java – database integration tests
  • *TestSRV.java – container integration tests

Another good source of information comes from Michael Feathers, who set out unit testing rules to help developers understand what unit tests are and are not. Here is his list of “a test is not a unit test if” (a small illustrative sketch follows the list):

  • It talks to the database
  • It communicates across the network
  • It touches the file system
  • It can’t run at the same time as any of your other unit tests
  • You have to do special things to your environment (such as editing config files) to run it
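
As a hedged illustration of these rules, the sketch below tests export behavior through an in-memory java.io.Writer rather than the file system, so the test stays fast and isolated. ReportExporterUnitTest and ReportExporter are hypothetical names invented for this example, not references to any real library.

    import java.io.IOException;
    import java.io.StringWriter;
    import java.io.Writer;
    import junit.framework.TestCase;

    // Writing through a Writer abstraction keeps the test entirely in memory while still
    // verifying the exported content, satisfying the "does not touch the file system" rule.
    public class ReportExporterUnitTest extends TestCase {

        // Minimal stand-in for a production exporter that would normally write a file.
        static class ReportExporter {
            void export(Writer out) throws IOException {
                out.write("id,amount\n");
                out.write("1,100.00\n");
            }
        }

        public void testExportStartsWithColumnHeader() throws IOException {
            StringWriter output = new StringWriter();   // in-memory, no file system access
            new ReportExporter().export(output);
            assertTrue(output.toString().startsWith("id,amount"));
        }
    }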

Automated acceptance tests also have effective structures based upon the tools that you are using to capture and run them. For those using Fit, I have found that structuring my tests into two categories, regression and iteration, supports near- and long-term development needs. The Fit tests which reside in the iteration directory can be run each time code is updated to meet acceptance criteria for functionality being developed in the iteration. The regression Fit tests are run in the Continuous Integration environment to give feedback to the team on any existing functionality which has broken with recent changes.
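
For a sense of what those Fit tests exercise, here is a minimal sketch of a column fixture that a table in either the iteration or regression directory could drive. DiscountFixture and its inlined calculation are invented for this example; a real fixture would delegate to production code.

    import fit.ColumnFixture;

    // Backs a Fit table whose header names this fixture class, with an "orderTotal" input
    // column and a "discount()" calculated column. The same fixture can serve a table in
    // the iteration directory during development and in the regression directory afterward.
    public class DiscountFixture extends ColumnFixture {

        public double orderTotal;   // bound to the table's input column

        public double discount() {  // bound to the table's calculated column
            // Inlined here to keep the sketch self-contained; real fixtures delegate to the application.
            return orderTotal >= 100.00 ? orderTotal * 0.10 : 0.00;
        }
    }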

If you are using StoryTestIQ, a best practices structure for your automated acceptance tests has been defined by Paul Dupuy, creator of the tool. That structure looks like the following:

  • Integration Tests Tag Suite
  • Iteration Tests
    • Iteration 1 Tag Suite
    • Iteration 2 Tag Suite
  • Story Tests
    • User Story 1 Suite
    • User Story 2 Suite
  • Test Utilities
    • Login
      • Login As Steve
      • Login As Bob
    • Shopping Cart

You might have noticed a couple of descriptions above: Tag Suite and Suite. A tag suite is a collection of all test cases which have been tagged with a particular tag. This is helpful because it allows for multiple views of the test cases for different environments such as development, QA, and Continuous Integration. A suite is usually used to collect a user story’s acceptance tests together. There are other acceptance testing tools, such as Selenium, Watir, and Canoo WebTest, each with their own best practices on structuring.

Teams can effectively use TDD and automated acceptance tests without accruing overhead as the implemented functionality grows larger. It takes high levels of discipline to arrange your tests into effective categories for daily development, feedback, and regression. Tests should be treated as first-class citizens along with the deliverable code. In order to do this they must be coded and refactored with care. The payoff for the effective use of automated unit, integration, and acceptance tests is tremendous in the quest for zero bugs and a flexible codebase. Happy coding.