The architecture of specification (testing) subsystems has always been critical to quality factors such as cycle time -
Industry estimates indicate that between 30 and 50 percent (or in some cases, even more) of the cost of developing well-engineered systems is taken up by testing. If the software architect can reduce this cost, the payoff is large. [Bass et al, 2013]
However, in many enterprise environments, we have seen longer cycle times associated with the use of large, manual test teams. Traditionally, many systems were verified by large, monolithic test phases, utilising tens to hundreds of personnel for each test release.
In digital environments, where we seek to adopt Continuous Delivery, we quickly begin to see the inadequacies of legacy system architectures, which have evolved to support the manual testing process. A low testability quality factor in underlying systems drives low levels of reliability and repeatability within digital pipelines.
Lack of automation know-how?
A great source for architects on automated testing and associated architectural patterns is [Meszaros, 2007].
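Meszaros catalogues test double patterns such as the Test Stub, which lets a component be verified in isolation from slow or unavailable collaborators. A minimal sketch of that pattern follows; the class names (`TariffService`, `BillingCalculator`) are illustrative assumptions, not from any real system:

```python
# Test Stub sketch after Meszaros: replace an awkward collaborator
# with a canned implementation so the unit can be tested in isolation.
# All names here are illustrative.

class TariffService:
    """Production collaborator; in reality this calls a remote system."""
    def rate_pence_for(self, customer_id):
        raise NotImplementedError("needs live infrastructure")

class BillingCalculator:
    def __init__(self, tariffs):
        self.tariffs = tariffs

    def bill_pence(self, customer_id, units):
        return self.tariffs.rate_pence_for(customer_id) * units

class StubTariffService(TariffService):
    """Test Stub: returns a canned rate, no network required."""
    def rate_pence_for(self, customer_id):
        return 15

calculator = BillingCalculator(StubTariffService())
assert calculator.bill_pence("any-customer", 100) == 1500
print("stubbed billing test passed")
```

Because the stub is injected through the constructor, the production wiring is untouched; this is the dependency-injection seam that makes a component testable at all.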
Reasons not to test
- "Poorly written test codebases often make systems harder to change"
- "We don't have the people to implement such sophisticated technical practices"
- "We prefer strategies with low technical barriers to entry (as developers are cheaper and easier to recruit)"
Organizations that have adopted these approaches have found themselves in a vicious cycle, struggling to attract and retain the best talent.
Lean and build quality in (Deming/Dodge)
In contrast, manual test approaches have never been favoured in XP circles, simply because late, manual testing is such a clear Deming fail -
Inspection does not improve the quality, nor guarantee quality. Inspection is too late. The quality, good or bad, is already in the product. As Harold F. Dodge said, “You can not inspect quality into a product.” [Deming, 1986]
If you rely on exhaustive manual testing, your quality strategy is late inspection rather than building quality in. This strategy often goes hand in hand with high numbers of system defects and patch releases.
Why go manual?
This type of approach was previously seen as ideal for several reasons -
- Release cadence was every few months - this suits mini-waterfall 'test phase' approaches
- System designers were incentivised on increments of capacity, not true ROI throughout system lifetime
- System architects viewed development resources as scarce and expensive, whereas tester resources were less costly and more freely available. They concluded that maximum project ROI resulted when "developers minimize their testing effort" and ship only functional code
- High project ROI means technical practices could be sub-optimal, yet still viewed as hugely successful by the business (as they have no view of total cost of ownership)
- High project ROI means throwaway architecture can become a de-facto norm. This has the effect of de-emphasising system quality factors such as testability, maintainability and changeability, though may indeed emphasise strong modularity (such that systems can be easily replaced over time)
Consequences of manual full system testing
- Late feedback and increased cost per defect
- Bugs in production, lots of defects in test
- Reliance on monolithic test data loads and 'one shot' test approaches
- Environmental downtime, costing millions of dollars per year
- Subsystems do not mature; consumers bear the cost of upstream defects; teams building subsystems do not develop consumer-driven design skills
- Poor levels of external testability are often synonymous with poor internal design
- Low adoption of technical practices means that automated testing does not 'snowball'
- Throwaway architecture emphasises only moderate code quality, as systems are not designed to be maintained and extended indefinitely; a cost focus leads management towards cheaper development resources, with the goal of maximising throughput by using larger teams
- Formation of large teams, leading to problems with co-location and communication
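The consumer-driven design skills mentioned above can be practised with consumer-driven contract tests: the consuming team expresses the fields and types it actually relies on as executable checks that the provider runs against its implementation. A minimal sketch, with a hypothetical in-process provider standing in for the upstream subsystem:

```python
# Minimal consumer-driven contract sketch. The provider team would run
# the consumer's expectations against its real implementation; here the
# "provider" is an in-process fake. All names are illustrative.

def provider_get_account(account_id):
    # Stand-in for the upstream subsystem's response.
    return {"id": account_id, "status": "ACTIVE", "balance_pence": 1250}

# The consumer's contract: only the fields and types it depends on.
CONSUMER_CONTRACT = {
    "id": str,
    "status": str,
    "balance_pence": int,
}

def verify_contract(response, contract):
    """Report which consumer expectations the provider no longer meets."""
    missing = [k for k in contract if k not in response]
    wrong_type = [k for k in contract
                  if k in response and not isinstance(response[k], contract[k])]
    return missing, wrong_type

missing, wrong_type = verify_contract(provider_get_account("acc-42"),
                                      CONSUMER_CONTRACT)
assert missing == [] and wrong_type == []
print("provider satisfies the consumer contract")
```

Tooling such as Pact automates this exchange, but the essential discipline is only this: the consumer publishes expectations, and the provider's pipeline fails when it breaks them.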
Quality Factors for Continuous Delivery
Today, Continuous Delivery quality factors sweep the manual test option away. In a CD context, testability quality factors demand test reliability and repeatability. In a digital context, testability quality factors go further, to include test-in-isolation game changers like mobile device cloud testability. Can your test architecture permit executable specification, minus customer data, to run in the cloud?
- Because 'testability' has never been a feature, service subsystem components can have low testability quality factor, limiting the degree to which we can test to our specification boundary; it is the role of the architect to request testability features in systems with which the team must integrate, and to collaborate with service providers to design acceptable solutions to address this problem
- Architecture should define clear SLAs around availability, stubbing and quality of test data; this should not be a contract, but an open, collaborative undertaking to continuously improve
- Architects should pair with developers and QAs to design CD pipelines and define acceptance criteria for the system
- Measure API availability in test; institute co-operation between teams to provide the most stable, consumer-focussed APIs
...for a system to meet its acceptance criteria to the satisfaction of all parties, it must be architected, designed and built to do so - no more and no less. [Rechtin, 1991]
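Rechtin's point can be made concrete: an acceptance criterion expressed as an executable specification, driven by synthetic rather than customer data, can run anywhere, including a cloud test grid. A hedged given/when/then sketch; the domain (`synthetic_customer`, `place_order`, credit limits) is an illustrative assumption:

```python
import random

# Executable specification sketch: given/when/then with synthetic data,
# so no customer records ever leave the building. The domain names are
# illustrative, not a real system.

def synthetic_customer(seed):
    """Generate a deterministic, entirely fictional customer record."""
    rng = random.Random(seed)
    return {"id": f"CUST-{rng.randint(1000, 9999)}",
            "credit_limit_pence": rng.choice([50_000, 100_000])}

def place_order(customer, amount_pence):
    """System under test: accept orders only within the credit limit."""
    return amount_pence <= customer["credit_limit_pence"]

def spec_order_within_credit_limit_is_accepted():
    # GIVEN a synthetic customer with a known credit limit
    customer = synthetic_customer(seed=1)
    # WHEN they place an order within that limit
    accepted = place_order(customer, customer["credit_limit_pence"] - 1)
    # THEN the order is accepted
    assert accepted

spec_order_within_credit_limit_is_accepted()
print("specification passed")
```

Seeded generation keeps the specification repeatable while guaranteeing that no production data is required, which is precisely what cloud-hosted pipelines demand.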
If teams don't know how to handle specification or "build quality in", then tactics are -
- Instigate coding and design dojos, to promote TDD, pair programming and knowledge sharing between architecture, design and QA teams; these simple practical sessions form the bedrock of collaboration and technical excellence
- Recruit QA engineers and give them shared responsibility, with developers, for the system's code quality
- When discussing testability with external service teams, architects should include developers and QAs in that discussion, as they will demand reliable, repeatable, mutual specification
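A dojo session of the kind described above usually starts from the smallest possible red/green loop. A sketch of one TDD step, using the FizzBuzz kata (a common dojo exercise, chosen here purely for illustration):

```python
# One red/green TDD step, as practised in a coding dojo (FizzBuzz kata).
# The assertions at the bottom were written first (red); the function
# is the minimal implementation that makes them pass (green).

def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# The "test first" assertions that drove the implementation:
assert fizzbuzz(3) == "Fizz"
assert fizzbuzz(5) == "Buzz"
assert fizzbuzz(15) == "FizzBuzz"
assert fizzbuzz(7) == "7"
print("all dojo assertions pass")
```

The kata itself is trivial by design; the point of the dojo is the shared rhythm of test, implement, refactor, practised across architecture, development and QA together.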
[Bass et al, 2013] Software Architecture in Practice (3rd Edition) (SEI Series in Software Engineering), Len Bass, Paul Clements, Rick Kazman, Addison-Wesley, 2013.
[Deming, 1986] Out of the Crisis, W. Edwards Deming, Massachusetts Institute of Technology, Center for Advanced Engineering Study, Cambridge, MA, 1986.
[Meszaros, 2007] xUnit Test Patterns - Refactoring Test Code, Gerard Meszaros, Addison-Wesley, 2007.