Test Strategy?
The question I’m asked most frequently by Test Managers is ‘what should I include in my Test Strategy?’.
There is no set answer: the deliverable will depend on the delivery process it is aiming to support and the type of organisation it needs to fit. Two factors are particularly important:
- How risk averse your organisation is. If your business's risk aversion is high, expect your strategy to be more prescriptive and to set higher standards.
- The delivery methodology you are using. A waterfall test strategy can be quite prescriptive. For Agile, on the other hand, it needs to be flexible and updated frequently. In an Agile environment it is also important to separate Test Plans from the Test Strategy, as these will iterate at different frequencies.
Below is a template for what you could include in your strategy. It is purposely non-prescriptive; instead it takes the format of a cookbook, from which you can extract the sections that are relevant to your organisation.
Each section includes:
- A Policy — what is the requirement for this testing activity?
- KPIs — how success will be measured.
- Approach — a description of how the testing activity is to be completed.
START
- Version Control
- Stakeholder Engagement and approvals
- Glossary — Definitions of standard testing terms used, aligned to industry best practices (e.g. ISO 29119)
Glossary of Terms
KPI
Key Performance Indicators — defined criteria for success
Test Strategy
A document outlining test processes and test governance.
Test Scenario
A collection of test cases that cover a particular functional area.
Test Case
A single test designed to execute a specific piece of functionality as defined by a Test Condition.
Test Condition
A brief description of what is being tested, written in the 'Given, When, Then' format.
Positive Test
Testing aimed at showing software works. Also known as “test to pass”.
Negative Test
Testing aimed at showing software does not work. Also known as “test to fail”.
Test Report
A document to summarise test results and provide recommendations for next steps.
Defect
An instance of non-conformance to requirements, acceptance criteria or functional / program specification.
Defect Report
A tracking ticket to capture a defect.
Test Tool
Computer programs used in the testing of a system, a component of the system, or its documentation.
Smoke Test
A ‘quick-and-dirty’ test that the high risk functions of a piece of software work. Should take no more than 1 hour to execute.
Regression Testing
Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Technical Testing
A generic term to describe all non-functional testing including Performance and Security Testing.
Test Bed
An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, and other application or system software. The Test Case for a change should enumerate the Test Bed(s) to be used.
Integration Testing
Testing that the functionality of a solution still works once merged into an existing component.
Alpha and Beta testing
Alpha testing — simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.
Beta testing — comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team.
- Contents Page
INTRODUCTION
- Purpose of Document — description of how the document should be used and who the intended audience is.
- Test Policy — reference to the Corporate Test Policy that the Strategy is an enactment of
- Scope / Types of Testing — definition of the types of testing and methodology relevant to the strategy. For example:
System Testing
The goal of System Testing is to test how well the component conforms to the published requirements. This is also known as Functional or Black Box testing and includes usability and accessibility testing.
Decisions on what to test will be based on path coverage to ensure all paths through the functionality and user stories are covered. Each path will be covered with positive and negative test conditions documented in the Test Cases or Stories.
Each Test Condition will be given a relative test priority rank according to business impact (risk) to help plan and prioritise test execution.
Test Scripts may be executed multiple times depending on the requirement for Regression testing following a change or defect. The delivery team will also perform time boxed exploratory testing to investigate the functionality.
Technical Testing
Technical Testing, also known as Non-Functional Testing, includes Performance (Efficiency), Security, Capacity and Recovery testing. This testing will be planned separately to System Testing to ensure sufficient detail is given to covering the non-functional requirements.
The most common technical requirements revolve around performance and security. The approach for Performance Testing is to use defined datasets to produce profile metrics against the system under test.
A Risk Based Security Testing approach will be taken towards security to target security weaknesses identified throughout the product delivery and attack the biggest risks first. External test teams with specialist expertise will be brought in where necessary to execute security penetration testing.
Capacity and Recovery testing will be considered on a project-by-project basis.
- Overview of Job Roles and Organisational Hierarchy — to define accountability and responsibility
The Testing Approach
This section provides an overview of the different types of testing performed by the Test Team by describing the approach, the policy (i.e. minimum requirement) and the KPIs that measure the achievement of that policy.
1.1 Team Structure
1.1.1 Policy
· The Test Team will be led by a Test Manager with a hierarchical reporting structure
· The team will follow an Agile delivery methodology consistent with Scrum
1.1.2 KPI
· All testers will be aligned to support at least one delivery team
· All testers will have a specified Line Manager who is not also their delivery manager
· All testers will have a specified role in System Testing or Technical Testing
1.1.3 Approach
The Test Team will be managed through a version of matrix management designed around delivery. The delivery teams are structured so that they can be self-organising and cross-functional to fulfil the requirements of the business. The horizontal dimension (the Test Team) is structured for sharing knowledge, tools, process improvements and personal development.
1.2 Team Resource
1.2.1 Policy
· The Test Team must have sufficient resource capacity to meet all project and business requirements including contingency resource to cover absences.
1.2.2 KPI
· No more than 2 team members per 6-member team with annual leave booked at the same time.
1.2.3 Approach
Where there is a pipeline of work, the team should be built up with permanent resource; for short-term changes in resource demand, contract personnel may be used.
All Test Team members will be ISEB qualified to at least Foundation level and have sufficient application, environment and test tool knowledge to allow them to perform testing to an acceptable standard.
A standard Test Analyst job specification is maintained on the intranet.
1.3 Team Appraisal Process
1.3.1 Policy
· All team members will have their goals reviewed annually with the aim of supporting professional and technical achievement.
1.3.2 KPI
· All team members complete Annual and Mid-Year Personal Development Review process
1.3.3 Approach
Individuals' goals will be a combination of personal targets and business and department objectives filtered down to the individual.
An annual appraisal occurs each April and will provide feedback to each team member on the following parameters:
- Goal achievement
- Future development/training
- Projects worked on
- 360 feedback
1.4 Retrospectives and Process Improvements
1.4.1 Policy
· The Test Team will always respond to feedback on testing processes and look to improve where necessary.
· The Test Team will hold internal retrospectives at least twice a year
1.4.2 KPI
· Outcomes will be reflected through improved test processes and documentation.
1.4.3 Approach
At the end of a release the Test Team will capture any lessons learned. This will be fed back into the SDLC to improve the development process.
1.5 Test Planning
1.5.1 Policy
· Test resource will be allocated to ensure that there is a constant flow of delivery.
· Where a resource requirement conflict occurs, prioritisation will take place.
1.5.2 KPI
- Sufficient resource
- Defined scope and timelines
- Software and document version control
- Stakeholders communicated with effectively
- Test conditions traced clearly to requirements
- Test Bed and Test Tools available
1.5.3 Approach
Testing effort where possible will be aligned to release timescales. Where this is not possible or there is a conflict for testing resource the decision will be escalated to the relevant Product Owners to decide how resource should be allocated.
When considering the priority for usage of testing resource and the schedule of testing the following factors should be considered:
- Release timelines
- Business impact of changes
- Tester skills set
- Resource availability
- Test support availability
- Likelihood of delay
- Planned 80% efficiency of all resource.
1.6 Test Estimation
1.6.1 Policy
· Estimates for testing effort will not be provided as standard practice where there is a continuous workflow of stories prioritised according to business need.
· Test estimates will be provided where required for external billing.
1.6.2 KPI
· Minimising cycle time.
Cycle time is the amount of time it takes for a unit of work to travel through the team's workflow, from the moment work starts to the moment it ships. By optimising cycle time, the team can confidently forecast the delivery of future work.
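As an illustration, here is a minimal sketch of the cycle time calculation, assuming each work item records when work started and when it shipped (the field names and dates are illustrative):

```python
from datetime import date
from statistics import mean

# Illustrative work items with start and ship dates
items = [
    {"story": "STORY-1", "started": date(2017, 3, 1), "shipped": date(2017, 3, 6)},
    {"story": "STORY-2", "started": date(2017, 3, 2), "shipped": date(2017, 3, 10)},
]

def cycle_time_days(item: dict) -> int:
    """Days from the moment work started to the moment it shipped."""
    return (item["shipped"] - item["started"]).days

# Mean cycle time across the sample: (5 + 8) / 2 = 6.5 days
print(mean(cycle_time_days(item) for item in items))
```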
1.6.3 Approach
Any estimates will be in Points and based on:
- Functional Points within the Work Request
- The Likelihood that the functionality will change (Story Risk Factor)
- Test Scripting and Execution effort required
- Relative development budget
Points are equal to the total Ideal Man Days estimated, multiplied by a 70% productivity factor.
Note: an Ideal Man Day is defined by asking: "If I were locked in a room with no phone or other disturbances and a perfect test setup, after how many days would I have this testing completed?"
All estimates will be exclusive of the effort required for retesting of client changes and defects and additional regression testing incurred from project changes.
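A minimal sketch of the points calculation follows. The 70% productivity factor and the list of inputs come from this strategy; exactly how the inputs are combined into Ideal Man Days is illustrative:

```python
def estimate_points(functional_points: int, story_risk_factor: float,
                    scripting_days: float, execution_days: float) -> float:
    """Points = total Ideal Man Days x 70% productivity, per the approach above.

    story_risk_factor > 1.0 inflates the estimate for stories likely to change.
    """
    ideal_man_days = (functional_points * story_risk_factor
                      + scripting_days + execution_days)
    return round(ideal_man_days * 0.7, 1)

# Example: 5 functional points with a 1.2 risk factor,
# 2 days scripting and 3 days execution -> 11 Ideal Man Days -> 7.7 points
print(estimate_points(5, 1.2, 2, 3))
```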
1.7 Test Design / Scripting
1.7.1 Policy
· All test cases and scenarios will be written to a consistently high standard, compliant with industry best practices
· All tests will consider both positive and negative tests for a given functionality
1.7.2 KPIs
· Evidence of knowledge transfers and peer reviews
· Existence of tickets and defects raised
1.7.3 Approach
All testing up to system test level, including integration and user testing, will be in Agile BDD format (Given, When, Then). All testing below system level, including Unit Testing, will be structured for TDD. This is to facilitate automated testing.
[Figure: Test Pyramid, with the volume of testing at each level indicated by the width of its segment.]
1.8 Automation
1.8.1 Policy
· There will be an automated regression test suite
· The automated regression test suite will be run at build time as part of integration
1.8.2 KPI
· 80% path coverage with automated integration tests.
1.8.3 Approach
Tests will be automated from Gherkin-style Feature Files using automation tools such as Cucumber or SpecFlow.
Where possible Test Driven Development will be used, with creation of automated tests following the story refinement stage. Where this is not possible tests will be automated following manual execution.
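As a sketch of what this looks like in practice, here is a Gherkin scenario with step definitions using pytest-bdd (Cucumber and SpecFlow are analogous; the feature, step wording and file path are all illustrative):

```python
# features/withdraw_cash.feature would contain:
#
#   Feature: Cash withdrawal
#     Scenario: Withdraw within balance
#       Given an account with a balance of 100
#       When the user withdraws 30
#       Then the remaining balance is 70

from pytest_bdd import scenario, given, when, then, parsers

@scenario("features/withdraw_cash.feature", "Withdraw within balance")
def test_withdraw_within_balance():
    pass

@given(parsers.parse("an account with a balance of {balance:d}"),
       target_fixture="account")
def account(balance):
    return {"balance": balance}

@when(parsers.parse("the user withdraws {amount:d}"))
def withdraw(account, amount):
    account["balance"] -= amount

@then(parsers.parse("the remaining balance is {expected:d}"))
def check_balance(account, expected):
    assert account["balance"] == expected
```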
1.9 Regression Testing
1.9.1 Policy
· A Regression Test Suite will exist for execution during pre-production and disaster recovery testing
1.9.2 KPI
· The Regression Test Suite is updated at least annually
1.9.3 Approach
A subset of the full test suite will be created, aligned to the prioritisation of identified risks to the application.
1.10 Boundary Testing vs use of Live Data
1.10.1 Policy
· Test cases will be designed to use the appropriate test data that both fits the user requirement and validates the integrity of the product under test.
· Any sensitive live data will be sanitised before being copied to non-production environments.
1.10.2 KPI
· No confidential user data exists in non-production environments
1.10.3 Approach
Data Sampling
The sample of test data used will depend on the test techniques employed. Boundary Value Analysis and Equivalence partitioning each have their own strategies for data selection. Where a wider data population is available, simple random sampling will be used to create the data set. The data set will be designed by the use of sampling tools appropriate to the data types under test.
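A minimal sketch of these selection techniques, using an illustrative field that accepts values from 18 to 65:

```python
import random

def boundary_values(lo: int, hi: int) -> list:
    """Boundary Value Analysis: values at, just inside and just outside each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_classes(lo: int, hi: int) -> dict:
    """Equivalence Partitioning: one representative value per partition."""
    return {"below range": lo - 10, "in range": (lo + hi) // 2, "above range": hi + 10}

def random_sample(population: list, size: int, seed: int = 42) -> list:
    """Simple random sampling from a wider data population (seeded for repeatability)."""
    return random.Random(seed).sample(population, size)

print(boundary_values(18, 65))      # [17, 18, 19, 64, 65, 66]
print(equivalence_classes(18, 65))  # {'below range': 8, 'in range': 41, 'above range': 75}
print(random_sample(list(range(1000)), 5))
```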
Use of Data Sets
For non-functional and database testing data sets will be clearly defined and used repeatedly. In these circumstances a process of backup and restore may be applied to the database.
Data Sanitisation
Sanitisation is the process of removing sensitive information from a document or other medium, so that it may be distributed to a broader audience. No test data should be migrated into the Live environment and no client-sensitive test data should be accessible to unauthorised members of the development team.
The method used for anonymisation of the data set will be to mask the data in the database and where necessary encrypt the database to secure the data.
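A minimal masking sketch, assuming a simple record layout (real sanitisation must follow your data protection policy and run before any copy leaves production). Hashing deterministically keeps masked values consistent across tables, which preserves referential integrity:

```python
import hashlib

SENSITIVE_FIELDS = {"name", "email", "phone"}

def mask_record(record: dict) -> dict:
    """Replace sensitive values with a deterministic, irreversible token."""
    masked = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS and value:
            digest = hashlib.sha256(str(value).encode()).hexdigest()[:8]
            masked[field] = f"MASKED-{digest}"
        else:
            masked[field] = value
    return masked

print(mask_record({"id": 101, "name": "Jane Doe", "email": "jane@example.com"}))
```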
1.11 User Testing / Acceptance Testing and Business Verification
1.11.1 Policy
· Users and Product Owners will be involved in story refinement and acceptance of story completion
1.11.2 KPI
· No story is considered ‘Done’ until accepted by the Product Owner
1.11.3 Approach
Following integration testing, users and product owners will be invited to complete User Acceptance Testing; this may be through access to a non-production environment or through a product demo.
Also see Alpha and Beta Phases
1.12 Alpha and Beta Phases
1.12.1 Policy
· Alpha and Beta phases of delivery will be completed
1.12.2 KPI
· Completion of Phase Assessments
1.12.3 Approach
Alpha Testing
Testing by a controlled user group within the business so that feedback and defects can be collected from the group.
Beta Testing
The application is run in parallel to the legacy application in order to be tested in a production environment. This is external public user acceptance testing.
A private beta phase may be included prior to public testing to sample a controlled user group.
1.13 Live Testing
1.13.1 Policy
· Live tests will be planned and executed in coordination with business users using production environments.
1.13.2 KPIs
· Zero major incidents (Service Desk Severity 1 or 2) attributed to the execution of Live Testing following root cause analysis
1.13.3 Approach
Best Practice:
The business will observe the following behaviours when executing Live Testing:
- Sanitise all test data to maintain client confidentiality.
- Mark all test data clearly as Test Data.
- Clear down test data after test completion.
- Clear Internet Histories and Cookies from external computers.
- Where appropriate, gain approval from the client or business management before executing tests from an external computer.
- Plan testing to use the minimum access required to the Live environment.
- Plan non-functional testing so as not to impact production users, i.e. run it outside business hours.
Prohibited Actions:
The business will refrain from the following practices when executing Live Testing:
- Sharing test data or test processes with third parties.
- Using test data for personal or professional gain.
- Using inappropriate or illegal test data.
- Leaving access to test data or live environments unsecured.
- Removing Client or Live data.
1.14 Defect Process and Severity Definitions
1.14.1 Policy
· A Ticket will be created with appropriate severity for any defects identified in the pre-production environment
· A release will not be recommended for a Go decision with any severity 1 or 2 defects unresolved
1.14.2 KPI
· Zero unresolved severity 1 or 2 defects prior to Go Live
1.14.3 Approach
Defect Severity Definitions:
One
Interruption to business critical functionality that is causing severe impact to, or inaccessibility of, the service
Non-exhaustive examples:
- Total site outage.
- 70% or greater server failures
Two
Interruption to business critical functionality that is causing disruption to the availability of the service or degrading the performance of the service provided
Non-exhaustive examples:
- Part Product inaccessible.
- Outage to a Service.
Three
Interruption to a service that is causing some operational impact but little or no impact to business critical services, where no workaround is available
Non-exhaustive examples:
- Technical errors being received on multiple servers (between 6 and 15).
- Large or full sections of the CMS not responding, while the rest of the CMS remains functional.
Four
Non-critical functionality that is having a personal/small scale impact, where workarounds or repair are possible. No impact to operational or business critical services.
Non-exhaustive examples:
- Access request to edit content.
- Challenging the rejection or publication of a comment.
Five
Non-critical functionality that is having no impact on any services.
Non-exhaustive examples:
- General feedback.
- Suggestion for improvements to the product.
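Tying the severity definitions back to the policy above, a minimal sketch of the release gate might look like this (the ticket structure is illustrative):

```python
BLOCKING_SEVERITIES = {1, 2}  # per the policy: no Go recommendation with these unresolved

def release_recommendation(open_defects: list) -> str:
    """Recommend Go only when no severity 1 or 2 defects remain unresolved."""
    blockers = [d for d in open_defects if d["severity"] in BLOCKING_SEVERITIES]
    if blockers:
        return "No-Go: unresolved " + ", ".join(d["id"] for d in blockers)
    return "Go"

print(release_recommendation([{"id": "DEF-12", "severity": 2},
                              {"id": "DEF-13", "severity": 4}]))  # No-Go: unresolved DEF-12
print(release_recommendation([{"id": "DEF-13", "severity": 4}]))  # Go
```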
1.15 Testing Suspension and Resumption
1.15.1 Policy
· Testing will be suspended and resumed on a product or story in accordance with standardised criteria (see below)
1.15.2 KPI
· Number and severity of defects unresolved
1.15.3 Approach
Suspension and Resumption Criteria
Project testing will be suspended when one of the following criteria is met:
- 1 Critical defect is found.
- At least 2 major severity defects have been raised.
- All functional and non-functional tests have been executed, but blocked tests are still outstanding.
- An application release is due and no time is available to test the current version.
- The Test Bed is unavailable or has become invalid.
- Test support is unavailable and a critical issue requiring support is open.
Project testing will be resumed following suspension when all the following criteria are met:
- All critical blocking defects or issues are resolved and a fix has been implemented.
- Sufficient resource is available to test the current release.
- The Test Bed is fully available.
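A sketch of the suspension check, covering a subset of the criteria above (the flag names are illustrative):

```python
def should_suspend(critical_defects: int, major_defects: int,
                   test_bed_available: bool, support_available: bool,
                   critical_issue_open: bool) -> bool:
    """True if any of the suspension criteria above are met."""
    return (critical_defects >= 1
            or major_defects >= 2
            or not test_bed_available
            or (not support_available and critical_issue_open))

print(should_suspend(0, 1, True, True, False))  # False: keep testing
print(should_suspend(0, 2, True, True, False))  # True: two major defects raised
```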
1.16 Release Recommendations
1.16.1 Policy
· The Delivery team will make a recommendation to the Test Manager, Release Manager and Change Board based on the empirical results from test execution.
1.16.2 KPI
· A recommendation is delivered either in writing or verbally for each planned release
1.16.3 Approach
The final Production Release criteria will be determined by the Release Manager for each release. It is anticipated that the Test Team will feed into this process with the following information:
- Testing recommendation.
- Test pass rate.
- Outstanding defect information.
Release Status Report
At the end of the Release a Status Report will be produced to document the feature-complete user stories and incident tickets ready for release.
1.17 Escalation Procedure
1.17.1 Policy
· In the event that an issue regarding the Test Team is not satisfactorily resolved, there should be a clear and effective escalation process
1.17.2 KPI
· All escalated issues resolved in a timely manner and to the satisfaction of the affected user
1.17.3 Approach
In the event that a defect or resource issue cannot be resolved within the project team, it should be escalated through the following levels:
1) Test Analyst
2) Test Manager
3) Head of Technology
4) Senior Management Team
5) Delivery Director
6) Executive Management Team
1.18 Non-Production Environments
1.18.1 Policy
· The Test Bed will evolve to meet project requirements.
· Non-Production environments will enable continuous integration and continuous delivery
1.18.2 KPI
· During working hours there is a non-production environment available which contains features to be tested
1.18.3 Approach
The resources detailed below provide an underlying basis that will be augmented by project specific test plans.
1.19 Device Testing
1.19.1 Policy
· Delivery will follow a mobile-first approach to mirror the user experience
1.19.2 KPI
· Functional requirements should be tested against at least one mobile device before acceptance
1.19.3 Approach
Mobile Telephones, Smart Phones and Tablets are becoming common interfaces to web content. Testing on these platforms will be prioritised according to client requirements and handset usage metrics. Testing methods will be device specific but will commonly involve the use of emulators.
Priority should be given to testing with the Operating Systems and Browsers that are most commonly used.
1.20 Performance Testing
1.20.1 Policy
· There will be no significant degradation in the performance of functionality as a result of a change
· Acceptance of new products includes the attainment of non-functional requirements.
1.20.2 KPI
· Significant degradation is defined as a more than 5% performance drop
1.20.3 Approach
As defined in the Performance Test Strategy
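A minimal sketch of the 5% KPI expressed as a regression gate (the baseline and current measurements are illustrative):

```python
def significant_degradation(baseline_ms: float, current_ms: float,
                            threshold: float = 0.05) -> bool:
    """True when the current measurement is more than 5% slower than the baseline."""
    return (current_ms - baseline_ms) / baseline_ms > threshold

print(significant_degradation(200.0, 208.0))  # False: 4% slower, within tolerance
print(significant_degradation(200.0, 215.0))  # True: 7.5% slower, fails the KPI
```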
1.21 Concurrency Testing
1.21.1 Policy
· Concurrency testing will be planned and resourced according to user requirements
1.21.2 KPI
· Number of concurrent users needed to validate functional behaviour
1.21.3 Approach
Concurrency testing is multi-user testing geared towards determining the effects of accessing the same application code, module or database records. This identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
Where concurrency testing is required, resource and test execution will need to be planned in detail on a project-by-project basis.
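As a toy illustration of the lost-update problem concurrency testing looks for, several threads performing an unguarded read-modify-write on shared state can drop updates (results vary by run and runtime):

```python
import threading

counter = {"value": 0}

def bump(iterations: int) -> None:
    for _ in range(iterations):
        counter["value"] += 1  # read-modify-write: not atomic, so updates can be lost

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Without a lock this may print less than the expected 400000
print(counter["value"])
```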
1.22 Security Testing / Penetration Testing
1.22.1 Policy
· Security Testing will be aligned to the ISO 27001 standard
1.22.2 KPI
· All Severity 1 and 2 Security related defects are resolved within agreed timescales
1.22.3 Approach
As defined by Information Security Processes.
1.23 Disaster Recovery / Business Continuity Testing
1.23.1 Policy
· Disaster recovery and business continuity testing will be aligned to the ISO 27001 standard
1.23.2 KPI
· All Severity 1 and 2 disaster recovery and business continuity related defects are resolved within agreed timescales
1.23.3 Approach
As defined by the Infrastructure Plan
1.24 Web Accessibility Testing
1.24.1 Policy
· The Business should conform to WCAG 2.0 Level AA.
1.24.2 KPI
· Number of accessibility criteria met
1.24.3 Approach
The Web Content Accessibility Guidelines (WCAG) define three conformance levels (A, AA and AAA) for making web content more accessible to people with disabilities. Products should conform to WCAG 2.0 Level AA. The Test Team will use a checklist to confirm each site's conformance.
WAVE
WAVE (http://wave.webaim.org/) is an automated online tool for smoke testing web content for accessibility issues. The Test Team will utilise this tool during smoke testing.
Formal WCAG Level AA certification will be provided by a third party.
1.25 Social Media
1.25.1 Policy
· Use of Social Media accounts will be strictly controlled
1.25.2 KPI
· A list of authorised testing social media accounts will be maintained by the test team
1.25.3 Approach
A controlled set of social media accounts will be available for use by the delivery teams. Usage will be monitored and test data will be cleared down.
APPENDIX
Useful links
http://www.softwaretestingstandard.org
http://www.testingstandards.co.uk/
http://www.aptest.com/glossary.html
http://www.softwaretestingclub.com/
http://www.softwareqatest.com/
http://www.opensourcetesting.org/
http://www.stickyminds.com//index.asp
http://www.w3schools.com/default.asp
Test Page Examples
Java: http://www.java.com/en/download/help/testvm.xml
Flash: http://www.adobe.com/software/flash/about/
Quick Time: http://trailers.apple.com/
Acrobat (pdf): http://finaid.georgetown.edu/sample.pdf
Real Media: http://www17.real.com/realplayer/test/
Media Player: http://www.vdat.com/techsupport/windowstest.asp
Contact Details
The Test Team can be contacted via email at:
I hope you found this example useful. If so, or if you have any additions, please let me know in the comments.