This repository was archived by the owner on May 17, 2023. It is now read-only.

Test Strategy

mkolisnyk edited this page Oct 1, 2013 · 4 revisions

Introduction

This document describes the general approach to testing for the Sirius Test Automation Platform. It is the baseline document defining what, where, when, and how testing should be performed during current project development.

Scope

The testing covers all Sirius components including:

- Server modules
- All client modules

Testing Types

By interaction with the application under test

Static

Static testing is done using dedicated static code analysis tools (see appropriate table in the Toolset section).
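For the Java modules, static checks can be wired directly into the Maven build so that violations fail the build. A minimal sketch of a `maven-checkstyle-plugin` configuration (the plugin version and rule-set file name are assumptions, not taken from the Sirius build):

```xml
<!-- Hypothetical pom.xml fragment: run CheckStyle during the verify phase -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <version>2.11</version> <!-- assumed version -->
  <configuration>
    <configLocation>checkstyle.xml</configLocation> <!-- assumed rule-set file -->
    <failOnViolation>true</failOnViolation>
  </configuration>
  <executions>
    <execution>
      <phase>verify</phase>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

An equivalent gate can be configured for StyleCop under MSBuild and Rubocop under Rake.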

Major metrics to collect

| Metric | Definition |
| --- | --- |
| Code style violations | The number of violations of naming, formatting, and other conventions designed to keep the code formatted uniformly |
| Code analysis violations | The number of redundant, empty, useless, or potentially dangerous code constructions |
| Cyclomatic Complexity Number | Also known as the McCabe metric; represents the number of potential flows each specific method may have |
| Non Commenting Source Statements | Represents the number of statements within the method |
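As an illustration of how the Cyclomatic Complexity Number is counted (one base path plus one per decision point), consider this hypothetical method, which is not from the Sirius code base:

```java
// Illustrative example for counting cyclomatic complexity.
public class ComplexityExample {
    // CCN = 1 (base path) + 1 (for) + 1 (if) + 1 (&&) = 4
    public static int countPositiveEven(int[] values) {
        int count = 0;
        for (int v : values) {           // +1 decision point
            if (v > 0 && v % 2 == 0) {   // +1 for the if, +1 for the && operand
                count++;
            }
        }
        return count;
    }
}
```

Tools such as JavaNCSS report this number per method, which is what the thresholds below are compared against.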

Accepted measures

| Metric | Accepted Count | Exceptions |
| --- | --- | --- |
| Code style violations | 0 | Service client generated code |
| Code analysis violations | 0 | Service client generated code |
| Cyclomatic Complexity Number | 10 per method | Service client generated code |
| Non Commenting Source Statements | 100 per method | Service client generated code |

Dynamic

By abstraction from internal implementation details

White-box

Black-box

By the novelty of changes

New feature testing

Regression testing

By requirement type

Installation

Configuration

Configuration testing should be applied to each module separately as well as to the entire system. It should be performed on all supported environment instances. The major varying parameters here are the runtime environment versions:

| Technology | Version | Components |
| --- | --- | --- |
| Java | 1.6, 1.7 | All Java clients, all server modules |
| .NET | 2.0, 3.5, 4.0 | All C# clients, C# Win32 Server |
| Ruby | 1.9 | All Ruby clients |
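When automating configuration runs, the active runtime can be detected programmatically and checked against the supported list above. A small Java sketch (the `java.version` system property is standard; the class and method names are invented for illustration):

```java
// Checks whether the current JVM version is one of the supported ones.
public class RuntimeCheck {
    // Supported major versions, mirroring the configuration table.
    private static final String[] SUPPORTED = {"1.6", "1.7"};

    public static boolean isSupported(String javaVersion) {
        for (String prefix : SUPPORTED) {
            if (javaVersion.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // java.version looks like "1.7.0_45" on the JVMs listed above.
        System.out.println(isSupported(System.getProperty("java.version")));
    }
}
```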

Additionally, Web modules should be tested against the following browsers:

| Browser | Versions |
| --- | --- |
| IE | 9.0 |
| Firefox | N/A |
| Chrome | N/A |
| Opera | N/A |

Security

Functional

Performance

Volume

Stability

Reliability

This group of tests verifies that the system keeps working in case of failures, so the major focus is on negative cases. Special attention should be paid to methods that process large volumes of data and may potentially cause OutOfMemory exceptions.
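One common mitigation such tests can target is processing data in bounded chunks instead of materializing it whole. A hedged Java sketch (the chunk size and all names are illustrative, not from the Sirius code base):

```java
import java.util.Arrays;

// Illustrative streaming-style processing: at most CHUNK_SIZE elements are
// copied at a time, so large inputs are never duplicated in one allocation.
public class ChunkedProcessor {
    static final int CHUNK_SIZE = 1024;

    public static long sumInChunks(int[] data) {
        long total = 0;
        for (int start = 0; start < data.length; start += CHUNK_SIZE) {
            int end = Math.min(start + CHUNK_SIZE, data.length);
            int[] chunk = Arrays.copyOfRange(data, start, end);
            for (int v : chunk) {
                total += v;
            }
        }
        return total;
    }
}
```

Reliability tests would then feed such methods inputs near and beyond the expected maximum volume and assert that no OutOfMemory condition occurs.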

Usability

Testing Stages

Component Testing

Integration Testing

System Testing

Acceptance Testing

Tool Set

General technology stack

| Module | Technology | IDE | Build Engine | Code Analysis | Test Engine |
| --- | --- | --- | --- | --- | --- |
| Server | Java, .NET | Eclipse, Visual Studio | Maven, MSBuild | CheckStyle, StyleCop | JUnit, JBehave, NUnit, SpecFlow |
| Java Client | Java | Eclipse | Maven | CheckStyle | JUnit, JBehave |
| Ruby Client | Ruby | Eclipse | Rake | Rubocop | Cucumber |
| C# Client | .NET | Visual Studio | MSBuild | StyleCop | NUnit, SpecFlow |

Static analysis toolset

| Check Type/Language | Java | C# | Ruby |
| --- | --- | --- | --- |
| Code style correspondence | CheckStyle | StyleCop | Rubocop |
| Code analysis | PMD | FXCop | |
| Cyclomatic Complexity | JavaNCSS | | |

Major engines

| Engine Type/Language | Java | C# | Ruby |
| --- | --- | --- | --- |
| Build Engine | Maven | MSBuild | Rake |
| Core test engine | JUnit | NUnit | Test::Unit |
| BDD Engine | Cucumber-JVM | SpecFlow | Cucumber |
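All three BDD engines consume the same Gherkin syntax, so feature files can in principle be shared across the Java, C#, and Ruby clients. A hypothetical feature (scenario and step names are invented for illustration):

```gherkin
# Hypothetical feature file, consumable by Cucumber-JVM, SpecFlow, or Cucumber.
Feature: Client connects to the Sirius server

  Scenario: Successful connection
    Given the Sirius server is running
    When the client sends a connect request
    Then the client receives a confirmation response
```

Only the step definitions binding these phrases to code would differ per language.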

Feature Life-cycle

Metrics

| Metric | Target | Exceptions |
| --- | --- | --- |
| Code style violations | 0 | Service client generated code |
| Code analysis violations | 0 | Service client generated code |
| Cyclomatic Complexity Number | 10 per method | Service client generated code |
| Non Commenting Source Statements | 100 per method | Service client generated code |
| Code Coverage | 50% | |
| Conditions coverage | 50% | |
| Features coverage | 100% | |
| The number of failed tests | 0 | Edge cases, Experimental code |

Acceptance Criteria

| Metric | Accepted Count | Exceptions |
| --- | --- | --- |
| Code style violations | 0 | Service client generated code |
| Code analysis violations | 0 | Service client generated code |
| Cyclomatic Complexity Number | 10 per method | Service client generated code |
| Non Commenting Source Statements | 100 per method | Service client generated code |
| Code Coverage | 50% | |
| Conditions coverage | 50% | |
| Features coverage | 100% | |
| The number of failed builds | 0 | |
| The number of failed tests | 0 | Edge cases, Experimental code |
