Tuesday, 26 June 2012

7 measures for effective and efficient Test Case Design

Having worked in the ICT sector for a decade now, in a professional Software Testing role, I must say that I have seen lots of variations of what a test case can look like, so you can imagine how much their effectiveness and efficiency varied :-).

Let's start with the very basics: what is a Test Case?


My attempt at a definition: "A test case is a set of defined, structured steps which can be followed by an <<actor>> in order to compare the actual behaviour observed on the software under test against the expected one".


The term <<actor>> is used on purpose, in order to show the two possible types of execution that can happen in testing at any time: manual execution of a test case by a human, and automatic execution through the use of an automation tool.

By making this distinction on the <<actor>>, it becomes obvious that test cases can be designed only for manual execution, only for use with an automation tool, or for both. A small sketch of the two modes follows.
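To make the <<actor>> distinction concrete, here is a deliberately small Python sketch (all names and step texts are invented for illustration): the same structured steps can be printed for a human to follow, or run by an automation driver.

    # One structured step: an instruction plus the behaviour expected afterwards.
    STEPS = [
        ("Open the application", "Main screen is shown"),
        ("Press 'Save'",         "A confirmation message appears"),
    ]

    def manual_actor(steps):
        # A human <<actor>> follows printed instructions and compares by eye.
        for action, expected in steps:
            print(f"DO: {action}\n   EXPECT: {expected}")

    def automated_actor(steps, drive):
        # An automation tool performs each action and compares programmatically.
        for action, expected in steps:
            assert drive(action) == expected

    manual_actor(STEPS)
    automated_actor(STEPS, drive=lambda action: dict(STEPS)[action])  # simulated driver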

Software, or an application as most of us imagine it, is either a graphical program installed on the OS of our personal computer, or perhaps a smaller application for our mobile phone or iPad.

These are not all: there are software systems and applications of great complexity and diversity in their functionality, with many features available for the end user's experience.

So a single test case cannot handle all the functionality and all the features provided in an application. A test case covers, or better said should cover, as few requirements as possible, in order to achieve the finest possible stratification and uniqueness in coverage.

It therefore takes a collection of many test cases to verify the correct and expected behaviour/results of the software application under test.

Let's focus now on a single test case and see how it can be made effective and efficient.


1) A test case should always have a requirement REFERENCE. 
The goal of a test case is to verify the correct implementation of a requirement element and to discover any potential misinterpretations in the implementation. A requirement element is usually designated by a unique ID, so using this unique identity in a test case is essential, in order to reference the requirement and justify the test case's reason for existence.
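In automated suites this traceability can be captured directly in the test code. A minimal sketch, assuming pytest; the "requirement" marker name and the requirement ID are a project convention invented here, not a built-in pytest feature:

    import pytest

    # "requirement" is a custom marker used as a traceability convention;
    # register it in pytest.ini to avoid warnings. REQ-042 is hypothetical.
    @pytest.mark.requirement("REQ-042")
    def test_password_reset_sends_email():
        actual = "email sent"      # stand-in for the observed behaviour
        expected = "email sent"    # what requirement REQ-042 states
        assert actual == expected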


2) A short but descriptive TITLE should be given to a test case. 
Usually this can be a re-phrasing of the requirement definition for which the test case is designed. As an example, if the requirement element says: "The system 'X' shall be 'y' and should be doing 'z'", the title can then become: "Test case to verify that the system 'X' is 'y' and is doing 'z'".


You will notice from the above that a requirement definition states, in the future tense, what should be implemented and what is expected, while a test case states, in the present tense, what it expects to find once the implementation has been completed.


3) A short DESCRIPTION of what the test case is about
It is good to explain, in no more than 2-3 sentences, what the scope of the test case is and what is expected after its execution.


4) A section to specify any CONDITIONS prior to the test case execution.
It is common for test cases to include, in a section called pre-conditions, any initial state that must apply to the system/software application under test before the execution of the described steps begins. There are various reasons for this: compulsory business constraints, or saving time and effort by not repeating the same actions across the execution of several test cases.
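In automated execution, pre-conditions map naturally onto setup fixtures that establish the required state once, instead of repeating the same actions in every test case. A self-contained pytest sketch (the User class is a stand-in invented for the example):

    import pytest

    class User:
        """Minimal stand-in for the system under test (illustration only)."""
        def __init__(self, name):
            self.name = name
            self.logged_in = False

        def log_in(self):
            self.logged_in = True

    @pytest.fixture
    def logged_in_user():
        # Pre-condition: the steps assume an already logged-in user,
        # so the fixture establishes that state before each test runs.
        user = User("alice")
        user.log_in()
        return user

    def test_session_is_active(logged_in_user):
        assert logged_in_user.logged_in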


5) The EXECUTION STEPS listed in logical and sequential order
The listed steps are the main content of a test case. Composing a test case requires the "technique" of instructing an <<actor>> to follow steps that verify an expected behaviour of the system/software application under test, in such a way as to increase the probability of finding any potential issue. (A small sketch covering this measure and the next one follows measure 6.)


6) EXPECTED RESULTS listed in sequence, in accordance with the execution steps followed.
The detailed expected behaviour/results are listed for each execution step followed. In case an issue/problem is discovered, the report can then pinpoint exactly the step(s) where the problem occurred.
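Measures 5 and 6 go hand in hand: keeping each execution step paired with its expected result lets a failure report name the exact step that deviated. A small sketch in plain Python (step texts and the simulated outcomes are invented):

    # Each execution step is paired one-to-one with its expected result.
    STEPS = [
        ("Open the login page",      "Login form is displayed"),
        ("Submit valid credentials", "User lands on the dashboard"),
        ("Click 'Log out'",          "Login form is displayed again"),
    ]

    def perform(action):
        # Stand-in for the <<actor>>; a real suite would drive the
        # application here and return the observed behaviour.
        return dict(STEPS)[action]

    for number, (action, expected) in enumerate(STEPS, start=1):
        actual = perform(action)
        assert actual == expected, f"Deviation at step {number}: {action!r}"
    print("All steps matched their expected results")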


7) A section to specify the CONDITIONS after the test case execution.
When all execution steps have been completed, the test case might have affected the state of the system/software application. The output of one test case might be the input for another, so the starting point of each test case should be known, whether in manual or automated execution mode.
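In automated execution, post-conditions correspond to teardown. A pytest fixture with a yield can restore a known state after the steps have run, so the next test case starts from a known point. A self-contained sketch (the shared STATE dict is a stand-in for persistent application state):

    import pytest

    STATE = {"records": 0}  # stand-in for persistent application state

    @pytest.fixture
    def clean_records():
        STATE["records"] = 0   # pre-condition: a known starting point
        yield STATE
        STATE["records"] = 0   # post-condition: undo whatever the test did

    def test_adding_a_record(clean_records):
        clean_records["records"] += 1
        assert clean_records["records"] == 1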


Let's consider now that the design and preparation phase for a test case has been completed, and that the test case is executed on the system/software application it was originally designed for.


After its execution, the result of a test case can mainly be one of two things:

  • "PASS", when a positive comparison has been noticed between the expected behaviour/results and the actual behaviour/results.
  • "FAIL", when a negative comparison has been noticed between the expected behaviour/results and the actual behaviour/results.

(There can be situations where a test case cannot be executed at all; in that condition the test case result is marked as "BLOCKED". Let's leave this situation aside, as it is not related to the effectiveness and efficiency of a test case; I would rather say it is related to the quality of the implementation of the system/software application under test. That could be a future post of its own.)
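Reduced to a toy sketch, the verdict logic is a simple comparison; the "BLOCKED" outcome is included only for completeness:

    def verdict(expected, actual, executable=True):
        # Map one execution of a test case to a result string.
        if not executable:
            return "BLOCKED"   # the steps could not be run at all
        return "PASS" if actual == expected else "FAIL"

    print(verdict("a", "a"))                      # PASS
    print(verdict("a", "b"))                      # FAIL
    print(verdict("a", None, executable=False))   # BLOCKED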


Let's take the hypothesis that the result of a test case after its execution is marked as "PASS".


This means that the structured steps defined in the test case were followed and resulted in a perfect match between the expected behaviour/results stated and the actual ones observed.


Let's take the opposite hypothesis now: the result of a test case after its execution is marked as "FAIL".


This means that the structured steps defined in the test case were followed and resulted in a deviation between the expected behaviour/results stated and the actual ones.


The deviation might be either a wrong result (e.g. "b" was observed instead of "a"), or an unexpected issue/problem produced while following a step (e.g. a coding exception on the screen/page executing the test).


The two examples above, of a positive and a negative result from two different test case executions, do not by themselves prove effectiveness and efficiency.


A test case marked as passing does not necessarily mean that the requirement definition was interpreted correctly.


The test case might not have been enriched as it should have been, covering all aspects and the other logical combinations of the paths and flows of the functionality.


So a result marked as "pass" is not always bulletproof and might be misleading; if such a situation can be demonstrated, the test case is definitely not effective.


However, proving that a test case is not effective, i.e. discovering that it was not properly designed, tends to happen only in the latest stages of the software life cycle.


Such stages are when testing is done at the client's premises (high visibility) and, worst of all, during the production phase, when the system/application is live (highest visibility) and used by end users in real business operations: for entertainment, transportation or health purposes, among other real-life examples!


A measure to avoid nasty situations (discovering problems and incorrect system behaviours in the latest phases of the software life cycle) is to invest proper time in the preparation phase, and especially in review cycles.


When a test case is under review, the defined steps and the stated expected results are cross-checked against the requirement definition that the test case declares it covers. Spotting a wrong interpretation early in the design of a test case is vital, and costs less!

The effectiveness of a test case can be evaluated by a very important factor in the software testing process: the number of defects/issues/problems/incidents it can discover, or, to use a better-known word, the number of "bugs" it can discover!

So if a test case manages to discover "bugs", it is considered effective. The ultimate goal of software testing in any test campaign is to discover as many problems as possible in the early stages of the software life cycle, to avoid the high cost of fixing them in late stages, e.g. the production phase.

The efficiency of a test case can be determined by measuring the time required to design it, the time required to execute it, and the actual outcome it produces, i.e. the problems found. This requires very good time tracking and a very organised framework within which a team or an individual software testing professional handles testing activities.
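As a toy illustration of such a measurement (the figures and the "bugs per hour invested" ratio are assumptions for the example; a real team would calibrate its own metric):

    # Hypothetical bookkeeping for one test case: hours invested vs. bugs found.
    design_hours = 1.5
    execution_hours = 0.5   # summed over all executions of this test case
    bugs_found = 3

    # One simple candidate metric: defects discovered per hour invested.
    efficiency = bugs_found / (design_hours + execution_hours)
    print(f"Efficiency: {efficiency:.2f} bugs per hour invested")   # 1.50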

It takes talent, experience and the necessary technique to "morph" a test case out of a requirement definition effectively and efficiently, in order to verify that the implementation is correct as expected and, at the same time, to increase the probability of discovering potential threats due to bad coding or side effects from other system or component integrations.

Overall, the design of a test case should aim to include techniques that discover problems/bugs by following non-conventional paths.

I hope this post is useful; let me know your opinion.
