Thursday, 20 November 2014

Fundamentals

Defect Tracking

To track defects, a defect workflow process has been implemented. Defect workflow training will be conducted for all test engineers. The steps in the defect workflow process are as follows:
a) When a defect is generated initially, the status is set to "New". (Note: How to document the defect, what fields need to be filled in and so on, also need to be specified.)
b) The Tester selects the type of defects:
  • Bug
  • Cosmetic
  • Enhancement
  • Omission
c) The tester then selects the priority of the defect:
  • Critical - fatal error
  • High - requires immediate attention
  • Medium - needs to be resolved as soon as possible but not a showstopper
  • Low - cosmetic error
d) A designated person (in some companies, the software manager; in other companies, a special review board) evaluates the defect, assigns a status, and modifies the type of defect and/or priority if applicable.
  • The status "Open" is assigned if it is a valid defect.
  • The status "Close" is assigned if it is a duplicate defect or user error. The reason for "closing" the defect needs to be documented.
  • The status "Deferred" is assigned if the defect will be addressed in a later release.
  • The status "Enhancement" is assigned if the defect is an enhancement requirement.
e) If the status is determined to be "Open", the software manager (or other designated person) assigns the defect to the responsible person (developer) and sets the status to "Assigned".
f) Once the developer is working on the defect, the status can be set to "Work in Progress".
g) After the defect has been fixed, the developer documents the fix in the defect tracking tool and sets the status to "Fixed", if it was fixed, or "Duplicate", if the defect is a duplicate (specifying the duplicated defect). The status can also be set to "As Designed", if the function executes correctly. At the same time, the developer reassigns the defect to the originator.
h) Once a new build is received with the implemented fix, the test engineer retests the fix and any other possibly affected code. If the defect has been corrected by the fix, the test engineer sets the status to "Close". If the defect has not been corrected by the fix, the test engineer sets the status to "Reopen".

Defect correction is the responsibility of system developers; defect detection is the responsibility of the AMSI test team. The test leads will manage the testing process, but the defects will fall under the purview of the configuration management group. When a software defect is identified during testing of the application, the tester will notify system developers by entering the defect into the PVCS Tracker tool and filling out the applicable information.
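The steps above amount to a small state machine over defect statuses. The sketch below models those statuses and the transitions the workflow allows; the class, enum, and method names are illustrative only and are not part of PVCS Tracker or any other tool.

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative model of the defect statuses and transitions described
// in steps a) through h); not tied to any real defect tracking tool.
public class DefectWorkflow {
    enum Status {
        NEW, OPEN, CLOSED, DEFERRED, ENHANCEMENT,
        ASSIGNED, WORK_IN_PROGRESS, FIXED, DUPLICATE,
        AS_DESIGNED, REOPENED
    }

    // returns the statuses a defect may legally move to next
    static Set<Status> nextStatuses(Status current) {
        switch (current) {
            case NEW:              // step d): evaluation of a new defect
                return EnumSet.of(Status.OPEN, Status.CLOSED,
                                  Status.DEFERRED, Status.ENHANCEMENT);
            case OPEN:             // step e): assignment to a developer
                return EnumSet.of(Status.ASSIGNED);
            case ASSIGNED:         // step f): developer starts working
                return EnumSet.of(Status.WORK_IN_PROGRESS);
            case WORK_IN_PROGRESS: // step g): developer resolves the defect
                return EnumSet.of(Status.FIXED, Status.DUPLICATE,
                                  Status.AS_DESIGNED);
            case FIXED:            // step h): retest closes or reopens it
                return EnumSet.of(Status.CLOSED, Status.REOPENED);
            default:               // terminal statuses have no transitions
                return EnumSet.noneOf(Status.class);
        }
    }

    public static void main(String[] args) {
        // a typical path: New -> Open -> Assigned -> Work in Progress
        //                 -> Fixed -> Close
        System.out.println(nextStatuses(Status.NEW));
        System.out.println(nextStatuses(Status.FIXED));
    }
}
```

Encoding the transitions this way makes it easy for a tracking tool to reject an illegal status change, such as moving a defect straight from "New" to "Fixed".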

Test Case Specifications

The test case specifications should be developed from the test plan and are the second phase of the test development life cycle. The test specification should explain "how" to implement the test cases described in the test plan.
Test Specification Items
Each test specification should contain the following items:

Case No.: The test case number should be a three-part identifier of the form c.s.t, where c is the chapter number, s is the section number, and t is the test case number.

Title: is the title of the test.

ProgName: is the program name containing the test.

Author: is the person who wrote the test specification.

Date: is the date of the last revision to the test case.
Background: (Objectives, Assumptions, References, Success Criteria): Describes in words how to conduct the test.

Expected Error(s): Describes any errors expected.

Reference(s): Lists the reference documentation used to design the specification.

Data: (Tx Data, Predicted Rx Data): Describes the data flows between the Implementation Under Test (IUT) and the test engine.

Script: (Pseudo Code for Coding Tests): Pseudo code (or real code) used to conduct the test.

Example Test Specification

Case No. 7.6.3 Title: Invalid Sequence Number (TC)
ProgName: UTEP221 Author: B.C.G. Date: 07/06/2000
Background: (Objectives, Assumptions, References, Success Criteria)
Validate that the IUT will reject a normal flow PIU with a transmission header that has an invalid sequence number.
Expected Sense Code: $2001, Sequence Number Error
Reference - SNA Format and Protocols Appendix G/p. 380
Data: (Tx Data, Predicted Rx Data)
IUT
<-------- DATA FIS, OIC, DR1 SNF=20
<-------- DATA LIS, SNF=20
--------> -RSP $2001
Script: (Pseudo Code for Coding Tests)
SEND_PIU FIS, OIC, DR1 SNF=20
SEND_PIU LIS, SNF=20
R_RSP $2001

Test Case Template

What do test case templates look like?



Software test case templates are blank documents that describe inputs, actions, or events, and their expected results, in order to determine if a feature of an application is working correctly. Test case templates contain all particulars of test cases. For example, one test case template is in the form of a 6-column table, where column 1 is the "test case ID number", column 2 is the "test case name", column 3 is the "test objective", column 4 is the "test conditions/setup", column 5 is the "input data requirements/steps", and column 6 is the "expected results".
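If the 6-column template described above were captured in code, it could be a plain data holder like the sketch below; the class and field names simply mirror the column headings and are not taken from any particular tool.

```java
// Illustrative data holder mirroring the 6-column test case template
// described above; field names are taken from the column headings.
public class TestCaseRecord {
    final String id;              // column 1: test case ID number
    final String name;            // column 2: test case name
    final String objective;       // column 3: test objective
    final String conditionsSetup; // column 4: test conditions/setup
    final String inputSteps;      // column 5: input data requirements/steps
    final String expectedResults; // column 6: expected results

    TestCaseRecord(String id, String name, String objective,
                   String conditionsSetup, String inputSteps,
                   String expectedResults) {
        this.id = id;
        this.name = name;
        this.objective = objective;
        this.conditionsSetup = conditionsSetup;
        this.inputSteps = inputSteps;
        this.expectedResults = expectedResults;
    }

    public static void main(String[] args) {
        // a filled-in row of the template (values are invented)
        TestCaseRecord tc = new TestCaseRecord(
            "TC-001", "Login with valid credentials",
            "Verify a registered user can log in",
            "Application deployed; test user exists",
            "1. Open login page 2. Enter credentials 3. Submit",
            "User is redirected to the home page");
        System.out.println(tc.id + ": " + tc.name);
    }
}
```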
All documents should be written to a certain standard and template. Why? Because standards and templates help maintain document uniformity, help readers learn where information is located so it is easier to find what they want, and ensure that information is not accidentally omitted from documents.

The purpose of a good test case?

There are several objectives of any effective test, whether manually executed or automated. The rationale for a test case usually falls into one of seven categories. Each category provides value to the organization in a different way, but they all essentially function to reduce risk and qualify the testing effort. So, a good test provides some measurable value to the organization with the explicit purpose of:
  • Verifying conformance to applicable standards and guidelines and customer requirements

  • Validating expectations and customer needs

  • Increasing control flow coverage

  • Increasing logic flow coverage

  • Increasing data flow coverage

  • Simulating 'real' end user scenarios

  • Exposing errors or defects

Writing Test Cases



What is a test case?

There are many ways to write test cases. Overall, we can say a test case is a code fragment that programmatically checks that another code unit - a method - functions as expected. To make the testing process an efficient one, it is recommended to use a testing framework. There are two major Java testing frameworks.
  1. JUnit, written by Erich Gamma (of GoF fame) and Kent Beck. JUnit has long been unchallenged in the testing framework arena.

  2. TestNG, written by Cedric Beust, is a newer framework that uses innovative concepts, many of which have been picked up by JUnit in the 4.x versions.

Using a framework to develop your test cases has a number of advantages, the most important being that others will be able to understand your test cases and easily write new ones, and that most development tools support automated and/or one-click test case execution. Which framework you choose is a matter of personal preference. Both enable you to properly test your code, both are equally well supported by the Java development tools that matter, and both are easy to use. In this article we'll address only JUnit test case development issues.

How to Write Test Cases with JUnit

If you're using a version earlier than JUnit 4, your test case class must inherit from the TestCase class, as in the following code example:
public class ExampleTestCase extends TestCase {
    ...
}

JUnit Coding Conventions

A JUnit test case can contain multiple tests. Each test is implemented by a method. The declaration of a test method must comply with a set of conventions that help JUnit and associated tools automate the discovery and execution of tests. These conventions are:
  1. the name of the method must begin with "test", like in "testCreateUser",

  2. the return type of a test method must be void,

  3. a test method must be public,

  4. a test method must not have any parameter.
Suppose you need to test the following method: public User createUser(String userName, String password1, String password2) throws PasswordDoesntMatchException { ... }. A typical test case for this method could look like this:

public class CreateUserTestCase extends TestCase {
    public void testCreateUser() {
        ...
    }
    public void testCreateUserFailPasswordDoesntMatch() {
        ...
    }
}

How to Write a Success Test With JUnit

Why write more than one test for a single method, you may ask? Well, this will be better explained in the article that explains how to develop test cases. For now, let's just say that you must not only test that your method works as expected under normal conditions, but also that it fails as expected when conditions require it.

The first test will simply call the method under test and then compare the returned object with the provided parameters to check that the user was properly created. To compare the result with reference values, you should use the assert family of methods. Also, we must not forget to force the test to fail if the exception is thrown when it is not supposed to be. Here is how you could implement "testCreateUser()":

public void testCreateUser() {
    try {
        User ret = instance.createUser("john", "pwd", "pwd");
        // use assertEquals to compare the name
        // of the returned User instance with the reference value
        assertEquals("john", ret.getUserName());
        // use assertEquals to compare the password
        // of the returned User instance with the reference value
        assertEquals("pwd", ret.getPassword());
    } catch (PasswordDoesntMatchException e) {
        // force the test to fail, because the exception was thrown
        // and it shouldn't have been
        fail();
    }
}

The assertEquals method compares the two provided values and, if they're not equal, generates an internal exception that informs JUnit of the failure. Also, if PasswordDoesntMatchException is thrown - knowing that this should not happen with the set of parameters we have provided - the fail() method will have the same effect. In either case, JUnit will mark the test as failed and report it as such. You can pass a message to the assertEquals and fail methods. This message will be shown in the report when that particular assertion fails, helping developers to more easily identify errors. This feature can be particularly helpful if you maintain a large test base in which each test has many assertion points.
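Conceptually, a message-carrying assertEquals behaves like the simplified sketch below. This is a re-implementation for illustration only, not JUnit's actual source; the AssertionFailed class and the messages are invented for the example.

```java
// Simplified sketch of how a message-carrying assertEquals behaves;
// JUnit's real implementation differs, this only illustrates the idea.
public class AssertSketch {
    static class AssertionFailed extends RuntimeException {
        AssertionFailed(String msg) { super(msg); }
    }

    static void assertEquals(String message, Object expected, Object actual) {
        boolean equal = (expected == null)
                ? actual == null
                : expected.equals(actual);
        if (!equal) {
            // the caller's message is attached to the failure report,
            // making the failing assertion point easy to locate
            throw new AssertionFailed(message + ": expected <" + expected
                                      + "> but was <" + actual + ">");
        }
    }

    public static void main(String[] args) {
        assertEquals("user name should match", "john", "john"); // passes
        try {
            assertEquals("password should match", "pwd", "wrong");
        } catch (AssertionFailed e) {
            System.out.println(e.getMessage());
        }
    }
}
```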

What Is a Good Test Case?

Designing good test cases is a complex art. The complexity comes from three sources:
  • Test cases help us discover information, and different types of tests are more effective for different classes of information.
  • Test cases can be "good" in a variety of ways, and no test case will be good in all of them.
  • People tend to create test cases according to certain testing styles, such as domain testing or risk-based testing, and good domain tests are different from good risk-based tests.

Developing Test cases

Testing Simple Algorithms

Choosing a testing framework is just the first part of the story. The second and most important one is to actually fully and properly test your code. The typical code unit that you'll test is a method. The first thing that you'll have to test is that your method works properly. For example, let's consider the following method: double divide(double number, double divider) throws DivisionByZeroException. Your test would look like this:


public void testMethod() {
    double precision = .001;
    try {
        double result = divide(6, 3);
        // assertEquals with a delta argument checks that the result
        // is within the required precision of the expected value
        assertEquals(2.0, result, precision);
    } catch (DivisionByZeroException e) {
        // no exception should be thrown for a valid set of parameters
        fail();
    }
}

This will tell you if your method works properly or not: first, you make sure that no exception is thrown when your method is given a valid set of parameters. You also make sure that the result returned by the method is within specified precision requirements. This is usually called "testing for success".
But what about that exception that is being thrown? It's there to enable programmers to gracefully recover from abnormal execution conditions, such as when the divider is zero. This is a very important aspect, because software may cause serious damage if abnormal execution conditions are not properly reported and handled. So just testing that your method works properly is not enough. You should also make sure that DivisionByZeroException is thrown when the method is invoked with a divider parameter equal to zero. This is usually called a "failure test", and a typical failure test may look like this:


public void testMethodFailDivisionByZero() {
    try {
        divide(6, 0);
        // if execution reaches this line, the expected exception
        // was not thrown, so the test must fail
        fail();
    } catch (DivisionByZeroException e) {
        // expected: the abnormal condition (an invalid set of
        // parameters) has been properly detected and reported
    }
}


Testing More Complex Code

The previous example was a very simple one, because the code unit that was tested is very simple. If your algorithm is more complex - that is, if its execution graph is complex - then only one success test may not be enough. To make sure your code is properly tested, you can adopt the following strategy:
  1. identify all the if / else and switch / case constructs - this is called building your algorithm's decision graph. A path through this graph is triggered by a set of input parameters and / or a particular state of the execution environment.

  2. build a set of parameters and an execution environment state for each possible path in the decision graph

  3. run a success test in each of these
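As a concrete illustration of this strategy, the method below has one if / else construct, so its decision graph has two paths, each triggered by a different set of input parameters. The class name and threshold are invented for the example.

```java
// Illustrative method with a two-path decision graph; covering it
// requires one success test per path.
public class GradeClassifier {
    // returns "pass" for scores of 50 or more, "fail" otherwise
    static String classify(int score) {
        if (score >= 50) {
            return "pass";   // path 1: the if branch
        } else {
            return "fail";   // path 2: the else branch
        }
    }

    public static void main(String[] args) {
        // one test per path in the decision graph
        System.out.println(classify(75)); // exercises path 1
        System.out.println(classify(10)); // exercises path 2
    }
}
```

Note that the two input sets (75 and 10) were chosen precisely because each one triggers a different path; a single input could never cover both branches.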

Designing Test Cases

A test case is a detailed procedure that fully tests a feature or an aspect of a feature. Whereas the test plan describes what to test, a test case describes how to perform a particular test. You need to develop a test case for each test listed in the test plan. Figure 2.10 illustrates the point at which test case design occurs in the lab development and testing process.






A test case includes:
  • The purpose of the test.

  • Special hardware requirements, such as a modem.

  • Special software requirements, such as a tool.

  • Specific setup or configuration requirements.

  • A description of how to perform the test.

  • The expected results or success criteria for the test.

Test cases should be written by a team member who understands the function or technology being tested, and each test case should be submitted for peer review.

Organizations take a variety of approaches to documenting test cases; these range from developing detailed, recipe-like steps to writing general descriptions. In detailed test cases, the steps describe exactly how to perform the test. In descriptive test cases, the tester decides at the time of the test how to perform the test and what data to use.
Most organizations prefer detailed test cases because determining pass or fail criteria is usually easier with this type of case. In addition, detailed test cases are reproducible and are easier to automate than descriptive test cases. This is particularly important if you plan to compare the results of tests over time, such as when you are optimizing configurations. Detailed test cases are more time-consuming to develop and maintain. On the other hand, test cases that are open to interpretation are not repeatable and can require debugging, consuming time that would be better spent on testing.

Test Case Design

Test Case ID:
A unique number given to the test case so that it can be identified.

Test description:
A description of the test case you are going to test.

Revision history:
Each test case has to have its revision history in order to know when and by whom it was created or modified.

Function to be tested:
The name of the function to be tested.

Environment:
The environment in which you are testing.

Test Setup:
Anything you need to set up outside of your application, for example printers, networks, and so on.

Test Execution:
A detailed description of every step of the execution.

Expected Results:
A description of what you expect the function to do.

Actual Results:
Pass / Fail.

If passed - describe what actually happened when you ran the test.

If failed - describe what you observed.

QTP/UFT Tutorials © 2014. All Rights Reserved