
Integration

Integration concept

Deadlock

* The situation in which two communicating processes are each waiting for the other to perform an action
* Any of a number of situations where two or more processes cannot proceed because each is waiting for another to release some resource (a minimal Python sketch appears after this glossary)

Livelock

* A situation in which tasks are fighting for an exclusive resource but remain ready to run; for example, both tasks may be in a loop repeatedly checking for a resource's availability.

Heterogeneity

* A heterogeneous compound, mixture, or other such object is one that consists of many different items
* Consisting of elements that are not of the same kind or nature

Infrastructure

* Infrastructure includes all of the hardware, software, and services that allow an e-business to function.

Pattern

* A pattern is a form, template, or model (or, more abstractly, a set of
rules) which can be used to make or to generate things or parts of a thing,
especially if the things that are generated have enough in common for the
underlying pattern to be inferred or discerned, in which case the things are
said to exhibit the pattern.

Event

* Something that happens at a given place and time
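
To make the deadlock definition above concrete, here is a minimal Python sketch (not from the original text): two threads each hold one lock and wait for the other's lock, so neither can proceed. The thread and lock names are illustrative, and a timeout is used only so the demonstration terminates instead of hanging.

import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker(first, second, name):
    # Each worker holds one lock and then waits for the other one.
    with first:
        # A timeout lets the demo give up instead of hanging forever.
        if not second.acquire(timeout=1):
            print(name, "could not get the second lock: likely deadlock")
            return
        second.release()
        print(name, "finished")

# t1 takes lock_a then lock_b; t2 takes lock_b then lock_a.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()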


Integration Testing for Component-Based Software

Overview

When developing a component-based software system, we test each component
individually. Why should we worry about the assembly of these components?

The reason is that components may have been developed by different people, written in different programming languages, and executed on different operating platforms. Therefore, the interfacing among components needs to be tested [1]. Integration testing is necessary to ensure that the communication between components is correct. IEEE defines it as:

Testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them.

In this chapter, we explain what integration testing is and describe some
typical integration-testing methodologies. We then explore the problems that
may be encountered when applying test techniques to component-based
software. Finally, a UML-based integration technique is discussed.

Introduction

When we develop component-based software, we develop and test each component
separately. Why can problems be encountered when we integrate them? And how
do we perform integration testing to identify them?

The following fault model specifies the faults that are likely to be overlooked during unit-level testing [4]. Interaction-related faults can be classified into programming-related faults, called intercomponent faults, and nonprogramming-related faults, called interoperability faults. Faults that are isolated within a single component but overlooked during unit testing can still be identified during integration testing; these are classified as traditional faults.


9.1.1 Type I: Intercomponent faults

Even when the individual components have been evaluated separately, there
can still be faults in the interactions between them. Programming-related
faults that are associated with more than one component are considered to be
intercomponent faults.

An example is given in the following code. After the statement i = 0 is added to interface I1 of component C1, testing I1 and I2 individually does not reveal any problem. But when component C1 is deployed together with C2 and C3, and I1 is invoked followed by I2, a failure occurs in C3.
C1: I1 -> i = 0;
    I2 -> return i;
C2: I1 -> C1::I1();
C3: I4 -> j = 1 / C1::I2();
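
The fragment above is pseudocode; the following is a minimal Python reconstruction of the same scenario, assuming a shared variable i inside C1 and the interface names used above. Checks on the individual components reveal nothing, but the integrated invocation order I1-then-I2 makes C3 divide by zero.

class C1:
    i = 1                       # assumed initial value; I2 behaves well with it

    @staticmethod
    def I1():
        C1.i = 0                # the added statement i = 0

    @staticmethod
    def I2():
        return C1.i

class C2:
    @staticmethod
    def I1():
        C1.I1()                 # C2 simply forwards to C1::I1

class C3:
    @staticmethod
    def I4():
        return 1 / C1.I2()      # fails whenever C1.i == 0

# Unit-level checks on the fresh components reveal no problem:
assert C1.I2() == 1
assert C3.I4() == 1.0

# Integrated scenario: I1 is invoked (through C2) before C3 uses I2.
C2.I1()
try:
    C3.I4()
except ZeroDivisionError as exc:
    print("intercomponent failure in C3:", exc)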


Many other possible faults can be classified as intercomponent faults, for example, deadlock and livelock. Most of these faults cannot be identified
during unit-level testing. Therefore, they are the major targets of
integration testing.

9.1.2 Type II: Interoperability faults


Many characteristics of component-based systems, such as heterogeneity,
source code unavailability, and reusability, will lead to different types of
interoperability problems. These interoperability problems can be classified
into system-level, programming-language level, and specification-level
interoperability faults.

* System-level interoperability faults: In a component-based system,
different components can have been built under different infrastructures,
and the infrastructures may not be 100% compatible. For example,
incompatibility between different CORBA products can affect the interaction
between CORBA components.
* Programming-language-level interoperability faults: When components are written in different programming languages, incompatibility between the languages may cause problems. For instance, Microsoft supports the integration of VC and VB components, but floating-point incompatibilities between the two languages may cause interoperability faults.
* Specification-level interoperability faults: Specifications may be
misinterpreted by developers, and there are many different ways that
specifications may be misunderstood. This type of fault can be caused by the
data that pass through the interfaces or the patterns of the component
interactions.
Some detailed discussion and examples can be found in Section 3.2.1.

9.1.3 Type III: Traditional faults and other faults

Traditional testing and maintenance techniques can be adopted for those
faults that can be isolated within one component. These faults will be
identified as traditional faults. Other faults, such as faults related to special inputs or special execution environments, also fall into this category.

9.2 Traditional integration-testing methodologies

Integration testing puts a group of components together. To carry out that process adequately, we need to answer two questions: in what order do we incrementally integrate the components, and how do we test the newly integrated components? Some typical approaches are described in the next two sections.

9.2.1 Function decomposition based integration testing

This approach is based on functional decomposition, which is often expressed in a tree structure; Figure 9.1, for instance, shows such a tree whose nodes are Main, Authentication, Transaction management, Get PIN, Validate PIN, Deposit, Withdrawal, and Transfer.

Given the functional-decomposition tree, four different approaches can be used to pursue integration testing.


1. Big-bang approach: This approach will integrate all the components at
once. The big-bang approach seems simple. Its major drawback is in fault
identification. If failures are encountered, all components are subject to
the possibility of hosting faults. Therefore, fault identification can be
very costly. To avoid the difficulty of identifying faults during
integration, the following three incremental approaches can be used. When
new components are added and failures are encountered, the faults are most
likely to be in the newly added components. Therefore, the faults are often
much easier to identify than with the big-bang approach.


2. Top-down approach: This approach moves from the top level of the functional-decomposition tree to the leaves of the tree. A breadth-first order is often used to determine the order of integration. For example, in Figure 9.1 we can follow the order: Main, Authentication, Transaction management, Get PIN, Validate PIN, Deposit, Withdrawal, and Transfer. The difficulty of the top-down approach is that when testing a partial group of components, stubs have to be developed separately for the components that are not yet included (a small stub-and-driver sketch in Python follows this list).


3. Bottom-up approach: In contrast to the top-down approach, the bottom-up approach starts from the leaves of the functional-decomposition tree and works up towards the top level. Bottom-up approaches do not require stubs but require drivers instead.


4. Sandwich approach: The top-down approach can identify high-level or
architectural defects in the systems in the early stage of integration
testing, but the development of stubs can be expensive. On the other hand,
the bottom-up approach will not be able to identify high-level faults early,
but the development of drivers is relatively inexpensive. A combination of
the two approaches, called the sandwich approach, tries to use the
advantages of the two approaches, while overcoming their disadvantages.

Functional-decomposition approaches do not consider the internal structure of individual components. Therefore, testing is based solely on the specification of the system.
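
The sketch below illustrates the stub and driver roles mentioned in the top-down and bottom-up items above, using hypothetical function names loosely based on the decomposition of Figure 9.1; it is an illustration, not code from the original text.

# Low-level component (a leaf of the decomposition tree).
def validate_pin(pin):
    return pin == "1234"

# Top-down: a stub stands in for a component that is not yet integrated.
def deposit_stub(account, amount):
    return True                       # canned answer, no real logic

# Higher-level component that calls the real validate_pin.
def authentication(pin):
    return validate_pin(pin)

# Bottom-up: a driver exercises a low-level component before its callers exist.
def validate_pin_driver():
    assert validate_pin("1234") is True
    assert validate_pin("0000") is False

validate_pin_driver()                                            # bottom-up, via a driver
assert authentication("1234") and deposit_stub("acct-1", 50)     # top-down, via a stub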


9.2.2 Call graph based integration testing

The approaches in Section 9.2.1 are based on functional decomposition trees,
not the structure of the program. In addition, stubs or drivers need to be
developed to fulfill the task. To solve these two difficulties, the
call-graph approach has been developed. As the name of the approach implies,
it is based on the call graph of a program.


Based on the call graph, the internal structure of the program is clearly
represented. By following the structure of the call graph, similar
incremental approaches such as top-down or bottom-up approaches can be used.
But no additional drivers or stubs are required; the original program can be
used instead.

Besides the incremental approaches that are used in the functional-decomposition-based methods, other incremental approaches can be incorporated that are based on the characteristics of call graphs, for instance, neighborhood integration and path-based integration. To speed up the incremental process, neighborhood approaches, instead of incorporating single components, add all adjacent components (callers and callees) into each integration-testing phase. To take the overall behavior of the system into consideration, paths in the call graph, each of which corresponds to a single thread of behavior in the system, can serve as the basis for path-based integration testing.
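
As a rough illustration of neighborhood integration, the neighborhood of a node is the node itself plus all of its callers and callees; each integration-testing phase would then add a whole neighborhood at once. The call graph below is an assumed toy example, not taken from the text.

# Toy call graph: node -> set of callees.
call_graph = {
    "main": {"auth", "txn"},
    "auth": {"get_pin", "validate_pin"},
    "txn": {"deposit", "withdraw"},
    "get_pin": set(), "validate_pin": set(),
    "deposit": set(), "withdraw": set(),
}

def neighborhood(node):
    # All callers and callees of `node`, plus the node itself.
    callers = {n for n, callees in call_graph.items() if node in callees}
    return {node} | callers | call_graph[node]

# Neighborhood integration would test "auth" together with its neighbors:
print(neighborhood("auth"))   # {'auth', 'main', 'get_pin', 'validate_pin'}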

Based on call graphs, to test newly integrated components, data flow based
[5, 6] and coupling-based [7] approaches can be adopted. Coupling-based approaches measure the dependence relationships [7] and, through them, the interactions between components. Couplings can be classified into call
coupling, parameter coupling, shared-data coupling, and external-device
coupling; consequently, coupling def, coupling use, and coupling path are
defined and used as the criteria for integration testing.

9.3 A test model for integration testing of component-based software

When performing integration testing for component-based software, some of
the approaches mentioned earlier can be used, for instance,
functional-decomposition approaches. But the more effective call-graph-based approaches cannot be used, because of the implementation-transparent nature of component-based software (the source code of the components is generally unavailable). To overcome this
difficulty, M. J. Harrold et al. [8] proposed a testing technique based on
analysis of component-based systems from component-provider and
component-user perspectives. The technique was adapted from an existing
technique [5, 6], which makes use of complete information from components
for which source code is available and partial information from those for
which source code is not available. The authors of [9, 10] extended their
work by proposing a framework that lets component providers prepare various
types of metadata, such as program slicing. The metadata were then used to
help test and maintain component-based systems. S. Ghosh and A. P. Mathur
[11] discussed issues that arise when testing distributed component-based
systems and suggested an interface and exception coverage-based testing
strategy.

9.3.1 Test elements

During integration testing, our test model emphasizes interactions between
components in component-based software systems and tries to reveal Type I
and Type II faults. A component may interact with other components directly
through an invocation of the interfaces exposed by those components, an
exception, or a user action that triggers an event. A component may also
interact with other components indirectly through a sequence of events. The
elements that need to be taken into account in component-based testing are:
* Interfaces: Interfaces are the most common way to activate components.
Therefore, it is necessary to test each interface in the integrated
environment at least once.
* Events: Every interface that can be invoked after deployment must be
tested at least once. This goal is similar to the traditional test criterion
that requires every function/procedure to be tested at least once. However,
invoking the same interface by different components within different
contexts may have different outcomes. Thus, to observe possible behaviors of
each interface during runtime, every unique invocation of the interfaces
needs to be tested at least once. Moreover, some events that are not
triggered by interfaces may have an impact on the components, which need to
be tested as well. Therefore, every event in the system, regardless of its
type, needs to be covered by some test.


9.3.1.1 Context-dependence relationships

Interface and event testing ensure that every interaction between two components (the client and the server) is exercised. However, when the
execution of a component-based software system involves interactions among a
group of components, the sequence of event triggering may produce unexpected
outcomes. To capture the inter-relationships among events, we define a
context-dependence relationship that is similar to the
control-flow-dependence relationship in traditional software. An event e2
has a context-sensitive dependence relationship with event e1 if there
exists an execution path where the triggering of e1 will directly or
indirectly trigger e2. For a given event, e, it is necessary to test e with
every event that has a context-sensitive dependence relationship with e, to
observe the possible impact of execution history on the outcome of the
execution of e.

Context-sensitive dependence relationships also include indirect
collaboration relationships between interfaces and events that occur through
other interfaces and events. Therefore, testing context-sensitive dependence
relationships may serve to identify interoperability faults that are caused
by improper interactions between different components.
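
Since context dependence is defined through direct or indirect triggering, it can be computed as reachability over an event-triggering graph. The sketch below assumes a hypothetical three-event graph; the event names are made up for illustration.

# Hypothetical event graph: an edge e1 -> e2 means triggering e1 directly triggers e2.
events = {
    "login_clicked": ["credentials_sent"],
    "credentials_sent": ["session_opened"],
    "session_opened": [],
}

def context_dependents(e, graph):
    # Events that directly or indirectly depend on e (graph reachability).
    seen, stack = set(), list(graph.get(e, []))
    while stack:
        nxt = stack.pop()
        if nxt not in seen:
            seen.add(nxt)
            stack.extend(graph.get(nxt, []))
    return seen

# "session_opened" has a context-sensitive dependence on "login_clicked":
print(context_dependents("login_clicked", events))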

9.3.1.2 Content-dependence relationships

The invocation of an interface of a component causes a function inside the
component to be invoked. Therefore, when a function declared in an interface
v1 has a data dependence relationship with another function declared in
another interface v2, the order of invocation of v1 and v2 could affect the
results.

Our concept of a content-dependence relationship between the two interfaces
v1 and v2 assumes that the two interfaces have a data-dependence
relationship. An interface encapsulates one or more signatures, where each
signature is a declaration of a function. When an interface is invoked, one
or more functions will be executed to perform the requested service. Thus,
the interface-dependence relationship can be derived from the
function-dependence relationship, which was shown to be useful information
in object-oriented class testing [12] and regression testing [13]. More
precisely, a function f2 depends on a function f1 if and only if the value
of a variable that is defined in f1 is used in f2. Therefore, a
content-dependence relationship is defined as follows: an interface v2 has a
content-dependence relationship with interface v1 if and only if v1 contains
the signature of f1, v2 contains the signature of f2, and f2 depends on f1.

Both the direct interaction among interfaces and events and the context-dependence relationships should be included in the interactions of a component-based system from the control-flow perspective. Content-sensitive dependence, on the other hand, can provide valuable additional information in generating test cases and detecting faults.
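
Under the definition above, content dependence between interfaces can be derived mechanically from a function-level data-dependence relation. The sketch below assumes a tiny, made-up mapping of interfaces to the functions they declare.

# Hypothetical mapping from interfaces to the functions they declare,
# plus a function-level data-dependence relation (f2 depends on f1).
interface_functions = {"v1": {"f1"}, "v2": {"f2"}, "v3": {"f3"}}
function_depends_on = {("f2", "f1")}   # f2 uses a variable defined in f1

def content_depends(v2, v1):
    # v2 content-depends on v1 if some f2 in v2 depends on some f1 in v1.
    return any((f2, f1) in function_depends_on
               for f2 in interface_functions[v2]
               for f1 in interface_functions[v1])

print(content_depends("v2", "v1"))   # True
print(content_depends("v3", "v1"))   # False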

Component interaction graph

We define the component interaction graph (CIG) to model interactions
between components by depicting interaction scenarios. The interactions can
take place directly or indirectly. Direct interactions are made through a
single event, while indirect interactions are made through multiple events
in which execution order and data-dependence relationships may result in
different outcomes. A CIG is a directed graph CIG = (V, E), where V = VI ∪ VE is the set of nodes, VI is the set of interface nodes, VE is the set of event nodes, and E is the set of directed edges. Given the CIG, call graph based approaches can be directly applied.

Moreover, the following two types of entities may require some additional
treatment: context-dependence relationships and content-dependence
relationships. To identify context-dependence relationships in the CIG, a
path-based approach can be used, while to identify content-dependence
relationships, data-dependence relationships need to be taken into
consideration.
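
A CIG can be represented directly as two node sets and an edge set. The sketch below is one possible in-memory representation, with node names borrowed from the earlier C1/C3 example purely for illustration.

from dataclasses import dataclass, field

@dataclass
class CIG:
    interface_nodes: set = field(default_factory=set)    # VI
    event_nodes: set = field(default_factory=set)         # VE
    edges: set = field(default_factory=set)               # E: directed (from, to) pairs

cig = CIG(
    interface_nodes={"C1.I1", "C1.I2", "C3.I4"},
    event_nodes={"e_reset", "e_read"},
    edges={("e_reset", "C1.I1"), ("e_read", "C1.I2"), ("C1.I2", "C3.I4")},
)
print(len(cig.interface_nodes | cig.event_nodes), "nodes,", len(cig.edges), "edges")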

Test-adequacy criteria

Software development costs and schedules are constraints on quality in most
software-development processes, often requiring difficult trade-offs [14,
15]. Deciding between budget, schedule, and quality is a practical and
difficult software-management task. Testing is a time-consuming process, and
is often performed at different levels of detail to satisfy different
quality thresholds. For example, white-box approaches provide a variety of
test-adequacy criteria, including statement, branch and def-use pairs
coverage. Stronger fault-detection criteria often require more complex
analysis and test cases.
In our model, the CIG can be used to develop a family of test-adequacy
criteria, as depicted in Figure 9.3. The basic levels are all-interfaces and
all-events coverage, which enforce the coverage of all-interface or
all-event nodes. Once all interfaces and events have been tested, the direct
interactions among components are tested.


To test indirect interactions, all-context-dependence coverage will test whether the interfaces are invoked in different contexts. To do this, the interface-invocation sequences need to be identified. These sequences can be defined as context-sensitive paths: paths between two context-dependent events ei and ej in the CIG. Formally, these are all paths p = (ei, ei+1, ei+2, ..., ej) where each (et, et+1) is an edge of the CIG. A triggering of ei is likely to trigger ej; therefore, all such paths are considered viable and need to be tested.

All-context-dependence coverage will test all the control scenarios in a component-based system. To further test indirect interactions between components, those caused by data interaction must be examined. To do so, all-content-dependence coverage needs to be satisfied, which requires covering all content-dependence relationships in the component-based system. Content-sensitive paths can be used to depict these dependence relationships. Content-sensitive paths are defined as pairs of context-sensitive paths (pa, pb) such that interfaces I1 ∈ pa and I2 ∈ pb have a content-dependence relationship. Every content-sensitive path needs to be exercised, and it is necessary to make sure that the execution of I1 comes before that of I2, to observe whether a fault could occur due to the content-dependence relationship. All-content-dependence coverage needs to cover all the context-sensitive-path pairs that are involved in a content-dependence relationship; therefore, it has significantly greater complexity than all-context-dependence coverage. Thus, another coverage criterion, all-context/some-content-dependence coverage, which requires covering one context-sensitive path for each content-dependence relationship, will adequately test the content dependence and reduce the complexity dramatically.
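
To close, here is a toy check of the two basic criteria, all-interfaces and all-events coverage, over an assumed CIG node set and the nodes each test case happened to reach; the test and node names are illustrative only.

# Assumed CIG nodes and the nodes exercised by each test case.
interface_nodes = {"C1.I1", "C1.I2", "C3.I4"}
event_nodes = {"e_reset", "e_read"}

tests = {
    "t1": {"e_reset", "C1.I1"},
    "t2": {"e_read", "C1.I2", "C3.I4"},
}

covered = set().union(*tests.values())
print("all-interfaces coverage:", interface_nodes <= covered)   # True
print("all-events coverage:", event_nodes <= covered)           # True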