Testing Interview Questions And Answers
Q: What is verification?
A: Verification is the process of confirming that a product meets its identified specifications.
Verification means "Are we building the product right?" - The software should conform to its specification.
It typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.
The main verification techniques are thus reviews, walkthroughs and inspections.
Q: What is validation?
A: Validation is the process of confirming that a product meets the user's real requirements.
Verification means "Are we building the right product?" - The software should do what the user really requires.
The main validation technique is actual testing, which takes place after verification activities are completed.
Q: What is a walk-through?
A: A walk-through is an informal meeting for evaluation or informational purposes. It is a review of requirements, designs or code, led by the author of the material under review.
Little or no preparation is usually required.
Q: What is an inspection?
A: An inspection is a formal meeting, more formalized than a walk-through, and typically consists of 3-10 people including a moderator, a reader and a recorder (to take notes); the author of the material under review also attends.
The subject of the inspection is typically a document, such as a
requirements document or a test plan.
The purpose of an inspection is to find
problems and see what is missing, not to fix anything.
The result of the meeting
should be documented in a written report. Attendees should prepare for this type of meeting by reading through the document before the meeting starts; most problems are found during this preparation. Preparation for inspections is difficult,
but is one of the most cost-effective methods of ensuring quality, since bug
prevention is more cost effective than bug detection.
Q: What is quality?
A: Quality software is software that is reasonably bug-free,
delivered on time and within budget, meets requirements and expectations and is maintainable.
However, quality is a subjective term. Quality depends on who the
customer is and their overall influence in the scheme of things. Customers of a
software development project include end-users, customer acceptance test engineers, customer contract officers, customer management, the development organization's management, test engineers, testers, salespeople, software engineers, stockholders and accountants. Each type of customer will have
his or her own slant on quality. The accounting department might define quality
in terms of profits, while an end-user might define quality as user friendly and
bug free.
Q: What is good code?
A: Good code is code that works, is free of bugs and is
readable and maintainable. Organizations usually have coding standards all developers
should adhere to, but every programmer and software engineer has different
ideas about what is best and what are too many or too few rules. We need to
keep in mind that excessive use of rules can stifle both productivity and
creativity. Peer reviews and code analysis tools can be used to check for problems and
enforce standards.
Q: What is a good design?
A: Design could mean many things, but often refers to functional design or internal design. Good functional design is indicated by software whose functionality can be traced back to customer and end-user requirements. Good internal
design is indicated by software code whose overall structure is clear,
understandable, easily modifiable and maintainable; is robust with sufficient error
handling and status logging capability; and works correctly when implemented.
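As a small illustration of the error-handling and status-logging qualities described above, here is a minimal Python sketch; the function, logger name and file format are invented for this example, not taken from any particular project:

    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("order_loader")  # hypothetical module name

    def load_orders(path):
        """Read one order ID per line, logging status and handling errors robustly."""
        orders = []
        try:
            with open(path, encoding="utf-8") as f:
                for line_no, line in enumerate(f, start=1):
                    line = line.strip()
                    if not line:
                        continue  # blank lines are tolerated
                    if not line.isdigit():
                        log.warning("line %d: skipping malformed order id %r", line_no, line)
                        continue
                    orders.append(int(line))
        except FileNotFoundError:
            log.error("order file %s is missing; returning an empty list", path)
        log.info("loaded %d orders from %s", len(orders), path)
        return orders

The point is not the specific checks but that every failure path is handled and every outcome leaves a trace in the status log.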
Q: What is the software life cycle?
A: The software life cycle begins when a software product is first
conceived and ends when it is no longer in use. It includes phases like initial concept,
requirements analysis, functional design, internal design, documentation planning,
test planning, coding, document preparation, integration, testing,
maintenance, updates, re-testing and phase-out.
Q: Why are there so many software bugs?
A: Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code and/or bugs in tools used in software development.
There are unclear software requirements when there is miscommunication as to what the software should or shouldn't do.
Software complexity: all of the following contribute to the exponential growth in software and system complexity: Windows interfaces, client-server and distributed applications, data communications, enormous relational databases and the sheer size of applications.
Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.
As to changing requirements, in some fast-changing business environments, continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. The changes require redesign of the software and rescheduling of resources, some of the work already completed has to be redone or discarded, and hardware requirements can be affected, too.
Bug tracking can introduce errors because the complexity of keeping track of changes can itself lead to mistakes.
Time pressure causes problems because scheduling of software projects is not easy; it often requires a lot of guesswork, and when deadlines loom and the crunch comes, mistakes will be made.
Code documentation is tough to maintain, and it is also tough to modify code that is poorly documented. The result is bugs. Sometimes there is no incentive for programmers and software engineers to write clearly documented, understandable code. Sometimes developers get kudos for quickly turning out code, or programmers and software engineers feel they have job security if no one else can understand the code they write, or they believe that if the code was hard to write, it should be hard to read.
Software development tools, including visual tools, class libraries, compilers and scripting tools, can introduce their own bugs. Other times the tools are poorly documented, which can create additional bugs.
Q: How do you introduce a new software QA process?
A: It depends on the size of the organization and the risks involved. For large organizations with high-risk projects, serious management buy-in is required and a formalized QA process is necessary. For medium-sized organizations with lower-risk projects, management and organizational buy-in and a slower, step-by-step process are required. Generally speaking, QA processes should be balanced with productivity, in order to keep bureaucracy from getting out of hand. For smaller groups or projects, an ad hoc process is more appropriate. A lot depends on team leads and managers, and on feedback to developers; good communication is essential among customers, managers, developers, test engineers and testers.
Regardless of the size of the company, the greatest value for effort is in managing the requirements process, where the goal is requirements that are clear, complete and testable.
Q: Give me five common problems that occur during software development.
A: Poorly written requirements, unrealistic schedules, inadequate
testing, adding new features after development is underway and poor communication.
1. Requirements are poorly written when they are unclear, incomplete, too general, or not testable; such requirements cause problems.
2. The schedule is unrealistic if too much work is crammed into too little time.
3. Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.
4. It's extremely common that new features are added after development is underway.
5. Miscommunication either means the developers don't know what is needed, or customers have unrealistic expectations, and therefore problems are guaranteed.
Q: Give me five solutions to problems that occur during software development.
A: Solid requirements, realistic schedules, adequate testing, firm requirements and good communication.
1. Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable. All players should agree to the requirements. Use prototypes to help nail down requirements.
2. Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.
3. Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.
4. Avoid new features. Stick to initial requirements as much as possible. Be prepared to defend the design against changes and additions once development has begun, and be prepared to explain the consequences. If changes are necessary, ensure they're adequately reflected in related schedule changes. Use prototypes early on so customers' expectations are clarified and customers can see what to expect; this will minimize changes later on.
5. Communicate. Require walk-throughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools and change management tools. Ensure documentation is available and up-to-date. Use documentation that is electronic, not paper. Promote teamwork and cooperation.
Q: Do automated testing tools make testing easier?
A: Yes and no. For larger projects, or ongoing long-term
projects, they can be valuable. But for small projects, the time needed to learn and implement them is usually not worthwhile. A common type of automated tool is the
record/playback type. For example, a test engineer clicks through all combinations of
menu choices, dialog box choices, buttons, etc. in a GUI and has an automated
testing tool record and log the results. The recording is typically in the form
of text, based on a scripting language that the testing tool can interpret. If a
change is made (e.g. new buttons are added, or some underlying code in the application is changed), the application is then re-tested by simply playing back the recorded actions and comparing them to the logged results in order to check the effects of the change. One problem with such tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts. Another problem with such tools is the interpretation of the results (screens, data, logs, etc.), which can be a time-consuming task.
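Each commercial record/playback tool has its own scripting language, so the following is only a rough Python sketch of the record-then-compare idea; the FakeGui driver and its perform method are invented stand-ins, not any real tool's API:

    # A recorded session is a list of steps plus the results logged when it was
    # first captured. On playback we re-run the steps and diff the results.
    recorded_steps = [("click", "File>Open"), ("type", "report.txt"), ("click", "OK")]
    baseline_results = ["dialog:open", "field:report.txt", "status:loaded"]

    class FakeGui:
        """Stand-in for the application under test (invented for this sketch)."""
        def perform(self, action, target):
            if action == "click" and target == "OK":
                return "status:loaded"
            if action == "click":
                return "dialog:open"
            return f"field:{target}"

    def playback(gui, steps, baseline):
        """Replay recorded steps and report any result that drifted from baseline."""
        failures = []
        for (action, target), expected in zip(steps, baseline):
            actual = gui.perform(action, target)
            if actual != expected:
                failures.append((action, target, expected, actual))
        return failures

    diffs = playback(FakeGui(), recorded_steps, baseline_results)
    print("PASS" if not diffs else f"FAIL: {diffs}")

The maintenance problem described above shows up here directly: every change to the GUI invalidates some of the recorded steps or baseline results.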
Q: What makes a good test engineer?
A: Rob Davis is a good test engineer because he:
- Has a "test to break" attitude;
- Takes the point of view of the customer;
- Has a strong desire for quality;
- Has an attention to detail;
- Is tactful and diplomatic;
- Has good communication skills, both oral and written; and
- Has previous software development experience.
Good test engineers have a "test to break" attitude, take the point of view of the customer, and have a strong desire for quality and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, as is an ability to communicate with both technical and non-technical people. Previous
software development experience is also helpful as it provides a deeper
understanding of the software development process, gives the test engineer an appreciation
for the developers' point of view and reduces the learning curve in automated
test tool programming.
Q: What makes a good QA engineer?
A: The same qualities a good test engineer has are useful for a
QA engineer.
Additionally, Rob Davis understands the entire software development process and how it fits into the business approach and the goals of the organization. His communication skills and his ability to understand various sides of issues are also important; good QA engineers share these qualities.
Q: What makes a good resume?
A: On the subject of resumes, there seems to be an unending
discussion of whether you should or shouldn't have a one-page resume. The following are some of the comments I have personally heard: "Well, Joe Blow (car salesman) said I should have a one-page resume." "Well, I read a book and it said you should have a one-page resume." "I can't really go into what I really did because if I did, it'd take more than
one page on my resume." "Gosh, I wish I could put my job at IBM on my resume but if I
did it'd make my resume more than one page, and I was told to never make the resume more
than one page long." "I'm confused, should my resume be more than one page? I
feel like it should, but I don't want to break the rules." Or, here's another
comment, "People just don't read resumes that are longer than one page." I have heard some
more, but we can start with these. So what's the answer? There is no scientific answer
about whether a one-page resume is right or wrong. It all depends on who you are and how
much experience you have. The first thing to look at here is the purpose of a
resume. The purpose of a resume is to get you an interview. If the resume is getting
you interviews, then it is considered to be a good resume. If the resume isn't getting
you interviews, then you should change it. The biggest mistake you can make on your resume is
to make it hard to read. Why? Because, for one, scanners don't like odd resumes.
Small fonts can make your resume harder to read. Some candidates use a 7-point font so
they can get the resume onto one page. Big mistake. Two, resume readers do not like
eye strain either. If the resume is mechanically challenging, they just throw it
aside for one that is easier on the eyes. Three, there are lots of resumes out there these
days, and that is also part of the problem. Four, in light of the current scanning scenario,
more than one page is not a deterrent because many will scan your resume into their database.
Once the resume is in there and searchable, you have accomplished one of the
goals of resume distribution. Five, resume readers don't like to guess and most won't
call you to clarify what is on your resume. Generally speaking, your resume should tell your
story. If you're a college graduate looking for your first job, a one-page resume is just
fine. If you have a longer story, the resume needs to be longer. Please put your experience
on the resume so resume readers can tell when and for whom you did what. Short resumes
-- for people long on experience -- are not appropriate. The real audience for these
short resumes is people with short attention spans and low IQs. I assure you that when
your resume gets into the right hands, it will be read thoroughly.
Q: What makes a good QA/Test Manager?
A: QA/Test Managers are familiar with the software development process; able to maintain enthusiasm in their team and promote a positive atmosphere; able to promote teamwork to increase productivity; able to promote cooperation between software and test/QA engineers; have the people skills needed to promote improvements in QA processes; have the ability to withstand pressures and say *no* to other managers when quality is insufficient or QA processes are not being adhered to; are able to communicate with technical and non-technical people; and are able to run meetings and keep them focused.
Q: What is the role of documentation in QA?
A: Documentation plays a critical role in QA. QA practices should
be documented, so that they are repeatable. Specifications, designs,
business rules, inspection reports, configurations, code changes, test plans, test
cases, bug reports and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and for determining which document will have a particular piece of information. Use documentation change management, if possible.
Q: What about requirements?
A: Requirement specifications are important; one of the most reliable ways to create problems in a complex software project is to have poorly documented requirement specifications. Poorly documented requirements are one of the most common problems that occur during software development.
Requirements are the details
describing an application's externally perceived functionality and properties.
Requirements should be clear, complete, reasonably detailed, cohesive, attainable and
testable.
A non-testable requirement would be, for example,
"user-friendly", which is too subjective.
A testable requirement would be something such
as, "the product shall allow the user to enter their previously-assigned password
to access the application".
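That testable requirement maps directly onto an automated check. Here is a minimal sketch using Python's unittest; the authenticate function and its behavior are assumed purely for illustration:

    import unittest

    # Hypothetical stand-in for the application's login routine.
    def authenticate(user, password, directory):
        """Return True when the password matches the user's previously-assigned one."""
        return directory.get(user) == password

    class TestPasswordRequirement(unittest.TestCase):
        """'The product shall allow the user to enter their previously-assigned
        password to access the application' - expressed as pass/fail checks."""

        def setUp(self):
            self.directory = {"alice": "s3cret"}

        def test_assigned_password_grants_access(self):
            self.assertTrue(authenticate("alice", "s3cret", self.directory))

        def test_wrong_password_denies_access(self):
            self.assertFalse(authenticate("alice", "guess", self.directory))

    if __name__ == "__main__":
        unittest.main()

A "user-friendly" requirement offers no such pass/fail criterion, which is exactly what makes it non-testable.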
Care should be taken to involve all of a
project's significant customers in the requirements process. Customers could be
in-house or external and could include end-users, customer acceptance test
engineers, testers, customer contract officers, customer management, future software maintenance engineers, salespeople and anyone who could later derail the project if his or her expectations aren't met; whenever possible, they should all be included as customers. In some organizations, requirements may end up in
high-level project plans, functional specification documents, design documents, or other
documents at various levels of detail.
No matter what they are called,
some type of documentation with detailed requirements will be needed by test
engineers in order to properly plan and execute tests. Without such documentation
there will be no clear-cut way to determine if a software application is performing
correctly.
Q: What is a test plan?
A: A software project test plan is a document that describes the
objectives, scope, approach and focus of a software testing effort. The process of
preparing a test plan is a useful way to think through the efforts needed to
validate the acceptability of a software product. The completed document will help
people outside the test group understand the why and how of product validation.
It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.
Q: What is a test case?
A: A test case is a document that describes an input, action, or
event and its expected result, in order to determine if a feature of an application is
working correctly. A test case should contain particulars such as:
- Test case identifier;
- Test case name;
- Objective;
- Test conditions/setup;
- Input data requirements/steps; and
- Expected results.
Please note, the process of developing test cases can help find problems
in the requirements or design of an application, since it requires you to
completely think through the operation of the application. For this reason, it is useful
to prepare test cases early in the development cycle, if possible.
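The particulars listed above can be captured in a simple record. A sketch using a Python dataclass, with field names following the list and all sample values invented:

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        identifier: str
        name: str
        objective: str
        conditions: str                             # test conditions / setup
        steps: list = field(default_factory=list)   # input data requirements / steps
        expected_results: str = ""

    login_tc = TestCase(
        identifier="TC-042",
        name="Login with assigned password",
        objective="Verify an assigned password grants access",
        conditions="User 'alice' exists with an assigned password",
        steps=["Open login screen", "Enter user and password", "Press OK"],
        expected_results="Main application window is displayed",
    )
    print(login_tc.identifier, "-", login_tc.name)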
Q: What should be done after a bug is found?
A: When a bug is found, it needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested.
Additionally, determinations should be made regarding requirements,
software, hardware, safety impact, etc., for regression testing to check the fixes
didn't create other problems elsewhere. If a problem-tracking system is in
place, it should encapsulate these determinations. A variety of commercial,
problem-tracking/management software tools are available. These tools, with the
detailed input of software test engineers, will give the team complete
information so developers can understand the bug, get an idea of its severity,
reproduce it and fix it.
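The determinations described above are what a problem-tracking entry typically captures. A sketch of such an entry; the field names are illustrative, not any particular tool's schema:

    bug_report = {
        "id": "BUG-1017",
        "summary": "Crash when saving an empty report",
        "severity": "critical",          # drives fix and regression priority
        "steps_to_reproduce": [
            "Open a new report",
            "Choose File > Save without entering data",
        ],
        "expected": "Empty report is saved",
        "actual": "Application crashes",
        "assigned_to": "developer",
        "status": "open",                # open -> fixed -> re-tested -> closed
        "regression_scope": ["save/load", "report module"],
    }
    print(bug_report["id"], "->", bug_report["status"])

The detail in steps_to_reproduce, expected and actual is what lets a developer understand, reproduce and fix the bug.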
Q: What is configuration management?
A: Configuration management (CM) covers the tools and processes
used to control, coordinate and track code, requirements, documentation,
problems, change requests, designs, tools, compilers, libraries, patches, changes
made to them and who makes the changes. Rob Davis has had experience with a full
range of CM tools and concepts. Rob Davis can easily adapt to your
software tool and process needs.
Q: What if the software is so buggy it can't be tested at all?
A: In this situation the best bet is to have test engineers go
through the process of reporting whatever bugs or problems initially show up, with the focus
being on critical bugs. Since this type of problem can severely affect schedules
and indicates deeper problems in the software development process, such as
insufficient unit testing, insufficient integration testing, poor
design, improper build or release procedures, managers should be notified and provided with
some documentation as evidence of the problem.
Q: How do you know when to stop testing?
A: This can be difficult to determine. Many modern software
applications are so complex and run in such an interdependent environment, that complete
testing can never be done. Common factors in deciding when to stop are (a small sketch follows the list):
- Deadlines, e.g. release deadlines or testing deadlines;
- Test cases completed with a certain percentage passed;
- Test budget has been depleted;
- Coverage of code, functionality, or requirements reaches a specified point;
- Bug rate falls below a certain level; or
- Beta or alpha testing period ends.
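Several of these factors can be checked mechanically. A toy sketch that combines a few of them into a stop/continue decision, with invented example thresholds:

    def should_stop_testing(pass_rate, coverage, open_critical_bugs, budget_left):
        """Toy stop/continue decision using invented example thresholds."""
        if budget_left <= 0:
            return True                      # budget depleted
        if open_critical_bugs > 0:
            return False                     # never stop on open critical bugs
        return pass_rate >= 0.95 and coverage >= 0.80

    print(should_stop_testing(pass_rate=0.97, coverage=0.85,
                              open_critical_bugs=0, budget_left=3))  # True

In practice the thresholds are a project decision, not a formula; the sketch only shows how the factors combine.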
Q: What if there isn't enough time for thorough testing?
A: Since it's rarely possible to test every possible aspect of an
application, every possible combination of events, every dependency, or everything that
could go wrong, risk analysis is appropriate to most software development
projects. Use risk analysis to determine where testing should be focused. This
requires judgment skills, common sense and experience. The checklist should include answers to the following questions (a sketch of turning such a checklist into test priorities follows the list):
- Which functionality is most important to the project's intended purpose?
- Which functionality is most visible to the user?
- Which functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which aspects of the application can be tested early in the development cycle?
- Which parts of the code are most complex and thus most subject to errors?
- Which parts of the application were developed in rush or panic mode?
- Which aspects of similar/related previous projects caused problems?
- Which aspects of similar/related previous projects had large maintenance expenses?
- Which parts of the requirements and design are unclear or poorly thought out?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?
- What kinds of tests could easily cover multiple functionalities?
- Which tests will have the best high-risk-coverage to time-required ratio?
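One way to act on such a checklist is to score each area and test the highest scores first. A toy sketch, with invented areas and weights:

    # Score each area: risk = impact x likelihood, then test highest-risk areas first.
    areas = [
        {"name": "payment processing", "impact": 5, "likelihood": 4},
        {"name": "report printing",    "impact": 2, "likelihood": 3},
        {"name": "login/security",     "impact": 5, "likelihood": 2},
    ]
    for area in areas:
        area["risk"] = area["impact"] * area["likelihood"]
    for area in sorted(areas, key=lambda a: a["risk"], reverse=True):
        print(f'{area["name"]}: risk {area["risk"]}')

The numbers themselves come from the judgment, common sense and experience mentioned above; the arithmetic only makes the ranking explicit.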
Q: What if the project isn't big enough to justify extensive testing?
A: Consider the impact of project errors, not the size of the
project. However, if extensive testing is still not justified, risk analysis is again needed
and the considerations listed under "What if there isn't enough time for
thorough testing?" do apply. The test engineer should then do "ad hoc" testing, or write up a limited test plan based on the risk analysis.
Q: What can be done if requirements are changing continuously?
A: Work with management early on to understand how requirements
might change, so that alternate test plans and strategies can be worked out in
advance. It is helpful if the application's initial design allows for some
adaptability, so that later changes do not require redoing the application from scratch.
Additionally, try to:
- Ensure the code is well commented and well documented; this makes changes easier for the developers.
- Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.
- In the project's initial schedule, allow extra time commensurate with probable changes.
- Move new requirements to a 'Phase 2' version of the application and use the original requirements for the 'Phase 1' version.
- Negotiate to allow only easily implemented new requirements into the project; move more difficult new requirements into future versions of the application.
- Ensure customers and management understand the scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are warranted; after all, that's their job.
- Balance the effort put into setting up automated testing with the expected effort required to redo the tests to deal with changes.
- Design some flexibility into automated test scripts.
- Focus initial automated testing on application aspects that are most likely to remain unchanged.
- Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs.
- Design some flexibility into test cases; this is not easily done; the best bet is to minimize the detail in the test cases, or set up only higher-level generic-type test plans.
- Focus less on detailed test plans and test cases and more on ad hoc testing, with an understanding of the added risk this entails.
Q: What if the application has functionality that wasn't in the requirements?
A: It may take serious effort to determine if an application has significant unexpected or hidden functionality, which would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown
to the purpose of the application, it should be removed, as it may have unknown
impacts or dependencies that were not taken into account by the designer
or the customer.
If not removed, design information will be needed to determine added
testing needs or regression testing needs. Management should be made aware of
any significant added risks as a result of the unexpected functionality. If the functionality only affects minor areas, such as small improvements in the user interface, it may not be a significant risk.
Q: How can software QA processes be implemented without stifling productivity?
A: Implement QA processes slowly over time. Use consensus to
reach agreement on processes and adjust and experiment as an organization
grows and matures. Productivity will be improved instead of stifled. Problem
prevention will lessen the need for problem detection. Panics and burnout will
decrease and there will be improved focus and less wasted effort. At the same time,
attempts should be made to keep processes simple and efficient, minimize
paperwork, promote computer-based processes and automated tracking and reporting,
minimize time required in meetings and promote training as part of the
QA process. However, no one, especially not talented technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical scenario
would be that more days of planning and development will be needed, but less time will
be required for late-night bug fixing and calming of irate customers.
Q: What if an organization is growing so fast that fixed QA processes are impossible?
A: This is a common problem in the software industry, especially
in new technology areas. There is no easy solution in this situation, other than to:
- Hire good people (i.e. hire Rob Davis);
- Ruthlessly prioritize quality issues and maintain focus on the customer; and
- Make sure everyone in the organization is clear on what quality means to the customer.
Q: How is testing affected by object-oriented designs?
A: A well-engineered object-oriented design can make it easier to
trace from code to internal design to functional design to requirements. While
there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to
the application's objects. If the application was well designed this can
simplify test design.
Q: Why do you recommend that we test during the design phase?
A: Because testing during the design phase can prevent defects
later on. I recommend we verify three things...
1. Verify the design is good, efficient, compact, testable and maintainable.
2. Verify the design meets the requirements and is complete (specifies all relationships between modules, how to pass data, what happens in exceptional circumstances, the starting state of each module and how to guarantee the state of each module).
3. Verify the design provides for sufficient memory and I/O devices and a fast enough runtime for the final product.
Q: What is software quality assurance?
A: Software Quality Assurance (SWQA), when Rob Davis does it, is
oriented to *prevention*. It involves the entire software development process.
Prevention is monitoring and improving the process, making sure any agreed-upon
standards and procedures are followed and ensuring problems are found and dealt with.
Software Testing, when performed by Rob Davis, is oriented to *detection*.
Testing involves the operation of a system or application under controlled conditions and
evaluating the results. Organizations vary considerably in how they assign
responsibility for QA and testing. Sometimes they are the combined responsibility of one group or individual. Also common are project teams, which include a mix of test engineers, testers
and developers who work closely together, with overall QA processes monitored by
project managers. It depends on what best fits your organization's size and business
structure. Rob Davis can provide QA and/or SWQA. This document details some aspects of how he can
provide software testing/QA services.
Q: What is quality assurance?
A: Quality Assurance ensures all parties concerned with the
project adhere to the process and procedures, standards and templates and test readiness
reviews. Rob Davis' QA service depends on the customers and projects. A lot will
depend on team leads or managers, feedback to developers and communications among customers, managers, developers, test engineers and testers.
Q: Processes and procedures - why follow them?
A: Detailed and well-written processes and procedures ensure the
correct steps are being executed to facilitate a successful completion of a task. They
also ensure a process is repeatable. Once Rob Davis has learned and reviewed the customer's business processes and procedures, he will follow them. He will also recommend
improvements and/or additions.
Q: Standards and templates - what is supposed to be in a document?
A: All documents should be written to a certain standard and
template. Standards and templates maintain document uniformity. They also help in learning where information is located, making it easier for a user to find what they want. Lastly, with standards and templates, information will not be accidentally omitted from a document.
Once Rob Davis has learned and reviewed your standards and templates, he will use them.
He will also recommend improvements and/or additions.
Q: What are the different levels of testing?
A: Rob Davis has expertise in testing at all the testing levels listed in these FAQs. At each test level, he documents the results. Each level of testing is considered either black box or white box testing.
Q: What is black box testing?
A: Black box testing is functional testing, not based on any
knowledge of internal software design or code. Black box testing is based on requirements and functionality.
Q: What is white box testing?
A: White box testing is based on knowledge of the internal logic
of an application's code. Tests are based on coverage of code statements, branches, paths and conditions.
Q: What is unit testing?
A: Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers. Unit testing is considered complete when the expected test results are met or differences are explainable/acceptable.
Q: What is parallel/audit testing?
A: Parallel/audit testing is testing where the user reconciles
the output of the new system to the output of the current system to verify the new system performs the operations correctly.
Q: What is functional testing?
A: Functional testing is a black box type of testing geared to the functional requirements of an application. Test engineers should perform functional testing.
Q: What is usability testing?
A: Usability testing is testing for 'user-friendliness'. Clearly
this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video
recording of user sessions and other techniques can be used. Test engineers are needed,
because programmers and developers are usually not appropriate as usability
testers.
Q: What is incremental integration testing?
A: Incremental integration testing is continuous testing of an application as new functionality is added. This may require that various aspects of
an application's functionality are independent enough to work separately, before all
parts of the program are completed, or that test drivers are developed as needed. This type
of testing may be performed by programmers, software engineers, or test engineers.
Q: What is integration testing?
A: Upon completion of unit testing, integration testing begins.
Integration testing is black box testing. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements. Test
cases are developed with the express purpose of exercising the interfaces between the
components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input.
Q: What is system testing?
A: System testing is black box testing, performed by the Test
Team, and at the start of the system testing the complete system is configured in a controlled
environment. The purpose of system testing is to validate an application's accuracy and
completeness in performing the functions as designed. System testing simulates real life scenarios in a "simulated real life" test environment and tests all functions of the system that are required in real life. System testing is deemed complete when actual
results and expected results are either in line or differences are explainable or
acceptable, based on client input.
Upon completion of integration testing, system testing is started.
Before system testing, all unit and integration test results are reviewed by SWQA to ensure all
problems have been resolved. For a higher level of testing it is important to
understand unresolved problems that originate at unit and integration test levels.
Q: What is end-to-end testing?
A: End-to-end testing is similar to system testing, the *macro* end of the test scale; it involves testing a complete application in a situation that mimics real life use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems.
Q: What is regression testing?
A: The objective of regression testing is to ensure the software
remains intact. A baseline set of data and scripts is maintained and executed to verify
that changes introduced during the release have not "undone" any previous
code. Expected results from the baseline are compared to results of the
software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next level.
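A minimal sketch of the baseline comparison described above; the run_system stand-in and the data are invented:

    def run_system(inputs):
        """Stand-in for the software under test (invented): doubles each value."""
        return {key: value * 2 for key, value in inputs.items()}

    # Baseline data and expected outputs captured from the last known-good release.
    baseline_inputs = {"a": 1, "b": 2, "c": 3}
    baseline_outputs = {"a": 2, "b": 4, "c": 6}

    def regression_check(inputs, expected):
        """Re-run the baseline and return every output that drifted."""
        actual = run_system(inputs)
        return {key: (expected[key], actual.get(key))
                for key in expected if actual.get(key) != expected[key]}

    diffs = regression_check(baseline_inputs, baseline_outputs)
    print("baseline intact" if not diffs else f"regressions found: {diffs}")

Keeping the baseline data and scripts under configuration management is what makes the comparison repeatable from release to release.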
Q: What is sanity testing?
A: Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications.
This level of testing is a subset of regression testing. It normally
includes a set of core tests of basic GUI functionality to demonstrate connectivity to the
database, application servers, printers, etc.
Q: What is performance testing?
A: Performance testing verifies loads, volumes and response
times, as defined by requirements. Although performance testing is a part of system
testing, it can be regarded as a distinct level of testing.
Q: What is load testing?
A: Load testing is testing an application under heavy loads, such
as the testing of a web site under a range of loads to determine at what point the system's response time degrades or the system fails.
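A toy sketch of the idea: apply increasing numbers of concurrent requests and watch the total response time grow; the handle_request stand-in and its timing are invented:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request(load):
        """Stand-in for the system under test: slows down as load grows."""
        time.sleep(0.001 * load)   # pretend response time grows with load
        return True

    def measure(concurrency):
        """Fire `concurrency` simultaneous requests and time the batch."""
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            list(pool.map(handle_request, [concurrency] * concurrency))
        return time.perf_counter() - start

    for users in (1, 10, 50):
        print(f"{users:3d} concurrent users -> {measure(users):.3f}s total")

A real load test would drive the actual system over the network and record the point at which response times cross an agreed threshold.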
Q: What is installation testing?
A: Installation testing is the testing of a full, partial, or
upgrade install/uninstall process. The installation test is conducted with the objective of
demonstrating production readiness. This test includes the inventory of configuration items, performed by the application's System Administrator, the evaluation of data readiness, and dynamic tests focused on basic system functionality.
Following installation testing, a sanity test is performed when necessary.
Q: What is security/penetration testing?
A: Security/penetration testing is testing how well the system is
protected against unauthorized internal or external access, or willful damage. This type
of testing usually requires sophisticated testing techniques.
Q: What is recovery/error testing?
A: Recovery/error testing is testing how well a system recovers
from crashes, hardware failures, or other catastrophic problems.
Q: What is compatibility testing?
A: Compatibility testing is testing how well software performs in
a particular hardware, software, operating system, or network environment.
Q: What is comparison testing?
A: Comparison testing is testing that compares software
weaknesses and strengths to those of competitors' products.
Q: What is acceptance testing?
A: Acceptance testing is black box testing that gives the
client/customer/project manager the opportunity to verify the system functionality and usability
prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager; however, it is conducted with the full support of the project team. The test team also works with
the client/customer/project manager to develop the acceptance criteria.
Q: What is alpha testing?
A: Alpha testing is testing of an application when development is
nearing completion. Minor design changes can still be made as a result of alpha
testing.
Alpha testing is typically performed by end-users or others, not
programmers, software engineers, or test engineers.
Q: What is beta testing?
A: Beta testing is testing an application when development and
testing are essentially completed and final bugs and problems need to be found
before the final release. Beta testing is typically performed by end-users or
others, not programmers, software engineers, or test engineers.
Q: What testing roles are standard on most testing projects?
A: Depending on the organization, the following roles are more or
less standard on most testing projects: Testers, Test Engineers, Test/QA Team Lead,
Test/QA Manager, System Administrator, Database Administrator, Technical
Analyst, Test Build Manager and Test Configuration Manager. Depending on the project,
one person may wear more than one hat. For instance, Test Engineers may also
wear the hat of Technical Analyst, Test Build Manager and Test
Configuration Manager.
Q: What is a Test/QA Team Lead?
A: The Test/QA Team Lead coordinates the testing activity,
communicates testing status to management and manages the test team.
Q: What is a Test Engineer?
A: A Test Engineer is an engineer who specializes in testing.
Test engineers create test cases, procedures and scripts, and generate test data. They execute test procedures and scripts, analyze standards of measurement, and evaluate results of system/integration/regression testing. They also:
- Speed up the work of your development staff;
- Reduce your risk of legal liability;
- Give you the evidence that your software is correct and operates properly;
- Improve problem tracking and reporting;
- Maximize the value of your software;
- Maximize the value of the devices that use it;
- Assure the successful launch of your product by discovering bugs and design flaws before users get discouraged, before shareholders lose their cool and before employees get bogged down;
- Help the work of your development staff, so the development team can devote its time to building up your product;
- Promote continual improvement;
- Provide documentation required by the FDA, FAA, other regulatory agencies and your customers;
- Save money by discovering defects early in the design process, before failures occur in production or in the field; and
- Save the reputation of your company by discovering bugs and design flaws before they damage that reputation.
Q: What is a Test Build Manager?
A: Test Build Managers deliver current software versions to the
test environment, install the application's software and apply software patches, to both
the application and the operating system, set-up, maintain and back up test
environment hardware. Depending on the project, one person may wear more
than one hat. For instance, a Test Engineer may also wear the hat of a
Test Build Manager.
Q: What is a System Administrator?
A: Test Build Managers, System Administrators, Database
Administrators deliver current software versions to the test environment, install the
application's software and apply software patches, to both the application and the
operating system, set-up, maintain and back up test environment hardware.
Depending on the project, one person may wear more than one hat. For instance, a Test
Engineer may also wear the hat of a System Administrator.
Q: What is a Database Administrator?
A: Database Administrators, Test Build Managers, and System
Administrators deliver current software versions to the test environment, install the
application's software and apply software patches, to both the application and the
operating system, set-up, maintain and back up test environment hardware.
Depending on the project, one person may wear more than one hat. For instance, a Test
Engineer may also wear the hat of a Database Administrator.
Q: What is a Technical Analyst?
A: Technical Analysts perform test assessments and validate
system/functional test requirements. Depending on the project, one person may wear more
than one hat. For instance, Test Engineers may also wear the hat of a Technical Analyst.
Q: What is a Test Configuration Manager?
A: Test Configuration Managers maintain test environments,
scripts, software and test data. Depending on the project, one person may wear more than
one hat. For instance, Test Engineers may also wear the hat of a Test
Configuration Manager.
Q: What is a test schedule?
A: The test schedule is a schedule that identifies all tasks required for a successful testing effort: a schedule of all test activities and resource requirements.
Q: What is software testing methodology?
A: One software testing methodology is a three-step process of...
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.
This methodology can be used and molded to your organization's needs.
Rob Davis believes that using this methodology is important in the
development and ongoing maintenance of his customers' applications.
Q: What is the general testing process?
A: The general testing process is the creation of a test strategy
(which sometimes includes the creation of test cases), creation of a test
plan/design (which usually includes test cases and test procedures) and the
execution of tests.
Q: How do you create a test strategy?
A: The test strategy is a formal description of how a software
product will be tested. A test strategy is developed for all levels of testing, as
required. The test team analyzes the requirements, writes the test strategy and reviews the
plan with the project team. The test plan may include test cases, conditions,
the test environment, a list of related tasks, pass/fail criteria and risk
assessment.
Inputs for this process:
- A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
- A description of the roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hours and schedules.
- Testing methodology. This is based on known standards.
- Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.
- Requirements that the system cannot provide, e.g. system limitations.
Outputs for this process:
- An approved and signed-off test strategy document and test plan, including test cases.
- Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.
Q: How do you create a test plan/design?
A: Test scenarios and/or cases are prepared by reviewing
functional requirements of the release and preparing logical groups of functions
that can be further broken into test procedures. Test procedures define test
conditions, data to be used for testing and expected results, including database updates,
file outputs and report results. Generally speaking:
- Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
- Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.
- It is the test team that, with the assistance of developers and clients, develops test cases and scenarios for integration and system testing.
- Test scenarios are executed through the use of test procedures or scripts.
- Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
- Test procedures or scripts include the specific data that will be used for testing the process or transaction.
- Test procedures or scripts may cover multiple test scenarios.
- Test scripts are mapped back to the requirements, and traceability matrices are used to ensure each test is within scope (a small sketch of such a matrix follows this list).
- Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.
- Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.
- A pre-test meeting is held to assess the readiness of the application and the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
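As referenced in the list above, a traceability matrix can be as simple as a mapping from each requirement to the test scripts that exercise it. A toy sketch with invented IDs:

    # Requirement IDs mapped to the test scripts that exercise them (invented IDs).
    traceability = {
        "REQ-01 login":       ["TS-001", "TS-002"],
        "REQ-02 save report": ["TS-010"],
        "REQ-03 print":       [],        # no coverage yet
    }
    uncovered = [req for req, tests in traceability.items() if not tests]
    print("requirements without tests:", uncovered or "none")

Reading the matrix in one direction shows scope; reading it in the other direction exposes requirements with no test coverage.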
Inputs for this process:
- Approved Test Strategy Document.
- Test tools, or automated test tools, if applicable.
- Previously developed scripts, if applicable.
- Test documentation problems uncovered as a result of testing.
- A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. the software design document, source code and software complexity data.
Outputs for this process:
- Approved documents of test scenarios, test cases, test conditions and test data.
- Reports of software design issues, given to software developers for correction.
Q: How do you execute tests?
A: Execution of tests is completed by following the test
documents in a methodical manner. As each test procedure is performed, an entry is
recorded in a test execution log to note the execution of the procedure and whether
or not the test procedure uncovered any defects. Checkpoint meetings are held
throughout the execution phase. Checkpoint meetings are held daily, if
required, to address and discuss testing issues, status and activities.
- The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results were obtained. All discrepancies/anomalies are logged and discussed with the software team lead, hardware test lead, programmers and software engineers, and are documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.
- Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem found during system testing is defined in accordance with the customer's risk assessment and recorded in their selected tracking tool.
- Proposed fixes are delivered to the testing environment, based on the severity of the problem. Fixes are regression tested and flawless fixes are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA (SWQA) Manager and/or Test Team Lead.
- After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager's formal acceptance.
- The test team reviews test document problems identified during testing and updates documents where appropriate.
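A minimal sketch of the test execution log described earlier in this answer; the field names are invented for illustration:

    import datetime

    execution_log = []

    def record_execution(procedure_id, passed, defects=()):
        """Append one entry per executed test procedure, noting any defects found."""
        execution_log.append({
            "procedure": procedure_id,
            "executed_at": datetime.datetime.now().isoformat(timespec="seconds"),
            "result": "pass" if passed else "fail",
            "defects": list(defects),
        })

    record_execution("TP-101", passed=True)
    record_execution("TP-102", passed=False, defects=["BUG-1017"])
    for entry in execution_log:
        print(entry["procedure"], entry["result"], entry["defects"])

Such a log is the raw material for the checkpoint meetings and the test summary report mentioned above.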
Inputs for this process:
- Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
- Test tools, including automated test tools, if applicable.
- Developed scripts.
- Changes to the design, i.e. Change Request Documents.
- Test data.
- Availability of the test team and project team.
- General and Detailed Design Documents, i.e. Requirements Document, Software Design Document.
- Software that has been migrated to the test environment, i.e. unit-tested code, via the Configuration/Build Manager.
- Test Readiness Document.
- Document Updates.
Outputs for this process:
- Log and summary of the test results. Usually this is part of the Test Report. This needs to be approved and signed off with revised testing deliverables.
- Changes to the code, also known as test fixes.
- Test document problems uncovered as a result of testing. Examples are Requirements Document and Design Document problems.
- Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.
- Formal record of test incidents, usually part of problem tracking.
- Baselined package, also known as tested source and object code, ready for migration to the next level.