Software Testing Basics - Software Testing Interview Questions and
Answers
1. Can you explain the PDCA
cycle and where testing fits in?
Software testing is an
important part of the software development process. In normal software
development there are four important steps, also referred to, in short, as the
PDCA (Plan, Do, Check, Act) cycle.
Let's review the four steps in detail.
Plan: Define the goal and the
plan for achieving that goal.
Do/Execute: Execute according to the plan and strategy decided during the plan stage.
Check: Check/Test to ensure
that we are moving according to plan and are getting the desired results.
Act: If any issues are found during the check stage, take appropriate corrective action and revise the plan.
So developers and other stakeholders of the project do the "planning and building," while testers do the check part of the cycle. Therefore, software testing is done in the check part of the PDCA cycle.
2. What is the difference
between white box, black box, and gray box testing?
Black box testing is a testing
strategy based solely on requirements and specifications. Black box testing
requires no knowledge of internal paths, structures, or implementation of the
software being tested.
White box testing is a testing strategy based on internal paths, code structures, and implementation of the software being tested. White box testing generally requires detailed programming skills.
There is one more type of testing called gray box testing. In this we look into the "box" being tested just long enough to understand how it has been implemented. Then we close up the box and use our knowledge to choose more effective black box tests.
Consider how the two types of testers view an accounting application during testing. Black box testers view the basic accounting application from the outside, while in white box testing the tester knows the internal structure of the application. In most scenarios white box testing is done by developers, as they know the internals of the application. In black box testing we check the overall functionality of the application, while in white box testing we do code reviews, view the architecture, remove bad code practices, and do component-level testing.
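To make the contrast concrete, here is a minimal sketch in Python (the discount function and its spec are hypothetical examples, not from the text). The black box test is derived purely from the stated requirement; the white box test is written after reading the code, so it deliberately exercises each internal branch.

def compute_discount(amount):
    """Return the discount for a purchase amount (hypothetical spec:
    10% off orders of 1000 or more, otherwise 5% off)."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount >= 1000:           # internal branch: bulk discount
        return amount * 0.10
    return amount * 0.05         # internal branch: standard discount

# Black box: derived only from the written requirement, with no
# knowledge of the internal code paths.
def test_black_box():
    assert compute_discount(2000) == 200.0
    assert compute_discount(100) == 5.0

# White box: written after reading the implementation, so the boundary
# of the >= 1000 branch and the error path are exercised explicitly.
def test_white_box():
    assert compute_discount(1000) == 100.0   # boundary of the bulk branch
    assert compute_discount(900) == 45.0     # standard-discount branch
    try:
        compute_discount(0)                  # error-handling path
    except ValueError:
        pass
    else:
        assert False, "expected ValueError for a non-positive amount"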
3. Can you explain usability
testing?
Usability testing is a testing methodology in which the end customer is asked to use the software, to see whether the product is easy to use and to gauge the customer's perception and the time taken to complete tasks. The best way to capture the customer's point of view on usability is to use prototype or mock-up software during the initial stages. By giving the customer the prototype before development starts, we confirm that we are not missing anything from the user's point of view.
4. What are the categories of
defects?
There are three main
categories of defects:
Wrong: The requirements have
been implemented incorrectly. This defect is a variance from the given
specification.
Missing: A requirement given by the customer was not implemented. This is a variance from the specifications, an indication that a specification was not implemented or that a requirement of the customer was not noted properly.
Extra: A requirement
incorporated into the product that was not given by the end customer. This is
always a variance from the specification, but may be an attribute desired by
the user of the product. However, it is considered a defect because it's a
variance from the existing requirements.
5. How do you define a
testing policy?
The following are the
important steps used to define a testing policy in general. But it can change
according to your organization. Let's discuss in detail the steps of
implementing a testing policy in an organization.
Definition: The first step
any organization needs to do is define one unique definition for testing within
the organization so that everyone is of the same mindset.
How to achieve: How are we going to achieve our objective? Will there be a testing committee? Will there be compulsory test plans which need to be executed, etc.?
Evaluate: After testing is implemented in a project, how do we evaluate it? Are we going to derive metrics of defects per phase, per programmer, etc.? Finally, it's important to let everyone know how testing has added value to the project.
Standards: Finally, what are
the standards we want to achieve by testing? For instance, we can say that more
than 20 defects per KLOC will be considered below standard and code review
should be done for it.
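As a rough illustration of that last point, here is a small Python sketch that checks defect density against the 20-defects-per-KLOC threshold mentioned above (the module names and figures are made up for the example):

# Minimal sketch: checking defect density against a testing-policy standard.

POLICY_THRESHOLD = 20  # defects per KLOC considered below standard (from the text)

def defects_per_kloc(defects, lines_of_code):
    """Return the defect density per thousand lines of code."""
    return defects / (lines_of_code / 1000)

# Hypothetical module figures: name -> (defects found, lines of code).
modules = {"billing": (45, 1800), "reports": (12, 900)}

for name, (defects, loc) in modules.items():
    density = defects_per_kloc(defects, loc)
    verdict = "code review needed" if density > POLICY_THRESHOLD else "meets standard"
    print(f"{name}: {density:.1f} defects/KLOC -> {verdict}")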
6. On what basis is the
acceptance plan prepared?
In any project the acceptance
document is normally prepared using the following inputs. This can vary from
company to company and from project to project.
Requirement document: This document specifies what exactly is needed in the project from the customer's perspective.
Input from customer: This can
be discussions, informal talks, emails, etc.
Project plan: The project
plan prepared by the project manager also serves as good input to finalize your
acceptance test.
7. What is configuration
management?
Configuration management is the detailed recording and updating of information for hardware and software components. When we say components we do not mean only source code; it can also be tracking of changes to software documents such as requirements, design documents, test cases, etc.
When changes are made in an ad hoc and uncontrolled manner, chaotic situations can arise and more defects can be injected. So whenever changes are made, they should be made in a controlled fashion and with proper versioning. At any point in time we should be able to revert to an old version. The main intention of configuration management is to track our changes if we have issues with the current system. Configuration management is done using baselines.
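A minimal sketch of the baseline-and-revert idea (the in-memory store below is a toy assumption for illustration; real projects use version control tools):

# Toy illustration of configuration management baselines: every change
# is recorded as a new version, and any baseline can be restored.

class ConfigStore:
    def __init__(self):
        self.versions = []   # history of configuration-item snapshots
        self.baselines = {}  # label -> index into versions

    def commit(self, items):
        """Record a new snapshot of the configuration items."""
        self.versions.append(dict(items))
        return len(self.versions) - 1

    def baseline(self, label):
        """Mark the latest version as a named baseline."""
        self.baselines[label] = len(self.versions) - 1

    def revert(self, label):
        """Return the snapshot recorded at a named baseline."""
        return self.versions[self.baselines[label]]

store = ConfigStore()
store.commit({"requirements.doc": "v1", "design.doc": "v1"})
store.baseline("release-1.0")
store.commit({"requirements.doc": "v2", "design.doc": "v1"})  # later change
print(store.revert("release-1.0"))  # we can always go back to the baseline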
8. How does a coverage tool
work?
While testing the actual product, the code coverage tool is run simultaneously. As the testing goes on, the tool monitors which statements of the source code are executed. When the final testing is completed we get a complete report of the statements that were never executed, along with the coverage percentage.
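A minimal sketch of the mechanism in Python, using the standard library's trace hook (this illustrates the idea only, not how any particular coverage tool is implemented; classify is a hypothetical piece of code under test):

import sys

executed = set()  # (filename, line number) pairs seen during the run

def tracer(frame, event, arg):
    # The trace hook receives an event for every executed line; record it.
    if event == "line":
        executed.add((frame.f_code.co_filename, frame.f_lineno))
    return tracer

def classify(n):           # code under test (hypothetical)
    if n < 0:
        return "negative"  # never reached by the test run below
    return "non-negative"

sys.settrace(tracer)       # start monitoring
classify(5)                # the "test run"
sys.settrace(None)         # stop monitoring

# Report: which lines of this file ran. Lines missing from `executed`
# (here, the "negative" branch) are the untested statements.
print(sorted(line for f, line in executed if f == __file__))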
9. Which is the best testing
model?
In real projects, tailored models have proven to be the best, because they share features of the Waterfall, Iterative, Evolutionary, and other models and can be adapted to fit real-life projects. Tailored models are the most productive and beneficial for many organizations. If it's a pure testing project, then the V model is the best.
10. What is the difference
between a defect and a failure?
When a defect reaches the end customer it is called a failure; if the defect is detected internally and resolved, it's called a defect.
11. Should testing be done
only after the build and execution phases are complete?
In traditional testing
methodology testing is always done after the build and execution phases.
But that's a wrong way of thinking, because the earlier we catch a defect the more cost effective it is to fix. For instance, fixing a defect in maintenance is ten times more costly than fixing it during execution.
In the requirement phase we can verify if the requirements are met according to the customer's needs. During design we can check whether the design document covers all the requirements; in this stage we can also generate rough functional data and review the design document from the architecture and correctness perspectives. In the build and execution phase we can execute unit test cases and generate structural and functional data. Then comes the testing phase done in the traditional way, i.e., running the system test cases and seeing if the system works according to the requirements. During installation we need to see if the system is compatible with the environment it is installed on. Finally, during the maintenance phase, when any fixes are made we can retest the fixes and perform regression testing.
Therefore, testing should occur in conjunction with each phase of the software development lifecycle.
12. Are there more defects in
the design phase or in the coding phase?
The design phase is more error prone than the execution phase. One of the most frequent defects that occurs during design is that the product does not cover the complete requirements of the customer. Second, wrong or bad architectural and technical decisions make the next phase, execution, more prone to defects. Because the design phase drives the execution phase, it's the most critical phase to test. The testing of the design phase can be done by good reviews. On average, 60% of defects occur during design and 40% during the execution phase.
13. What groups of teams can do software testing?
When it comes to testing, everyone can be involved, right from the developer to the project manager to the customer. But below are the different types of test teams that can be present in a project.
Isolated test team
Outsource - we can hire external testing resources to do testing for our project.
Inside test team
Developers as testers
QA/QC team.
14. What impact ratings have
you used in your projects?
Normally, the impact ratings
for defects are classified into three types:
Minor: Very low impact; does not affect operations on a large scale.
Major: Affects
operations on a very large scale.
Critical: Brings the
system to a halt and stops the show.
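These ratings can be encoded directly in a defect-tracking tool. A tiny Python sketch (the triage rule below is an illustrative assumption, not from the text):

from enum import Enum

class Impact(Enum):
    # The three ratings described above.
    MINOR = 1     # very low impact on operations
    MAJOR = 2     # affects operations on a large scale
    CRITICAL = 3  # brings the system to a halt

def must_fix_before_release(defect_impact):
    """Toy triage rule: majors and criticals block the release."""
    return defect_impact in (Impact.MAJOR, Impact.CRITICAL)

print(must_fix_before_release(Impact.CRITICAL))  # True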
15. Does an increase in
testing always improve the project?
No, an increase in testing does not always mean improvement of the product, company, or project. In real test scenarios only 20% of test plans are critical from a business angle, and running those critical test plans will assure that the testing is properly done. Under testing and over testing both have an impact: if you under test a system the number of defects will increase, but if you over test a system your cost of testing will increase. Even if your defects come down, your cost of testing will have gone up.
16. What's the relationship
between environment reality and test phases?
Environment reality becomes more important as the test phases move ahead. For instance, during unit testing you need the environment to be only partly real, but at the acceptance phase you should have a 100% real environment, or we can say it should be the actual real environment. With every phase the environment reality should increase, until finally during acceptance testing it is 100% real.
17. What are different types
of verifications?
Verification is a static type of software testing; the code is not executed. The product is evaluated by going through the code. The types of verification are:
1. Walkthrough: Walkthroughs are informal, initiated by the author of the software product, who presents it to a colleague for assistance in locating defects or suggestions for improvement. They are usually unplanned. The author explains the product, the colleague comes out with observations, and the author notes down the relevant points and takes corrective action.
2. Inspection: An inspection is a thorough, word-by-word checking of a software product with the intention of locating defects, confirming traceability to relevant requirements, etc.
18. How do test documents in
a project span across the software development lifecycle?
Test documents span the entire software development lifecycle. The following are the specific testing documents in the lifecycle:
Central/Project test plan: This
is the main test plan which outlines the complete test strategy of the software
project. This document should be prepared before the start of the project and
is used until the end of the software development lifecycle.
Acceptance test plan: This
test plan is normally prepared with the end customer. This document commences
during the requirement phase and is completed at final delivery.
System test plan: This
test plan starts during the design phase and proceeds until the end of the
project.
Integration and unit test
plan: Both of these test plans start during the execution phase and
continue until the final delivery.
19. Which test cases are written first: white box or black box?
Normally black box test cases are written first and white box test cases later. In order to write black box test cases we need the requirement document and the design or project plan, and all these documents are easily available at the start of the project. White box test cases cannot be started in the initial phase of the project because they need more architectural clarity, which is not available at the start of the project. So normally white box test cases are written after black box test cases.
Black box test cases do not require system understanding, but white box testing needs more structural understanding, and structural understanding is clearer in the later part of the project, i.e., while executing or designing. For black box testing you need to analyze only from the functional perspective, which is easily available from a simple requirement document.
20. Explain Unit Testing,
Integration Tests, System Testing and Acceptance Testing?
Unit testing - Testing
performed on a single, stand-alone module or unit of code.
Integration Tests - Testing performed on groups of modules to ensure that data and control are passed properly between modules.
System testing - Executing a predetermined combination of tests that, when run successfully, demonstrates that the system meets the requirements.
Acceptance testing - Testing to ensure that the system meets the needs of the organization and the end user or customer (i.e., validates that the right system was built).
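A brief sketch of the first two levels using Python's unittest (the parsing and tax functions are hypothetical examples, not from the text): the unit tests exercise each module stand-alone, while the integration test checks that data passes correctly between them.

import unittest

# Hypothetical units under test.
def parse_amount(text):
    """Unit 1: convert user input like '1,200' to an integer."""
    return int(text.replace(",", ""))

def apply_tax(amount, rate=0.1):
    """Unit 2: add tax to an amount."""
    return amount + amount * rate

class UnitTests(unittest.TestCase):
    # Unit testing: each module exercised on its own.
    def test_parse_amount(self):
        self.assertEqual(parse_amount("1,200"), 1200)

    def test_apply_tax(self):
        self.assertEqual(apply_tax(100), 110.0)

class IntegrationTests(unittest.TestCase):
    # Integration testing: data passed between the two units.
    def test_parse_then_tax(self):
        self.assertEqual(apply_tax(parse_amount("1,200")), 1320.0)

if __name__ == "__main__":
    unittest.main()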