Testing Concepts

1. Software Development Life Cycle (SDLC)
a. Feasibility study
b. Requirements analysis
c. Systems design: Describes desired features and business rules
d. Implementation: The real code is written here.
e. Integration and testing: Brings all the pieces together
f. Acceptance, installation, deployment
g. Maintenance

2. Software Testing Life Cycle (STLC)
a. Requirements Analysis
b. Test Planning
c. Test Development
d. Test Execution
e. Bug Reporting and Result Analysis
f. Defect Retesting and Regression testing

3. Test Plan Contents
a. Table of Contents
b. Objective of testing
c. Personnel responsible for each task
d. Personnel contact-info
e. Relevant naming conventions
f. Features to be Tested, Features not to be Tested
g. Assumptions and dependencies
h. Test environment - hardware, operating systems, browsers, etc.
i. Project risk analysis
j. Initial smoke testing period and criteria
k. Process used to manage the bugs
l. Test suspension and restart criteria
m. Duration of test

4. Test Case Format
a. Component Name
b. Prerequisite
c. Test Cases Steps
d. Expected Result
e. Status
f. Comments
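
A test case with these fields can be modeled as a small record. A minimal Python sketch (field names and example values are illustrative, not from any particular tool):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One row of a test-case sheet, following the field list above."""
    component: str            # Component Name
    prerequisite: str         # Prerequisite
    steps: list               # Test Case Steps
    expected_result: str      # Expected Result
    status: str = "Not Run"   # e.g. Pass / Fail / Not Run
    comments: str = ""

tc = TestCase(
    component="Login",
    prerequisite="A registered user account exists",
    steps=["Open the login page", "Enter valid credentials", "Click Submit"],
    expected_result="User lands on the dashboard",
)
print(tc.status)  # → Not Run
```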

5. Bug Life Cycle
a. New
b. Assigned
c. Resolved
d. Verified
e. Closed
f. Reopen
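
The life cycle above can be viewed as a small state machine. The transition table below is a simplified, illustrative version; real trackers such as Bugzilla allow a few additional paths:

```python
# Allowed transitions between the bug states listed above (simplified).
TRANSITIONS = {
    "New":      {"Assigned"},
    "Assigned": {"Resolved"},
    "Resolved": {"Verified", "Reopen"},
    "Verified": {"Closed"},
    "Closed":   {"Reopen"},
    "Reopen":   {"Assigned"},
}

def move(state, new_state):
    """Return the new state, or raise if the transition is not allowed."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"Cannot go from {state} to {new_state}")
    return new_state

state = "New"
for step in ("Assigned", "Resolved", "Verified", "Closed"):
    state = move(state, step)
print(state)  # → Closed
```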

6. Testing Techniques (Testing approach)
The most popular Black box testing techniques are:
1. Equivalence Partitioning
2. Boundary Value Analysis
3. Error-Guessing
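
The first two techniques can be illustrated with a hypothetical validator that accepts ages from 18 to 60 inclusive (the rule and all values are made up for illustration):

```python
def is_valid_age(age):
    """Hypothetical rule under test: valid ages are 18..60 inclusive."""
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition
# (below range, inside range, above range).
partitions = {-5: False, 30: True, 99: False}

# Boundary value analysis: values on and just around each boundary.
boundaries = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}

for value, expected in {**partitions, **boundaries}.items():
    assert is_valid_age(value) == expected, value
print("all cases passed")
```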

7. Web Testing Tools
1. Bug Tracking Tool: Bugzilla
2. Recording the bug in video format: Screencast-o-matic
3. Tool for taking Screen-shots for reporting the bug: Fireshot
4. Compatibility testing tools: Browser Sandbox and Adobe BrowserLab
5. Tool to Monitor CSS and HTML: Firebug
6. Tool to ensure valid HTML: HTML Validator
7. Performance Testing tool: Page Speed by Google
8. To find Broken Links: Pinger Add-on and brokenlinkcheck
9. For checking the spelling of content: Spell Check

8. Severity: Describes the impact of the defect on the application's functionality.
Priority: Describes how urgently the defect should be fixed.

9. Hypertext Transfer Protocol Secure (HTTPS) is a combination of the Hypertext Transfer Protocol with the SSL/TLS protocol to provide encrypted communication and secure identification of a network web server.

10. Testing Process

a. Content Testing
a. Content should be correct.
b. Text should wrap around properly.
c. Switch images off to check that ALT text is displayed.
d. Switch JavaScript off and see if appropriate messages are displayed to the user.
e. Check for sensible page titles.

b. Usability Testing
a. Participation of QA and designers
b. Participation of sample end users
c. Observation by a test moderator
d. Development of research questions
e. Recommendation of improvements to the design of the product

c. Functional Testing
a. Check for broken links.
b. Validate the HTML (Validator W3).
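
A broken-link check starts by collecting every link on the page. Below is a minimal sketch using Python's standard `html.parser`; actually fetching each URL (e.g. with `urllib`) and reporting non-2xx responses is omitted here:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets of <a> tags. Each collected URL would then
    be fetched, and any error response reported as a broken link."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<p><a href="/home">Home</a> <a href="http://example.com/x">X</a></p>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # → ['/home', 'http://example.com/x']
```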

Functional testing types:
a. Smoke testing / Sanity testing
b. Usability Testing
c. Regression Testing
d. Pre-User Acceptance Testing, which includes Alpha and Beta testing
e. User Acceptance Testing
f. Localization Testing

Non-functional testing types:
a. Load, Stress, Performance, Volume Testing
b. Compatibility
c. Security Testing
d. Installation Testing

d. Interface
a. Data displayed in the browser should match the data available on the server.

e. Compatibility
a. Windows
b. Browsers

f. Security Testing
a. Verify that the number of login attempts is limited.
b. Verify rules for password selection.
c. Is there a session timeout limit?
d. Paste an internal URL directly into the address bar without logging in.
e. Test whether SSL is used for secure pages.
f. SQL Injection: Try submitting a single quote (') in input fields.
g. Session hijacking: If your app has a session identifier number in the URL, try decreasing that number.
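
The single-quote / SQL-injection check above can be demonstrated against an in-memory SQLite database. The table, the `login_unsafe` and `login_safe` names, and the attack string are all illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def login_unsafe(name, password):
    # Vulnerable: user input is concatenated directly into the SQL string.
    query = (f"SELECT * FROM users WHERE name = '{name}' "
             f"AND password = '{password}'")
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # Parameterized query: input is bound as data, never parsed as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchall()

attack = "' OR '1'='1"
print(len(login_unsafe("alice", attack)))  # → 1  (login bypassed!)
print(len(login_safe("alice", attack)))    # → 0  (rejected)
```

The vulnerable version turns the attack into `... AND password = '' OR '1'='1'`, which matches every row; the parameterized version treats the whole attack string as a literal password.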

g. Performance Testing
a. Can your site handle a large number of users requesting a certain page?
b. Long periods of continuous use: Is the site able to run for a long period without downtime?
c. Apply a Web page performance (speed) tool.
  • Performance Testing: The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing.
  • Load test: To verify application behavior under normal and peak load conditions
  • Stress Testing: To determine application’s behavior when it is pushed beyond normal or peak load conditions.
    Here are some ways in which stress can be applied to the system:
    a. Double the baseline number of concurrent users/HTTP connections
  • Volume testing:
    a. testing a word processor by editing a very large document
    b. testing a printer by sending it a very large job
  • Capacity test: To determine how many users and/or transactions a given system will support and still meet performance goals.
An analogy: If a chair is designed to bear 100 kg and my weight is 70 kg, that testing is normal testing. If my weight is 100 kg, that testing is load testing. If my weight is 120 kg, that testing is stress testing.
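
The load/stress idea above (run at a baseline, then double it) can be sketched with threads against a stand-in for the page request. A real load test would use a dedicated tool and a real HTTP endpoint; everything here is a simulation:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def request_page():
    """Stand-in for an HTTP request to the page under test."""
    time.sleep(0.01)  # simulated server response time
    return 200        # simulated HTTP status code

def run_load(concurrent_users):
    """Fire one request per simulated user, all concurrently."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        statuses = list(pool.map(lambda _: request_page(),
                                 range(concurrent_users)))
    return statuses, time.time() - start

baseline = 10
for users in (baseline, 2 * baseline):   # stress: double the baseline
    statuses, elapsed = run_load(users)
    assert all(s == 200 for s in statuses)
    print(f"{users} users served in {elapsed:.2f}s")
```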
11. Smoke Testing
A smoke test is a cursory examination of all of the basic components of a software system to ensure that they work. Typically, smoke testing is conducted immediately after a software build is made. It ensures that future testing is not blocked.

12. System testing 
Testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing is actually done to the entire system against the Functional Requirement Specification(s) (FRS) and/or the System Requirement Specification (SRS).

Types Of System Tests:
a. Functional testing 
b. User interface testing 
c. Usability testing 
d. Compatibility testing 
e. Security testing 
f. Performance testing 
g. Sanity testing 
h. Regression testing 
i. Installation testing

13. Defect or Bug density: Defect density is equal to the ratio of number of defects to the number of lines of code.

DD = Total Defects / KLOC (kilo lines of code)
Ex: Suppose 10 bugs are found in 1 KLOC.
Therefore DD is 10 defects/KLOC.
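
The formula can be written directly as a small helper (the second example value is illustrative):

```python
def defect_density(defects, lines_of_code):
    """Defects per KLOC (thousand lines of code)."""
    return defects / (lines_of_code / 1000)

# The example above: 10 bugs found in 1 KLOC.
print(defect_density(10, 1000))   # → 10.0
# Illustrative: 30 bugs in 15,000 lines of code.
print(defect_density(30, 15000))  # → 2.0
```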

14. Traceability Matrix - Requirement Traceability Matrix (RTM)
A traceability matrix is a matrix used to keep track of requirements. It is a mapping between requirements and test cases; we use it to identify missing test cases.

Exact requirements from the requirement document given by the client are copied into this matrix. Each requirement is assigned a unique number and a remark indicating whether it is testable. Against each testable requirement, a test objective and test case are identified. It is quite possible that one requirement has multiple test objectives and test cases. Each test objective and test case is assigned a unique number.

a. We can trace missing test cases.
b. Whenever requirements change, we can easily refer to the matrix document, change the use case, go to the corresponding test cases, and change them.
c. It is easy to test any functionality: we only need to refer to the matrix document to reach the related test cases.
d. We can trace the impact of functionalities on one another, because different functionalities can share the same test cases.
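
A minimal sketch of an RTM as a plain mapping, showing benefits (a) and (d) above; all IDs are illustrative:

```python
# Traceability matrix: requirement ID -> list of test-case IDs.
rtm = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],   # testable requirement with no coverage yet
}

# Benefit (a): trace missing test cases.
uncovered = [req for req, cases in rtm.items() if not cases]
print(uncovered)  # → ['REQ-003']

# Benefit (d): requirements may share test cases, so the impact of a
# change can be traced through the shared test-case IDs.
def requirements_using(test_case):
    return [req for req, cases in rtm.items() if test_case in cases]

print(requirements_using("TC-003"))  # → ['REQ-002']
```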

15. Bug: A software bug is the common term used to describe an error, mistake, failure, or fault in a computer program.

16. Monkey Testing
Monkey Testing is random testing.
a. Dumb Monkeys: A dumb monkey doesn't know anything about the software being tested; it just clicks or types randomly.

b. Semi-Smart Monkeys: Add logging to your monkey so that everything it does is recorded to a file. When the monkey finds a bug, you need only to look at the log file to see what it was doing before the failure.
Another solution to track what your monkey does is to set up a video camera to record what happens on the screen. When you notice that the software has failed, just rewind and replay the tape.

c. Smart Monkeys: A true smart monkey knows Where he is, What he can do there, Where he can go, Where he's been, If what he's seeing is correct.
A smart monkey isn't limited to just looking for crashing bugs, either. It can examine data as it goes, checking the results of its actions and looking for differences from what it expects.
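
A dumb monkey with the logging described above can be sketched in a few lines; `parse_quantity` is a made-up function under test, and the random-input scheme is illustrative:

```python
import random

def parse_quantity(text):
    """Toy function under test: int() rejects empty or malformed input."""
    return int(text)

def dumb_monkey(iterations=1000, seed=42):
    """Feed random strings to the function, logging every input so a
    failure can be traced (the 'semi-smart' logging described above)."""
    rng = random.Random(seed)
    log = []
    for _ in range(iterations):
        text = "".join(rng.choice("0123456789 ")
                       for _ in range(rng.randint(0, 4)))
        log.append(text)
        try:
            parse_quantity(text)
        except ValueError:
            return log  # the last log entry is the crashing input
    return None

crash_log = dumb_monkey()
print(repr(crash_log[-1]) if crash_log else "no crash found")
```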

17. Web Testing: Web testing is nothing but the testing of browser-based applications. Web applications are usually the most visible, widely used, and potentially most vulnerable applications.

18. Server Side Interface
In web testing, the server-side interface should be tested. This is done by verifying that communication is carried out properly. Compatibility of the server with software, hardware, network, and database should be tested.

19. Client Side Compatibility
Client-side compatibility is also tested on various platforms, using various browsers, etc.

20. Desktop – Client server – Web Applications

A desktop application runs on personal computers and workstations.
In a client-server application you have two different components to test: the application is loaded on the server machine, while an executable (exe) is installed on every client machine.
In a web application, the application is loaded on a server whose location may or may not be known, and no exe is installed on the client machine; you have to test it on different web browsers and different operating systems.

21. Why are there so many software bugs?
a. Unclear software requirements, due to miscommunication about what the software should or shouldn't do.
b. Software complexity.
c. Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.
d. Changing requirements

Related Testing Topics:
a. Black Box testing
b. Web Testing Checklist
c. Desktop App Testing Checklist
