Thursday, November 6, 2008

Testing Interview Questions

Here are some more testing interview questions collected from the internet. Read and enjoy!

Q: Give me five common problems that occur during software development.

A: Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway and poor communication.
1. Requirements are poorly written when they are unclear, incomplete, too general, or not testable; such requirements will lead to problems.
2. The schedule is unrealistic if too much work is crammed in too little time.
3. Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.
4. It's extremely common that new features are added after development is underway.
5. Miscommunication either means the developers don't know what is needed, or customers have unrealistic expectations and therefore problems are guaranteed.
Q: Give me five solutions to problems that occur during software development.

A: Solid requirements, realistic schedules, adequate testing, firm requirements and good communication.
1. Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable. All players should agree to requirements. Use prototypes to help nail down requirements.
2. Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.
3. Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.
4. Avoid new features. Stick to initial requirements as much as possible. Once development has begun, be prepared to defend the design against changes and additions, and be prepared to explain their consequences. If changes are necessary, ensure they're adequately reflected in related schedule changes. Use prototypes early on so customers' expectations are clarified and customers can see what to expect; this will minimize changes later on.
5. Communicate. Require walk-throughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools and change management tools. Ensure documentation is available and up-to-date. Use documentation that is electronic, not paper. Promote teamwork and cooperation.

Q: Do automated testing tools make testing easier?

A: Yes and no. For larger projects, or ongoing long-term projects, they can be valuable. But for small projects, the time needed to learn and implement them is usually not worthwhile. A common type of automated tool is the record/playback type. For example, a test engineer clicks through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI and has an automated testing tool record and log the results. The recording is typically in the form of text, based on a scripting language that the testing tool can interpret. If a change is made (e.g. new buttons are added, or some underlying code in the application is changed), the application is then re-tested by just playing back the recorded actions and compared to the logged results in order to check effects of the change. One problem with such tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts. Another problem with such tools is the interpretation of the results (screens, data, logs, etc.) that can be a time-consuming task.
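
As a rough illustration of the record/playback idea described above, here is a minimal sketch in Python. The AppDriver class, its methods and the recorded script are all invented for illustration; a real record/playback tool would generate and interpret its own script format against the actual GUI.

    # Minimal record/playback sketch. AppDriver is a hypothetical stand-in
    # for the application under test; a real tool records against the GUI.

    class AppDriver:
        """Hypothetical driver exposing the actions a recorder might capture."""
        def click(self, control):
            # ... a real driver would operate the GUI control here ...
            return f"clicked {control}"

        def type_text(self, control, text):
            return f"typed '{text}' into {control}"

    # A "recording" is just data: the action, its target, its argument,
    # and the result that was logged when the script was first recorded.
    recorded_script = [
        ("click",     "File>Open",   None,       "clicked File>Open"),
        ("type_text", "FilenameBox", "test.txt", "typed 'test.txt' into FilenameBox"),
        ("click",     "OK",          None,       "clicked OK"),
    ]

    def playback(driver, script):
        """Replay each recorded action and compare against the logged result."""
        failures = []
        for action, control, arg, expected in script:
            if action == "click":
                actual = driver.click(control)
            elif action == "type_text":
                actual = driver.type_text(control, arg)
            else:
                raise ValueError(f"unknown action: {action}")
            if actual != expected:
                failures.append((action, control, expected, actual))
        return failures

    if __name__ == "__main__":
        problems = playback(AppDriver(), recorded_script)
        print("PASS" if not problems else f"FAIL: {problems}")

When the application changes, either the recorded actions or the logged expected results (or both) have to be updated, which is exactly the maintenance cost mentioned above.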

Q: What makes a good test engineer?

A: Rob Davis is a good test engineer because he
· Has a "test to break" attitude,
· Takes the point of view of the customer,
· Has a strong desire for quality,
· Has an attention to detail,
· Is tactful and diplomatic,
· Has good communication skills, both oral and written, and
· Has previous software development experience.
Good test engineers have a "test to break" attitude, take the point of view of the customer, have a strong desire for quality and pay attention to detail. Tact, diplomacy and the ability to communicate with both technical and non-technical people are useful in maintaining a cooperative relationship with developers. Previous software development experience is also helpful as it provides a deeper understanding of the software development process, gives the test engineer an appreciation for the developers' point of view and reduces the learning curve in automated test tool programming.

Q: What makes a good QA engineer?

A: The same qualities a good test engineer has are useful for a QA engineer. Additionally, good QA engineers, like Rob Davis, understand the entire software development process and how it fits into the business approach and the goals of the organization. Communication skills and the ability to understand various sides of issues are also important.

Q: What makes a good resume?

A: On the subject of resumes, there seems to be an unending discussion of whether you should or shouldn't have a one-page resume. The following are some of the comments I have personally heard: "Well, Joe Blow (car salesman) said I should have a one-page resume." "Well, I read a book and it said you should have a one-page resume." "I can't really go into what I really did because if I did, it'd take more than one page on my resume." "Gosh, I wish I could put my job at IBM on my resume but if I did it'd make my resume more than one page, and I was told to never make the resume more than one page long." "I'm confused, should my resume be more than one page? I feel like it should, but I don't want to break the rules." Or, here's another comment: "People just don't read resumes that are longer than one page." I have heard some more, but we can start with these.

So what's the answer? There is no scientific answer about whether a one-page resume is right or wrong. It all depends on who you are and how much experience you have. The first thing to look at here is the purpose of a resume. The purpose of a resume is to get you an interview. If the resume is getting you interviews, then it is a good resume. If the resume isn't getting you interviews, then you should change it.

The biggest mistake you can make on your resume is to make it hard to read. Why? One, scanners don't like odd resumes, and small fonts make your resume harder to read; some candidates use a 7-point font so they can get the resume onto one page. Big mistake. Two, resume readers do not like eye strain either. If the resume is mechanically challenging, they just throw it aside for one that is easier on the eyes. Three, there are lots of resumes out there these days, and that is also part of the problem. Four, in light of the current scanning scenario, more than one page is not a deterrent because many will scan your resume into their database. Once the resume is in there and searchable, you have accomplished one of the goals of resume distribution. Five, resume readers don't like to guess and most won't call you to clarify what is on your resume.

Generally speaking, your resume should tell your story. If you're a college graduate looking for your first job, a one-page resume is just fine. If you have a longer story, the resume needs to be longer. Please put your experience on the resume so resume readers can tell when and for whom you did what. Short resumes, for people long on experience, are not appropriate. The real audience for these short resumes is people with short attention spans and low IQs. I assure you that when your resume gets into the right hands, it will be read thoroughly.

Q: What makes a good QA/Test Manager?

A: QA/Test Managers are familiar with the software development process; able to maintain the enthusiasm of their team and promote a positive atmosphere; able to promote teamwork to increase productivity; able to promote cooperation between Software and Test/QA Engineers; have the people skills needed to promote improvements in QA processes; have the ability to withstand pressures and say *no* to other managers when quality is insufficient or QA processes are not being adhered to; able to communicate with technical and non-technical people; and able to run meetings and keep them focused.

Q: What is the role of documentation in QA?

A: Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and determining which document will have a particular piece of information. Use documentation change management, if possible.

Q: What about requirements?

A: Requirement specifications are important; one of the most reliable ways to guarantee problems in a complex software project is to have poorly documented requirement specifications. Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable and testable. A non-testable requirement would be, for example, "user-friendly", which is too subjective. A testable requirement would be something such as, "the product shall allow the user to enter their previously-assigned password to access the application". Care should be taken to involve all of a project's significant customers in the requirements process. Customers could be in-house or external and could include end-users, customer acceptance test engineers, testers, customer contract officers, customer management, future software maintenance engineers, salespeople and anyone who could later derail the project if their expectations aren't met; such people should be included as customers, if possible. In some organizations, requirements may end up in high-level project plans, functional specification documents, design documents, or other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by test engineers in order to properly plan and execute tests. Without such documentation there will be no clear-cut way to determine whether a software application is performing correctly.
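
To make the testable/non-testable distinction concrete, here is a minimal sketch (Python unittest) of how the password requirement quoted above could be turned into an automated check. The authenticate function and the stored password table are hypothetical stand-ins for the application's real login logic.

    import unittest

    # Hypothetical stand-in for the application's login logic; the real
    # implementation would check the user's previously-assigned password.
    ASSIGNED_PASSWORDS = {"alice": "s3cret"}

    def authenticate(user, password):
        return ASSIGNED_PASSWORDS.get(user) == password

    class TestPasswordRequirement(unittest.TestCase):
        """Requirement: the product shall allow the user to enter their
        previously-assigned password to access the application."""

        def test_correct_password_grants_access(self):
            self.assertTrue(authenticate("alice", "s3cret"))

        def test_wrong_password_denies_access(self):
            self.assertFalse(authenticate("alice", "guess"))

    if __name__ == "__main__":
        unittest.main()

A requirement like "user-friendly", by contrast, gives a test like this nothing objective to assert against.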

Q: What is a test plan?

A: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

Q: What is a test case?

A: A test case is a document that describes an input, action, or event and its expected result, in order to determine if a feature of an application is working correctly. A test case should contain particulars such as a...
· Test case identifier;
· Test case name;
· Objective;
· Test conditions/setup;
· Input data requirements/steps, and
· Expected results.

Please note, the process of developing test cases can help find problems in the requirements or design of an application, since it requires you to completely think through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle, if possible.
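
As an illustration only, a test case with the particulars listed above can be captured as structured data. The field names in this Python sketch simply mirror the list and are not taken from any specific tool or template.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestCase:
        """One test case record holding the particulars listed above."""
        identifier: str
        name: str
        objective: str
        setup: str
        steps: List[str] = field(default_factory=list)
        expected_results: List[str] = field(default_factory=list)

    login_tc = TestCase(
        identifier="TC-LOGIN-001",
        name="Valid login",
        objective="Verify a registered user can log in with a valid password",
        setup="User 'alice' exists with a previously-assigned password",
        steps=["Open the login page",
               "Enter username 'alice' and the assigned password",
               "Click the Login button"],
        expected_results=["The application home page is displayed",
                          "No error message is shown"],
    )

    print(login_tc.identifier, "-", login_tc.name)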

Saturday, October 25, 2008

Next list of questions will come soon!!!

Hi all readers,

I have lots of testing interview questions. I will bring more interview questions soon, and from now on you will get updates regularly....

Testing Interview Questions

I am hereby posting some interview questions on software testing, for both manual and automation testing, which I have collected from some websites. These questions will be useful for all readers...
Q: What is verification?
A: Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.
Q: What is validation?
A: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.
Q: What is a walk-through?
A: A walk-through is an informal meeting for evaluation or informational purposes.
Q: What is an inspection?
A: An inspection is a formal meeting, more formalized than a walk-through and typically consists of 3-10 people including a moderator, reader (the author of whatever is being reviewed) and a recorder (to make notes in the document). The subject of the inspection is typically a document, such as a requirements document or a test plan. The purpose of an inspection is to find problems and see what is missing, not to fix anything. The result of the meeting should be documented in a written report. Attendees should prepare for this type of meeting by reading through the document, before the meeting starts; most problems are found during this preparation. Preparation for inspections is difficult, but is one of the most cost-effective methods of ensuring quality, since bug prevention is more cost effective than bug detection.
Q: What is quality?
A: Quality software is software that is reasonably bug-free, delivered on time and within budget, meets requirements and expectations and is maintainable. However, quality is a subjective term. Quality depends on who the customer is and their overall influence in the scheme of things. Customers of a software development project include end-users, customer acceptance test engineers, testers, customer contract officers, customer management, the development organization's management, test engineers, testers, salespeople, software engineers, stockholders and accountants. Each type of customer will have his or her own slant on quality. The accounting department might define quality in terms of profits, while an end-user might define quality as user friendly and bug free.
Q: What is good code?
A: Good code is code that works, is free of bugs and is readable and maintainable. Organizations usually have coding standards all developers should adhere to, but every programmer and software engineer has different ideas about what is best and what are too many or too few rules. We need to keep in mind that excessive use of rules can stifle both productivity and creativity. Peer reviews and code analysis tools can be used to check for problems and enforce standards.
Q: What is a good design?
A: Design could mean many things, but often refers to functional design or internal design. Good functional design is indicated by software whose functionality can be traced back to customer and end-user requirements. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable and maintainable; is robust with sufficient error handling and status logging capability; and works correctly when implemented.
Q: What is software life cycle?
A: The software life cycle begins when a software product is first conceived and ends when it is no longer in use. It includes phases like initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, re-testing and phase-out.
Q: Why are there so many software bugs?
A: Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code and/or bugs in tools used in software development.
· There are unclear software requirements because there is miscommunication as to what the software should or shouldn't do.
· Software complexity. All of the following contribute to the exponential growth in software and system complexity: Windows interfaces, client-server and distributed applications, data communications, enormous relational databases and the sheer size of applications.
· Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.
· As to changing requirements, in some fast-changing business environments, continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. The changes require redesign of the software and rescheduling of resources, some of the work already completed has to be redone or discarded, and hardware requirements can be affected, too.
· Bug tracking can itself introduce errors, because keeping track of a large number of changes is complex.
· Time pressures can cause problems, because scheduling of software projects is not easy and often requires a lot of guesswork; when deadlines loom and the crunch comes, mistakes will be made.
· Code documentation is tough to maintain and it is also tough to modify code that is poorly documented. The result is bugs. Sometimes there is no incentive for programmers and software engineers to document their code and write clearly documented, understandable code. Sometimes developers get kudos for quickly turning out code, or programmers and software engineers feel they have job security if no one else can understand the code they write, or they believe that if the code was hard to write, it should be hard to read.
· Software development tools, including visual tools, class libraries, compilers and scripting tools, can introduce their own bugs. Other times the tools are poorly documented, which can create additional bugs.
Q: How do You Introduce a New Software QA Process?
A: It depends on the size of the organization and the risks involved. For large organizations with high-risk projects, serious management buy-in is required and a formalized QA process is necessary. For medium-sized organizations with lower-risk projects, management and organizational buy-in and a slower, step-by-step process are required. Generally speaking, QA processes should be balanced with productivity, in order to keep any bureaucracy from getting out of hand. For smaller groups or projects, an ad-hoc process is more appropriate. A lot depends on team leads and managers; feedback to developers and good communication are essential among customers, managers, developers, test engineers and testers. Regardless of the size of the company, the greatest value for effort is in managing requirement processes, where the goal is requirements that are clear, complete and testable.

Tuesday, July 15, 2008

Software Testing Glossary

  • Load Testing: Load testing is the act of testing a system under load. It generally refers to the practice of modeling the expected usage of a software program by simulating multiple users accessing the program's services concurrently. This testing is most relevant for multi-user systems, often ones built using a client/server model, such as web servers.
  • Localization Testing: This term refers to testing software that has been adapted for a specific locality.
  • Logic coverage Testing / Logic driven Testing / Structural test case design:Test case selection that is based on an analysis of the internal structure of the component. Also known as white-box testing .
  • Loop Testing:A white box testing technique that exercises program loops .
  • Maintainability Testing / Serviceability Testing:Testing whether the system meets its specified objectives for maintainability.
  • Maintenance testing:Testing the changes to an operational system or the impact of a changed environment to an operational system .
  • Model Based Testing:Model-based testing refers to software testing where test cases are derived in whole or in part from a model that describes some (usually functional) aspects of the system under test.
  • Monkey Testing: Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.
  • Mutation testing:A testing methodology in which two or more program mutations are executed using the same test cases to evaluate the ability of the test cases to detect differences in the mutations .
  • N+ Testing:A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+. The cycles are typically repeated until the solution reaches a steady state and there are no errors .
  • Negative Testing / Dirty Testing:Testing aimed at showing software does not work.
  • Operational Testing: Testing conducted to evaluate a system or component in its operational environment.
  • Pair testing:Two testers work together to find defects. Typically, they share one computer and trade control of it while testing .
  • Parallel Testing:The process of feeding test data into two systems, the modified system and an alternative system (possibly the original system) and comparing results.
  • Path coverage: Metric applied to all path-testing strategies: in a hierarchy by path length, where length is measured by the number of graph links traversed by the path or path segment; e.g. coverage with respect to path segments two links long, three links long, etc. Unqualified, this term usually means coverage with respect to the set of entry/exit paths. Often used erroneously as synonym for statement coverage .
  • Path Testing:Testing in which all paths in the program source code are tested at least once.
  • Penetration Testing:The portion of security testing in which the evaluators attempt to circumvent the security features of a system .
  • Performance Testing: Performance testing is testing that is performed to determine how fast some aspect of a system performs under a particular workload. Performance testing can serve different purposes. It can demonstrate that the system meets performance criteria. It can compare two systems to find which performs better. Or it can measure what parts of the system or workload cause the system to perform badly.
  • Playtest:A playtest is the process by which a game designer tests a new game for bugs and improvements before bringing it to market .
  • Portability Testing:Testing aimed at demonstrating the software can be ported to specified hardware or software platforms.
  • Post-conditions: Cleanup steps after the test case is run, to bring it back to a known state.
  • Precondition: Dependencies that are required for the test case to run .
  • Progressive Testing: Testing of new features after regression testing of previous features .
  • Quality Control: Quality control and quality engineering are involved in developing systems to ensure products or services are designed and produced to meet or exceed customer requirements and expectations .
  • Ramp Testing: Continuously raising an input signal until the system breaks down .
  • Random Testing: Testing a program or part of a program using test data that has been chosen at random .
  • Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions .
  • Regression Testing: Regression testing is any type of software testing which seeks to uncover bugs that occur whenever software functionality that previously worked as desired stops working or no longer works in the same way that was previously planned.
  • Release Candidate: A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs .
  • Reliability Testing: Testing to determine whether the system/software meets the specified reliability requirements.
  • Requirements based Testing: Designing tests based on objectives derived from requirements for the software component .
  • Resource utilization testing:The process of testing to determine the Resource-utilization of a software product .
  • Risk-based testing: Testing oriented towards exploring and providing information about product risks .
  • Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational.
  • Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load .
  • Scenario Testing: A scenario test is a test based on a hypothetical story used to help a person think through a complex problem or system. They can be as simple as a diagram for a testing environment or they could be a description written in prose.
  • Security Testing: Tests focused on ensuring the target-of-test data (or systems) are accessible only to those actors for which they are intended.
  • Session-based Testing: Session-based testing is ideal when formal requirements are not present, incomplete, or changing rapidly. It can be used to introduce measurement and control to an immature test process, and can form a foundation for significant improvements in productivity and error detection. It is closely related to exploratory testing; it is a form of controlled and improved ad-hoc testing that is able to use the knowledge gained as a basis for ongoing, sustained product improvement.
  • Simulator: A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs .
  • Smart testing: Tests that, based on theory or experience, are expected to have a high probability of detecting specified classes of bugs; tests aimed at specific bug types.
  • Smoke Testing: A sub-set of the black box test is the smoke test. A smoke test is a cursory examination of all of the basic components of a software system to ensure that they work. Typically, smoke testing is conducted immediately after a software build is made. The term comes from electrical engineering, where in order to test electronic equipment, power is applied and the tester ensures that the product does not spark or smoke.
  • Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
  • Soap-opera testing: A technique for defining test scenarios by reasoning about dramatic and exaggerated usage scenarios. When defined in collaboration with experienced users, soap operas help to test many functional aspects of a system quickly and, because they are not related directly to either the system's formal specifications or to the system's features, they have a high rate of success in revealing important yet often unanticipated problems.
  • Software Quality Assurance: Software testing is a process used to identify the correctness, completeness and quality of developed computer software. Actually, testing can never establish the correctness of computer software, as this can only be done by formal verification (and only when there is no mistake in the formal verification process). It can only find defects, not prove that there are none.
  • Stability Testing: Stability testing is an attempt to determine if an application will crash.
  • State Transition Testing: A test case design technique in which test cases are designed to execute state transitions.
  • Statement Testing:Testing designed to execute each statement of a computer program.
  • Static Testing: Analysis of a program carried out without executing the program .
  • Statistical Testing: A test case design technique in which a model is used of the statistical distribution of the input to construct representative test cases.
  • Storage Testing: Testing whether the system meets its specified storage objectives.
  • Stress Testing: Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Stress testing is a subset of load testing.
  • Structural Testing: White box testing, glass box testing or structural testing is used to check that the outputs of a program, given certain inputs, conform to the structural specification of the program .
  • SUT: System Under Test
  • Syntax Testing: A test case design technique for a component or system in which test case design is based upon the syntax of the input.
  • System Testing: System testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of Black box testing
  • Technical Requirements Testing: Testing of those requirements that do not relate to functionality, e.g. performance, usability, etc.
  • Test Approach: The implementation of the test strategy for a specific project. It typically includes the decisions made that follow based on the (test) project's goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed .
  • Test Automation: Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
  • Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. Same as Test environment
  • Test Case: The specification (usually formal) of a set of test inputs, execution conditions, and expected results, identified for the purpose of making an evaluation of some particular aspect of a Target Test Item.
  • Test Cycle: A formal test cycle consists of all tests performed. In software development, it can consist of, for example, the following tests: unit/component testing, integration testing, system testing, user acceptance testing and the code inspection.
  • Test Data: The definition (usually formal) of a collection of test input values that are consumed during the execution of a test, and expected results referenced for comparative purposes .
  • Test Driven Development: Test-driven development (TDD) is a computer programming technique that involves writing tests first and then implementing the code to make them pass. The goal of test-driven development is to achieve rapid feedback, and it implements the "illustrate the main line" approach to constructing a program. This technique is heavily emphasized in Extreme Programming (a minimal sketch of the cycle appears after this glossary).
  • Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.
  • Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.
  • Test Harness: In software testing, a test harness is a collection of software tools and test data configured to test a program unit by running it under varying conditions and monitoring its behavior and outputs.
  • Test Idea: A brief statement identifying a test that is potentially useful to conduct. The test idea typically represents an aspect of a given test: an input, an execution condition or an expected result, but often only addresses a single aspect of a test.
  • Test Log: A collection of raw output captured during a unique execution of one or more tests, usually representing the output resulting from the execution of a Test Suite for a single test cycle run.
  • Test Plan:A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.
  • Test Procedure: The procedural aspect of a given test, usually a set of detailed instructions for the setup and step-by-step execution of one or more given test cases. The test procedure is captured in both test scenarios and test scripts .
  • Test Report: A document that summarizes the outcome of testing in terms of items tested, summary of results , effectiveness of testing and lessons learned.
  • Test Scenario: A sequence of actions (execution conditions) that identifies behaviors of interest in the context of test execution.
  • Test Script: A collection of step-by-step instructions that realize a test, enabling its execution. Test scripts may take the form of either documented textual instructions that are executed manually or computer readable instructions that enable automated test execution.
  • Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.
  • Test Strategy: Defines the strategic plan for how the test effort will be conducted against one or more aspects of the target system.
  • Test Suite: A package-like artifact used to group collections of test scripts , both to sequence the execution of the tests and to provide a useful and related set of Test Log information from which Test Results can be determined .
  • Test Tools:Computer programs used in the testing of a system, a component of the system, or its documentation .
  • Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
  • Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels .
  • Top-down testing: An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested .
  • Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases .
  • Unit Testing: A unit test is a procedure used to verify that a particular module of source code is working properly .
  • Usability Testing: Usability testing is a means for measuring how well people can use some human-made object (such as a web page, a computer interface, a document, or a device) for its intended purpose, i.e. usability testing measures the usability of the object. If usability testing uncovers difficulties, such as people having difficulty understanding instructions, manipulating parts, or interpreting feedback, then developers should improve the design and test it again .
  • Use case testing: A black box test design technique in which test cases are designed to execute user scenarios .
  • Validation: The word validation has several related meanings. In general, validation is the process of checking if something satisfies a certain criterion. Examples would be: checking if a statement is true, if an appliance works as intended, if a computer system is secure, or if computer data is compliant with an open standard. This should not be confused with verification.
  • Verification: In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of a system with respect to a certain formal specification or property, using formal methods.
  • Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner .
  • White Box testing / Glass box Testing: White box testing, glass box testing or structural testing is used to check that the outputs of a program, given certain inputs, conform to the structural specification of the program. It uses information about the structure of the program to check that it performs correctly.
  • Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.
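
As promised in the Test Driven Development entry above, here is a minimal, hypothetical sketch of the TDD cycle using Python's unittest: the test is written first, so it initially fails, and the implementation is then filled in just enough to make it pass. The add function and its tests are invented purely for illustration.

    import unittest

    # Step 2 of the cycle: the simplest implementation that makes the tests
    # below pass. In strict TDD this function starts out missing or
    # deliberately wrong, so the tests fail first.
    def add(a, b):
        return a + b

    # Step 1 of the cycle: the tests are written before the implementation.
    class TestAdd(unittest.TestCase):
        def test_adds_two_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_adds_negative_numbers(self):
            self.assertEqual(add(-1, -1), -2)

    if __name__ == "__main__":
        unittest.main()

The rhythm is red (failing test), green (just enough code to pass), then refactor, which is where the rapid feedback mentioned in the glossary entry comes from.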

Sunday, July 13, 2008

Software Testing Glossary

  • CAST: Computer Aided Software Testing
  • Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
  • Compatibility Testing: Testing whether the system is compatible with other systems with which it should communicate.
  • Component Testing: The testing of individual software components.
  • Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
  • Conformance Testing / Compliance Testing / Standards Testing: Conformance testing or type testing is testing to determine whether a system meets some specified standard. To aid in this, many test procedures and test setups have been developed, either by the standard's maintainers or external organizations, specifically for testing conformance to standards. Conformance testing is often performed by external organizations, sometimes the standards body itself, to give greater guarantees of compliance. Products tested in such a manner are then advertised as being certified by that external organization as complying with the standard
  • Context Driven Testing: The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
  • Conversion Testing / Migration Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
  • Coverage Testing: Coverage testing is concerned with the degree to which test cases exercise or cover the logic (source code) of the software module or unit. It is also a measure of coverage of code lines, code branches and code branch combinations
  • Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing .
  • Data flow Testing: Testing in which test cases are designed based on variable usage within the code.
  • Data integrity and Database integrity Testing: Data integrity and database integrity test techniques verify that data is being stored by the system in a manner where the data is not compromised by updating, restoration, or retrieval processing .
  • Data-Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing .
  • Decision condition testing: A white box test design technique in which test cases are designed to execute condition outcomes and decision outcomes .
  • Decision table testing: A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table.
  • Decision testing: A white box test design technique in which test cases are designed to execute decision outcomes .
  • Defect: An anomaly, or flaw, in a delivered work product. Examples include such things as omissions and imperfections found during early lifecycle phases and symptoms of faults contained in software sufficiently mature for test or operation. A defect can be any kind of issue you want tracked and resolved.
  • Defect density: The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines of code, number of classes or function points).
  • Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality .
  • Depth Testing: A test that exercises a feature of a product in full detail.
  • Design based Testing: Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behaviour of algorithms).
  • Development testing: Formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers .
  • Documentation Testing: Testing concerned with the accuracy of documentation.
  • Dynamic Testing: Testing software through executing it.
  • Efficiency testing: The process of testing to determine the efficiency of a software product
  • End-to-end Testing: Test activity aimed at proving the correct implementation of a required function at a level where the entire hardware/software chain involved in the execution of the function is available.
  • Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution .
  • Equivalence Class: A portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification .
  • Equivalence partition Testing: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
  • Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
  • Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test .
  • Exploratory Testing: This technique for testing computer software does not require significant advanced planning and is tolerant of limited documentation for the target-of-test. Instead, the technique relies mainly on the skill and knowledge of the tester to guide the testing, and uses an active feedback loop to guide and calibrate the effort. It is also known as ad hoc testing .
  • Failure: The inability of a system or component to perform its required functions within specified performance requirements. A failure is characterized by the observable symptoms of one or more defects that have a root cause in one or more faults.
  • Fault: An accidental condition that causes the failure of a component in the implementation model to perform its required behavior. A fault is the root cause of one or more defects identified by observing one or more failures.
  • Fuzz Testing: Fuzz testing is a software testing technique. The basic idea is to attach the inputs of a program to a source of random data. If the program fails (for example, by crashing, or by failing built-in code assertions), then there are defects to correct. The great advantage of fuzz testing is that the test design is extremely simple, and free of preconceptions about system behavior.
  • Gamma Testing: Gamma testing is a little-known informal phrase that refers derisively to the release of "buggy" (defect-ridden) products. It is not a term of art among testers, but rather an example of referential humor. Cynics have referred to all software releases as "gamma testing" since defects are found in almost all commercial, commodity and publicly available software eventually.
  • Gorilla Testing: Testing one particular module or functionality heavily.
  • Grey Box Testing: The typical grey box tester is permitted to set up or manipulate the testing environment, like seeding a database, and can view the state of the product after their actions, like performing a SQL query on the database to be certain of the values of columns. The term is applied almost exclusively to client-server testers or others who use a database as a repository of information, who have to manipulate XML files (a DTD or an actual XML file) or configuration files directly, or who know the internal workings or algorithm of the software under test and can write tests specifically for the anticipated results.
  • GUI Testing: GUI testing is the process of testing a graphical user interface to ensure it meets its written specifications .
  • Heuristic evaluations: Heuristic evaluation is one of the most informal methods of usability inspection in the field of human-computer interaction. It helps identify the usability problems in a user interface (UI) design. It specifically involves evaluators examining the interface and judging its compliance with recognized usability principles (the "heuristics").
  • High Order Tests: Black-box tests conducted once the software has been integrated.
  • Incremental Testing: "Integration testing where system components are integrated into the system one at a time until the entire system is integrated.
  • Installation Testing: Installation testing can simply be defined as any testing that occurs outside of the development environment. Such testing will frequently occur on the computer system the software product will eventually be installed on. While the ideal installation might simply appear to be to run a setup program, the generation of that setup program itself and its efficacy in a variety of machine and operating system environments can require extensive testing before it can be used with confidence
  • Integration Testing: Integration testing is the phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing.
  • Interface Testing: Testing conducted to evaluate whether systems or components pass data and control correctly to each other.
  • Interoperability testing: The process of testing to determine the interoperability of a software product .
  • Invalid testing: Testing using input values that should be rejected by the component or system .
  • Isolation Testing: Component testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs.
  • Keyword driven Testing: A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test .
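
To illustrate the Keyword driven Testing entry above, here is a minimal sketch in Python. The keywords, the Calculator application and the data table are all invented for illustration; in a real framework the table would typically live in an external data file or spreadsheet.

    # Keyword-driven sketch: the "data file" is the table below; each row is
    # keyword, argument, expected result. Supporting functions interpret the
    # keywords, and the control loop at the bottom drives the test.

    class Calculator:
        """Hypothetical application under test."""
        def __init__(self):
            self.total = 0
        def add(self, n):
            self.total += n
        def clear(self):
            self.total = 0

    keyword_table = [
        # keyword   argument   expected total afterwards
        ("clear",   None,      0),
        ("add",     5,         5),
        ("add",     7,         12),
        ("clear",   None,      0),
    ]

    def run_keyword(app, keyword, argument):
        """Supporting script: map each keyword onto an action on the app."""
        if keyword == "add":
            app.add(argument)
        elif keyword == "clear":
            app.clear()
        else:
            raise ValueError(f"unknown keyword: {keyword}")

    def control_script(table):
        app = Calculator()
        for keyword, argument, expected in table:
            run_keyword(app, keyword, argument)
            status = "PASS" if app.total == expected else "FAIL"
            print(f"{status}: {keyword}({argument}) -> total={app.total}, expected={expected}")

    if __name__ == "__main__":
        control_script(keyword_table)

Because the table holds both data and keywords, non-programmers can add new test rows without touching the supporting scripts.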

Wednesday, July 9, 2008

Software Testing Glossary(with A&B)

  • Acceptance criteria: The expected results or performance characteristics that define whether the test case passed or failed.
  • Acceptance Testing / User Acceptance Testing: An acceptance test is a test that a user/sponsor and manufacturer/producer jointly perform on a finished, engineered product/system through black-box testing (i.e., the user or tester need not know anything about the internal workings of the system). It is often referred to as a functional test, beta test, QA test, application test, confidence test, final test, or end user test.
  • Accessibility Testing: Verifying a product is accessible to people with disabilities (deaf, blind, mentally disabled, etc.).
  • Ad-hoc Testing: Testing carried out using no recognised test case design technique. It is also known as Exploratory Testing .
  • Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm .
  • Alpha Testing: In software development, testing is usually required before release to the general public. This phase of development is known as the alpha phase. Testing during this phase is known as alpha testing. In the first phase of alpha testing, developers test the software using white box techniques. Additional inspection is then performed using black box or grey box techniques.
  • Arc Testing / Branch Testing: A test case design technique for a component in which test cases are designed to execute branch outcomes. A test method satisfying coverage criteria that require that for each decision point, each possible branch be executed at least once.
  • AUT: Application Under Test
  • Authorization Testing: Involves testing the systems responsible for the initiation and maintenance of user sessions. This requires testing the input validation of login fields, cookie security, and lockout mechanisms. It is performed to discover whether the login system can be forced into permitting unauthorised access. The testing will also reveal whether the system is susceptible to denial of service attacks using the same techniques.
  • Back-to-back testing: Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies
  • Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests .
  • Benchmark Testing: Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration .
  • Beta Testing / Field Testing: Once the alpha phase is complete, development enters the beta phase. Versions of the software, known as beta versions, are released to a limited audience outside of the company to ensure that the product has few faults or bugs. Beta testing is generally constrained to black box techniques, although a core of test engineers are likely to continue with white box testing in parallel to the beta tests.
  • Big Bang Testing: Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.
  • Black Box Testing / Functional Testing: Black box testing, concrete box or functional testing is used to check that the outputs of a program, given certain inputs, conform to the functional specification of the program. It performs testing based on previously understood requirements (or understood functionality), without knowledge of how the code executes.
  • Bottom-up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
  • Boundary value analysis / testing: A test case design technique for a component in which test cases are designed to include representatives of boundary values. A testing technique using input values at, just below, and just above the defined limits of an input domain; and with input values causing outputs to be at, just below, and just above the defined limits of an output domain (a minimal sketch appears after this list).
  • Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail .
  • Bug: Bugs arise from mistakes and errors, made by people, in either a program's source code or its design that prevents it from working correctly or produces an incorrect result .
  • Business process-based testing: An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes
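
As referenced in the Boundary value analysis entry above, here is a minimal, hypothetical sketch: for an input field that accepts ages 18 to 65 inclusive, the interesting test values sit at, just below, and just above each boundary. The is_valid_age function and the 18-65 rule are invented purely for illustration.

    import unittest

    # Hypothetical validation rule under test: ages 18..65 inclusive are valid.
    def is_valid_age(age):
        return 18 <= age <= 65

    class TestAgeBoundaries(unittest.TestCase):
        def test_values_around_lower_boundary(self):
            self.assertFalse(is_valid_age(17))  # just below the lower limit
            self.assertTrue(is_valid_age(18))   # at the lower limit
            self.assertTrue(is_valid_age(19))   # just above the lower limit

        def test_values_around_upper_boundary(self):
            self.assertTrue(is_valid_age(64))   # just below the upper limit
            self.assertTrue(is_valid_age(65))   # at the upper limit
            self.assertFalse(is_valid_age(66))  # just above the upper limit

    if __name__ == "__main__":
        unittest.main()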

Tuesday, July 8, 2008

Software Testing Glossary

  • Validation: The word validation has several related meanings. In general, validation is the process of checking if something satisfies a certain criterion. Examples would be: checking if a statement is true, if an appliance works as intended, if a computer system is secure, or if computer data is compliant with an open standard. This should not be confused with verification.
  • Verification: In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of a system with respect to a certain formal specification or property, using formal methods.
  • Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.
  • Test Data: The definition (usually formal) of a collection of test input values that are consumed during the execution of a test, and expected results referenced for comparative purposes
  • Test Case: The specification (usually formal) of a set of test inputs, execution conditions, and expected results, identified for the purpose of making an evaluation of some particular aspect of a Target Test Item.
  • Test Cycle: A formal test cycle consists of all tests performed. In software development, it can consist of, for example, the following tests: unit/component testing, integration testing, system testing, user acceptance testing and the code inspection.
  • Test Approach: The implementation of the test strategy for a specific project. It typically includes the decisions made that follow based on the (test) project's goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed
  • Software Quality Assurance: Software testing is a process used to identify the correctness, completeness and quality of developed computer software. Actually, testing can never establish the correctness of computer software, as this can only be done by formal verification (and only when there is no mistake in the formal verification process). It can only find defects, not prove that there are none.
  • Bug: Bugs arise from mistakes and errors, made by people, in either a program's source code or its design that prevents it from working correctly or produces an incorrect result
  • Black Box Testing / Functional Testing: Black box testing, concrete box or functional testing is used to check that the outputs of a program, given certain inputs, conform to the functional specification of the program. It performs testing based on previously understood requirements (or understood functionality), without knowledge of how the code executes.
  • Defect: An anomaly, or flaw, in a delivered work product. Examples include such things as omissions and imperfections found during early lifecycle phases and symptoms of faults contained in software sufficiently mature for test or operation. A defect can be any kind of issue you want tracked and resolved.
  • Defect density: The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines of code, number of classes or function points).
  • Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
  • Failure: The inability of a system or component to perform its required functions within specified performance requirements. A failure is characterized by the observable symptoms of one or more defects that have a root cause in one or more faults.
  • Fault: An accidental condition that causes the failure of a component in the implementation model to perform its required behavior. A fault is the root cause of one or more defects identified by observing one or more failures.
  • Grey Box Testing: The typical grey box tester is permitted to set up or manipulate the testing environment, like seeding a database, and can view the state of the product after their actions, like performing a SQL query on the database to be certain of the values of columns. The term is applied almost exclusively to client-server testers or others who use a database as a repository of information, who have to manipulate XML files (a DTD or an actual XML file) or configuration files directly, or who know the internal workings or algorithm of the software under test and can write tests specifically for the anticipated results.

I will be back with some more Testing Glossaries which will be good for all testing guys.