Tuesday, July 15, 2008

Software Testing Glossary

  • Load Testing: Load testing is the act of testing a system under load. It generally refers to the practice of modeling the expected usage of a software program by simulating multiple users accessing the program's services concurrently. This testing is most relevant for multi-user systems, often ones built using a client/server model, such as web servers (see the sketch after this glossary).
  • Localization Testing: Testing that verifies software has been correctly adapted for a specific locality, such as its language and regional conventions.
  • Logic coverage Testing / Logic driven Testing / Structural test case design: Test case selection based on an analysis of the internal structure of the component. Also known as white-box testing.
  • Loop Testing: A white box testing technique that exercises program loops.
  • Maintainability Testing / Serviceability Testing: Testing whether the system meets its specified objectives for maintainability.
  • Maintenance testing: Testing the changes to an operational system, or the impact of a changed environment on an operational system.
  • Model Based Testing: Model-based testing refers to software testing where test cases are derived in whole or in part from a model that describes some (usually functional) aspects of the system under test.
  • Monkey Testing: Testing a system or application on the fly, i.e. running a few tests here and there to ensure the system or application does not crash.
  • Mutation testing: A testing methodology in which two or more program mutations are executed using the same test cases to evaluate the ability of the test cases to detect differences in the mutations (see the sketch after this glossary).
  • N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors.
  • Negative Testing / Dirty Testing: Testing aimed at showing that the software does not work.
  • Operational Testing: Testing conducted to evaluate a system or component in its operational environment.
  • Pair testing: Two testers work together to find defects. Typically, they share one computer and trade control of it while testing.
  • Parallel Testing: The process of feeding test data into two systems, the modified system and an alternative system (possibly the original system), and comparing the results (see the sketch after this glossary).
  • Path coverage: Metric applied to all path-testing strategies: defined in a hierarchy by path length, where length is measured by the number of graph links traversed by the path or path segment; e.g. coverage with respect to path segments two links long, three links long, etc. Unqualified, this term usually means coverage with respect to the set of entry/exit paths. Often used erroneously as a synonym for statement coverage.
  • Path Testing: Testing in which all paths in the program source code are tested at least once.
  • Penetration Testing: The portion of security testing in which the evaluators attempt to circumvent the security features of a system.
  • Performance Testing: Performance testing is performed to determine how fast some aspect of a system performs under a particular workload. Performance testing can serve different purposes. It can demonstrate that the system meets performance criteria. It can compare two systems to find which performs better. Or it can measure what parts of the system or workload cause the system to perform badly.
  • Playtest: A playtest is the process by which a game designer tests a new game for bugs and improvements before bringing it to market.
  • Portability Testing: Testing aimed at demonstrating that the software can be ported to specified hardware or software platforms.
  • Post-conditions: Cleanup steps performed after the test case is run, to bring the system back to a known state.
  • Precondition: Dependencies that are required for the test case to run.
  • Progressive Testing: Testing of new features after regression testing of previous features.
  • Quality Control: Quality control and quality engineering are involved in developing systems to ensure products or services are designed and produced to meet or exceed customer requirements and expectations.
  • Ramp Testing: Continuously raising an input signal until the system breaks down.
  • Random Testing: Testing a program or part of a program using test data that has been chosen at random (see the sketch after this glossary).
  • Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions.
  • Regression Testing: Regression testing is any type of software testing that seeks to uncover regression bugs, which occur whenever software functionality that previously worked as desired stops working or no longer works in the way that was previously planned.
  • Release Candidate: A pre-release version, which contains the desired functionality of the final version, but which still needs to be tested for bugs.
  • Reliability Testing: Testing to determine whether the system/software meets the specified reliability requirements.
  • Requirements based Testing: Designing tests based on objectives derived from requirements for the software component.
  • Resource utilization testing: The process of testing to determine the resource utilization of a software product.
  • Risk-based testing: Testing oriented towards exploring and providing information about product risks.
  • Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational.
  • Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in workload.
  • Scenario Testing: A scenario test is a test based on a hypothetical story used to help a person think through a complex problem or system. A scenario can be as simple as a diagram of a testing environment, or it can be a description written in prose.
  • Security Testing: Tests focused on ensuring the target-of-test data (or systems) are accessible only to those actors for which they are intended.
  • Session-based Testing: Session-based testing is ideal when formal requirements are not present, incomplete, or changing rapidly. It can be used to introduce measurement and control to an immature test process, and can form a foundation for significant improvements in productivity and error detection. It is closely related to exploratory testing: it is a controlled and structured form of ad-hoc testing that uses the knowledge gained as a basis for ongoing, sustained product improvement.
  • Simulator: A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs.
  • Smart testing: Tests that, based on theory or experience, are expected to have a high probability of detecting specified classes of bugs; tests aimed at specific bug types.
  • Smoke Testing: A subset of black box testing is the smoke test. A smoke test is a cursory examination of all of the basic components of a software system to ensure that they work. Typically, smoke testing is conducted immediately after a software build is made. The term comes from electrical engineering, where in order to test electronic equipment, power is applied and the tester ensures that the product does not spark or smoke.
  • Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
  • Soap-opera testing: A technique for defining test scenarios by reasoning about dramatic and exaggerated usage scenarios. When defined in collaboration with experienced users, soap operas help to test many functional aspects of a system quickly and, because they are not related directly to either the system's formal specifications or its features, they have a high rate of success in revealing important yet often unanticipated problems.
  • Software Quality Assurance: The set of activities that ensure software processes and products conform to requirements and quality standards; testing is a central SQA activity. Testing is a process used to identify the correctness, completeness and quality of developed computer software. Note that testing can never establish the correctness of computer software, as this can only be done by formal verification (and only when there is no mistake in the formal verification process); it can only find defects, not prove that there are none.
  • Stability Testing: Stability testing attempts to determine whether an application will crash, typically by exercising it over an extended period.
  • State Transition Testing: A test case design technique in which test cases are designed to execute state transitions (see the sketch after this glossary).
  • Statement Testing: Testing designed to execute each statement of a computer program.
  • Static Testing: Analysis of a program carried out without executing the program.
  • Statistical Testing: A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases.
  • Storage Testing: Testing whether the system meets its specified storage objectives.
  • Stress Testing: Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Stress testing is a subset of load testing.
  • Structural Testing: White box testing, glass box testing or structural testing is used to check that the outputs of a program, given certain inputs, conform to the structural specification of the program.
  • SUT: System Under Test
  • Syntax Testing: A test case design technique for a component or system in which test cases are based upon the syntax of the input.
  • System Testing: System testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing.
  • Technical Requirements Testing: Testing of those requirements that do not relate to functionality, e.g. performance, usability, etc.
  • Test Approach: The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project's goal and the risk assessment carried out; starting points regarding the test process; the test design techniques to be applied; exit criteria; and the test types to be performed.
  • Test Automation: Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
  • Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. Same as Test Environment.
  • Test Case: The specification (usually formal) of a set of test inputs, execution conditions, and expected results, identified for the purpose of making an evaluation of some particular aspect of a Target Test Item.
  • Test Cycle: A formal test cycle consists of all tests performed. In software development, it can consist of, for example, the following tests: unit/component testing, integration testing, system testing, user acceptance testing and code inspection.
  • Test Data: The definition (usually formal) of a collection of test input values that are consumed during the execution of a test, and expected results referenced for comparative purposes.
  • Test Driven Development: Test-driven development (TDD) is a computer programming technique that involves writing tests first and then implementing the code to make them pass. The goal of test-driven development is to achieve rapid feedback; it implements the "illustrate the main line" approach to constructing a program. This technique is heavily emphasized in Extreme Programming (see the sketch after this glossary).
  • Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.
  • Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.
  • Test Harness: In software testing, a test harness is a collection of software tools and test data configured to test a program unit by running it under varying conditions and monitoring its behavior and outputs (see the sketch after this glossary).
  • Test Idea: A brief statement identifying a test that is potentially useful to conduct. The test idea typically represents an aspect of a given test: an input, an execution condition or an expected result, but often only addresses a single aspect of a test.
  • Test Log: A collection of raw output captured during a unique execution of one or more tests, usually representing the output resulting from the execution of a Test Suite for a single test cycle run.
  • Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.
  • Test Procedure: The procedural aspect of a given test, usually a set of detailed instructions for the setup and step-by-step execution of one or more given test cases. The test procedure is captured in both test scenarios and test scripts.
  • Test Report: A document that summarizes the outcome of testing in terms of items tested, summary of results, effectiveness of testing and lessons learned.
  • Test Scenario: A sequence of actions (execution conditions) that identifies behaviors of interest in the context of test execution.
  • Test Script: A collection of step-by-step instructions that realize a test, enabling its execution. Test scripts may take the form of either documented textual instructions that are executed manually or computer-readable instructions that enable automated test execution.
  • Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.
  • Test Strategy: Defines the strategic plan for how the test effort will be conducted against one or more aspects of the target system.
  • Test Suite: A package-like artifact used to group collections of test scripts, both to sequence the execution of the tests and to provide a useful and related set of Test Log information from which Test Results can be determined.
  • Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.
  • Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
  • Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
  • Top-down testing: An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested (see the sketch after this glossary).
  • Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.
  • Unit Testing: A unit test is a procedure used to verify that a particular module of source code is working properly (see the sketch after this glossary).
  • Usability Testing: Usability testing is a means for measuring how well people can use some human-made object (such as a web page, a computer interface, a document, or a device) for its intended purpose, i.e. usability testing measures the usability of the object. If usability testing uncovers difficulties, such as people having difficulty understanding instructions, manipulating parts, or interpreting feedback, then developers should improve the design and test it again.
  • Use case testing: A black box test design technique in which test cases are designed to execute user scenarios.
  • Validation: In general, validation is the process of checking whether something satisfies a certain criterion. Examples would be: checking if a statement is true, if an appliance works as intended, if a computer system is secure, or if computer data is compliant with an open standard. This should not be confused with verification.
  • Verification: In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of a system with respect to a certain formal specification or property, using formal methods.
  • Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
  • White Box testing / Glass box Testing: White box testing, glass box testing or structural testing is used to check that the outputs of a program, given certain inputs, conform to the structural specification of the program. It uses information about the structure of the program to check that it performs correctly.
  • Workflow Testing: Scripted end-to-end testing that duplicates specific workflows expected to be utilized by the end user.
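
Illustrative Code Sketches

The short Python sketches below illustrate a few of the glossary terms above. They are minimal illustrations under stated assumptions, not reference implementations; every function name and value in them is hypothetical, invented for the example.

Load Testing: a minimal sketch of simulating concurrent users with threads. The hypothetical handle_request stands in for a real service call; the final assertion checks that no request was dropped under load.

    # Load-testing sketch: simulate many concurrent "users" calling a service.
    import threading
    import time

    def handle_request(user_id):
        time.sleep(0.01)  # pretend the server does some work
        return "response for user %d" % user_id

    def simulate_users(count):
        results = []
        lock = threading.Lock()

        def user(uid):
            response = handle_request(uid)
            with lock:  # the results list is shared across threads
                results.append(response)

        threads = [threading.Thread(target=user, args=(i,)) for i in range(count)]
        start = time.time()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print("%d concurrent users served in %.2fs" % (count, time.time() - start))
        assert len(results) == count  # no request was dropped under load

    simulate_users(50)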
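
Mutation testing: a sketch in which a hand-made mutant of a hypothetical function is run against the same test cases as the original. The test cases are adequate if they "kill" the mutant, i.e. pass on the original but fail on the mutant.

    # Mutation-testing sketch: the mutant flips one operator.
    def original_max(a, b):
        return a if a >= b else b

    def mutant_max(a, b):
        return a if a <= b else b  # mutation: '>=' became '<='

    def run_tests(func):
        cases = [((3, 5), 5), ((5, 3), 5), ((4, 4), 4)]
        return all(func(*args) == expected for args, expected in cases)

    assert run_tests(original_max)    # the original passes
    assert not run_tests(mutant_max)  # the mutant is killed
    print("mutant killed: the test cases detect the mutation")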
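
Parallel Testing: a sketch that feeds identical test data into two implementations, a hypothetical legacy system and its modified replacement, and compares the results.

    # Parallel-testing sketch: same inputs into both systems, outputs compared.
    def legacy_tax(amount):    # stand-in for the original system
        return round(amount * 0.20, 2)

    def modified_tax(amount):  # stand-in for the modified system
        return round(amount * 20 / 100, 2)

    for amount in [0, 9.99, 100, 12345.67]:
        old, new = legacy_tax(amount), modified_tax(amount)
        assert old == new, "mismatch for %s: %s != %s" % (amount, old, new)
    print("modified system matches the legacy system on all test data")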
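
Random Testing: a sketch that generates test data at random and checks a hypothetical unit under test against a trusted oracle (here, Python's built-in sorted).

    # Random-testing sketch: random inputs checked against an oracle.
    import random

    def my_sort(items):  # hypothetical unit under test
        return sorted(items)

    for _ in range(1000):
        data = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        assert my_sort(data) == sorted(data), "failed on %r" % data
    print("1000 random test cases passed")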
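
State Transition Testing: a sketch of a tiny hypothetical order state machine. Each assertion is a test case executing one valid transition, and one illegal transition is exercised as a negative case.

    # State-transition sketch: test cases execute each transition of interest.
    VALID = {
        ("new", "pay"): "paid",
        ("paid", "ship"): "shipped",
        ("new", "cancel"): "cancelled",
    }

    def transition(state, event):
        if (state, event) not in VALID:
            raise ValueError("illegal transition: %s from %s" % (event, state))
        return VALID[(state, event)]

    assert transition("new", "pay") == "paid"
    assert transition("paid", "ship") == "shipped"
    assert transition("new", "cancel") == "cancelled"
    try:
        transition("shipped", "pay")  # negative case: illegal transition
        assert False, "expected ValueError"
    except ValueError:
        pass
    print("all state-transition test cases passed")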
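
Test Driven Development: a sketch of the test-first rhythm. The unittest case below is imagined as having been written first; the minimal fizzbuzz implementation was then added to make it pass.

    # TDD sketch: the test existed first; the code makes it pass.
    import unittest

    def fizzbuzz(n):  # implementation written *after* the test
        if n % 15 == 0:
            return "FizzBuzz"
        if n % 3 == 0:
            return "Fizz"
        if n % 5 == 0:
            return "Buzz"
        return str(n)

    class TestFizzBuzz(unittest.TestCase):
        def test_multiples(self):
            self.assertEqual(fizzbuzz(3), "Fizz")
            self.assertEqual(fizzbuzz(5), "Buzz")
            self.assertEqual(fizzbuzz(15), "FizzBuzz")
            self.assertEqual(fizzbuzz(7), "7")

    if __name__ == "__main__":
        unittest.main()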
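
Test Harness: a sketch of a small driver that runs one unit (the hypothetical parse_age) under varying conditions drawn from a table of test data, and reports its behavior.

    # Test-harness sketch: table-driven execution of one unit under test.
    def parse_age(text):  # hypothetical unit under test
        value = int(text)
        if not 0 <= value <= 150:
            raise ValueError("age out of range")
        return value

    TEST_DATA = [  # (input, expected result or expected exception type)
        ("42", 42),
        ("0", 0),
        ("151", ValueError),
        ("abc", ValueError),
    ]

    passed = 0
    for text, expected in TEST_DATA:
        try:
            outcome = parse_age(text)
        except Exception as exc:
            outcome = type(exc)
        ok = outcome == expected
        passed += ok
        print("%s: parse_age(%r) -> %s" % ("PASS" if ok else "FAIL", text, outcome))
    print("%d/%d harness cases passed" % (passed, len(TEST_DATA)))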
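
Top-down testing: a sketch in which the top-level checkout_total is tested first, with its lower-level pricing dependency simulated by a stub until the real component exists. Both names are hypothetical.

    # Top-down sketch: the lower-level component is simulated by a stub.
    def fetch_price_stub(item_id):  # stub for an as-yet-untested component
        return {"apple": 1.50}[item_id]

    def checkout_total(items, fetch_price):  # top-level component under test
        return sum(fetch_price(item) for item in items)

    assert checkout_total(["apple", "apple"], fetch_price_stub) == 3.00
    print("top-level component verified against the stub")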
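
Unit Testing: a sketch that verifies a single function in isolation using Python's built-in unittest framework; is_leap_year is a hypothetical module under test.

    # Unit-testing sketch: one small unit verified in isolation.
    import unittest

    def is_leap_year(year):  # the unit under test
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    class TestIsLeapYear(unittest.TestCase):
        def test_typical_leap_year(self):
            self.assertTrue(is_leap_year(2008))

        def test_century_is_not_leap(self):
            self.assertFalse(is_leap_year(1900))

        def test_every_400_years_is_leap(self):
            self.assertTrue(is_leap_year(2000))

    if __name__ == "__main__":
        unittest.main()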
