STLC – Software Testing Life Cycle Phases & Entry, Exit Criteria

01/10/2023 · By indiafreenotes

What is Software Testing Life Cycle (STLC)?

The Software Testing Life Cycle (STLC) is a structured sequence of activities executed throughout the testing process, aimed at achieving software quality objectives. It encompasses both verification and validation activities. Software testing is not a standalone, one-time event; rather, it is a systematic process involving a series of methodical activities that certify the quality of the software product.

STLC Phases

  • Requirement Analysis:

In this initial phase, the testing team thoroughly reviews and analyzes the requirements and specifications of the software. This helps in understanding the scope of testing and identifying potential areas for testing focus.

  • Test Planning:

Test planning involves creating a detailed test plan that outlines the scope, objectives, resources, schedule, and deliverables of the testing process. It also defines the testing strategy, entry and exit criteria, and risks.

  • Test Design:

During this phase, test cases and test scripts are created based on the requirements and design documents. Test data and environment setup requirements are also defined in this phase.

  • Test Environment Setup:

This phase involves preparing the necessary test environment, which includes configuring hardware, software, network settings, and any other infrastructure needed for testing.

  • Test Execution:

In this phase, the actual testing of the software is performed. Test cases are executed, and the results are recorded. Both manual and automated testing may be employed, depending on the project requirements.

  • Defect Reporting:

When defects or issues are identified during testing, they are documented in a defect tracking system. Each defect is assigned a severity and priority level, and relevant details are provided for resolution.

  • Defect Retesting and Regression Testing:

After defects have been fixed by the development team, they are re-tested to ensure they have been successfully resolved. Additionally, regression testing is conducted to ensure that no new defects have been introduced as a result of the fixes.

  • Closure:

This phase involves generating test summary reports, which provide an overview of the testing activities, including test execution status, defect metrics, and other relevant information. The testing team also conducts a review to assess if all test objectives have been met.

  • Post-Maintenance Testing (Optional):

In some cases, after the software is deployed, there may be a need for additional testing to verify that any maintenance or updates have not adversely affected the system.

What is Entry and Exit Criteria in STLC?

Entry and Exit Criteria are key elements of the Software Testing Life Cycle (STLC) that help define the beginning and end of each testing phase. They provide specific conditions that must be met before a phase can begin (entry) or be considered complete (exit). These criteria help ensure that testing activities progress in a structured and organized manner. Here’s a breakdown of both:

Entry Criteria:

Entry criteria specify the conditions that must be satisfied before a particular testing phase can commence. These conditions ensure that the testing phase has a solid foundation and can proceed effectively. Entry criteria typically include:

  • Availability of Test Environment:

The required hardware, software, and network configurations must be set up and ready for testing.

  • Availability of Test Data:

Relevant and representative test data should be prepared and available for use in the testing phase.

  • Completion of Previous Phases:

Any preceding phases in the STLC must be completed, and their deliverables should be verified for accuracy.

  • Approval of Test Plans and Test Cases:

The test plans, test cases, and other relevant documentation should be reviewed and approved by relevant stakeholders.

  • Availability of Application Build:

The application build or version to be tested should be available for testing. This build should meet the specified criteria for test readiness.

  • Resource Availability:

Adequate testing resources, including skilled testers, testing tools, and necessary infrastructure, must be in place.

Exit Criteria:

Exit criteria define the conditions that must be met for a testing phase to be considered complete. Meeting these conditions indicates that the testing objectives for that phase have been achieved. Exit criteria typically include:

  • Completion of Test Execution:

All planned test cases must be executed, and the results should be documented.

  • Defect Closure:

All identified defects should be resolved, re-tested, and verified for closure.

  • Test Summary Report:

A comprehensive test summary report should be prepared, providing an overview of the testing activities, including execution status, defect metrics, and other relevant information.

  • Stakeholder Approval:

Relevant stakeholders, including project managers and business owners, should review and approve the testing phase’s outcomes.

  • Achievement of Test Objectives:

The testing phase must meet its defined objectives, which could include coverage goals, quality thresholds, or specific criteria outlined in the test plan.

Requirement Phase Testing

Requirement Phase Testing, also known as Requirement Review or Requirement Analysis Testing, is a critical aspect of the Software Testing Life Cycle (STLC). This phase focuses on reviewing and validating the requirements gathered for a software project.

Activities involved in Requirement Phase Testing:

  • Reviewing Requirements Documentation:

Testers carefully examine the documents containing the software requirements. This may include Business Requirement Documents (BRD), Functional Specification Documents (FSD), User Stories, Use Cases, and any other relevant documents.

  • Clarifying Ambiguities:

Testers work closely with business analysts, stakeholders, and developers to seek clarification on any unclear or ambiguous requirements. This ensures that everyone has a shared understanding of what needs to be delivered.

  • Verifying Completeness:

Testers ensure that all necessary requirements are documented. They check for any gaps or missing information that could lead to misunderstandings or incomplete development.

  • Identifying Conflicts or Contradictions:

Testers look for conflicting requirements or scenarios that could potentially lead to issues during development or testing. Resolving these conflicts early helps prevent rework later in the process.

  • Checking for Testability:

Testers assess whether the requirements are specific, clear, and structured in a way that allows for effective test case design. They flag requirements that may be difficult to test due to their ambiguity or complexity.

  • Traceability Matrix:

Testers may begin creating a Traceability Matrix, which is a document that maps each requirement to the corresponding test cases. This helps ensure that all requirements are adequately covered by testing.

  • Risk Analysis:

Testers conduct a risk assessment to identify potential challenges or areas of high risk in the requirements. This helps prioritize testing efforts and allocate resources effectively.

  • Requirement Prioritization:

Based on business criticality and dependencies, testers may assist in prioritizing requirements. This helps in planning testing efforts and allocating resources appropriately.

  • Feedback and Documentation:

Testers provide feedback on the requirements to the relevant stakeholders. They also document any issues or concerns that need to be addressed.

  • Approval:

Once the requirements have been reviewed and validated, testers may participate in the formal approval process, which involves obtaining sign-offs from stakeholders to confirm that the requirements are accurate and complete.

Test Planning in STLC

Test Planning is a crucial phase in the Software Testing Life Cycle (STLC) that lays the foundation for the entire testing process. It involves creating a detailed plan that outlines how testing activities will be conducted. Here are the key steps involved in Test Planning:

  • Understanding Requirements:

Review and understand the software requirements, including functional, non-functional, and any specific testing requirements.

  • Define Test Objectives and Scope:

Clearly articulate the testing objectives, including what needs to be achieved through testing. Define the scope of testing, specifying what will be included and excluded.

  • Identify Risks and Assumptions:

Identify potential risks that may impact the testing process, such as resource constraints, time constraints, or technological challenges. Document any assumptions made during the planning phase.

  • Determine Testing Types and Techniques:

Decide which types of testing (e.g., functional, non-functional, regression) will be conducted. Select appropriate testing techniques and approaches based on project requirements.

  • Allocate Resources:

Determine the resources needed for testing, including testers, testing tools, test environments, and any other necessary infrastructure.

  • Define Test Deliverables:

Specify the documents and artifacts that will be produced during the testing process, such as test plans, test cases, test data, and test reports.

  • Set Entry and Exit Criteria:

Establish the conditions that must be met for each testing phase to begin (entry criteria) and conclude (exit criteria). This ensures that testing activities progress in a structured manner.

  • Create a Test Schedule:

Develop a timeline that outlines when each testing phase will occur, including milestones, deadlines, and dependencies on other project activities.

  • Identify Test Environments:

Determine the necessary testing environments, including hardware, software, and network configurations. Ensure that these environments are set up and available for testing.

  • Plan for Test Data:

Define the test data requirements, including any specific data sets or scenarios that need to be prepared for testing.

  • Risk Mitigation Strategy:

Develop a strategy for managing identified risks, including contingency plans, mitigation measures, and escalation procedures.

  • Define Roles and Responsibilities:

Clearly outline the roles and responsibilities of each team member involved in testing. This includes testers, test leads, developers, and any other stakeholders.

  • Communication Plan:

Establish a communication plan that outlines how and when information will be shared among team members, stakeholders, and relevant parties.

  • Review and Approval:

Present the test plan for review and approval by relevant stakeholders, including project managers, business analysts, and other key decision-makers.

Test Case Development Phase

The Test Case Development Phase is a crucial part of the Software Testing Life Cycle (STLC) where detailed test cases are created to verify the functionality of the software. Steps involved in this phase:

  • Review Requirements:

Carefully review the software requirements documents, user stories, or any other relevant documentation to gain a deep understanding of what needs to be tested.

  • Identify Test Scenarios:

Break down the requirements into specific test scenarios. These scenarios represent different aspects or functionalities of the software that need to be tested.

  • Prioritize Test Scenarios:

Prioritize test scenarios based on their criticality, complexity, and business importance. This helps in allocating time and resources effectively.

  • Design Test Cases:

For each identified test scenario, design individual test cases. A test case includes details like test steps, expected results, test data, and any preconditions.

  • Define Test Data:

Specify the data that will be used for each test case. This may involve creating specific datasets, ensuring they cover various scenarios.

  • Incorporate Positive and Negative Testing:

Ensure that test cases cover both positive scenarios (valid inputs leading to expected results) and negative scenarios (invalid inputs leading to error conditions).

  • Review and Validation:

Have the test cases reviewed by peers or relevant stakeholders to ensure completeness, accuracy, and adherence to requirements.

  • Assign Priority and Severity:

Assign priority levels (e.g., high, medium, low) to test cases based on their importance. Additionally, assign severity levels to defects that may be discovered during testing.

  • Create Traceability Matrix:

Establish a mapping between the test cases and the corresponding requirements. This helps ensure that all requirements are covered by testing.

  • Prepare Test Data:

Gather or generate the necessary test data that will be used during the execution of the test cases.

  • Organize Test Suites:

Group related test cases into test suites or test sets. This helps in efficient execution and management of tests.

  • Review Test Cases with Stakeholders:

Share the test cases with relevant stakeholders for their review and approval. This ensures that everyone is aligned with the testing approach.

  • Document Assumptions and Constraints:

Record any assumptions made during test case development, as well as any constraints that may impact testing.

  • Version Control:

Maintain version control for test cases to track changes and updates and to ensure that the latest versions are used during testing.

  • Document Dependencies:

Identify and document any dependencies between test cases or with other project activities. This helps in planning the execution sequence.

Test Environment Setup

Test Environment Setup is a critical phase in the Software Testing Life Cycle (STLC) where the necessary infrastructure, software, and configurations are prepared to facilitate the testing process. Steps involved in Test Environment Setup:

  • Hardware and Software Requirements:

Identify the hardware specifications and software configurations needed to execute the tests. This includes servers, workstations, operating systems, databases, browsers, and any other relevant tools.

  • Separate Test Environment:

Establish a dedicated and isolated test environment to ensure that testing activities do not interfere with production systems. This may include setting up a separate server or virtual machine.

  • Configuration Management:

Implement version control and configuration management practices to ensure that the test environment is consistent and matches the required specifications for testing.

  • Installation of Software Components:

Install and configure the necessary software components, including the application under test, testing tools, test management tools, and any other required applications.

  • Database Setup:

If the application interacts with a database, set up the database environment, including creating the required schemas, tables, and populating them with test data.

  • Network Configuration:

Configure the network settings to ensure proper communication between different components of the test environment. This includes firewall rules, IP addresses, and network protocols.

  • Security Measures:

Implement security measures to protect the test environment from unauthorized access or attacks. This may include setting up firewalls, access controls, and encryption.

  • Test Data Preparation:

Prepare the necessary test data to be used during testing. This may involve creating data sets, importing data, or generating synthetic test data.

  • Browser and Device Configuration:

If web applications are being tested, ensure that the required browsers and devices are installed and configured in the test environment.

  • Tool Integration:

Integrate testing tools, such as test management tools, test automation frameworks, and performance testing tools, into the test environment.

  • Integration with Version Control:

Integrate the test environment with version control systems to ensure that the latest versions of code and test scripts are used during testing.

  • Backup and Recovery:

Implement backup and recovery procedures to safeguard the test environment and any critical data in case of system failures or unforeseen issues.

  • Environment Documentation:

Document the configurations, settings, and any special considerations related to the test environment. This documentation serves as a reference for future setups or troubleshooting.

  • Environment Verification:

Perform a thorough verification of the test environment to ensure that all components are functioning correctly and are ready for testing activities.

  • Environment Sandbox:

Create a controlled testing environment where testers can safely execute tests without affecting the integrity of the production environment.

Test Execution Phase

The Test Execution Phase is a pivotal stage in the Software Testing Life Cycle (STLC) where actual testing activities take place. Steps involved in this phase:

  • Execute Test Cases:

Run the test cases that have been developed in the previous phases. This involves following the predefined steps, entering test data, and comparing the actual results with expected results.

  • Capture Test Results:

Document the outcomes of each test case. This includes recording whether the test passed (meaning it behaved as expected) or failed (indicating a deviation from expected behavior).

  • Log Defects:

If a test case fails, log a defect in the defect tracking system. Provide detailed information about the defect, including steps to reproduce it, expected and actual results, and any relevant screenshots or logs.

  • Assign Priority and Severity:

Assign priority levels (e.g., high, medium, low) to defects based on their impact on the system. Additionally, assign severity levels to indicate the seriousness of the defects.

  • Retesting:

After a defect has been fixed by the development team, re-run the specific test case(s) that initially identified the defect to ensure it has been successfully resolved.

  • Regression Testing:

Conduct regression testing to ensure that the recent changes (bug fixes or new features) have not caused any unintended side effects on existing functionality.

  • Verify Integration Points:

Test the integration points where different modules or components interact to ensure that they work as expected when combined.

  • Verify Data Integrity:

If the application interacts with a database, validate that data is being stored, retrieved, and manipulated correctly.

  • Perform End-to-End Testing:

Execute end-to-end tests that simulate real-world user scenarios to verify that the entire system works seamlessly.

  • Security and Performance Testing:

If required, conduct security testing to identify vulnerabilities and performance testing to evaluate system responsiveness and scalability.

  • Stress Testing:

Evaluate the system’s behavior under extreme conditions, such as high load or resource constraints, to ensure it remains stable.

  • Capture Screenshots or Recordings:

Document critical steps or scenarios with screenshots or screen recordings to provide visual evidence of testing activities.

  • Document Test Execution Status:

Maintain a record of the overall status of test execution, including the number of test cases passed, failed, and any outstanding or blocked tests.

  • Report Generation:

Generate test summary reports to provide stakeholders with a clear overview of the testing activities, including execution status, defect metrics, and any important observations.

  • Obtain Sign-offs:

Seek formal sign-offs from relevant stakeholders, including project managers and business owners, to confirm that the testing activities have been completed satisfactorily.

Test Cycle Closure

Test Cycle Closure is a critical phase in the Software Testing Life Cycle (STLC) that involves several key activities to formally conclude the testing activities for a specific test cycle or phase. Steps involved in Test Cycle Closure:

  • Completion of Test Execution:

Ensure that all planned test cases have been executed, and the results have been recorded.

  • Defect Status and Closure:

Review the status of reported defects. Ensure that all critical defects have been addressed and closed by the development team.

  • Test Summary Report Generation:

Prepare a comprehensive Test Summary Report. This report provides an overview of the testing activities, including test execution status, defect metrics, and any important observations.

  • Metrics and Measurements Analysis:

Analyze the metrics and measurements collected during the test cycle. This could include metrics related to test coverage, defect density, and other relevant KPIs.

  • Evaluation of Test Objectives:

Assess whether the testing objectives for the cycle have been achieved. Verify if the testing goals set at the beginning of the cycle have been met.

  • Comparison with Entry Criteria:

Compare the current state of the project with the entry criteria defined at the beginning of the cycle. Ensure that all entry criteria have been met.

  • Lessons Learned:

Conduct a lessons learned session with the testing team. Discuss what went well, what could be improved, and any challenges faced during the cycle.

  • Documentation Review:

Review all testing documentation, including test plans, test cases, and defect reports, to ensure they are accurate and complete.

  • Resource Release:

Release any resources that were allocated for testing but are no longer required. This may include test environments, testing tools, or testing personnel.

  • Feedback and Sign-offs:

Seek feedback from stakeholders, including project managers, business analysts, and developers, regarding the testing activities. Obtain formal sign-offs to confirm that testing activities for the cycle are complete.

  • Archiving Test Artifacts:

Archive all relevant test artifacts, including test plans, test cases, defect reports, and test summary reports. This ensures that historical testing data is preserved for future reference.

  • Handover to Next Phase or Team:

If the testing process is transitioning to the next phase or a different testing team, provide them with the necessary documentation and information to continue testing activities seamlessly.

  • Closure Report and Documentation:

Prepare a formal Test Cycle Closure Report that summarizes the activities performed, the status of the test cycle, and any relevant observations or recommendations.

  • Final Approval and Sign-off:

Obtain final approval and sign-off from relevant stakeholders, indicating that the test cycle has been successfully closed.

STLC Phases along with Entry and Exit Criteria

Requirement Analysis
Objective: Understand and analyze software requirements.
Entry Criteria: Availability of well-documented requirements.
Exit Criteria: Requirement documents reviewed and understood; requirement traceability matrix created.

Test Planning
Objective: Create a comprehensive test plan.
Entry Criteria: Completion of Requirement Analysis phase; availability of finalized requirements; availability of test environment; availability of necessary resources.
Exit Criteria: Approved test plan; test schedule finalized; resource allocation finalized; test environment set up.

Test Design
Objective: Develop detailed test cases and test data.
Entry Criteria: Completion of Test Planning phase; availability of finalized requirements; availability of test environment; availability of necessary resources.
Exit Criteria: Test cases and test data created; test cases reviewed and approved; test data prepared.

Test Environment Setup
Objective: Prepare the necessary infrastructure and configurations.
Entry Criteria: Completion of Test Design phase; availability of test environment specifications.
Exit Criteria: Test environment set up and verified; test data ready for use.

Test Execution
Objective: Execute the test cases and record results.
Entry Criteria: Completion of Test Environment Setup phase; availability of test cases and test data; availability of test environment.
Exit Criteria: Test cases executed; test results recorded; defects logged (if any).

Defect Reporting and Tracking
Objective: Log and manage identified defects.
Entry Criteria: Completion of Test Execution phase; defects identified during testing.
Exit Criteria: Defects logged with necessary details; defects prioritized and assigned for resolution.

Defect Resolution and Retesting
Objective: Fix reported defects and retest fixed functionality.
Entry Criteria: Completion of Defect Reporting and Tracking phase; defects assigned for resolution.
Exit Criteria: Defects fixed and verified; corresponding test cases re-executed.

Regression Testing
Objective: Verify that new changes do not negatively impact existing functionality.
Entry Criteria: Completion of Defect Resolution and Retesting phase; availability of regression test cases.
Exit Criteria: Regression testing completed successfully.

System Testing
Objective: Evaluate the entire system for compliance with specified requirements.
Entry Criteria: Completion of Regression Testing phase; availability of system test cases; availability of test environment.
Exit Criteria: System test cases executed and verified.

Acceptance Testing
Objective: Confirm that the system meets business requirements.
Entry Criteria: Completion of System Testing phase; availability of acceptance test cases; availability of test environment.
Exit Criteria: Acceptance test cases executed successfully.

Deployment and Post-Release
Objective: Prepare for the software release and monitor post-release activities.
Entry Criteria: Completion of Acceptance Testing phase; approval for software release obtained.
Exit Criteria: Software deployed successfully; post-release monitoring and support in place.

Test Cycle Closure
Objective: Formally conclude the testing activities for a specific test cycle or phase.
Entry Criteria: Completion of Deployment and Post-Release phase; availability of all testing documentation.
Exit Criteria: Test Cycle Closure report generated; test artifacts archived; lessons learned documented; formal sign-offs obtained.
