1.0 INTRODUCTION
This Test Plan communicates the School Easy software development and testing procedures to the client. It covers the objectives, scope, testing methods, schedule, risks, and approach, and clearly identifies the test deliverables and what is deemed in and out of scope.
2.0 OBJECTIVES AND TASKS
2.1 Objectives
School Easy is a grade and day-to-day school management tool that lets the school and its teachers communicate with their students. The software is a new product written in Java to be platform independent. Greater Software is responsible for testing the product and ensuring it meets the client's needs.
2.2 Tasks
The initial phase of this project will deliver the School Easy software with functionality that lets the client create and store the results from the alpha tests. These results will allow Greater Software to improve updated versions of the software. School Easy must have full functionality by the delivery date.
3.0 SCOPE
The following are the client's must-have requirements. These, and any supplementary requests from the client, must all be included in the final product.
- Create initial criteria with detailed sub-steps.
- Create the transfer schedule.
- Report and receive feedbacks from the client.
- Establish transition team.
- Conduct the tests.
- Create the final product.
- Conduct the final test.
- Deliver the product to the client.
First, a liaison teacher or accountant will be appointed to ease communication between the client and the company. He or she will be the main line of communication with the client. Greater Software will work with School Easy until the client accepts and signs off on the final product. Rewriting, moving, or porting existing test cases from the existing testing documents is not considered part of this project. Since the client consists of three major groups of users – teachers, exam departments, and accountants – and will probably include students in the future, the beta testing period is expected to take much longer than alpha testing.
Greater Software is committed to delivering the best software to satisfy our client's requirements. To uphold this pledge, the client is asked to join our development process from the initial phases of software development. The following approaches describe our commitment in detail:
- Permit our developers to own and prove the quality of our software.
- Engage our clients by making the feedback process easy and readily accessible, so feedback data can be gathered as early as possible.
- Conduct testing while enabling the larger testing team to work in a faster, more flexible, and more engaging mixed development environment.
4.0 TESTING STRATEGY
Our testing strategy includes automated, manual, and exploratory tests to reduce risk and tighten release cycles. The following tests will be conducted during development:
- Unit tests: validate the smallest components of the system, ensuring they handle known inputs and outputs correctly. Classes in the application are tested individually to verify they work under expected, boundary, and negative cases.
- Integration tests: exercise an entire subsystem and ensure that a set of components work together correctly.
- Functional tests: verify end-to-end scenarios that the client will engage in.
4.1 Unit Testing
Definition:
Unit tests are written and executed by Greater Software to make sure the code meets its design and requirements and behaves as expected. The goal is to isolate each part of the program and verify that the individual parts work correctly: for any function or procedure, a given set of inputs should return the proper values, and any invalid input should be handled gracefully during execution. Each test also provides a written contract that the piece of code must satisfy. Unit testing is done before integration, after the code-and-debug stage of development.
Participants:
Examiners, Programmers, Teachers
Methodology:
The tests will be conducted in a classroom setting, and feedback, including error messages, will be sent to the programmers.
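As a sketch of the unit-test style described above, the example below exercises a small class under expected, boundary, and negative cases. GradeBook and its average method are hypothetical names invented for illustration; they are not taken from the School Easy code base.

```java
// Illustrative only: GradeBook is a hypothetical class, not part of
// the delivered School Easy software.
import java.util.List;

public class GradeBookTest {

    // Minimal class under test: averages a list of grades (0-100).
    static class GradeBook {
        static double average(List<Integer> grades) {
            if (grades == null || grades.isEmpty()) {
                // Invalid input is rejected gracefully, not ignored.
                throw new IllegalArgumentException("no grades supplied");
            }
            return grades.stream().mapToInt(Integer::intValue).average().orElse(0);
        }
    }

    static void check(boolean ok, String label) {
        if (!ok) throw new AssertionError(label);
    }

    public static void main(String[] args) {
        // Expected case: known inputs yield the known output.
        check(GradeBook.average(List.of(80, 90, 100)) == 90.0, "expected case");
        // Boundary case: a single grade is its own average.
        check(GradeBook.average(List.of(0)) == 0.0, "boundary case");
        // Negative case: invalid input must fail gracefully.
        boolean rejected = false;
        try {
            GradeBook.average(List.of());
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        check(rejected, "negative case");
        System.out.println("all unit checks passed");
    }
}
```

Each check documents part of the written contract the code must satisfy: correct values for valid input and a graceful failure for invalid input.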
4.2 System Integration Testing
Definition:
System Integration Testing (SIT) is a black-box testing technique that evaluates the system's compliance with specified requirements. It is usually performed on a subset of the system, whereas system testing is performed on the complete system, and it precedes user acceptance testing (UAT). SIT can be performed with minimal use of testing tools: the interactions exchanged between components are verified, and the behavior of each data field within the individual layers is investigated. After integration, there are three main states of data flow:
- Data state within the integration layer
- Data state within the database layer
- Data state within the Application layer
Participants:
Examiners, Programmers, Teachers
Methodology:
Programmers will write code according to the specification established by the client's requirements. There are four different system integration test techniques:
- Top-down Integration Testing
- Bottom-up Integration Testing
- Sandwich Integration Testing
- Big-bang Integration Testing
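The first of these, top-down integration, can be sketched as follows: a high-level module is tested first, with a stub standing in for the lower layer that has not been integrated yet. ReportService and GradeStore are illustrative names, not taken from the School Easy design documents.

```java
// Hypothetical top-down integration sketch: the higher-level module is
// exercised against a stub of the unfinished lower layer.
public class TopDownIntegrationSketch {

    // Lower-level module, stubbed out in top-down order.
    interface GradeStore {
        int gradeFor(String student);
    }

    // Higher-level module under test first.
    static class ReportService {
        private final GradeStore store;

        ReportService(GradeStore store) {
            this.store = store;
        }

        String report(String student) {
            return student + ": " + store.gradeFor(student);
        }
    }

    public static void main(String[] args) {
        // The stub returns a fixed grade so the upper layer can be verified.
        GradeStore stub = student -> 75;
        ReportService service = new ReportService(stub);
        String out = service.report("alice");
        if (!out.equals("alice: 75")) throw new AssertionError(out);
        System.out.println(out);
    }
}
```

Bottom-up integration reverses the direction, using drivers instead of stubs; sandwich testing combines both, and big-bang integrates everything at once.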
4.3 Performance Testing
Definition:
Performance testing is a non-functional testing technique performed to determine system parameters such as responsiveness and stability under various workloads. It measures quality attributes of the system such as scalability, reliability, and resource usage.
Participants:
Examiners, Programmers, Teachers
Methodology:
Programmers will write code according to the specification established by the client's requirements. There are four different performance testing techniques:
- Load testing – the simplest form of performance testing, conducted to understand the behavior of the system under a specific load. Load testing measures important business-critical transactions, and the load on the database, application server, etc. is also monitored.
- Stress testing – performed to find the upper limit of the system's capacity and to determine how the system performs when the load goes well above the expected maximum.
- Soak testing – also known as endurance testing, performed to determine the system parameters under a continuous expected load. During soak tests, parameters such as memory utilization are monitored to detect memory leaks or other performance issues. The main aim is to discover the system's performance under sustained use.
- Spike testing – performed by suddenly increasing the number of users by a very large amount and measuring the performance of the system. The main aim is to determine whether the system can sustain the workload.
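A minimal load-test sketch of the kind described above is shown below. The handleRequest method is a stand-in for a real School Easy transaction, and the user counts are arbitrary; a real load test would drive the deployed application and record the measured latencies.

```java
// Illustrative load-test harness: fires N concurrent simulated
// requests and reports how many completed and how long they took.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class LoadTestSketch {

    // Stand-in for one transaction under test (sleeps to simulate work).
    static void handleRequest() {
        try {
            Thread.sleep(1);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Fires `users` concurrent requests and returns how many completed.
    static int runLoad(int users) {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        AtomicInteger completed = new AtomicInteger();
        long start = System.nanoTime();
        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                handleRequest();
                completed.incrementAndGet();
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(completed.get() + " requests completed in " + elapsedMs + " ms");
        return completed.get();
    }

    public static void main(String[] args) {
        // Under a moderate load, every request should still complete.
        if (runLoad(50) != 50) throw new AssertionError("dropped requests");
        // A spike test would repeat the call with a suddenly larger user count.
    }
}
```

The same harness shape covers the other techniques: stress testing raises the user count until failures appear, soak testing runs it for hours while monitoring memory, and spike testing jumps the count suddenly.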
4.4 User Acceptance Testing
Definition:
User acceptance testing (UAT) is a testing methodology in which the clients are involved in testing the product to validate it against their requirements. It may be performed at the client's location or at the developer's site. UAT is context-dependent: the UAT plans are prepared based on the requirements, it is not mandatory to execute every kind of user acceptance test, and the tests may be coordinated and contributed to by the testing team.
Participants:
Accountants, Programmers, Students, Teachers
Methodology:
The acceptance test cases are executed against the test data or using an acceptance test script and then the results are compared with the expected ones.
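The compare step can be sketched as below: each scripted scenario is executed and its actual result checked against the expected one. The scenario name and expected value here are invented for illustration, and execute is a stand-in for running one scripted acceptance step against the real application.

```java
// Illustrative acceptance-test comparison: run each scripted scenario
// and compare the actual result with the expected one.
import java.util.LinkedHashMap;
import java.util.Map;

public class AcceptanceCompareSketch {

    // Stand-in for executing one scripted acceptance step.
    static String execute(String scenario) {
        return scenario.equals("login as teacher") ? "dashboard shown" : "unknown";
    }

    public static void main(String[] args) {
        // Expected results, as they would appear in an acceptance test script.
        Map<String, String> expected = new LinkedHashMap<>();
        expected.put("login as teacher", "dashboard shown");

        for (Map.Entry<String, String> e : expected.entrySet()) {
            String actual = execute(e.getKey());
            if (!actual.equals(e.getValue())) {
                throw new AssertionError(e.getKey() + ": got " + actual);
            }
            System.out.println(e.getKey() + " -> PASS");
        }
    }
}
```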
4.5 Batch Testing
Batch testing of the software will be done on an as-needed basis.
4.6 Regression Testing
Definition:
Regression testing is the selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still works as specified in the requirements. It makes use of specialized tools to control the execution of tests and to compare actual results against expected results. Since regression tests are repetitive, they are usually automated.
Testing tools not only help to perform regression tests but also help to automate test data generation, product installation, GUI interaction, defect logging, etc.
For automating any application, the following parameters should be considered.
- Data-driven capabilities
- Debugging and logging capabilities
- Platform independence
- Extensibility & Customizability
- E-mail Notifications
- Version control friendly
- Support unattended test runs
Participants:
Accountants, Examiners, Teachers
Methodology:
Typically, four test automation frameworks are adopted when automating applications:
- Data Driven Automation Framework
- Keyword Driven Automation Framework
- Modular Automation Framework
- Hybrid Automation Framework
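The data-driven framework, for example, separates the test logic from the test data so that new regression cases are added as data rows rather than as new code. In the sketch below the rows are inlined to keep the example self-contained, though a data-driven framework would normally load them from an external file; the percentToLetter rule is invented for illustration.

```java
// Minimal data-driven sketch: one driver loop runs every data row,
// so regression coverage grows by adding rows, not code.
public class DataDrivenSketch {

    // Logic under test: convert a percentage to a letter grade.
    static String percentToLetter(int pct) {
        if (pct >= 90) return "A";
        if (pct >= 80) return "B";
        if (pct >= 70) return "C";
        return "F";
    }

    public static void main(String[] args) {
        // Each row pairs an input with its expected output.
        Object[][] rows = {
            {95, "A"}, {85, "B"}, {70, "C"}, {10, "F"},
        };
        for (Object[] row : rows) {
            String actual = percentToLetter((int) row[0]);
            if (!actual.equals(row[1])) {
                throw new AssertionError(row[0] + ": expected " + row[1] + ", got " + actual);
            }
        }
        System.out.println(rows.length + " data rows passed");
    }
}
```

A keyword-driven framework additionally encodes the action in each row, while the modular and hybrid frameworks organize the driver code itself.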
4.7 Beta Testing
Definition:
Beta testing, also known as user testing, takes place at the end users' site and is performed by the end users to validate usability, functionality, compatibility, and reliability. Beta testing adds value to the software development life cycle because it gives "real" customers an opportunity to provide input into the design, functionality, and usability of a product. This input is not only critical to the success of the product but also an investment in future products when the gathered data is managed effectively.
Participants:
Accountants, Examiners, Programmers, Students, Teachers
Methodology:
The success of beta testing depends on a number of factors:
- Test Cost
- Number of Test Participants
- Shipping
- Duration of Test
- Demographic coverage
5.0 TEST SCHEDULE
Task Name | Start | Finish | Effort | Comments
Test Planning | | | |
Review Requirements documents | | | |
Create initial test estimates | | | |
Staff and train new test resources | | | |
First deploy to QA test environment | | | |
Functional testing – Iteration 1 | | | |
Iteration 2 deploy to QA test environment | | | |
Functional testing – Iteration 2 | | | |
System testing | | | |
Regression testing | | | |
UAT | | | |
Resolution of final defects and final build testing | | | |
Deploy to Staging environment | | | |
Performance testing | | | |
Release to Production | | | |