The STEP process defines evaluations according to three main phases: (1) Scoping and Test Strategy, (2) Test Preparation, (3) Testing, Results, and Final Report, and a fourth, optional phase (4) Integration and Deployment that is determined by the sponsor on a case-by-case basis. Each STEP phase has different objectives, actions and associated document deliverables.
Checkpoints, or control gates, separate the phases, and each phase must be completed before the next begins. These control gates help to ensure evaluation integrity. For instance, teams must solidify their evaluation criteria and test strategy before installing or testing any of the evaluation products. This avoids the potential for introducing bias into the evaluation criteria based on a priori knowledge of a given product's features or design.
1. Scoping and Test Strategy.
During this phase, the evaluation team gains an understanding of the mission objectives and technology space, and settles on key requirements through scoping with the government sponsor. The team produces a project summary to help clarify the objectives and scope, and performs a market survey to identify potential products in the technology area. The evaluation team works with the sponsor to select a list of products for further evaluation based on the market survey results, evaluation timeline, and resources available. To prepare for testing, the team also drafts a high-level test plan.
2. Test Preparation.
After selecting the products to evaluate and obtaining concurrence from the sponsor, the evaluation team works to acquire the evaluation products from the vendors, and any additional infrastructure that is required for testing. This includes signing non-disclosure agreements (NDAs), establishing vendor points of contact, and meeting with the vendor to discuss the test plan. At the same time, the team develops a full set of evaluation criteria that the products will be tested against and any scenario tests that will be performed. The evaluation team then installs the products in the test environment, and engages the vendor as technical questions arise. The team may wish to hold a technical exchange meeting (TEM) to gain further insight and background from subject matter experts.
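The criteria developed in this phase must be fixed before hands-on testing begins, so it can help to capture them in a simple, frozen structure. The sketch below is only an illustration of that idea; the identifier scheme, categories, and weighting scale are assumptions, not part of the STEP process itself.

```python
from dataclasses import dataclass

# Hypothetical sketch: recording evaluation criteria as immutable records
# so every product is later scored against an identical, frozen list.
@dataclass(frozen=True)
class Criterion:
    cid: str          # e.g. "REQ-01" (identifier scheme is an assumption)
    description: str
    category: str     # e.g. "security", "deployability"
    weight: int       # relative importance, 1 (low) to 3 (high)

criteria = [
    Criterion("REQ-01", "Supports role-based access control", "security", 3),
    Criterion("REQ-02", "Provides an audit log export", "security", 2),
    Criterion("REQ-03", "Installs without kernel modifications", "deployability", 1),
]

# Sanity check: criterion identifiers must be unique before the list is frozen.
assert len({c.cid for c in criteria}) == len(criteria)
```

Freezing the records (`frozen=True`) is a small guard against the bias problem noted earlier: once testing starts, criteria cannot be quietly reshaped around a particular product's strengths.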
3. Testing, Results, and Final Report.
In this phase, the evaluation team tests and scores the products against all of the test criteria. The team must ensure that testing for each product is performed under identical conditions and, after testing, must complete a full crosswalk of the scores for each requirement across products to ensure scoring consistency. Following the crosswalk, evaluation team members meet individually with each vendor to review the findings, correct any misunderstandings about the product's functionality, and retest if necessary. The team produces a final report that incorporates the evaluation results and any supporting information.
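The crosswalk described above amounts to pivoting the score data so each requirement can be read across all products at once. A minimal sketch of that pivot, with a helper that flags requirements whose scores diverge widely (a cue to re-check that testing conditions were truly identical), might look like the following. The product names, requirement identifiers, 0-3 scale, and divergence threshold are all illustrative assumptions.

```python
# Hypothetical post-test score data: product -> requirement -> score (0-3 scale assumed).
scores = {
    "ProductA": {"REQ-01": 3, "REQ-02": 2, "REQ-03": 1},
    "ProductB": {"REQ-01": 3, "REQ-02": 3, "REQ-03": 1},
}

def crosswalk(scores):
    """Pivot product->requirement scores into requirement->product rows."""
    rows = {}
    for product, reqs in scores.items():
        for rid, score in reqs.items():
            rows.setdefault(rid, {})[product] = score
    return rows

def divergent(rows, spread=2):
    """Flag requirements whose scores differ across products by at least
    `spread`, prompting the team to verify scoring consistency or retest."""
    return [rid for rid, by_product in rows.items()
            if max(by_product.values()) - min(by_product.values()) >= spread]

rows = crosswalk(scores)
flagged = divergent(rows)   # [] for this data; a non-empty list would prompt review
```

This is only one way to frame the check; the point is that the crosswalk turns per-product score sheets into per-requirement comparisons, which is where scoring inconsistencies become visible.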
4. Integration and Deployment.
The final evaluation report submitted to the government provides a data source to assist in decision-making, but is not a mandate to purchase specific products. If the government decides to purchase a product, the evaluation team works with the government and other commercial contractors to assist in deploying and integrating the solution into the operational environment. Actions in this phase may include developing configuration guidance and supporting documentation.