Support the US Army RDECOM CERDEC Intelligence and Information Warfare Directorate (I2WD) in Verification & Validation (V&V) of the Human Social Culture Behavior (HSCB) Testbed.
The I2WD organization at Fort Monmouth, NJ, is providing the HSCB program with a testbed for HSCB models. HSCB models are sometimes referred to as DIME/PMESII models. The acronym PMESII refers to the Political, Military, Economic, Social, Information, and Infrastructure variables that describe the status of a situation (the state vector). There have been arguments that other categories should be included in the taxonomy; however, for our purposes, we will use PMESII to refer to all state vector variables, regardless of taxonomy details. The acronym DIME refers to the levers of power that a state has to influence the PMESII state: Diplomatic, Information, Military, and Economic. As with PMESII, we will use DIME to refer to all such interventions, regardless of taxonomy details.
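As a concrete illustration of the state-vector framing, the sketch below shows one way a PMESII state vector and a DIME intervention could be represented in code. The specific fields and the additive update rule are assumptions made for illustration; they are not part of the Testbed's actual data model.

```python
from dataclasses import dataclass, field
from typing import Dict

# PMESII categories: the axes of the situation "state vector".
PMESII_CATEGORIES = ("Political", "Military", "Economic",
                     "Social", "Information", "Infrastructure")

# DIME categories: the levers of power available to an actor.
DIME_CATEGORIES = ("Diplomatic", "Information", "Military", "Economic")


@dataclass
class PMESIIState:
    """A situation state vector: one scalar indicator per PMESII category."""
    values: Dict[str, float] = field(
        default_factory=lambda: {c: 0.0 for c in PMESII_CATEGORIES})


@dataclass
class DIMEIntervention:
    """A lever-of-power action with effects on PMESII indicators."""
    category: str              # one of DIME_CATEGORIES
    effects: Dict[str, float]  # PMESII category -> change in value


def apply(state: PMESIIState, action: DIMEIntervention) -> PMESIIState:
    """Illustrative update rule: add the intervention's effects to the state."""
    new_values = dict(state.values)
    for category, delta in action.effects.items():
        new_values[category] = new_values.get(category, 0.0) + delta
    return PMESIIState(values=new_values)
```

An actual HSCB model would, of course, replace the simple additive rule with its own theoretically grounded dynamics; the sketch only makes the state/intervention distinction concrete.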
Several other acronyms are pertinent: OOTW (or MOOTW) refers to Operations Other Than War (or Military Operations Other Than War); SASO refers to Stability and Support Operations; SSTR refers to Stability, Security, Transition, and Reconstruction operations; IW refers to Irregular Warfare; COIN refers to Counterinsurgency; and CT refers to Counterterrorism.
The acronym HSCB emphasizes the theoretical basis of a model, whereas DIME/PMESII (or PMESII for short) emphasizes the technical details needed to implement a model. When the focus is on the operations being modeled, models may be cited as OOTW, SASO, etc., models. These terms are not synonyms, as shown in the figure below; however, it has become clear that most of the operations listed above will require DIME/PMESII modeling techniques, supported by a firm HSCB basis. The general DIME/PMESII background can be found in the discussion of Analytical Tools for OOTW.
[Figure: relationships among the terms and operation types defined above]
The I2WD HSCB Testbed is a framework that is being built to support HSCB models. The 2009 version of the testbed is a prototype. Its purpose is to evaluate how HSCB models can be integrated with each other and how they can support military operations and end users. The I2WD HSCB Testbed will conduct the Integration Demonstrations (ID), the Operational Feasibility Demonstrations (OFD), and associated efforts in support of transitioning HSCB products to field Users for a variety of assessment events, to include Limited Objective Evaluations (LOE) and Field Exercises. Later versions of the testbed will support and facilitate transition of HSCB program products to Programs of Record (PORs) and other field Users, including the Combatant Commands (COCOMs).
The figure below shows a diagram of the Oct 09 demonstration system. The system comprises four components: the HSCB PMESII Model Framework (HPMF), the VirtualWorld Model, the Senturion Model, and the PowerStructure Model. For convenience, all components at this level will be referred to as “models.” The components of the models will be called “modules.”
[Figure: diagram of the Oct 09 demonstration system]
The figure also shows an additional model-level component called the HSCB Diagnostics. This component is not part of the system to be tested, but rather is part of the testing apparatus. The HSCB Diagnostics and the HPMF (minus several modules) comprise the HSCB Testbed.
From a V&V perspective, this figure raises several important points. First, each component of the system must be subjected to testing. Second, each component must have version identification: test results for one version do not necessarily carry over to a new version. Third, the modules of the HPMF that are not in the Testbed depend on the identities of the models outside the HPMF: not only will different models require different service modules in the HPMF, but the contents of the Wiring Expressions and HPMF Data Structures modules will also differ for different model sets.
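The configuration-management implication can be sketched as follows. The manifest format, field names, and version strings below are illustrative assumptions, not the Testbed's actual mechanism; the point is simply that a test result applies only to one specific fingerprint of versioned components.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass(frozen=True)
class Component:
    """A model or module with an explicit version identifier."""
    name: str
    version: str
    kind: str  # "model" or "module"


@dataclass
class TestbedConfiguration:
    """The exact set of versioned components a test result applies to."""
    components: List[Component]

    def fingerprint(self) -> Tuple[Tuple[str, str], ...]:
        """Stable identifier for this configuration; results recorded for
        one fingerprint do not carry over to another."""
        return tuple(sorted((c.name, c.version) for c in self.components))


# Hypothetical manifest for the Oct 09 model set (version strings invented).
oct09 = TestbedConfiguration(components=[
    Component("HPMF", "0.9", "model"),
    Component("VirtualWorld", "0.9", "model"),
    Component("Senturion", "0.9", "model"),
    Component("PowerStructure", "0.9", "model"),
    Component("Wiring Expressions", "0.9-oct09-modelset", "module"),
    Component("HPMF Data Structures", "0.9-oct09-modelset", "module"),
])
```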
Because the demonstration system was a prototype, the actual verification and validation results were not themselves significant; however, defining a useful V&V process was. Therefore, various parts of the process were exercised to test its adequacy. The VV&A support used the DIME/PMESII VV&A Tool and the process flow illustrated in the figure below. The process blocks are marked in green to indicate the extent of the process testing.
[Figure: DIME/PMESII VV&A process flow, with exercised process blocks marked in green]
As indicated, the static tests were only minimally exercised. The models included in the integrated system were specifically excluded from the testing process because they were meant to illustrate the integration process, not to serve as proposed valid models. For this reason, only one module was subjected to static testing, to illustrate that process.
The dynamic testing flow is shown as more completely exercised, although the dynamic tests were also restricted to tests of the integration process and of the integration concepts, with only a few validity checks for illustrative purposes. The first step, "Define the System," resulted in the Testbed diagram (first figure). This step has an orange portion to indicate that some problems in configuration management were discovered. The remainder of the steps proceeded as planned.
The recommendations were divided into four parts: General Procedures, Test Procedures, Integration Challenges, and Testbed Needs.
The testing process revealed a number of valuable insights. For example, the original intent of the Testbed included the concept of a "plugfest," in which model vendors would plug their models into the Testbed for rapid evaluations. This concept had to be abandoned, as the integration process requires modeling expertise and considerable time and effort from both the Testbed personnel and the model developers. The testing also revealed a problem with the concept of three separate test processes: a Technical Assessment (TA) of each individual model, a Technical Performance Evaluation (TPE) of each individual model, and an Integration Demonstration (ID). The V&V of the ID process requires V&V data on the individual models, which should be gathered during the TA and TPE processes. However, the TA and TPE events are not designed to provide the knowledge that the ID process needs.
Several insights with respect to the test procedures were also revealed. For example, the tests should be broken into fairly small parts so that any problems are immediately traceable back to their causes. Also, the model developers need to be involved in the test designs, as they have the expertise to design the most efficient tests for each purpose. It was discovered (as part of the "Define the System" step) that the integration process generated model-specific code in two separate places and that these code blocks should be tested as part of the integration testing.
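The "small, traceable tests" recommendation can be illustrated with a sketch in which each wiring step receives its own narrowly scoped test, so a failure points directly at its cause. The functions convert_units and map_state_names are hypothetical stand-ins for model-specific integration code, not actual Testbed interfaces.

```python
import unittest

# Hypothetical stand-ins for model-specific integration (wiring) code.
def convert_units(value_km: float) -> float:
    """Convert kilometers (model A's units) to meters (model B's units)."""
    return value_km * 1000.0

def map_state_names(state: dict) -> dict:
    """Rename model A's output variables to model B's input variables."""
    mapping = {"econ_index": "Economic", "pol_stability": "Political"}
    return {mapping[k]: v for k, v in state.items() if k in mapping}


class TestWiringSteps(unittest.TestCase):
    # Each test exercises one small integration step, so a failure is
    # immediately traceable to a single conversion or mapping.
    def test_unit_conversion(self):
        self.assertAlmostEqual(convert_units(1.5), 1500.0)

    def test_state_name_mapping(self):
        mapped = map_state_names({"econ_index": 0.7, "pol_stability": 0.4})
        self.assertEqual(set(mapped), {"Economic", "Political"})


if __name__ == "__main__":
    unittest.main()
```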
In the integration challenges category, the testing revealed some fundamental questions that should be addressed. First, the concept of integration needs to be defined. Does it mean integrating models with data sets, composing models, or integrating models with external applications (e.g., real-time systems)? Second, there is a need to address functionality overlap among models proposed for a single system. Third, there is a need to decide whether to support integration of different types of models (as opposed to restricting the Testbed to simulations) and integration that does not require code connections (loosely coupled systems).
Several issues concerned how the Testbed itself should be built. For example, some models were more amenable to the current visualization tool than others, and some had custom visualization modules that were not integrated. The Testbed also needs more thorough configuration management controls. Finally, the Testbed needs improved analysis support, such as the ability to create custom combinations of model outputs to represent standard Measures of Merit.
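As an illustration of the kind of analysis support intended, the sketch below defines a Measure of Merit as a named, weighted combination of outputs drawn from several models. The output names, values, and weights are invented for illustration only; the Testbed's actual analysis facilities are not described at that level of detail here.

```python
from typing import Dict, Tuple

# Model outputs keyed by (model name, output variable); values are invented.
model_outputs: Dict[Tuple[str, str], float] = {
    ("VirtualWorld", "population_support"): 0.62,
    ("Senturion", "coalition_strength"): 0.48,
    ("PowerStructure", "leader_influence"): 0.55,
}

# A hypothetical "stability" Measure of Merit as a weighted combination.
stability_mom_weights: Dict[Tuple[str, str], float] = {
    ("VirtualWorld", "population_support"): 0.5,
    ("Senturion", "coalition_strength"): 0.3,
    ("PowerStructure", "leader_influence"): 0.2,
}

def measure_of_merit(outputs: Dict[Tuple[str, str], float],
                     weights: Dict[Tuple[str, str], float]) -> float:
    """Weighted sum of the selected model outputs."""
    return sum(weights[key] * outputs[key] for key in weights)

print(measure_of_merit(model_outputs, stability_mom_weights))
```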
The project was a clear success. It introduced and tested V&V concepts and demonstrated their value. It introduced and tested a large number of test criteria for evaluating a system within the Testbed and demonstrated their strengths and weaknesses. The V&V strategy and plan were based on the best practices for V&V of DIME/PMESII models. The team performed broad-level V&V on the system and its models and found that they performed reasonably well in the context of a prototype system. All of the data were captured in a database designed to support ongoing V&V efforts. The process generated extremely valuable data for the HSCB Program as a whole and for the Testbed Program in particular.
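The database itself is not described here, so the sketch below only suggests a minimal schema, consistent with the configuration-management points above, for tying each test result to a versioned component. The table and column names are assumptions, not the actual design.

```python
import sqlite3

# Minimal illustrative schema: results are keyed to versioned components
# so that findings for one configuration are not reused for another.
schema = """
CREATE TABLE component (
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    version   TEXT NOT NULL,
    kind      TEXT NOT NULL            -- 'model' or 'module'
);
CREATE TABLE test_result (
    id            INTEGER PRIMARY KEY,
    component_id  INTEGER NOT NULL REFERENCES component(id),
    test_name     TEXT NOT NULL,
    phase         TEXT NOT NULL,       -- 'static' or 'dynamic'
    outcome       TEXT NOT NULL,       -- 'pass', 'fail', 'not run'
    notes         TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(schema)
```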