Friday, September 16, 2016

Operational in the lab doesn't mean ready for combat

 "one of the most spectacular acquisition debacles in recent memory."
Trials, technology will test aircraft carrier Ford - Daily Press
The aircraft carrier Gerald R. Ford stands tall at Newport News Shipbuilding, its construction nearly complete and more sailors arriving every week.
But in other ways, things are far from settled. The first-in-class ship faces significant challenges before it becomes combat ready and can deploy from its future home of Naval Station Norfolk, experts say.
At a recent Senate hearing, the Defense Department's top weapons tester cited "significant risks" for the Ford when it comes to passing a key pre-deployment hurdle known as Initial Operational Test and Evaluation (IOT&E), an independent assessment of the ship and crew during combat scenarios meant to be as realistic as possible.
Critical new systems have reliability questions or lack enough data to form judgments about performance, according to J. Michael Gilmore, director of Operational Test and Evaluation in the Defense Department.

Moving Directly to Production with New Technology is Risky

Navy Matters: Just Because We Have The Capability ...
New technology is not a bad thing. What's bad is forcing immature technology into service before its reliability and maintainability have caught up with the bleeding edge. When war comes, we need technology that works and is robust enough to function under the dirty, maintenance-deprived conditions of a battlefield. Aegis is nice, but if it breaks down 30 days into combat for lack of sophisticated maintenance, wouldn't we be better off with old-fashioned rotating radars? If the F-35, which in peacetime barely managed to function with the aid of Ph.D. technicians working in sterile conditions resembling a hospital operating room, can't be maintained in a dirty carrier hangar while covered in salt water (have you seen some of the pictures of carrier aircraft? They get awfully dirty!), then what's the point of having it?

We need to return to the concept of prototypes. We need to build them and operate them. Prototypes allow us to learn from our mistakes and grow and mature the technology. Hand in hand with that is the opportunity to learn the maintenance and operating procedures of new technology before it enters production. The Navy's rush to push the latest technology into production is unwise, costly in the long run, and dangerous in that it leaves us with production ships and aircraft saddled with failed, unmaintainable technology. That F-35 that the Marines declared ready for combat isn't really; in fact, it's not even close.

The Navy needs to relax, be patient, and let technology mature and allow maintenance to catch up. The way to do this is prototypes.

Just because we have the capability doesn’t mean we should try to use it. If the average sailor can’t operate and maintain it, it shouldn’t be in the fleet.

Operational Assessment (OA) Can Assess Risk/Reward Early - AcqNotes

An Operational Assessment (OA) is an evaluation of operational effectiveness and operational suitability made by an independent Operational Test Agency (OTA), with user support as required, on other than production systems. [2]

The focus of an OA is on significant trends noted in development efforts, programmatic voids, risk areas, adequacy of requirements, and the ability of the program to support adequate operational testing. An OA may be conducted at any time using technology demonstrators, prototypes, mock-ups, Engineering Development Models (EDM) or simulations, but will not substitute for an Initial Operational Test and Evaluation (IOT&E) required for the Full-Rate Production Decision Review (FRPDR). [1]

The Program Manager (PM) should request an Operational Assessment (OA) of weapons system components and/or system level EDMs during the Engineering, Manufacturing and Development (EMD) Phase to meet exit/entrance criteria and to help evaluate system performance and assess technical risk before Milestone C.

Early Operational Assessments

A PM can request that the OTA conduct an Early Operational Assessment (EOA) of prototype items of equipment to help identify and reduce program risk before program initiation. EOAs are conducted primarily to forecast and evaluate the potential operational effectiveness and suitability of the weapon system during development. EOAs start during Concept Refinement (CR) and/or Technology Development (TD) and may continue into system integration. [1]

Statistics, Testing, and Defense Acquisition: New Approaches and Methodological Improvements | The National Academies Press

Better Analysis Needed for Assessing Operational Suitability

Fielding operationally suitable systems is a prime objective of defense acquisition. A suitable weapon system is one that is available for combat when needed, is reliable enough to accomplish its mission, operates satisfactorily with service personnel and other systems, and does not impose an undue logistics burden in peacetime or wartime. As noted above, operational test and evaluation is statutorily required to assess the effectiveness and suitability of defense systems under consideration for procurement.
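The availability and reliability elements of suitability described above have standard textbook formulations, sketched below. The figures used are hypothetical, chosen only to illustrate the arithmetic, and are not drawn from any program discussed here:

```python
import math

def operational_availability(mean_uptime_hours, mean_downtime_hours):
    """A_o = uptime / (uptime + downtime): the fraction of time the
    system is ready for combat when needed."""
    return mean_uptime_hours / (mean_uptime_hours + mean_downtime_hours)

def mission_reliability(mtbf_hours, mission_hours):
    """Probability of completing a mission of a given length without a
    critical failure, assuming exponentially distributed failure times."""
    return math.exp(-mission_hours / mtbf_hours)

# Hypothetical system: 500 h mean uptime between maintenance actions,
# 20 h mean downtime per action
a_o = operational_availability(500, 20)   # ≈ 0.962
# Hypothetical 10 h mission with a 200 h mean time between failures
r_m = mission_reliability(200, 10)        # ≈ 0.951
```

Note how sensitive both numbers are to the maintenance burden: doubling mean downtime to 40 h drops availability below 0.93, which is one reason suitability deficiencies show up so quickly in fielded systems.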
Scarce resources, increasing technological complexity, and increasing attention to the life-cycle costs of defense systems underscore the need for assurance of suitability and its elements. Experience in the Department of Defense (DoD), similar to that of private industry, shows that the life-cycle maintenance cost of a major system may substantially exceed its original acquisition cost. For example, the total procurement cost of the Longbow Apache helicopter is estimated at $5.3 billion, which is slightly more than one-third of the total estimated 20-year life-cycle cost of $14.3 billion.
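The Apache ratio quoted above is easy to verify from the report's own figures, and the check makes the larger point plain: buying the system is the smaller share of what it costs to own it.

```python
# Arithmetic check of the Longbow Apache figures cited above ($ billions).
procurement = 5.3
lifecycle_20yr = 14.3
share = procurement / lifecycle_20yr
print(f"Procurement is {share:.1%} of the 20-year life-cycle cost")  # 37.1%
```

The remaining roughly two-thirds is operating and support cost, which is exactly the portion that suitability testing is meant to protect.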

Suitability deficiencies have been responsible for many of the field problems with newly acquired systems and have generated concerns about the operational readiness of certain military capabilities. Concern about the department's success in fielding suitable systems was expressed in an October 1990 memorandum from the Deputy Undersecretary of Defense for Acquisition. A 1992 Logistics Management Institute review of seven recently fielded systems found that "several systems have not achieved their availability goals, and they consume significantly more logistics resources than anticipated" (Bridgman and Glass, 1992:ii). That study also found that crucial suitability issues are not adequately identified early or addressed in operational test plans. Such concerns and findings have led to calls for improved assessment of operational suitability. In this chapter, we discuss statistical issues related to the conduct of operational suitability tests and their evaluations and related information-gathering activities.
