Initial Capabilities Documents: A 10-Year Retrospective of Tools, Methodologies, and Best Practices
Defense Acquisition University Press, 2014
It may come as a surprise to many acquisition practitioners that the historically unstable, formal written procedures and processes that embody the Defense Acquisition System and Joint Capabilities Integration and Development System (JCIDS) are now over 10 years old. During this time, the Department of Defense (DoD) has published significant revisions and updates to the JCIDS-related documents, including Department of Defense Instruction (DoDI) 5000.02, Operation of the Defense Acquisition System, and the Joint Capabilities Integration and Development System Manual (DoD, 2013; Joint Requirements Oversight Council [JROC], 2012). The current system's longevity may be partially attributable to its utilization of modern management approaches, further enabled by a slow convergence of the Joint Strategic Planning System set in motion by the Goldwater-Nichols Act (Goldwater-Nichols, 1986). With its focus on Joint development and deconfliction of capabilities, JCIDS uses a portfolio management approach and streamlined documentation to elevate user requirements relatively quickly and vet them against current capabilities. Further, its emphasis on knowledge management ensures that all stakeholders can view the process and its outcomes as the key documents percolate through the JCIDS process.
Early analysis of the JCIDS process by the U.S. Government Accountability Office (GAO, 2008) identified variable product quality. Attempts were made at creating users' guides to improve document quality (JROC, 2012; Joint Chiefs of Staff [JCS], 2009); however, these documents did not fully address the analysis techniques contained therein. As a key component of process quality, the ability to select, use, and report an appropriate analysis technique is an item of interest for authors, stakeholders, and portfolio managers. Therefore, this effort reviewed the content, tools, and methodologies recorded in the past 10 years' Initial Capabilities Documents (ICDs) created as a part of the JCIDS process.
As one of the first products created in JCIDS, ICDs are important because they validate requirements derived through an analysis of current capabilities and capability gaps. Additionally, they are signed by senior service members and are the basis for program acquisitions. Further, due to their recommended brevity, it is important that ICDs contain the correct level of detail to identify the key assumptions, limitations, and boundary conditions contained or referenced in their analyses. A lack of analytical clarity at this stage may lead to misdirected resources later in the process (GAO, 2008).
Of particular interest were the methodologies that implementers and decision makers were choosing to use in developing ICDs. Through this process, it was possible to identify a series of best practices and guidelines to improve ICD quality, and thus aid in the evolution of JCIDS.
The JCIDS process was created as a response to a 2002 memorandum from the Secretary of Defense to the Vice Chairman of the Joint Chiefs of Staff to study alternative ways to evaluate requirements (JCIDS, 2014). At the time of this memorandum, the governing document was Chairman of the Joint Chiefs of Staff Instruction 3170.01B (CJCSI, 2001), titled the Requirements Generation System. The purpose of JCIDS was to streamline and standardize the methodology to identify and describe capability gaps across the DoD, and to engage the acquisition community early in the process while improving coordination between departments and agencies.
The GAO's (2008) report indicated that the JCIDS process "has not yet been effective in identifying and prioritizing warfighting needs from a joint, department-wide perspective" (GAO, 2008, para. 1). This report outlined the shortfalls and gaps in the JCIDS process over its 5-year life span, furthering the redesign of the process. Additionally, the report outlined several recommendations for the DoD, including developing a more analytical approach within JCIDS to better prioritize and balance capability needs, as well as allocating the appropriate resources for capabilities development planning.
The current documents for both creating and implementing ICDs are the Capabilities-Based Assessment (CBA) User's Guide and the JCIDS Manual. These documents were released in 2009 and 2012, respectively, as part of the process to address the issues found by the 2008 GAO report. The impact of these documents on improvements to the JCIDS process has yet to be determined, but will be discussed in this article.
The research team used the Knowledge Management/Decision Support (KM/DS) system to examine the JCIDS process. The KM/DS Web site is the repository for the documents created through or as a byproduct of the JCIDS process. Included in this study are ICDs, Joint Capabilities Documents (JCDs), Capability Development Documents, and other supporting documents that are a part of this process. To focus this research, the team specifically studied the core documents--ICDs and JCDs--to better understand what kinds of methodologies are being implemented by the various Services to convey the gap information under study.
Of those entered in the KM/DS system, over 1,000 ICDs and JCDs were in various phases of the JCIDS process covering the period January 1, 2002, to December 31, 2012. The team decided to focus on only those documents that were considered Validated and Final, with the expectation of little to no revision remaining for these documents in the near future. These criteria reduced the number of the documents under review to 225 ICDs/JCDs. The team of four researchers split the ICDs/JCDs evenly across year and type to ensure similar exposure to the complete population available. At the completion of the review, the researchers met and discussed commonalities and anomalies found in documents of interest, and in the population in general. For purposes of this article, the term ICD will be used to describe both the ICDs and JCDs unless specified otherwise.
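As a concrete illustration of this down-selection, the following sketch filters a hypothetical metadata export from KM/DS to the Validated and Final ICDs/JCDs in the study window; the field names (doc_type, status, validation_date) are illustrative assumptions, not the actual KM/DS schema.

```python
# Sketch of the document-selection step against a hypothetical CSV
# export of KM/DS metadata; column names are illustrative only.
import csv
from datetime import date

def select_documents(path):
    selected = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            doc_date = date.fromisoformat(row["validation_date"])
            if (row["doc_type"] in ("ICD", "JCD")
                    and row["status"] in ("Validated", "Final")
                    and date(2002, 1, 1) <= doc_date <= date(2012, 12, 31)):
                selected.append(row)
    return selected
```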
The team formulated an initial set of generally accepted methodologies for a baseline to identify, categorize, and sort the currently used methodologies within the ICDs. They did not solely consider this set of techniques, but allowed for an expansion of the list to detect emergent techniques.
Additionally, the team analyzed key metrics and areas of interest to determine whether any correlations or observations could be made about various components of the ICDs. These attributes were chosen because they were key areas of interest or sections in the Capabilities-Based Assessment (CBA) User's Guide and the JCIDS Manual. By examining these attributes, the team was able to determine to what extent past ICDs have followed current guidance. Some of the components considered in the analysis can be found in Table 1.
Ultimately, it was the intention of the research team to observe and report on best practices for future ICD writers. As such, we focused on finding those ICDs that best embodied the intentions found in the Capabilities-Based Assessment (CBA) User's Guide (JCS, 2009) and the JCIDS Manual (JROC, 2012).
The team examined several ICD characteristics that are presented in the JCIDS Manual and were expected to be used in most ICDs (Figure 1). Of the features prescribed by the JCIDS Manual, many were not present in the majority of ICDs reviewed. Fewer than half of the ICDs described what analysis was done to identify capability gaps. Over 90 percent of the ICDs reviewed defined a specific capability, while some lacked a well-defined end state.
Nearly half of the ICDs analyzed defined their Measures of Effectiveness (MOE), described their analysis, prioritized gaps and capabilities, and defined minimum values for required capability attributes. The presence of these characteristics provides additional information to the reader and improves the fidelity of the ICD; their absence leaves commonly questioned areas open for discussion. The 2012 JCIDS Manual requires threshold values, but the description of the analysis has been left to the document creator, and many choose not to describe it. In fact, the manual states a preference to avoid "unnecessary rigor and time-consuming detail." Applying and documenting some level of rigor seems necessary and useful for documenting how gaps were identified and showing how the capability requirements were justified. The prioritization of gaps and capabilities helps decision makers understand which components are critical when resources are too limited to address the full capability gap, but allows for partial capability fulfillment or a subset of smaller gaps to be filled.
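To make the prioritization point concrete, a minimal sketch of weighted-score gap ranking follows; the criteria, weights, and scores are hypothetical and are not drawn from JCIDS guidance.

```python
# Hypothetical weighted-scoring prioritization of capability gaps;
# the criteria, weights, and ratings are illustrative only.
gaps = [
    {"name": "Gap A", "mission_impact": 5, "urgency": 3, "breadth": 2},
    {"name": "Gap B", "mission_impact": 3, "urgency": 5, "breadth": 4},
    {"name": "Gap C", "mission_impact": 4, "urgency": 2, "breadth": 5},
]
weights = {"mission_impact": 0.5, "urgency": 0.3, "breadth": 0.2}

def score(gap):
    # Weighted sum over the chosen prioritization criteria.
    return sum(weights[k] * gap[k] for k in weights)

for rank, gap in enumerate(sorted(gaps, key=score, reverse=True), start=1):
    print(f"{rank}. {gap['name']} (score {score(gap):.2f})")
```

Even a simple scheme like this forces the writer to state which criteria matter and by how much, which is exactly the information a decision maker needs when only part of the gap can be funded.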
The inclusion of an Analysis of Alternatives (AoA) is an interesting additional piece of content, as it is no longer part of the Capabilities-Based Assessment (CBA) User's Guide and is done in subsequent work of the JCIDS process. Nearly one-third of all ICDs included some form of an AoA, whether a brief paragraph or full documentation in attachments or enclosures. Most documents that contained a complete AoA were from the first 5 years, a period in which the content of ICDs was still in flux. Including an AoA would presuppose a preferred materiel solution, something not within the scope of documenting a capability gap.
Also, less than 25 percent of the ICDs surveyed contained objective values for the capabilities to be met. While it has become more common for threshold values to be defined for capabilities, objective values appear in fewer than half of those cases. One might expect to see objective values used more frequently to quantify desired capabilities beyond the minimums. Including objective values is expected to aid the process owner in determining whether a recommended solution can meet the objective of closing the specified gap.
Identifying the Functional Capabilities Boards (FCBs) to which ICDs were assigned provided insight as to what types of capabilities have been defined and what priorities have been dictated. FCB and associated Joint Capability Area (JCA) categories include Force Support (formerly Force Support and Building Partnerships); Battlespace Awareness; Force Application; Logistics; Command, Control, Communications, and Computers (C4)/Cyber (formerly Net-Centric, Command and Control, and C4/Cyber); and Protection. Previous FCBs, including Special Operations and Test, are listed in Figure 2 under Other Legacy FCBs.
Each ICD is assigned a lead and supporting FCB. Figure 2 shows ICDs arranged by lead FCB with Force Application being the most prominent lead FCB. The prominence of Force Application over Force Support led the team to conclude that validated ICDs are more likely to focus on the direct needs of the warfighter and less likely to focus on capabilities of supporting processes. At the same time, a significant number of ICDs listed net-centricity and C4/Cyber as supporting FCBs.
The research team decided early on to capture the length of ICDs, as the Capabilities-Based Assessment (CBA) User's Guide specifically states that ICDs should be no longer than 10 pages, with a separate allowance for appendices (JCS, 2009). Figure 3 presents the average ICD page length without appendices; quality and meticulousness were not necessarily correlated with quantity of pages. ICDs were meant to be concise documents that outline the necessary capabilities while still covering the required content.
The drastic increase in the length of ICDs is potentially a result of a change in the process by which capability gaps were outlined. As with most new processes, uncertainty about the method tends to increase the breadth and depth of the information included. Page length has been steadily decreasing over the last few years, which suggests that sponsors have become more comfortable with the process and more efficient at outlining the information needed.
One final consideration concerning page length was its relation to Acquisition Category (ACAT) level: would larger projects take more pages to explain the research and identify the gaps? Between ACAT Levels I, II, and III, the mean page lengths were 25.53, 23.35, and 21.02 pages, respectively. While the difference between ACATs I and III is statistically significant using a t-test with an alpha of .05, the difference on average is roughly four pages.
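The comparison described above is a standard two-sample t-test; the sketch below reproduces the mechanics with synthetic page-length lists, since the study's underlying data are not published here.

```python
# Two-sample t-test comparing ICD page lengths across ACAT levels,
# mirroring the comparison described above; the page-length samples
# are synthetic placeholders, not the study data.
from scipy import stats

acat_i_pages = [28, 28, 26, 22, 31, 24, 27, 22, 25, 26]    # illustrative
acat_iii_pages = [19, 23, 20, 18, 24, 21, 22, 20, 21, 22]  # illustrative

t_stat, p_value = stats.ttest_ind(acat_i_pages, acat_iii_pages)
alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, "
      f"significant at alpha = {alpha}: {p_value < alpha}")
```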
Within the time period analyzed, a total of 2,779 gaps were identified; the average number of gaps identified per ICD is shown in Figure 4. Additionally, Figure 4 illustrates the fluctuation in the number of ICDs validated each year. The GAO (2008) report noted that JCIDS was ineffective in properly prioritizing capabilities and suggested that nearly all ICDs submitted were accepted. Since the inception of the JCIDS process, 2012 was the first year that the average number of gaps exceeded the number of ICDs validated. This suggests that ICDs are identifying more gaps per document, tackling larger and more complex problems than before. It appears that the JCIDS process has matured and become more efficient as a result of the GAO report.
The research team noted that many ICDs identified too few gaps (only one or two, or none at all), leading to the conclusion that the methodology employed was not optimal, as more gaps probably remain unidentified. Several documents identified too many gaps; it was very difficult to understand and prioritize gaps when too many were identified (several ICDs contained over 50).
Figure 5 presents the most frequently used methodologies from 2002 to 2012, displaying the percentage of ICDs covered by each methodology. The top five methodologies were chosen for representation because each was implemented in greater than 10 percent of ICDs, whereas the remaining methodologies were typically used in only one or two ICDs. Each ICD employed several methodologies, so the percentages will not sum to 100 percent. A variety of analytical techniques may be appropriate depending on the type of analysis being conducted. As an example, intelligence-based assessment would likely be an appropriate technique for identifying a strategic capability gap requiring a new weapon system, but not for identifying the need for a new inventory system for the Defense Commissary Agency.
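The coverage computation behind such a figure is simple to sketch; in the example below, the mapping from ICDs to methodologies is entirely illustrative, and the point is only why per-methodology percentages need not sum to 100 percent.

```python
# Per-methodology coverage when each ICD may use several methodologies;
# the ICD-to-methodology mapping is illustrative only.
from collections import Counter

icd_methodologies = {
    "ICD-001": {"DOTMLPF-P", "SME elicitation"},
    "ICD-002": {"DOTMLPF-P", "Scenario-based planning"},
    "ICD-003": {"SME elicitation", "Scenario-based planning", "DOTMLPF-P"},
}

counts = Counter(m for methods in icd_methodologies.values() for m in methods)
total = len(icd_methodologies)
for method, n in counts.most_common():
    # Because ICDs overlap, these percentages can sum to well over 100%.
    print(f"{method}: {n / total:.0%} of ICDs")
```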
Doctrine, Organization, Training, Materiel, Leadership, Personnel, Facilities-Policy
The research team observed at least two interpretations of the Doctrine, Organization, Training, Materiel, Leadership, Personnel, Facilities-Policy (DOTMLPF-P) analysis within the ICDs. Some ICDs identified DOTMLPF-P categories of nonmateriel solutions that could satisfy capability gaps, while others considered the DOTMLPF-P implications of their proposed materiel solution. Defense Acquisition University training for DOTMLPF-P distinguishes between these uses and indicates that the ICD should focus on the former approach, as the latter is addressed in later stages of the acquisition process (Defense Acquisition University, 2014).
We also observed a wide range of quality in these analyses. Many ICDs contained rote statements declaring the insufficiency of these nonmateriel approaches to close capability gaps. To paraphrase an example, several ICDs stated that DOTMLPF solutions "were considered..., but adjustments or improvements in these areas will have minimal impact to mission satisfaction." Though not every capability gap can be met with nonmateriel solutions, such "box check" DOTMLPF-P analyses offer no value to the requirements validation process.
In contrast, several analyses reflected a concerted effort to find nonmateriel solutions to supplement the proposed materiel solution. One example of this level of analysis is the Air Force's Advanced Pilot Training ICD. In its DOTMLPF-P analysis, the Service employed a three-phase process: first, brainstorming and combining possible solutions; second, conducting quantitative analysis on a subset of the best proposed solutions; and third, conducting a qualitative assessment of the final list of proposed solutions. Not all of the nonmateriel solutions were deemed feasible or prudent, but several were included as part of the final recommendations. Further explanations of how the Air Force conducted this analysis are found in the ICD and its attachments on KM/DS.
Through the analysis the team observed a variety of interpretations of how to write an ICD. In general, analytical rigor could be stronger. In a fiscally constrained environment, the importance of documenting analysis is magnified, and many ICDs fell short of careful documentation of analysis. Another observation is that most of the ICDs were submitted by the Services and very few by Joint sponsors. This is not surprising as individual Services organize, train, and equip their forces; it is expected that capability gaps will continue to be identified by the Services.
Several ICDs utilized subject matter experts (SMEs) to identify capability gaps and recommend solutions. One way to incorporate SME input in a more rigorous fashion is by employing the Delphi Technique. In this method, the researcher works with 10-15 experts to identify, further define, and determine the importance of issues in their area of expertise (Linstone & Turoff, 1975). Using the Delphi method when SMEs are available is one way to add analytical rigor to the ICD process.
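To illustrate what the bookkeeping of a Delphi round might look like, the sketch below aggregates hypothetical expert importance ratings and uses the interquartile range as a consensus check; the ratings, the 1-9 scale, and the consensus cutoff are all assumptions for illustration, not part of the published method.

```python
# One Delphi-style aggregation step: experts rate the importance of
# each candidate gap (1-9 here), and items without consensus are fed
# into another round. All values are illustrative.
import statistics

ratings = {  # gap -> one rating per expert
    "Gap A": [8, 9, 7, 8, 8, 9, 7, 8, 9, 8],
    "Gap B": [3, 9, 5, 2, 8, 4, 9, 3, 6, 5],
}

def iqr(values):
    # Interquartile range: spread of the middle 50% of ratings.
    q = statistics.quantiles(values, n=4)
    return q[2] - q[0]

for gap, scores in ratings.items():
    status = "consensus" if iqr(scores) <= 2 else "revisit next round"
    print(f"{gap}: median {statistics.median(scores)}, "
          f"IQR {iqr(scores):.2f} -> {status}")
```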
Though not possible for all ICDs, several documents included a lifecycle cost summary that was effective in communicating the costs of the capability gap. If the proposed solution is expected to reduce some recurring cost, presenting those numbers can make a convincing case to the reader.
In the Appendix to this article, the authors provide a list of additional analytical techniques along with a short description of each. This resource is intended to assist ICD writers and project managers in selecting a methodology or methodologies appropriate for their document or project. References are provided to direct interested readers to source documents with additional descriptions of each methodology.
Nearly all existing ICDs present a High-Level Operational Concept Graphic (OV-1) depicting the proposed solution(s). A previous Air Force Institute of Technology researcher identified several additional Department of Defense Architecture Framework (DoDAF) products that could be useful to present within the ICD (Hughes, 2010). The Capability Taxonomy (CV-2), Capability Dependencies (CV-4), Capability to Operational Activities Mapping (CV-6), as well as the Operational Resource Flow Description (OV-2) and Operational Activity Decomposition Tree (OV-5a) are products now required by JCIDS for the ICD.
Hughes also found value in including the Operational Activity Model (OV-5b) and Operational Activity to Systems Function (SV-5). The OV-5b presents capabilities and activities and their relationship among activities, inputs, and outputs. The SV-5 maps systems back to capabilities or operational activities. Neither is currently recommended in the JCIDS Manual, but could be presented there as optional architecture products.
Based upon analysis of the data examined during the study, several guidelines or best practices emerged. The best-written ICDs provided detailed but relevant analysis without being too wordy. Here, we propose the contents of a model ICD.
The most fundamental building block of an ICD is conformance to JCIDS standards of format and content. The JCIDS Manual presents a logical flow of the document from gap identification to final recommendations. The Concept of Operations should illustrate how the described capability will support the Joint Force Commander. The JCAs or Universal Joint Task List pedigree should be clear, but not overly detailed. Documents that rolled up capability gaps to Tier 2 or Level 2 components seemed more readable than those that traced capabilities to lower levels. A document that acknowledges extant systems is more convincing in establishing a capability gap.
The team believes that a concise ICD may be written with 5-12 gaps identified. Page lengths may vary by ACAT level, with more complex proposed solutions demanding more explanation, but the ideal ICD would be 15-25 pages in length. In short, a well-written ICD will follow the prescribed format, clearly define its necessity to the Joint mission, and be presented in a clear and logical manner. Additionally, the ICD should present clear MOEs with minimum and desired values. Good MOEs allow the reader or evaluator to know when the new capability has delivered on its design promises. MOEs are sometimes confused with measures of performance (MOPs). Noel Sproles states, "MOEs are concerned with the emergent properties or outcomes of a solution. They take an external view of a solution and as such are different from MOPs, which are concerned with the internal workings of a solution" (Sproles, 2002).
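A minimal sketch of how an MOE with threshold (minimum) and objective (desired) values might be recorded and assessed follows; the measure, units, and values are hypothetical, and a higher-is-better scale is assumed.

```python
# Hypothetical MOE record carrying threshold (minimum) and objective
# (desired) values, as recommended above; assumes higher is better.
from dataclasses import dataclass

@dataclass
class MeasureOfEffectiveness:
    name: str
    units: str
    threshold: float   # minimum acceptable value
    objective: float   # desired value beyond the minimum

    def assess(self, observed: float) -> str:
        if observed >= self.objective:
            return "meets objective"
        if observed >= self.threshold:
            return "meets threshold"
        return "does not meet threshold"

moe = MeasureOfEffectiveness("Target detection range", "km",
                             threshold=50.0, objective=80.0)
print(moe.assess(65.0))  # -> "meets threshold"
```

Writing an MOE down in this form makes the evaluation rule explicit: anyone reading the ICD can tell whether a delivered capability has met the minimum, met the desired value, or fallen short.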
Table 2 compares ICD content required by the Capabilities-Based Assessment (CBA) User's Guide, the JCIDS Manual, and recommendations based on our analysis. As part of the analysis, the team identified those ICDs that implemented and followed the best practices identified by the team. These ICDs, shown in Table 3, give future ICD writers and functional groups examples of what they can strive toward to make clear and concise documents that are both effective and efficient.
Future research could focus on the relationship between the ICD and the program it generates. Can the utility or performance of a program be traced to the description of the initial capability gap and requirement definition? Are there characteristics of an ICD that indicate how well a program will adhere to cost, performance, and schedule expectations?
Since 2002, the JCIDS process has been refined and enhanced. There appears to be a convergence in the formatting and content of many ICDs/JCDs since 2008. While the quality of historical ICDs varies, marked improvements to the analysis have been documented since 2008, possibly due to the GAO report from the same year.
Through research of the current methodologies used in ICDs since the inception of the process, the research team has formulated an outline of proposed areas upon which writers and implementers can focus. Future writers may use this outline, as well as a series of DoD guidelines, to provide the Joint community with superior ICDs that achieve their goals in a more efficient manner with minimal processing time.

APPENDIX

Additional Analytical Techniques to Assist Initial Capabilities Document (ICD) Writers and Project Managers

Pre-Capabilities-Based Assessment (CBA)

Scenario-Based Planning (Capabilities-Based Assessment [CBA] User's Guide, p. 87; Ringland & Schwartz, 1998; Hiam, 1990, p. 284). Technique using scenarios to define and give structure to an otherwise murky strategic future. A type of brainstorming, which may use nominal group technique or another group problem-solving technique. Typical steps: identify assumptions and drivers of change (key variables and historical trends); develop a framework for the drivers; produce initial mini-scenarios (varying the type: surprise-free, radical, and in-between); reduce to two or three scenarios; write the scenarios; and identify issues arising (sensitivity analysis of the scenarios' impact on key variables).

Strengths, Weaknesses, Opportunities and Threats (SWOT) Analysis (Helms & Nixon, 2010). Analyzes internal (strengths/weaknesses) and external (opportunities/threats) factors to help guide corporate strategy development. Useful in a group strategy setting, using nominal group technique or another group problem-solving technique (such as a Group Decision Support System, or GDSS). See also Porter's 5 Forces and Barney's Resource-Based View for more specific analyses.

Porter's 5 Forces Analysis (Porter, 2008). Builds on the threats/opportunities side of SWOT to explain how market structure, defined by five market forces (threat of entrants, supplier power, buyer power, intensity of rivalry, threat of substitutes) and one additional force (complementors/government/public), drives the conduct and performance of firms.

Barney's Resource-Based View (RBV) (Barney, 1991). Builds on the strengths/weaknesses side of SWOT to explain how a firm's internal resources (value [V], rareness [R], nonsubstitutability [NS], imperfect imitability [II]) lead to sustainable competitive advantage (SCA): SCA = V + R + NS + II. A firm must have the first three to achieve competitive advantage, and all four to achieve SCA.

The Project Management Diamond Approach (Shenhar & Dvir, 2007). Uses the four quadrants of Technology, Complexity, Novelty, and Pace to define the size, scope, and risk of a systems engineering product/project.

Market Segmentation Grid (GAO Report No. 07-388, p. 11). A grid that compares four markets (current/new customers in existing segments, customers in new segments, new customer wants and needs) to four offering types (current business, enhancement to current business, new business, new to industry) to position portfolio projects into four categories (strike zone, traditional, pushing the envelope, white space opportunity). A method of analyzing business risk that encourages businesses to find the right mixture of categories of projects. Similar to the Risk-Rewards Matrix.

Risk-Rewards Matrix (GAO Report No. 07-388, p. 16; Hiam, 1990, p. 377). A grid that plots risks vs. rewards of projects. Similar to the Market Segmentation Grid in that it encourages businesses to find the right mixture of categories of projects. The same tool can be used to compare effectiveness to cost in the AoA Alternatives Comparison step (particularly useful in showing confidence levels and threshold values). The GE matrix version of this maps business strength (internal) vs. industry attractiveness (external). The circles may be subdivided into market share/total market pies to enhance analysis. Augments SWOT.

Nominal Group Technique (Sink, 1983). A brainstorming technique that mixes individual and group activities to attempt to increase the amount, diversity, and quality of ideas generated. Many variations exist, but the basic process is: individual brainstorming; sharing ideas; group brainstorming (divergent); group discussion; group brainstorming (convergent); and voting/ranking.

Delphi Technique (Goodman, 1987). A type of brainstorming that uses experts to (a) identify issues in their area of expertise, (b) further define those issues, and (c) identify their importance. Generally uses 3 to 9 experts, beginning with the Nominal Group Technique and using subsequent rounds to refine, reduce, and prioritize issues.

CBA/ICD

Capabilities-Based Assessment (CBA) (Capabilities-Based Assessment [CBA] User's Guide). (1) Describes capabilities required to perform a mission; (2) identifies gaps in capabilities and associated operational risks; (3) establishes a requirement to address gaps.

Initial Capabilities Document (ICD) (Capabilities-Based Assessment [CBA] User's Guide). (1) Describes/summarizes the Concept of Operations (CONOPS) in an approximately 1-page explanation; (2) describes guidance (see Requirements Traceability Matrix); (3) describes capabilities required (includes MOEs/threshold values); (4) describes capability gaps (prioritized, if possible); (5) summarizes relevant threats/operational environment; (6) proposes nonmateriel and materiel solutions (see Doctrine, Organization, Training, Materiel, Leadership, Personnel, Facilities-Policy [DOTMLPF-P] Analysis); (7) makes a final recommendation (normally, but not necessarily, a materiel solution).

Requirements Traceability Matrix (Air Force Instruction [AFI] 10-601). Also known as the "house of quality"; traces system attributes to operational/user/strategic requirements. Multiple levels.

Paired Comparisons (Blanchard & Fabrycky, 2010, p. 182). To build a rank-ordered list, the options are presented to the decision maker two at a time (instead of all at once). For N criteria to be ranked, N(N - 1)/2 pairs must be compared. Assumes transitivity of preferences. See the sketch following this appendix.

Porter's Value Chain Analysis (Hiam, 1990, p. 415; Porter, 1980). (1) Select the unit of analysis, both for your organization and for competitors; (2) identify primary value-adding activities (direct/indirect/quality assurance): inbound/outbound logistics, operations, marketing/sales, service; (3) identify support activities (direct/indirect/quality assurance): procurement, technical development, human resource management, firm infrastructure; (4) identify linkages between value chain activities; (5) study the value chain to identify sources of competitive advantage.

Systems Definition Matrix (Sage & Armstrong, 2000). Applies general systems theory to define both the scope ...
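The paired-comparisons entry above lends itself to a short illustration. The following sketch judges every pair of N options once, N(N - 1)/2 comparisons in all, and ranks the options by the number of pairwise wins; the options and the judgments are hypothetical.

```python
# Paired-comparisons ranking as described in the appendix: each pair
# is judged once and options are ranked by pairwise wins. The options
# and preference judgments are illustrative only.
from itertools import combinations

options = ["Option A", "Option B", "Option C", "Option D"]
preferred = {  # hypothetical decision-maker judgments, one per pair
    ("Option A", "Option B"): "Option A",
    ("Option A", "Option C"): "Option C",
    ("Option A", "Option D"): "Option A",
    ("Option B", "Option C"): "Option C",
    ("Option B", "Option D"): "Option B",
    ("Option C", "Option D"): "Option C",
}

wins = {o: 0 for o in options}
for pair in combinations(options, 2):  # N(N - 1)/2 = 6 pairs for N = 4
    wins[preferred[pair]] += 1

for option in sorted(options, key=wins.get, reverse=True):
    print(option, wins[option])
```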