Designing an appropriate set of command arrangements for coalition peace operations requires a clear understanding of the essential functions to be performed and the qualities desired - the objective criteria for success.
Command arrangements are the systems by which military and political-military organizations make and implement decisions in an operating environment. Figure 19 shows the essential elements of this process (Hayes, 1983a). Note that command arrangements always exist in the context of a larger environment, which includes military elements (own, adversary, and potentially other forces not directly included in the network), physical and ecological factors (terrain, weather, and so forth), as well as political, social, and economic factors. The purpose of the system of command arrangements is to control selected features of this environment (for peace operations this might include keeping military forces out of demilitarized zones, preventing the flow of arms across a border, or other explicit tasks), which is the equivalent of accomplishing assigned missions.
However, the system of command arrangements and the decision makers it serves do not, in and of themselves, execute operations or accomplish missions. Rather they create favorable circumstances, develop plans, ensure that the materials needed are available, coordinate activities, and undertake representational and decision functions that enable other (usually subordinate) organizations to accomplish missions. The plans they create consist of five key elements: missions, assets, boundaries, schedules, and the contingencies explicitly built into the plan.
Success (effectiveness) consists of creating directives and coordinating requests for assistance from actors who are not subject to military command. Such directives should (1) reflect the planning process, (2) be implemented successfully without change beyond the contingencies explicitly built into them, and (3) have the desired impact on the environment.
The processes inherent in command arrangements (which are always part of the process, whether explicitly or not) are also illustrated in Figure 19. They include:
While these six steps are inherent in any system of command arrangements, four other processes are also normally involved and contribute to success:
These four additional activities are particularly crucial in peace operations where the number and variety of actors, their lack of prior experience working with one another, and the absence of common, reliable communications systems often make timely information collection and dissemination very difficult.
Given an understanding of what command arrangements are and the different ways they can be structured, the issue of how their performance should be assessed must still be addressed before better command arrangements can be designed for coalition peace operations. Command arrangements operate to determine both the flow of information within and among the actors and the nature and process of decision making. Assessment should, therefore, start by examining the structures, functions, and capacities of the support systems that provide and process the information needed for achieving goals and missions.
As Figure 20 illustrates, there are at least three distinct levels at which the value of these information management command arrangements could be assessed: (1) system performance (the qualities of the elements that make up the system), (2) attributes (or qualities) of the information provided to decision makers, and (3) the overall value of the information within the decision-making system. These three levels interact, with problems at the lower levels almost always leading to poorer performance at the higher levels. If, for example, the information available is out of date (level 2), then good quality decisions (level 3) become unlikely. Similarly, if the systems that must move information around among the actors are unreliable (level 1), the information available to decision makers will tend to be out of date (level 2). Hence, performance at all three levels should be assessed so that diagnosis of the causes of problems is possible.
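The hypothesized upward cascade from system performance through information attributes to information value can be sketched as a toy model. Everything here - the function names, the retry-delay assumption, and the thresholds - is an illustrative assumption, not part of the HEAT framework:

```python
# Toy model of the three assessment levels and how problems propagate upward.
# All names and thresholds are illustrative assumptions.

def information_age(link_reliability: float, base_delay_hours: float) -> float:
    """Level 1 -> Level 2: an unreliable link (level 1) inflates the
    effective age of the information reaching decision makers (level 2)."""
    # Retries on a failing link roughly multiply the delivery delay.
    return base_delay_hours / max(link_reliability, 0.01)

def decision_quality(info_age_hours: float, freshness_limit_hours: float) -> str:
    """Level 2 -> Level 3: out-of-date information makes good decisions unlikely."""
    return "sound" if info_age_hours <= freshness_limit_hours else "suspect"

# A reliable link keeps information fresh enough to support sound decisions...
age_good = information_age(link_reliability=0.95, base_delay_hours=2.0)
# ...while an unreliable one (level 1 problem) makes it stale (level 2 problem).
age_bad = information_age(link_reliability=0.20, base_delay_hours=2.0)

print(decision_quality(age_good, freshness_limit_hours=6.0))  # → sound
print(decision_quality(age_bad, freshness_limit_hours=6.0))   # → suspect
```

The point of the sketch is diagnostic: when the level-3 output is "suspect," the cause can be traced back through level 2 to the level-1 reliability figure.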
System performance measures describe the individual elements of command arrangements. Communications speed and capacity between important headquarters or actors, the size and reliability of the memory located at each node in the system, and the reliability of communications systems (mean time between failure, percentage time down, etc.) are simple system performance characteristics.
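Measures such as mean time between failures and percentage of time down are straightforward to compute from an outage log. The log layout below is an illustrative assumption:

```python
# Computing two simple system performance measures for a single
# communications link. The input format is an illustrative assumption.

def system_performance(uptimes_hours, downtimes_hours):
    """Mean time between failures and percentage of time down."""
    mtbf = sum(uptimes_hours) / len(downtimes_hours)  # operating hours per failure
    total = sum(uptimes_hours) + sum(downtimes_hours)
    pct_down = 100.0 * sum(downtimes_hours) / total
    return mtbf, pct_down

mtbf, pct_down = system_performance(
    uptimes_hours=[100.0, 80.0, 120.0],  # periods of normal operation
    downtimes_hours=[2.0, 1.0, 3.0],     # the outage that ended each period
)
print(round(mtbf, 1), round(pct_down, 1))  # → 100.0 2.0
```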
Information attributes deal with the quality of the information available in the system of command arrangements. They include such things as:
Note that "information" here means not only factual data, but also the capture, storage, selection, integration, and interpretation of information that support the essential command arrangements processes.
The third level, measures of information value, is much more difficult to operationalize than the lower levels. Information value is measured in terms of impact on the environment. The core measure is "effectiveness," having the desired impact in the environment. The speed of the command arrangements versus the pace of change in the environment (timeliness of decision processes) must also be considered a measure of information value. Moreover, the efficiency of the process (what it costs to be effective) is also an overall measure of the system's performance, particularly in peace operations that must be conducted under austere fiscal conditions. Several other attributes should also be considered when assessing command arrangements from an overall perspective, including user acceptance, the capability required (experience, training, mental capacity, etc.) to operate the system, and its security, which is particularly critical in the Information Age when we need to protect our C2 from attacks. (See Hayes for a discussion of the full set of attributes that should be considered when assessing a system of command arrangements.)
As Figure 20 illustrates, these three levels of analysis are believed (hypothesized) to be related and the form of that relationship is assumed to be a positive correlation. That is, better system performance leads to better information, better information to better decision making. However, the relationships between these levels are not always well understood and are certainly not simple linear patterns. For example, there are hundreds or thousands of relevant actors and platforms on a modern battlefield, but providing complete information about their location and identity will overwhelm any human decision maker; so there is a level at which completeness becomes counter-productive. Similarly, understanding that multiple futures are possible does not mean that good command arrangements explore each and every one of them in detail - the workload would overwhelm the system.
Note that failure at any level makes success at the next higher level more difficult and only the highest level (value of information) reflects the utility of the command arrangements. The lower-level measures (system performance and information attributes) are diagnostic - when top-level problems occur, they can almost always be traced back to lower levels.
Direct measurement of overall value is difficult, so intermediate decision variables are often used as surrogates for overall value (good decision process being assumed to increase the likelihood of good decisions), as diagnostics, or as cross-checks on the more abstract efforts to judge overall effectiveness, timeliness, or efficiency. Figure 21, adapted from Alberts (1980), illustrates this practice and also shows the linkages between levels in the assessment process.
Good decision processes correlate with good decisions. For example, organizations that believe there is only a single future possible and that they know what it is are very vulnerable to poor decision making (see Dixon and Janis). This tendency has also been documented in US C2 systems under a variety of conditions (Hayes, 1990). Hence, the number of possible futures considered, and particularly the number of decisions made where only a single future is considered, are indices of decision process quality worth monitoring. Other such indices include the variety of options generated for consideration, the variety of viewpoints entertained, and the accuracy of predictive statements about future developments. As is discussed in detail below, the time spent making decisions is itself a factor in making those decisions easier or more difficult, because slower processes force decision makers to deal with a greater range of uncertainty.
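Two of these indices - the share of decisions made with only a single future in view, and the average number of options generated - can be tallied from a record of decision episodes. The record structure below is an illustrative assumption, not a HEAT data format:

```python
# Tallying two decision-process quality indices from a hypothetical
# record of decision episodes.

decisions = [
    {"futures_considered": 1, "options_generated": 1},
    {"futures_considered": 3, "options_generated": 4},
    {"futures_considered": 2, "options_generated": 2},
]

# Decisions made as if only one future were possible flag vulnerability
# to poor decision making.
single_future = sum(1 for d in decisions if d["futures_considered"] == 1)
avg_options = sum(d["options_generated"] for d in decisions) / len(decisions)

print(single_future, round(avg_options, 2))  # → 1 2.33
```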
While system performance and attributes of information can be measured directly, the overall value of information - which is the true measure of a set of command arrangements and the only sound basis for comparing alternative sets - is inherently multi-dimensional, not always directly measurable, and will vary across operating environments.
These points are illustrated in Figure 22, which is also drawn from Alberts (1980). First, any system of command arrangements must be given an overall utility value, particularly in order to compare it with other alternatives, that is focused on several key attributes, including system-wide value added, system life-cycle costs, and system flexibility and adaptability. Hence, there is no single dimension for evaluation. Given the vicious tradeoffs present in peace operations (costs versus military capability, etc.) this fact is particularly important for these analyses.
These key dimensions cannot be estimated from a single context, but rather must be seen across the range of situations (scenarios) considered relevant. Failure to take into account the wide range of situations where peace operations coalitions may have to operate or the pace of change within the context of any one such operation would fatally flaw the analysis. The inclusion of a range of experiences in this paper is an effort to ensure consideration of an adequate range of situations and the inherent dynamic patterns.
Equally important, direct measurement of value added is impossible. Good command arrangements can be recognized by a variety of indicants, or measures, that reflect good process but are not success in and of themselves. For example, good decision making is associated with:
In essence, these types of measures are defenses against "Groupthink" (Janis, 1982 and 1989) and other errors that creep into complex decision-making systems.
Value added is more directly measurable in terms of the effectiveness, timeliness, and efficiency of decision making. Even here, however, measurement is a complex process. For example, plans that can be implemented within the contingencies built into them are desirable because they allow the entire force, organization, or set of organizations involved to work together according to "pre-real time" decisions. Plans can enable a commander to achieve several different levels of control over the environment.
Reflexive control is achieved by command arrangements that provide such a rich understanding of the situation that the commander can predict and take advantage of adversaries' capabilities and actions. Cold War era Soviet doctrine sought to achieve this level of control. The current advocates of information warfare maintain that this level of insight will soon be technically possible. Systems that seek this level of control are always risky because of potential errors in (a) their information and (b) their projections of adversary actions.
When in adaptive control, the commander understands that the battlefield is not fully predictable, but that the range of future developments is limited. By monitoring the battle and understanding which situations are emerging, the commander can design contingency plans to ensure success regardless of what actually occurs. While less efficient than reflexive control, adaptive control is also less risky because it takes into account changes in the environment, including alternative adversary courses of action. This level of control has been sought by US doctrine since World War II and is necessary for successful peace operations.
Direct control occurs when the commander understands the battle well enough to exert pressure (moral and physical force), but has no clear sense of how much is required to accomplish the objective. Hence, the system seeks primarily to monitor the status of the battle and to ensure continuous application of force, in the same way that a thermostat continues to signal for heat in a building until the preset temperature is reached. Because it lacks the capacity and flexibility to use alternative courses of action, direct control is inferior to adaptive control. In peace operations, direct control implies lack of flexibility and agility, which threatens mission accomplishment.
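The thermostat analogy for direct control can be sketched as a simple loop: the system monitors the state and keeps applying the same pressure until the objective is reached, with no alternative courses of action available. All names and numbers here are illustrative assumptions:

```python
# A thermostat-style loop illustrating direct control: monitor the state
# and keep applying force until the objective is met.

def direct_control(state: float, objective: float, pressure: float,
                   max_cycles: int = 50):
    """Apply a fixed amount of 'force' each cycle until the objective is
    reached. The system has no sense of how much total force is required
    and no alternative courses of action - hallmarks of direct control."""
    cycles = 0
    while state < objective and cycles < max_cycles:
        state += pressure   # the only available course of action
        cycles += 1
    return state, cycles

final_state, cycles_used = direct_control(state=15.0, objective=20.0, pressure=1.5)
print(final_state >= 20.0, cycles_used)  # → True 4
```

Adaptive control, by contrast, would select among several contingency plans as the situation evolved; the fixed single action above is precisely what makes direct control inflexible.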
Trial and error is what management systems do when confronted with novel circumstances and limited understanding of the situation. It provides only minimal control. An ignorant system acts on its environment (or fails to act), observes the consequences, then reacts. Often its initial actions take the form of the familiar, which is predictable to an adversary and inappropriate for novel situations. Predictable actions may, however, be appropriate in peace operations where adversary uncertainty is dangerous. When challenged, however, the initial trial and error plan tends to fail rapidly and must be replaced. Over time, trial and error systems are replaced by direct control and even more advanced levels of control, but only if they survive enough interactions to "learn" useful rules. This is the challenge facing many peace operations, particularly when their own initial success alters the basic situation to be controlled (as occurred in Somalia).
Finding ways to measure the effectiveness of C2 objectively was the most challenging aspect of the initial Headquarters Effectiveness Assessment Tool effort (Hayes et al., 1983a) to develop valid, reliable quantitative indicators of C2 quality. However, once the background research was completed, a very powerful answer became obvious. Since headquarters are supposed to create plans (in the form of directives) that work and since "working" means keeping the environment within anticipated boundaries, the key to objective assessment is to examine the degree to which the plan accomplishes its stated mission. The headquarters will abandon or modify the plan if it perceives that the plan is failing or will fail. Observers or analysts can recognize failure by the fact that the headquarters changes one or more of the basic elements of the original plan (missions, assets, boundaries, or schedules) beyond those contingencies explicitly built into that original plan. The pattern of interactions with the environment over time and across a series of decision cycles provides evidence of the typical level of control achieved. The greater the level of control achieved, the more successful the command arrangements. HEAT research has also shown that success is contagious - effective performance in earlier periods is associated with success in later periods; success in some functional arenas is associated with success in others (Hayes et al., 1993).
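The HEAT failure test - a plan has failed when a basic element changes beyond the contingencies explicitly built into it - lends itself to a compact sketch. The dictionary layout and example values are illustrative assumptions, not a HEAT data format:

```python
# A sketch of the HEAT failure test: a plan is judged to have failed when
# the headquarters changes a basic element (mission, assets, boundaries,
# schedule) beyond the contingencies built into the original plan.

def plan_failed(original: dict, revised: dict, contingencies: dict) -> bool:
    """True if any basic element changed beyond its anticipated values."""
    for element in ("mission", "assets", "boundaries", "schedule"):
        if revised[element] != original[element]:
            # A change is permitted only if the original plan anticipated it.
            if revised[element] not in contingencies.get(element, []):
                return True
    return False

original = {"mission": "monitor DMZ", "assets": "2 battalions",
            "boundaries": "sector A", "schedule": "weeks 1-8"}
contingencies = {"assets": ["3 battalions"]}  # reinforcement was planned for

# Committing the anticipated third battalion is not failure...
within = dict(original, assets="3 battalions")
# ...but shifting to a sector the plan never contemplated is.
beyond = dict(original, boundaries="sector B")

print(plan_failed(original, within, contingencies))  # → False
print(plan_failed(original, beyond, contingencies))  # → True
```

Applied across a series of decision cycles, the pattern of such within/beyond outcomes is what gives evidence of the typical level of control achieved.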
Moreover, greater control also implies improved performance on other crucial types of performance - timeliness, flexibility, and efficiency of the system. Command and control systems have generally been considered to be better when they are faster. John Boyd, drawing on experience in air combat, postulated that, in order to be successful (i.e. effective, or win in battle), C2 systems need to be faster than the C2 systems opposing them. Using dogfights between aircraft as his metaphor, he argues that "turning inside the enemy's decision loop" is the key to success. Note that his position is not that speed is an unmitigated good, only that C2 systems that are faster than those of the opponent will be successful. Boyd's argument is consistent with HEAT theory, which postulates a decision and action C2 cycle in the context of a potentially hostile dynamic environment. This argument and some of its implications are reflected in Figure 23. They include:
Moreover, when plans fail and the headquarters must go beyond the contingencies built into them, the command and control process must be repeated. This further slows the ability of the commander to control the battle and increases the workload on the C2 system. Hence efficiency (again, the price or effort required to be effective) is lost when the command and control cycle is slow, all other things being equal.
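Boyd's argument is about relative, not absolute, speed, which can be put in miniature as follows. The cycle lengths and function names are illustrative assumptions:

```python
# Boyd's relative-speed argument in miniature: what matters is not absolute
# C2 cycle time but whether one side's cycle is shorter than the other's.

def turns_inside(own_cycle_hours: float, adversary_cycle_hours: float) -> bool:
    """True if our decide-act cycle is shorter than the adversary's -
    Boyd's condition for 'turning inside the enemy's decision loop'."""
    return own_cycle_hours < adversary_cycle_hours

def decision_cycles_completed(cycle_hours: float, horizon_hours: float) -> int:
    """How many full decision cycles fit into a given operational period."""
    return int(horizon_hours // cycle_hours)

print(turns_inside(6.0, 10.0))                 # → True
print(decision_cycles_completed(6.0, 48.0))    # → 8
print(decision_cycles_completed(10.0, 48.0))   # → 4
```

Over a 48-hour period the faster side completes twice as many cycles, so each adversary decision is overtaken by events before it can be implemented - the immobilization effect discussed below in the Midway example.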
However, things are not always equal. Boyd's formulation recognizes that the adversary's C2 systems vary in speed, which immediately implies that the C2 speed needed for success is not a constant. Equally important, but outside Boyd's original theory, is the fact that other features of the environment (such as the weather or the political context) can also affect the need for C2 speed. Finally, as the figure indicates, the ability to see ahead also changes the need for speed. Successful information war, which either immobilizes an enemy or provides clear indications that no immediate threat exists, provides time for performing more complex and detailed planning. Sometimes physical circumstances also help. For example, Eisenhower could take the time for inordinately detailed and complex planning for the invasion of Europe because he had the ultimate choice of when to initiate combat. Similarly, the US and its allies were able, through a variety of political, military and intelligence systems, to purchase time to prepare for Desert Storm. Both forces were able to select the crucial times and places for decisive combat and won because they followed the principle of the initiative.
Too rapid a C2 system can even be a disaster. The Japanese at Midway, for example, made a series of rapid decisions about whether they would attack US land-based air or aircraft carriers. These decisions were made so rapidly that the Japanese carriers were still in the process of rearming their aircraft to comply with the latest set of directives received when they were attacked and sunk by American aircraft. The Japanese had effectively immobilized their forces by giving a series of orders with no time between them to allow implementation. Similarly, had the US felt impelled to engage Iraq as soon as it had forces in the Kuwaiti theater, Desert Storm would have been a very different conflict. Hence, speed is not always an unmitigated good in C2 systems. However, speed is an important element in C2 systems:
Under any circumstances, however, rapid C2 systems that do not generate high-quality decisions and plans have little value. Indeed, they ensure rapid failure.
Given the multidimensional nature of evaluation, the fact that a variety of situations must be considered, and the fact that the important evaluation dimensions are somewhat related to one another, structured analysis is important. Key structural issues for such analysis include:
This last point is worth emphasizing. A wonderful set of command arrangements that is so expensive that it cannot be bought, fielded, or maintained is not as useful as a good system that can be counted on when needed.
Generating the evidence for such analyses is a major challenge in itself. As a first approximation, analysis of current experience (such as that offered in this paper) is the only valid way to proceed. However, real world experience is always analytically messy - atypical situations, personalities, and circumstances predominate. Initial findings can be refined and improved in a variety of different ways, each with some very real imperfections in terms of generalizability or validity. Ranging from most to least realistic and costly, the set of sources for research, development, and system refinement include:
As greater control and replicability are achieved (the analysis is made more reliable), losses in validity occur. As validity rises, so do the costs of information collection. A healthy program of research and development will use a range of these approaches, not relying on one or two. Only in this way can validity and reliability be achieved cost effectively.