
Two Taxonomies of Deception

 for Attacks on Information Systems

 

 

Neil C. Rowe¹ and Hy S. Rothstein²

 

¹ Department of Computer Science

U.S. Naval Postgraduate School

Code CS/Rp, 833 Dyer Road

Monterey, CA 93943 USA

 

 ² Department of Defense Analysis

U.S. Naval Postgraduate School

 

 

Abstract

 

'Cyberwar' is warfare directed at information systems by means of software.  It represents an increasing threat to our militaries.  We provide two taxonomies of deception methods for cyberwar, making analogies from deception strategies and tactics in conventional war to this new arena.  Some analogies hold up, but many do not, and careful thought and preparations must be applied to any deception effort. 

 

Keywords: Deception, information warfare, information systems, tactics, defense, decoys, honeypots, lying, disinformation.

 

 

INTRODUCTION

 

Today, when our computer networks and information systems are increasingly subject to warfare, it is important to investigate effective strategies and tactics for them.  Traditionally, our information systems have been seen as fortresses that must be fortified against attack.  But this is only one of several useful military metaphors.  Deception has always been an integral part of warfare.  Can we judiciously use analogs of conventional deceptive tactics to attack and defend information systems?  Such defensive tactics could provide a quite different dimension from the usual access-control defenses like user authentication and cryptography, and could be part of an increasingly popular idea called 'active network defense'.  New tactics are especially needed against the emerging threats of terrorism.

 

Deception is usually most effective by a weaker force against a stronger one.  The United States has rarely been the weaker party in engagements over the last fifty years, and consequently has not used deception much.  But cyberwar is different: Most of the arguments of von Clausewitz (von Clausewitz, 1993) for the advantage of defense in warfare do not hold.  Much of the routine business of the developed world, and important portions of its military activities, are easily accessible on the Internet.  Since there are so many access points to defend, and few "fortresses", it is appealing for an enemy to use deception to overwhelm sites, neutralizing them or subverting them for their own purposes.  With resources spread thin, deception may also be essential for defenders.

 

Historically, deception has been quite useful in war (Dunnigan and Nofi, 2001) for four general reasons.  First, it increases one’s freedom of action to carry out tasks by diverting the opponent’s attention away from the real action being taken.  Second, deception schemes may persuade an opponent to adopt a course of action that is to their disadvantage.  Third, deception can help to gain surprise.  Fourth, deception can preserve one's resources.  Deception does raise ethical concerns, but defensive deception is acceptable in most ethical systems (Bok, 1978).

 

CRITERIA FOR GOOD DECEPTION

 

In this discussion we will consider only attacks by a nation or quasi-national organization on the software and data (as opposed to the people) of an information system.  Attacks like this can be several degrees more sophisticated than the amateur attacks ('hacking') frequently reported on systems today.  Nonetheless, many of the same attack techniques must be employed.

 

Fowler and Nesbit (1995) suggest six general principles for effective tactical deception in warfare, based on their knowledge of air-land warfare.  We summarize them as follows:

 

  1. Deception should reinforce enemy expectations.
  2. Deception should have realistic timing and duration.
  3. Deception should be integrated with operations.
  4. Deception should be coordinated with concealment of true intentions.
  5. Deception realism should be tailored to needs of the setting.
  6. Deception should be imaginative and creative.

 

For instance, in the well-known World War II deception operation 'Operation Mincemeat' (Montagu, 1954), false papers were put in a briefcase and attached to a corpse that was dumped off the coast of Spain.  Some of the papers suggested an Allied attack at places in the Mediterranean other than Sicily, the intended invasion point.  Other papers, included to increase the convincingness of the deception, were love letters, overdue bills, a letter from the alleged corpse's father, some keys, matches, theater ticket stubs, and even a picture of an alleged fiancée.  The corpse’s obituary was put in the British papers, and his name appeared on casualty lists.

 

Let us apply the six principles.  Deception here was integrated with operations (Principle 3), the invasion of Sicily.  Its timing was shortly before the operation (Principle 2) and was coordinated with tight security on the true invasion plan (Principle 4).  It was tailored to the needs of the setting (Principle 5) by not attempting to convince the Germans much more than necessary.  It was creative (Principle 6) since corpses with fake documents are unusual.  Also, enemy preconceptions were reinforced by this deception (Principle 1) since Churchill had spoken of attacking the Balkans and both sides knew the coast of Sicily was heavily fortified.  Mincemeat did fool Hitler (though not some of his generals) and caused some diversion of Axis resources.

 

Now let us apply the principles to information warfare. 

 

  • Principle 1 suggests that we must understand an enemy’s expectations in designing deception, and we should pretend to aid them.  Fortunately, an attacker has only a few strategic goals for an information system: Control the system, prevent normal operations ('denial of service'), collect intelligence about information resources, and propagate the attack to neighboring systems.  So deception must focus on these.  And because of the limited communications bandwidth between people and computers, deception can be focused on the messages between them.

 

  • Principle 2 says that, however we accomplish our deceptions, they must not be too slow or too fast compared to the activities they simulate (Bell and Whaley, 1991).  For instance, a deliberate delay in responding to a command should be long enough to make it seem that some work is being done, but not so long that the attacker suspects something unusual is happening.  Information systems avoid most of the nonverbal clues that reveal deceptions (Miller and Stiff, 1993) but timing is important.

 

  • Principle 3 says that deceptions in advance of operations are likely counterproductive because they warn the enemy of the methods of subsequent attack, and surprise is important in cyberwar.  It also argues against use of 'honeypots' and 'honeynets' (The Honeynet Project, 2002) as primary defensive deception tools.  These are computers and computer networks that serve no normal users but bait the enemy to collect data about enemy methods.  But honeypots do not work against a determined adversary during information warfare since inspection of them will quickly reveal the absence of normal activity.

 

  • Principle 4 suggests that a deception must be comprehensive and consistent.  For instance, if we masquerade as a legitimate user to break into a computer, we should try to act as much like that user as possible once inside.  Similarly, if we wish to convince an attacker that they have downloaded a malicious file, we must maintain this pretense in the file-download utility, the directory-listing utility, the file editors, file-backup routines, the Web browser, and the execution monitor.  So we need to systematically plan our deceptions using such tools as 'software wrappers' (Michael et al, 2002); see the sketch after this list.

 

  • On the other hand, Principle 5 alerts us that we need not always provide details in deceptions if we know our attackers.  For instance, most methods to seize control of a computer system involve downloading modified operating-system components ('rootkits') and installing them.  So it is valuable to make the file-download utility and the directory-listing utility deceptive, but not the archiving software or the debuggers.

 

  • Principle 6 ('creativity') may be difficult to achieve in computerized responses, but degrees of randomness in an automated attack or defense may fake it, and methods from the field of artificial intelligence can produce some convincing simulated activity.
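
As a concrete illustration of Principles 2 and 4, here is a minimal Python sketch (all names and numbers are hypothetical, not taken from any deployed system) of a deceptive file-download wrapper: it keeps a single record of its pretend state so that the download utility and the directory listing tell the same lie, and it delays its answers to roughly the time a real transfer would take.

    import random
    import time

    PRETEND_FILES = {}          # files we claim to have stored but never wrote to disk

    def plausible_delay(size_bytes):
        """Sleep roughly as long as a real transfer of this size would take (Principle 2)."""
        seconds = size_bytes / 1.0e6                        # assume ~1 MB/s apparent bandwidth
        time.sleep(seconds * random.uniform(0.8, 1.2))      # jitter: not too fast, not too slow

    def fake_download(name, size_bytes):
        """Pretend to download a suspicious file without ever storing it."""
        plausible_delay(size_bytes)
        PRETEND_FILES[name] = size_bytes
        return "%s: %d bytes transferred" % (name, size_bytes)

    def fake_listing(real_names):
        """Directory listing that repeats the same lie the download utility told (Principle 4)."""
        return sorted(list(real_names) + list(PRETEND_FILES))

    if __name__ == "__main__":
        print(fake_download("rootkit.tar.gz", 2000000))
        print(fake_listing(["readme.txt", "notes.doc"]))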

 

EVALUATION OF SPECIFIC DECEPTION TYPES

 

Given the above principles, let us consider specific kinds of deception for information systems under warfare-like attacks.  Several taxonomies of deception in warfare have been proposed, of which that of (Dunnigan and Nofi, 2001) is representative.  Figure 1 shows the spectrum of these methods, and Table 1 summarizes our assessment of them (10 = most appropriate, 0 = inappropriate).

 

  1. Concealment ('hiding your forces from the enemy')
  2. Camouflage ('hiding your troops and movements from the enemy by artificial means')
  3. False and planted information (disinformation, 'letting the enemy get his hands on information that will hurt him and help you')
  4. Lies ('when communicating with the enemy')
  5. Displays ('techniques to make the enemy see what isn't there')
  6. Ruses ('tricks, such as displays that use enemy equipment and procedures')
  7. Demonstrations ('making a move with your forces that implies imminent action, but is not followed through')
  8. Feints ('like a demonstration, but you actually make an attack')
  9. Insight ('deceive the opponent by outthinking him')

 

We evaluate each of these in order.

 

Figure 1: The spectrum of deception types.

 

 

Table 1: Summary of our assessment of deception types in information-system attack and defense.

Deception type               Useful for accomplishing an        Useful in defending against an
                             information-warfare attack?        information-warfare attack?

concealment of resources                  2                                  2
concealment of intentions                 7                                 10
camouflage                                5                                  3
disinformation                            2                                  4
lies                                      1                                 10
displays                                  1                                  6
ruses                                    10                                  1
demonstrations                            3                                  1
feints                                    6                                  3
insights                                  8                                 10

 

Concealment

 

Concealment for conventional military operations uses natural terrain features and weather to hide forces and equipment from an enemy.  A cyber-attacker can try to conceal their suspicious files in little-visited places in an information system.  But this will not work very well because automated tools can quickly find things in cyberspace, and intrusion-detection systems can automatically check for suspicious activity.  'Steganography', or putting hidden secrets in innocent-looking information, is useful only for data, not programs.  But concealment of intentions is important for both attack and defense in cyberspace, especially defense, where it is unexpected.

 

Camouflage

 

Camouflage aims to deceive the senses artificially.  Examples are aircraft with muffled engines, devices for dissipating heat signatures, and flying techniques that minimize enemy detection efforts (Latimer, 2001).  Attackers of a computer system can camouflage themselves by behaving as legitimate users, but this is not very useful because most computer systems do not track user history very deeply.  But camouflaged malicious software has become common in unsolicited ('spam') electronic mail, as for instance email saying 'File you asked for' or 'Read this immediately' with a virus-infected attachment.  Such camouflage could be adapted for information warfare.  Defensive camouflage is not very useful because legitimate users as well as attackers will be impeded by moving or renaming resources and commands, and camouflage won't protect against many attacks, such as buffer overflows to gain access privileges.

 

False and planted information

The Mincemeat example used false planted information, and false 'intelligence' could similarly be planted on computer systems or Web sites to divert or confuse attackers or even defenders in a 'campaign of disinformation'.  Scam Web sites (Mintz, 2002) use a form of disinformation.  But most false information about a computer system is easy to check because software can help: A honeypot is not hard to recognize.  Disinformation must not be easily disprovable, and that is hard.  Attackers are also unlikely to read disinformation during an attack, so it won't help once an attack is started, only in preparations for one.  And a single discovered lie can make an enemy mistrustful of all your other statements, just as one mistake can destroy the illusion in stage magic (Tognazzini, 1993).

 

Lies

 

Spreading lies and rumors is as old as warfare itself.  The Soviets during the Cold War used disinformation by repeating a lie often, through multiple channels, until it seemed to be the truth.  This was very effective in exaggerating Soviet military capabilities during the 1970s and 1980s (Dunnigan and Nofi, 2001).

 

Lies might help an attacker a little, though stealth is more effective.  However, outright lies about information systems are often an easy and useful defensive deceptive tactic.  Users of an information system assume that, unlike with people, everything the system tells them is true.  And users of today's complex operating systems like Microsoft Windows are well accustomed to annoying and seemingly random error messages that prevent them from doing what they want.  The best things to lie about could be the most basic things that matter to an attacker: The presence of files and the ability to open and use them.  So a computer system could issue false error messages when asked to do something suspicious, or could lie that it cannot download or open a suspicious file when it really can (Michael et al, 2003).
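
A minimal sketch of this tactic, with invented patterns and error texts (not from any real system), might intercept requests and answer suspicious ones with a routine-sounding false error instead of carrying them out or refusing them openly.

    import random

    SUSPICIOUS_PATTERNS = (".dll", "passwd", "rootkit", "/etc/shadow")   # illustrative only

    FALSE_ERRORS = [
        "Error 112: insufficient disk space to complete operation.",
        "The file is locked by another process. Try again later.",
        "Network timeout while contacting the file server.",
    ]

    def looks_suspicious(path):
        return any(p in path.lower() for p in SUSPICIOUS_PATTERNS)

    def open_file(path, real_open):
        """Open normally, but lie with a plausible error for suspicious targets."""
        if looks_suspicious(path):
            return random.choice(FALSE_ERRORS)      # the lie: claim we cannot comply
        return real_open(path)

    if __name__ == "__main__":
        print(open_file("plans/rootkit.tar.gz", open))   # prints one of the false errors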

 

Displays

 

Displays aim to make the enemy see what isn’t there.  Dummy positions, decoy targets, and battlefield noise fabrication are all examples.  Past Iraqi deception regarding 'weapons of mass destruction' used this idea (Kay, 1995).  Clandestine activity was hidden in declared facilities; facilities had trees screening them and road networks steering clear of them; power and water feeds were hidden to mislead about facility use; facility operational states were disguised by a lack of visible security; and critical pieces of equipment were moved at night.  Additionally, the Iraqis distracted inspectors with busy schedules, generous hospitality, cultural tourism, and accommodations in lovely hotels far from inspection sites, or simply took inspectors to sites other than the ones they asked to see.

 

Stealth is more valuable than displays in attacking an information system.  Defenders don't care what an attacker looks like because they know attacks can come from any source.  But defensively, displays have several valuable uses for an information system.  One use is to make the attacker see imaginary resources, as with a fake directory Web site we created.  It looks like a Web portal to a large directory (Figure 2), displaying a list of typical-sounding files and subdirectories, but everything is fake.  The user can click on subdirectory names to see plausible subdirectories, and can click on file names to see what appear to be encrypted files in some cases (the bottom of Figure 2) and image-caption combinations in other cases (the top right of Figure 2).  But the 'encryption' is just a random number generator, and the image-caption pairs are drawn randomly from a complete index of all images and captions at our school, so they are unrelated to the names used in the directory.  And for some of the files it claims 'You are not authorized to view this page' or gives another error message when asked to open them.  The site is a prototype of a way to entice spies by encouraging them to think there are secrets there and to draw surprising connections between concepts, thereby leading them to waste further time there.
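
The following rough sketch, with invented word lists and names (it is not the code of our prototype), suggests how such a display can generate plausible-looking listings and 'encrypted' contents from nothing more than a random-number generator.

    import random
    import string

    NOUNS = ["budget", "roster", "deployment", "contract", "schedule", "summary"]
    EXTS = [".doc", ".xls", ".txt", ".dat"]

    def fake_listing(n_files=8, n_dirs=3):
        """Invent plausible-sounding file and subdirectory names."""
        files = [random.choice(NOUNS) + str(random.randint(1, 99)) + random.choice(EXTS)
                 for _ in range(n_files)]
        dirs = [random.choice(NOUNS) + "_archive" for _ in range(n_dirs)]
        return sorted(set(dirs)), sorted(set(files))

    def fake_encrypted_content(n_chars=400):
        """Random characters standing in for 'encrypted' file contents."""
        return "".join(random.choice(string.ascii_letters + string.digits)
                       for _ in range(n_chars))

    if __name__ == "__main__":
        dirs, files = fake_listing()
        print("Subdirectories:", dirs)
        print("Files:", files)
        print("Opening", files[0], "->", fake_encrypted_content(60))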

 

Another defensive use of displays is to confuse the attacker's damage assessment.  Simply delaying a response to an attacker may make them think that they have significantly slowed your system, as is typical with 'denial-of-service' attacks (Julian et al, 2003).  Unusual characters typed by the attacker or attempts to overflow input boxes (classic attack methods for many kinds of software) could initiate pop-up windows that seem to represent debugging facilities or other system-administrator tools, as if the user has 'broken through' to the operating system.  Computer viruses and worms often have distinctive symptoms that are not hard to simulate, such as system slowdowns, distinctive patterns of vandalism to files, and so on.  Once we have detected a viral attack, a deceptive system response can remove the virus and then simulate its effects for the attacker, much like faking the damage from bombing a military target.
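
As one hedged illustration (hypothetical names, no particular server framework assumed), a defender could fake the symptoms of a successful denial-of-service attack by adding a growing artificial delay to every response once an attack is detected.

    import time

    class FakeSlowdown:
        """Pretend the system is being overwhelmed by progressively delaying responses."""

        def __init__(self):
            self.under_attack = False
            self.extra_delay = 0.0

        def mark_attack_detected(self):
            self.under_attack = True

        def respond(self, handler, *args):
            if self.under_attack:
                self.extra_delay = min(self.extra_delay + 0.5, 10.0)   # appear to degrade steadily
                time.sleep(self.extra_delay)
            return handler(*args)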

 

 


 

Figure 2: Example fake directory and file display from our software.

 

 

Ruses

 

Ruses attempt to make an opponent think he is seeing his own troops or equipment when, in fact, he is confronting the enemy (Bell and Whaley, 1991).  Ruses can be the flying of false flags at sea or wearing of captured enemy uniforms.  One kind involves making friendly forces think their own forces are the enemy's.  Modern ruses can use electronic means, like impersonators transmitting orders to enemy troops.

 

Most attacks on computer systems are ruses that amount to variants of the ancient idea of sneaking your men into the enemy's fortress by disguising them, as with the original Trojan Horse.  Attackers can pretend to be system administrators, and software with malicious modifications can pretend to be unmodified.  Ruses are not much help defensively.  For instance, pretending to be a hacker is hard to exploit.  If you do so to offer false information to the enemy, you have the same problems discussed regarding planted information.  It is also hard to convince an enemy you are an ally unless you actually subvert a computer system since there are simple ways to confirm most effects.

 

Demonstrations

Demonstrations use military power, normally through maneuvering, to distract the enemy. There is no intention of following through with an attack immediately.  In 1991, General Schwarzkopf used deception to convince Iraq that a main attack would be directly into Kuwait, supported by an amphibious assault (Scales, 1998).  Aggressive ground-force patrolling, artillery raids, amphibious feints, ship movements, and air operations were part of the deception.  Throughout, ground forces engaged in reconnaissance and counter-reconnaissance operations with Iraqi forces to deny the Iraqis information about actual intentions.

 

Demonstrations of the strength of an attacker's methods or a defender's protections are likely to be counterproductive for information systems: An adversary gains a greater sense of achievement by subverting a more impressive opponent.  But bragging might encourage attacks on a honeypot and generate additional useful data.

 

Feints

 

Feints are similar to demonstrations except they are followed by a true attack.  They are done to distract the enemy from a main attack elsewhere.  Operation Bodyguard supporting the Allied Normandy invasion in 1944 was a clever modification of a feint.  The objective of this deception was to make the enemy think the real main attack was a feint (Breuer, 1993).  It included visual deception and misdirection, deployment of dummy landing craft, aircraft, and paratroops, fake lighting schemes, radio deception, sonic devices, and ultimately a whole fake army group consisting of 50 divisions totaling over one million men.

 

Feints by the attacker in information warfare could involve attacks with less-powerful methods, to encourage the defender to overreact and be less prepared for a subsequent main attack using a different method.  Something like this is happening right now at a strategic level, as the methods used frequently by hackers, like buffer overflows and viruses in email attachments, get overreported in the press at the expense of less-used methods like 'backdoors' that are more useful for offensive information warfare.  Defensive counterattack feints in cyberwarfare face the problem that finding an attacker is very difficult in cyberspace, since attackers can conceal their identities by coming in through chains of hundreds of sites, so threats often will not be taken seriously.  But one could use defensive feints effectively by pretending to succumb to one form of attack to conceal a second, less-obvious defense.  For instance, one could deny buffer-overflow attacks on most 'ports' (access points) of a computer system with a warning message, but pretend to allow them on a few ports for which we simulate the effects of the attack.  This is an analog of the tactic of multiple lines of defense used by, among others, the Soviets in World War II.
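
A minimal sketch of such a defensive feint, using invented port numbers and a crude length test as a stand-in for real overflow detection, could look like this: most ports reject the over-long input with a warning, while a few decoy ports pretend the overflow succeeded.

    MAX_INPUT = 1024                     # illustrative input-length limit
    DECOY_PORTS = {2222, 8081}           # ports where we simulate a successful overflow

    def handle_input(port, data):
        if len(data) <= MAX_INPUT:
            return "OK"                                      # normal processing
        if port in DECOY_PORTS:
            return "# root shell (simulated)\nsh-4.2# "      # feign a compromise
        return "WARNING: malformed input rejected and logged."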

 

Insights

 

War is often a battle of wits, of knowing the enemy better than he knows you.  A good understanding of the Israelis created the conditions for the Egyptians' early success in the 1973 Yom Kippur War.  The Egyptian planners wanted to slow the Israeli response and prevent a preemptive Israeli strike before completion of their own buildup.  The resulting deception plan cleverly capitalized on Israeli and Western perceptions of the Arabs, including a perceived inability to keep secrets, military inefficiency, and inability to plan and conduct a coordinated action.  The Israeli concept for defense of the Suez Canal assumed a 48-hour warning period would suffice, since the Egyptians could not cross the canal in strength and could be quickly and easily counterattacked.  The aim of the Egyptian deception plan was to provide plausible incorrect interpretations for a massive build-up along the canal and the Golan Heights.  It also involved progressively increasing the 'noise' that the Israelis had to contend with by a series of false alerts (Stein, 1982).

 

Attackers can use insights to figure out the enemy's weaknesses in advance.  Hacker bulletin boards support this by reporting exploitable flaws in software.  Defensive deception could involve trying to think like the attacker and figuring out the best way to interfere with common attack plans.  Methods of artificial intelligence can help (Rowe, 2003).  'Counterplanning', a systematic analysis with the objective of thwarting or obstructing an opponent's plan, can be done (Carbonell, 1981).  Counterplanning is analogous to placing obstacles along expected enemy routes in conventional warfare.

 

A good counterplan should not try to foil an attack by every possible means: We can be far more effective by choosing a few related 'ploys' and presenting them well.  Consider an attempt by an attacker to gain control of a computer system by installing their own 'rootkit', a gimmicked copy of the operating system.  This almost always involves finding vulnerable systems by exploration, gaining access to those systems at vulnerable ports, getting administrator privileges on those systems, using those privileges to download gimmicked software, installing the software, testing the software, and using the system to attack others.  We can formulate this precisely and estimate the trouble we will cause the attacker by foiling each of the steps.  Generally it is best to foil the later steps in a plan because we can force the attacker to do more work to repair the damage.  For instance, we could delete a downloaded rootkit after it has been copied.  When the attacker discovers this, they will likely need to redo all the steps of the attack from the original download.  So good deception in information warfare needs a carefully designed plan, just as in conventional warfare.
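
The step costs in the sketch below are invented purely for illustration, but it shows how one could tally the rework forced on the attacker by foiling a given step of the rootkit-installation plan, assuming that foiling it sends the attacker back to some earlier restart point (here, re-downloading the rootkit).

    # Attack steps in order, with made-up effort scores for the attacker.
    STEPS = ["find_vulnerable_host", "gain_port_access", "get_admin_privileges",
             "download_rootkit", "install_rootkit", "test_rootkit", "attack_others"]
    STEP_COST = dict(zip(STEPS, [3, 2, 4, 2, 2, 1, 1]))

    def rework_cost(foiled_step, restart_step="download_rootkit"):
        """Effort the attacker must repeat if we foil foiled_step and force a
        restart from restart_step (e.g. deleting the rootkit after it is copied)."""
        i, j = STEPS.index(restart_step), STEPS.index(foiled_step)
        return sum(STEP_COST[s] for s in STEPS[i:j + 1])

    if __name__ == "__main__":
        for step in STEPS[3:]:
            print(step, "->", rework_cost(step))   # later steps cost the attacker more to redo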

 

A DEEPER THEORY OF DECEPTION IN INFORMATION SYSTEMS

 

An alternative and deeper theory of deception can be developed from the theory of semantic cases in computational linguistics (Fillmore, 1968).  Every action can be associated with a set of other concepts; these are its semantic cases, a generalization of syntactic cases in language.  Various cases have been proposed as additions to the basic framework of Fillmore, of which the set of Copeck et al (1992) is the most comprehensive we have seen.  To their 28 we add four more, the upward type-supertype and part-whole links and two speech-act conditions (Austin, 1975), to get 32 altogether:

 

  • participant: agent (the person who initiates the action), beneficiary (the person who benefits), experiences (a psychological feature associated with the action), instrument (some thing that helps accomplish the action), object (what the action is done to), and recipient (the person who receives the action)
  • space: direction (of the action), location-at, location-from, location-to, location-through, and orientation (in some metric space)
  • time: frequency (of occurrence), time-at, time-from, time-to, and time-through
  • causality: cause, contradiction (what this action contradicts if anything), effect, and purpose
  • quality: accompaniment (additional object associated with the action), content (type of the action object), manner (the way in which the action is done), material (the atomic units out of which the action is composed), measure (the quantity associated with the action), order (of events), and value (the data transmitted by the action)
  • essence: supertype (generalization of the action type) and whole (of which the action is a part)
  • speech-act theory: precondition (on the action) and ability (of the agent to perform the action)

 

Our claim is that deception operates on an action to change its perceived associated case values.  For instance, the original Trojan horse modified the purpose of a gift-giving action (it was an attack not a peace offering), its accompaniment (the gift had hidden soldiers), and 'time-to' (the war was not over).  Similarly, an attacker masquerading as the administrator of a computer system is modifying the agent case and purpose case associated with their actions on that system.  For the fake-directory prototype that we built, we use deception in object (it isn't a real directory interface), 'time-through' (some responses are deliberately delayed), cause (it lies about files being too big to load), preconditions (it lies about necessary authorization), and effect (it lies about the existence of files and directories). 
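
A small sketch (ours for illustration, not an implementation described in the paper) makes the claim concrete: represent an action as a set of case values, and a deception as the substitution of perceived values for true ones, here for the Trojan-horse example above.

    # An action as a dictionary of semantic-case values (the Trojan-horse example).
    true_action = {
        "action": "give_gift",
        "purpose": "attack",
        "accompaniment": "hidden soldiers",
        "time-to": "war continues",
    }

    # A deception replaces some true case values with the values the victim should perceive.
    deception = {
        "purpose": "peace offering",
        "accompaniment": "none",
        "time-to": "war is over",
    }

    perceived_action = dict(true_action, **deception)

    if __name__ == "__main__":
        changed = [c for c in deception if true_action.get(c) != perceived_action[c]]
        print("Deceived cases:", changed)    # -> ['purpose', 'accompaniment', 'time-to']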

 

A deception can involve more than one case simultaneously, so there are many combinations.  However, not all of the above cases make sense in cyberspace, nor for our particular concern, the interaction between an attacker and the software of a computer system.  We can rule out the cases of beneficiary (the attacker is the assumed beneficiary of all activity), experiences (deception in associated psychological states doesn't matter in giving commands and obeying them), recipient (the only agents that matter are the attacker and the system they are attacking), 'location-at' (you can't 'inhabit' cyberspace), orientation (there is no coordinate system in cyberspace), 'time-from' (attacks happen anytime), 'time-to' (attacks happen anytime), contradiction (commands don't include comparisons), manner (apart from duration, there is only one way commands execute), material (everything is bits and bytes in cyberspace), and order (the order of commands or files can rarely be varied and cannot deceive either an attacker or defender).

 

In general, offensive opportunities for deception are as frequent as defensive opportunities, in cyberspace as well as in conventional warfare, but appropriate methods differ.  For instance, the instrument case is associated with offensive deceptions in cyberspace, since the attacker can choose the instrument from among email attachments, entry through an insecure port, a backdoor, regular access with a stolen password, and so on, while the defender has little control since they must use the targeted system and its data.  That is different from conventional warfare where, say, the attacker can choose the weapons used in an aerial attack but the defender also can choose among many defensive tactics like hardening targets, decoys, jamming, anti-aircraft fire, or aerial engagement.  In contrast, 'time-through' is primarily associated with defensive deceptions in cyberspace since the defending computer system controls the time it takes to process and respond to a command, and is little affected by the time it takes an attacker to issue commands to it.  But 'object' can be associated with both cyberspace offense and defense because attackers can choose to attack little-defended targets like unused ports, while defenders can substitute low-value targets like honeypots for high-value targets that the attacker thinks they are compromising.  Table 2 summarizes the suitability of the remaining 21 deception methods as we judge them on general principles.  10 indicates the most suitable, and 0 indicates unsuitable; these numbers could be refined by surveys of users or deception experiments.  This table provides a wide-ranging menu of choices for deception planners.

Table 2: Evaluation of deception methods in cyberspace (suitability for offense and for defense in information systems, with general examples).

supertype: offense 6 (pretend attack is something else); defense 0
whole: offense 8 (conceal attack in a common sequence of commands); defense 0
agent: offense 4 (pretend attacker is legitimate user or is standard software); defense 0
object: offense 8 (attack unexpected software or feature of a system); defense 5 (camouflage key targets or make them look unimportant, or disguise software as different software)
instrument: offense 7 (attack with a surprising tool); defense 0
location-from: offense 5 (attack from a surprise site); defense 2 (try to frighten attacker with false messages from authorities)
location-to: offense 3 (attack an unexpected site or port if there are any); defense 6 (transfer control to a safer machine, as on a honeynet)
location-through: offense 3 (attack through another site); defense 0
direction: offense 2 (attack backward to the site of a user); defense 4 (transfer Trojan horses back to attacker)
frequency: offense 10 (swamp a resource with tasks); defense 8 (swamp attacker with messages or requests)
time-at: offense 5 (put false times in event records); defense 2 (associate false times with files)
time-through: offense 1 (delay during attack to make it look as if the attack was aborted); defense 8 (delay in processing commands)
cause: offense 1 (doesn't matter much); defense 9 (lie that you can't do something, or do something not asked for)
purpose: offense 3 (lie about reasons for needing information); defense 7 (lie about reasons for asking for authorization data)
preconditions: offense 5 (give impossible commands); defense 8 (give false excuses for being unable to do something)
ability: offense 2 (pretend to be an inept attacker or have inept attack tools); defense 5 (pretend to be an inept defender or have easy-to-subvert software)
accompaniment: offense 9 (a Trojan horse installed on a system); defense 6 (software with a Trojan horse that is sent to attacker)
content: offense 6 (redefine executables; give false file-type information); defense 7 (redefine executables; give false file-type information)
measure: offense 5 (send data too large to easily handle); defense 7 (send data too large or requests too hard to attacker)
value: offense 3 (give arguments to commands that have unexpected consequences); defense 9 (systematically misunderstand attacker commands)
effect: offense 3 (lie as to what a command does); defense 10 (lie as to what a command did)

 

COSTS AND BENEFITS OF DECEPTION

 

Deception in an information system does have disadvantages that must be outweighed by advantages.  Deception may antagonize an enemy once discovered and may provoke them to do more damage.  But it may also reveal more of their attack methods, since it encourages them to try methods other than the ones they intended (and probably less successfully, if those methods are unplanned).  Some of this effect can be obtained just by threatening deception.  If, say, word gets out to hacker bulletin boards that US command-and-control systems practice deception, then attackers of those systems will tend more to misinterpret normal system behavior and engage in unnecessary countermeasures.  Thus widespread dissemination of reports of research on deceptive capabilities of information systems (though not their 'order of battle' or assignment to specific systems) might be a wise policy.

 

Deceptive methods can also provoke and anger legitimate users who encounter them.  While we should certainly try to target deception carefully, there will always be borderline cases in which legitimate users of a computer system do something atypical that could be construed as suspicious.  The same problem is faced by commercial 'intrusion-detection systems' (Lunt, 1993) that check computers and networks for suspicious behavior, since they are by no means perfect either: You can set their alarm thresholds low and get many false alarms, or set them high and miss many real attacks.  As with all military options, the danger must be balanced against the benefits.

 

CONCLUSION

 

It is simplistic to think of information warfare as just another kind of warfare.  We have seen that a careful consideration of deception strategy and tactics shows that many ideas from conventional warfare apply, but not all, and often those that apply do so in surprising ways.  As in conventional warfare, careful planning will be necessary for effective deception in cyberwar, and the two taxonomies we give here provide useful planning tools.

 

REFERENCES

 

Austin, J. L. (1975).  How To Do Things With Words (2nd ed., ed. by J.O. Urmson & M. Sbis). Oxford: Oxford University Press.

 

Bell, J. B., & Whaley, B. (1991).  Cheating and Deception.  New Brunswick, NJ: Transaction Publishers.

 

Bok, S. (1978).  Lying: Moral Choice in Public and Private Life.  New York: Pantheon.

 

Breuer, W. B. (1993).  Hoodwinking Hitler: The Normandy Deception. London: Praeger.

 

Carbonell, J. (1981). Counterplanning: A strategy-based model of adversary planning in real-world situations.  Artificial Intelligence, 16, pp. 295-329.

 

von Clausewitz, K. (1993).  On War (trans. M. Howard & P. Paret).  New York: Everyman's Library.

 

Copeck, T., Delisle, S., & Szpakowicz, S. (1992).  Parsing and case interpretation in TANKA.  Conference on Computational Linguistics, Nantes, France, pp. 1008-1023.

 

Dunnigan, J. F., & Nofi, A. A. (2001).  Victory and Deceit, 2nd edition: Deception and Trickery in War.  San Jose, CA: Writers Press Books.

 

Fillmore, C. (1968).  The case for case.  In Universals in Linguistic Theory, ed. Bach & Harns, New York: Holt, Rinehart, & Winston.

 

Fowler, C. A., & Nesbit, R. F. (1995).  Tactical deception in air-land warfare.  Journal of Electronic Defense, 18(6) (June), pp. 37-44 & 76-79.

 

The Honeynet Project (2002).  Know Your Enemy. Boston: Addison-Wesley.

 

Julian, D., Rowe, N., & Michael, J. B. (2003).  Experiments with deceptive software responses to buffer-based attacks.  Proc. 2003 IEEE-SMC Workshop on Information Assurance, West Point, NY, June, pp. 43-44.

 

Kay, D. A. (1995).  Denial and deception practices of WMD proliferators: Iraq and beyond.  The Washington Quarterly.

 

Latimer, J. (2001).  Deception in War.  New York: The Overlook Press.

 

Lunt, T. F. (1993).  A survey of intrusion detection techniques.  Computers & Security, 12(4) (June), pp. 405-418.

 

Michael, B., Auguston, M., Rowe, N., & Riehle, R. (2002).  Software decoys: intrusion detection and countermeasures.  Proc. 2002 Workshop on Information Assurance, West Point, NY, June.

Michael, J. B., Fragkos, G., & Auguston, M. (2003).  An experiment in software decoy design: Intrusion detection and countermeasures via system call instrumentation.  Proc. IFIP 18th International Information Security Conference, Athens, Greece, May.

Miller, G. R., & Stiff, J. B. (1993).  Deceptive Communications.  Newbury Park, UK: Sage Publications.

Mintz, A. P. (ed.) (2002).  Web of Deception: Misinformation on the Internet.  New York: CyberAge Books.

Montagu, E. (1954).  The Man Who Never Was.  Philadelphia: Lippincott.

Rowe, N. C. (2003).  Counterplanning deceptions to foil cyber-attack plans.  Proc. 2003 IEEE-SMC Workshop on Information Assurance, West Point, NY, June, pp. 203-211.

 

Scales, R. (1998).  Certain Victory: The US Army in the Gulf War.  New York: Brassey's.

 

Stein, J. G. (1982).  Military deception, strategic surprise, and conventional deterrence: a political analysis of Egypt and Israel, 1971-73.  In Military Deception and Strategic Surprise, ed. Gooch, J., and Perlmutter, A., London: Frank Cass, pp. 94-121.

Tognazzini, B. (1993).  Principles, techniques, and ethics of stage magic and their application to human interface design.  Proc. Conference on Human Factors and Computing Systems 1993, Amsterdam, April, pp. 355-362.