Game artificial intelligence refers to techniques used in computer and video games to produce the illusion of intelligence in the behavior of non-player characters (NPCs). The techniques used typically draw upon existing methods from the field of artificial intelligence (AI). However, the term game AI is often used to refer to a broad set of algorithms that also include techniques from control theory, robotics, computer graphics and computer science in general.
Since game AI is centered on the appearance of intelligence and good gameplay, its approach is very different from that of traditional AI; hacks and cheats are acceptable and, in many cases, the computer's abilities must be toned down to give human players a sense of fairness. This is true, for example, in first-person shooter games, where NPCs' otherwise perfect movement and aiming would be beyond human skill.
Game playing was an area of research in AI from its inception. In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess. These were among the first computer programs ever written. Arthur Samuel's checkers program, developed in the mid-1950s and early 1960s, eventually achieved sufficient skill to challenge a world champion. Work on checkers and chess would culminate in the defeat of Garry Kasparov by IBM's Deep Blue computer in 1997.
The first video games developed in the 1960s and early 1970s, like Spacewar!, Pong and Gotcha (1973), were games implemented on discrete logic and strictly based on the competition of two players, without AI.
Games that featured a single-player mode with enemies started appearing in the 1970s. The first notable ones for the arcade included the 1974 Atari games Qwak (duck hunting) and Pursuit (fighter aircraft dogfighting simulator). Two text-based computer games from 1972, Hunt the Wumpus and Star Trek, also had enemies. Enemy movement was based on stored patterns. The incorporation of microprocessors allowed more computation and random elements to be overlaid onto movement patterns.
The idea was used by Space Invaders (1978), sporting an increasing difficulty level, distinct movement patterns, and in-game events dependent on hash functions based on the player's input. Galaxian (1979) added more complex and varied enemy movements.
Pac-Man (1980) applied these patterns to maze games, with the added quirk of a different personality for each enemy, and Karate Champ (1984) later applied them to fighting games, although its poor AI prompted the release of a second version.
Games like Madden Football, Earl Weaver Baseball and Tony La Russa Baseball all based their AI on an attempt to duplicate on the computer the coaching or managerial style of the selected celebrity. Madden, Weaver and La Russa all did extensive work with these game development teams to maximize the accuracy of the games. Later sports titles allowed users to "tune" variables in the AI to produce a player-defined managerial or coaching strategy.
The emergence of new game genres in the 1990s prompted the use of formal AI tools like finite state machines. Real-time strategy games taxed the AI with many objects, incomplete information, pathfinding problems, real-time decisions and economic planning, among other things. The first games of the genre had notorious problems: Herzog Zwei, for example, had almost broken pathfinding and very basic three-state state machines for unit control, while Dune II made a beeline for the player's base and relied on numerous cheats. Later games in the genre exhibited much better AI.
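A three-state machine for unit control of the kind described above can be sketched in a few lines. This is an illustrative example, not the actual logic of any shipped game; the states and transition conditions are assumptions chosen for clarity.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    MOVE = auto()
    ATTACK = auto()

class UnitFSM:
    """Minimal three-state controller: idle until given a destination,
    move toward it, and attack any enemy that comes into range."""
    def __init__(self):
        self.state = State.IDLE

    def update(self, has_destination: bool, enemy_in_range: bool) -> State:
        # Transitions are evaluated in priority order: combat first,
        # then movement, otherwise fall back to idling.
        if enemy_in_range:
            self.state = State.ATTACK
        elif has_destination:
            self.state = State.MOVE
        else:
            self.state = State.IDLE
        return self.state

fsm = UnitFSM()
assert fsm.update(False, False) is State.IDLE
assert fsm.update(True, False) is State.MOVE
assert fsm.update(True, True) is State.ATTACK
```

Even a machine this simple explains the era's problems: with only three states and no memory, a unit cannot express behavior like "retreat while wounded" or "regroup before attacking", which is why early RTS unit AI looked so mechanical.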
Later games have used nondeterministic AI methods, ranging from the first use of neural networks in a video game in Battlecruiser 3000AD (1996), to the emergent behavior and evaluation of player actions in games like Creatures or Black & White.
Goldeneye 007 (1997) was one of the first FPSs to use AI that would react to the player's movements and actions, as well as take cover, perform rolls to avoid being shot, and throw grenades at appropriate times. Its creators later expanded on this in the title Perfect Dark, where enemies would run for dead teammates' weapons if the player shot a weapon out of an enemy's hand. The only unfairness in both games was that enemies always knew where the player was, even if no one saw where the player hid.
Halo (2001) featured AI that could use vehicles and some basic team actions. The AI could recognize threats such as grenades and incoming vehicles and move out of danger accordingly.
Far Cry (2004) exhibited very advanced AI for its time, although this made minor glitches more apparent. The enemies would react to the player's playing style, try to surround the player when possible, and use real-life military tactics. The enemies did not have "cheating" AI, in the sense that they did not always know exactly where the player was; instead, they remembered the player's last known position and worked from there.
F.E.A.R. (2005) introduced advanced character AI that used real-time cover, tactics and team coordination against the player. AI characters worked as a team, tracking where their teammates were and making tactical decisions based on that. The AI could recognize when it was outgunned and would even hide from the player, only to attack later from behind. F.E.A.R.'s AI was praised by critics as a major benchmark in game AI.
The Elder Scrolls IV: Oblivion (2006) uses very complex AI. NPCs in the game follow round-the-clock schedules and pursue their goals in their own ways: rather than standing in one place, they eat, sleep, and go to their jobs every day. Events happening in the game can also change their daily routines, and they can grow from friendly townsfolk into deadly assassins.
Left 4 Dead (2008) uses a new artificial intelligence technology dubbed the Director, which procedurally generates a different experience for the players each time the game is played. The game's developers call the Director's approach "procedural narrative": instead of a difficulty level that simply ramps up to a constant plateau, the AI analyzes how the players have fared in the game so far and tries to add subsequent events that give them a sense of narrative.
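The core of such a pacing system can be sketched as a feedback loop over an "intensity" estimate. This is a hedged reconstruction of the general idea, not Valve's implementation: the variable names, the decay rate, and the spawn threshold are all assumptions made for illustration.

```python
def director_tick(intensity: float, recent_damage: float,
                  decay: float = 0.1, threshold: float = 0.3):
    """One pacing step. `intensity` estimates how stressed the players
    are (0 = calm, 1 = overwhelmed). Damage taken raises it; otherwise
    it decays over time. A new encounter is only scheduled once the
    players have had a quiet stretch."""
    intensity = min(1.0, max(0.0, intensity + recent_damage - decay))
    spawn_event = intensity < threshold
    return intensity, spawn_event

# Players under fire: intensity climbs, and no new events are added.
i, spawn = director_tick(0.5, recent_damage=0.4)
assert (round(i, 2), spawn) == (0.8, False)

# A quiet stretch: intensity decays until the director acts again.
i, spawn = director_tick(0.3, recent_damage=0.0)
assert (round(i, 2), spawn) == (0.2, True)
```

The design point is that difficulty becomes a function of observed player state rather than a preset curve, which is what distinguishes this kind of pacing from a simple difficulty slider.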
Project Natal, an add-on peripheral for the Xbox 360 unveiled at E3 on June 1, 2009, was shown with a technical demo called "Milo", developed by Lionhead Studios. It lets players interact with a sophisticated AI character named Milo, showcasing voice recognition (and, in turn, full conversation) and the ability to pass real-life items "through" the screen into virtual items.
AI has continued to improve, with aims set on a player being unable to tell the difference between computer and human players.
Some game programmers consider any technique that is used to help create the illusion of intelligence to be part of a game's AI. This view is controversial because it includes techniques that are also widely used outside of a game's AI engine. For example, information about potential future collisions is an important input to algorithms that help create characters that are clever enough to avoid bumping into things. But the same collision detection techniques are also commonly needed to implement a game's physics. Similarly, line of sight test results are usually important inputs to AI targeting decisions, but are also widely used inside the rendering engine. Another example is scripting, which can be a convenient tool for all aspects of game development, but is often closely associated with controlling NPCs' behavior.
Purists complain that the "AI" in the term "game AI" overstates its worth, as game AI is not about intelligence, and shares few of the objectives of the academic field of AI. Whereas "real" AI addresses fields of machine learning, decision making based on arbitrary data input, and even the ultimate goal of strong AI that can reason, "game AI" often consists of a half-dozen rules of thumb, or heuristics, that are just enough to give a good gameplay experience.
Game developers' increasing awareness of academic AI and a growing interest in computer games by the academic community is causing the definition of what counts as AI in a game to become less idiosyncratic. Nevertheless, significant differences between different application domains of AI mean that game AI can still be viewed as a distinct subfield of AI. In particular, the ability to legitimately solve some AI problems in games by cheating creates an important distinction. For example, inferring the position of an unseen object from past observations can be a difficult problem when AI is applied to robotics, but in a computer game an NPC can simply look up the position in the game's scene graph. Such cheating can lead to unrealistic behavior and so is not always desirable. But its possibility serves to distinguish game AI and leads to new problems to solve, such as when and how to use cheating.
Game AI/heuristic algorithms are used in a wide variety of quite disparate fields inside a game. The most obvious is the control of any NPCs in the game, although scripting is currently the most common means of control. Pathfinding, another common use for AI and widely seen in real-time strategy games, is the method for determining how to get an NPC from one point on a map to another, taking into consideration the terrain, obstacles and possibly "fog of war". Game AI is also involved with dynamic game difficulty balancing, which consists of adjusting the difficulty of a video game in real time based on the player's ability.
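Pathfinding on a tile map is typically solved with A* search. The sketch below is a minimal grid version with a Manhattan-distance heuristic; real games add terrain costs, hierarchical maps, and fog-of-war handling on top of this core.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid. grid[y][x] == 1 marks an obstacle;
    coordinates are (x, y). Manhattan distance is admissible here
    because every step costs 1 and moves are axis-aligned."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            in_bounds = 0 <= ny < len(grid) and 0 <= nx < len(grid[0])
            if in_bounds and not grid[ny][nx]:
                ng = g + 1
                if ng < best_g.get((nx, ny), float("inf")):
                    best_g[(nx, ny)] = ng
                    heapq.heappush(
                        open_set,
                        (ng + h((nx, ny)), ng, (nx, ny), path + [(nx, ny)]))
    return None  # no route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (0, 2))
assert path[0] == (0, 0) and path[-1] == (0, 2)
assert len(path) == 7  # forced to detour around the wall
```

The heuristic is what separates A* from plain Dijkstra search: it biases expansion toward the goal, which matters when an RTS has to route hundreds of units per frame.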
The concept of emergent AI has recently been explored in games such as Creatures, Black & White and Nintendogs, and in toys such as Tamagotchi. The "pets" in these games are able to "learn" from actions taken by the player, and their behavior is modified accordingly. While these choices are drawn from a limited pool, the result often gives the desired illusion of an intelligence on the other side of the screen.
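One simple way such "learning" can be implemented is reinforcement of weighted random choice: rewarded behaviors become more probable, scolded ones less so. This toy sketch is an assumption about the general technique, not the mechanism of any particular title.

```python
import random

class Pet:
    """Toy learning pet: each trick has a weight, and the pet picks
    tricks at random in proportion to those weights. Player feedback
    shifts the weights, so rewarded tricks show up more often."""
    def __init__(self, tricks):
        self.weights = {t: 1.0 for t in tricks}

    def act(self, rng=random):
        tricks = list(self.weights)
        return rng.choices(tricks, weights=self.weights.values())[0]

    def reward(self, trick, amount=1.0):
        self.weights[trick] += amount      # reinforce

    def scold(self, trick, factor=0.5):
        self.weights[trick] *= factor      # suppress

pet = Pet(["sit", "roll", "bark"])
for _ in range(5):
    pet.reward("sit")
pet.scold("bark")
assert pet.weights["sit"] == 6.0
assert pet.weights["bark"] == 0.5
```

The pool of behaviors is fixed, exactly as the text notes; the illusion of learning comes entirely from the shifting distribution over that pool.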
Cheating AI (also called rubberband AI) is a term used to describe a situation where the AI has bonuses over the players, such as having more hit points, driving faster, or ignoring fog of war. The use of cheating in AI shows the limitations of the "intelligence" achievable artificially; generally speaking, in games where strategic creativity is important, humans could easily beat the AI after minimal trial and error if it were not for the bonuses. In the context of AI programming, cheating refers only to privileges given specifically to the AI; it does not include the inhuman swiftness and accuracy natural to a computer, although a player might call that "cheating".
Of course, in reality the human always has the "disadvantage" of having to rely on visual and auditory input to infer an abstract game situation, while the AI has direct, though possibly limited, access to the abstractions of the game engine. However, nobody seriously considers that a true game AI should have to include visual processing algorithms, especially since human vision is a giant leap beyond what is currently possible for computer vision.
One common example of cheating AI is found in many racing games. If an AI opponent falls far enough behind the rest of the drivers it receives a boost in speed or other attributes, enabling it to catch up and/or again become competitive. This technique is known as "rubber banding" because it allows the AI character to quickly snap back into a competitive position. A similar method is also used in sports games such as the Madden NFL series. In more advanced games, NPC competitiveness may be achieved through dynamic game difficulty balancing, which can be considered fairer though still technically a cheat, since the AI characters still get a boost, even though they respect the virtual world's rules.
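The rubber-banding described above amounts to scaling an AI driver's attributes by its gap to the leader. The following sketch illustrates the idea under assumed numbers (the gain and cap are invented for the example, not taken from any real racing game):

```python
def rubber_band_speed(base_speed, my_distance, leader_distance,
                      gain=0.05, max_boost=1.3):
    """Scale an AI driver's speed by how far it trails the leader.
    The bigger the gap, the bigger the boost, capped so the catch-up
    never looks too blatant. Distances are along the track."""
    gap = max(0.0, leader_distance - my_distance)
    boost = min(max_boost, 1.0 + gain * gap / 100.0)
    return base_speed * boost

# Close behind the leader: nearly no help.
assert round(rubber_band_speed(100.0, 990.0, 1000.0), 2) == 100.5

# Far behind: the boost applies but is capped at +30%.
assert rubber_band_speed(100.0, 0.0, 10000.0) == 130.0
```

The cap is the key tuning knob: without it the snap-back becomes obvious to players, which is exactly the complaint the term "rubber banding" encodes.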
- General Game Playing
- Computer chess
- Computer Go
- Action selection
- Computer game bot
- Simulated reality
- Dynamic game difficulty balancing
- See "A Brief History of Computing" at AlanTuring.net.
- Crevier (1993), p. 58.
- McCorduck (2004), pp. 480–483.
- Schwab (2004), pp. 97–112.
- Schwab (2004), p. 107.
- "Left 4 Dead". Valve Corporation.
- "Left 4 Dead Hands-on Preview". Left 4 Dead 411.
- Newell, Gabe (2008-11-21). "Gabe Newell Writes for Edge". edge-online.com. Retrieved 2008-11-22. "The events are trying to give them a sense of narrative. We look at sequences of events and try to take what their actions are to generate new sequences. If they’ve been particularly challenged by one kind of creature then we can use that information to make decisions about how we use that creature in subsequent encounters. This is what makes procedural narrative more of a story-telling device than, say, a simple difficulty mechanism."
- Scott, Bob (2002). "The Illusion of Intelligence". In Rabin, Steve. AI Game Programming Wisdom. Charles River Media. pp. 19–20.
- Bourg; Seemann (2004). AI for Game Developers. O'Reilly & Associates. ISBN 0-596-00555-5.
- Buckland (2002). AI Techniques for Game Programming. Muska & Lipman. ISBN 1-931841-08-X.
- Buckland (2004). Programming Game AI By Example. Wordware Publishing. ISBN 1-55622-078-2.
- Champandard (2003). AI Game Development. New Riders. ISBN 1-59273-004-3.
- Funge (1999). AI for Animation and Games: A Cognitive Modeling Approach. A K Peters. ISBN 1-56881-103-9.
- Funge (2004). Artificial Intelligence for Computer Games: An Introduction. A K Peters. ISBN 1-56881-208-6.
- Millington (2005). Artificial Intelligence for Games. Morgan Kaufmann. ISBN 0-12-497782-0.
- Schwab (2004). AI Game Engine Programming. Charles River Media. ISBN 1-58450-344-0.
- Smed and Hakonen (2006). Algorithms and Networking for Computer Games. John Wiley & Sons. ISBN 0-470-01812-7.