#1 Should AI be stupid?
Posted: Tue May 27, 2008 12:16 pm
So offers Next Gen
Thoughts? I'm tempted to agree: it's almost magically easy to write a strong AI, especially in a tactical FPS or generally anything where the complexity of thinking ahead is limited to a few seconds. You just give it all the information – it can know far more than you, and it never forgets. It's far harder to create something with human faults and limitations.

Pandemonium breaks out. Alarm bells sound, red warning lights flash and guards come running to the scene of the crime. Meanwhile, you, the perpetrator, slink unnoticed into some shadowy crevice. Less than a minute later, peace has been restored. Guards return to their patrols – no more aware of you now than they were before you shot one of their colleagues in broad daylight. These guys are idiots. Of course, it is the very fact that they are such short-sighted, amnesiac goons, who don’t think to peer too closely into shadows or perform a systematic sweep of their surroundings, that allows the player to overcome the tremendous odds stacked against them. It makes the game possible. It also makes it a game.
Such acts of idiocy are the articles of a familiar gaming language; an understanding that the game, no matter the realism of the setting, is a system of behaviors and mechanisms that can be understood, predicted and exploited. The stealth genre has long made use of guards who perform their duties with strict, inevitable fallibility, leaving openings for a nimble-footed spy, thief or assassin to slip past unnoticed. For all the grim texturing of Snake’s world, his opponents’ behavior conforms to a decidedly inhuman rule-set – it’s hard to think that real paramilitary goons would be so enraptured by the discovery of a gentleman’s magazine on the floor.
The strategy genre too is reliant on such foolish enemy decisions; the beauty of Advance Wars comes down to the fact that you can repeatedly exploit features of the AI – it will always pursue a cheap, unmanned vehicle with its full force, allowing you to create diversions and bore holes in its defense.
How appropriate an AI is for a game, then, does not run parallel to its absolute intelligence – and it’s not always simply because the AI has some design mechanic to fulfill, such as exposing tactical flaws by chasing empty APCs.
“Designing an AI that takes optimal advantage of its knowledge about the game and maximizes its ability to solve problems results in AI that isn’t fun to play against,” says Epic Games’ Steve Polge, the man responsible for the Reaper Bot, a popular multiplayer AI made for Quake, and for much of the AI development on the Unreal Tournament series.
AIs may find it difficult to react in credibly human ways, may struggle with tactical complexity, but they tend to be a pretty decent shot if their skills aren’t tempered. And this has, for a long time, been the traditional approach: build a classical AI solution that maximizes some measurable objective – for example, collecting the most resources, finding the shortest distance to a goal, scoring the most points – and then either dumb it down or provide it with cheats to buff it up to the appropriate level.
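As a rough illustration of that traditional approach – a minimal Python sketch, with every name here invented for the example rather than taken from any shipped game – you compute the perfect answer first, then deliberately degrade it with a skill knob:

```python
import random

# A minimal sketch of "build it perfect, then dumb it down".
# perfect_aim and tempered_aim are invented names for illustration.

def perfect_aim(shooter_pos, target_pos):
    """The classical solution: an exact vector to the target."""
    return (target_pos[0] - shooter_pos[0],
            target_pos[1] - shooter_pos[1])

def tempered_aim(shooter_pos, target_pos, skill):
    """Degrade the optimal answer: skill=1.0 is inhumanly exact,
    lower values add proportionally larger random error."""
    dx, dy = perfect_aim(shooter_pos, target_pos)
    spread = (1.0 - skill) * 0.5          # maximum relative error at skill=0
    dx *= 1.0 + random.uniform(-spread, spread)
    dy *= 1.0 + random.uniform(-spread, spread)
    return (dx, dy)
```

The trouble, as the racing example below shows, is that a single random "spread" dial rarely fails the way a person does.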
The problem with this approach is that, if not carefully handled, it can lead to an asymmetry between the challenges facing the AI and the player that can feel unfair – a phenomenon that is well illustrated by the evolution of the racing genre. In the past, computer-controlled vehicles ran on conveyor belts – aware of the exact racing line, following it unflinchingly, never spinning off, never crashing – except when the player powered them off the track. But in order to make them possible for the player to defeat, AIs would slow excessively at corners, so that the player could inch past them in a slightly more powerful car.
However, such simplistic hobbling of the AIs’ abilities is increasingly untenable, whether it is done to fit in with a design mechanic or in order to adjust the challenge; as games become more simulatory in their presentation, as these worlds become more and more credible, so too does it become easier for unrealistic AI behavior to upset the illusion.
In the words of David Hayward, of videogame consultancy Pixel-Lab, “The closer a representation of a human is to reality, the slighter the flaws that can suddenly de-animate it.” It’s the uncanny valley phenomenon, whereby the closer approximations of humanity become more unsettlingly inhuman than those resting in abstraction, and it applies to more than just the fidelity with which human bodies are rendered – if the context in which an AI exists is realistic, but its behavior conforms to abstracted ideas of gameplay, then the result can be jarring.
Clearly, when designers dumb down an AI they now need to be smarter about the way they make it stupid. For a game like Unreal Tournament 3, in which the AI opponents must act as much like human multiplayer opponents as possible, this is particularly important.
“We spent more time working on limiting AI capabilities in human-like ways, such as aiming accuracy or world-state knowledge, than any other AI problem,” says Polge. “Before UT3, the approach we used was to determine the factors that made human players more or less likely to hit a target – like whether the target was stationary or moving, whether its movement was erratic, whether the shooter had just been knocked around by a shot, whether the shooter was stationary or moving – and use these factors to modify the magnitude of the random aiming error.
“This approach worked reasonably well in terms of mimicking how frequently a target should be hit, but it broke down in a couple of ways. The first was that at some extremes, such as [when the target was] very close or very far away, this accuracy model wasn’t as realistic. The second was that bots would miss as frequently as a human, but not in the same way. For example, when a player suddenly dodges to the side, other humans tend to miss by shooting where the player used to be going, rather than with a large spread around where the player is currently going.
“We improved the aiming model significantly in UT3 by adding reaction time to the bot’s model of where a player is going. Rather than extrapolating where a player will be when the projectile reaches their location based on the player’s current location and velocity, bots extrapolate their enemy’s position based on what they were doing a few hundred milliseconds ago – which is what humans do. This results in bot aiming ‘feeling’ much more human-like.”
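In outline, that reaction-delayed lead might look like the following Python sketch – the 250ms delay, the linear lead and all the names here are illustrative assumptions, not Epic’s actual code:

```python
from collections import deque

# Sketch of a reaction-delayed aiming model in the spirit of Polge's
# description. Buffer handling and constants are assumptions.

REACTION_DELAY = 0.25  # seconds of "human" lag before the bot reacts

class TargetTracker:
    def __init__(self):
        self.history = deque()  # (timestamp, position, velocity) snapshots

    def observe(self, t, pos, vel):
        self.history.append((t, pos, vel))
        # keep only enough history to cover the reaction delay
        while self.history and self.history[0][0] < t - REACTION_DELAY:
            self.history.popleft()

    def aim_point(self, now, projectile_travel_time):
        # Use the *stale* snapshot from ~REACTION_DELAY ago, not the
        # current one, then extrapolate it forward linearly. A sudden
        # dodge therefore makes the bot shoot where the target *used*
        # to be going - a convincingly human miss.
        t, pos, vel = self.history[0]
        lead = (now - t) + projectile_travel_time
        return tuple(p + v * lead for p, v in zip(pos, vel))
```

Called every frame with `observe(t, (x, y), (vx, vy))`, the tracker’s `aim_point` misses in the same direction a human would when the target jinks.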
Similar solutions have been developed to deal with the uncanny skillset of opponents in racing games. ‘Rubber-banding’ has been one way of addressing the issue of creating a consistently surmountable challenge, causing AI drivers to adjust their driving capabilities, or even achieve impossible speeds, in order to tax you regardless of how well or how poorly you are driving. This too has proved unconvincing at times, with considerable leads being improbably reduced in seconds and vice versa.
“Rubber-banding is an interesting artform,” says Hamish Young, a producer at Criterion who has worked on every Burnout game. “Essentially there are some cars in the pack you want to be around however badly the player plays, to encourage them to get back into the race. These are the back-markers. Then there are a group of cars in the middle who stretch between the back-markers and the pacemakers. The pacemakers are the front couple of cars and they in effect set the difficulty of a race. Over the course of the different Burnouts we have added more and more cars to the races, which means the rubber-banding can be more subtly spread across the pack.”
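In schematic form, that role-based banding could be expressed something like this – a hypothetical Python sketch, where the role names follow Young’s description but the speed factors are invented for the example:

```python
# Illustrative role-based rubber-banding. Progress is a 0..1 fraction
# of the track; the scaling constants are assumptions, not Criterion's.

def target_speed(role, base_speed, player_progress, car_progress):
    """Scale an AI car's speed by its role in the pack and its gap
    to the player (positive gap = ahead of the player)."""
    gap = car_progress - player_progress
    if role == "back_marker":
        # Stay near the player however badly they drive.
        return base_speed * (1.0 - 0.5 * gap)
    if role == "pacemaker":
        # The front cars set the race's difficulty; they barely adapt.
        return base_speed * (1.0 - 0.05 * gap)
    # The middle of the pack stretches between the two groups.
    return base_speed * (1.0 - 0.25 * gap)
```

Because each role adapts by a different amount, the elastic is spread across the pack instead of snapping one car improbably back and forth.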
Burnout’s emphasis on battling with other vehicles and forcing them from the road allows for more variables by which the abilities of the AI can be reduced or increased – disguising the degree to which this is contrived to match player skill.
“I think in general in the genre you get one of two clichés: cars that drive almost impossibly well, or cars that ignore that you exist,” says Young. “In Burnout, neither cliché is true. We try to make our AI behave in a human way mainly by trying to get them to only make the same mistakes a human would. For Burnout, this would be things like mispredicting where a piece of cross-traffic will be and crashing into it. The reasons for misprediction are mostly similar to what a human experiences: there is a degree of guessing where the traffic will be and when you could potentially contact it. Causing AI to crash is relatively easy because you can directly play with its perception – for example, make it ignore a piece of traffic, make it think a corner is wider than it is, etc. Ultimately, it requires understanding the mistakes humans make and why their judgments are off, and then building a system whose judgments can be similarly off.”
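The key point is that you distort what the AI believes, not how it drives. A minimal Python sketch of the idea – simplified to one dimension along the road, with a noise model that is purely an assumption for illustration:

```python
import random

# Sketch of crash-by-perception: the AI's *belief* about where
# cross-traffic will be is wrong in a human-like way. The Gaussian
# noise model and parameter names are illustrative assumptions.

def perceived_crossing(traffic_pos, traffic_vel, lookahead, misjudgement):
    """Where the AI believes a piece of cross-traffic will be after
    `lookahead` seconds. With misjudgement > 0 its guess can be wrong
    the way a human's would be, so it sometimes commits to a gap that
    isn't really there - and crashes plausibly."""
    true_pos = traffic_pos + traffic_vel * lookahead
    # Error grows with how far ahead it has to guess, as it does for us.
    return true_pos + random.gauss(0.0, misjudgement * lookahead)
```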
MotorStorm is also an interesting example for this – it shares Burnout’s love of cinematic vehicle carnage, often using its AI to contrive collisions directly in the player’s view. However, it also attempts to personify your opponents in ways that allow you to appreciate precisely what the AI is attempting and why. Offensive gesticulations are one of the more brazen examples of how the AI states its aggression towards another racer. Drive up behind a motorcycle in a heavy vehicle, however, and it will look over its shoulder, its appearance of nervousness signaling that it is now more likely to make a mistake and crash.
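That kind of telegraphing only works if the visible signal is backed by a real change in behavior. A small Python sketch of the pairing – every name here is invented for illustration, not MotorStorm’s code:

```python
# Sketch of honest telegraphing: the nervousness animation and the
# raised error chance are set together, so what the player reads off
# the rider's body language is true. All names are illustrative.

class Rider:
    def __init__(self):
        self.nervous = False
        self.mistake_chance = 0.01

    def play_animation(self, name):
        print("playing:", name)  # stand-in for a real animation system

    def update(self, pursuer_class, pursuer_distance):
        nervous = pursuer_class == "heavy" and pursuer_distance < 10.0
        if nervous and not self.nervous:
            self.play_animation("look_over_shoulder")
        self.nervous = nervous
        # The signal is backed by a genuine change in behavior.
        self.mistake_chance = 0.05 if nervous else 0.01
```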
“One of the most important things required to write a good AI for any game is to make sure that there is some way for users to know what the AI is trying to do,” says Nathan Sturtevant, creator of ’90s tank game Dome Wars and now a PhD lecturer in AI at the University of Alberta.
“If the user has no ability to perceive what the AI is planning or attempting, users will be frustrated. If the AI is too strong, it will probably be perceived as cheating, and if it is too weak, it will be perceived as stupid. In FEAR, if the AI couldn’t get in through a door, it would try a window. This makes the enemy more predictable, and when I can predict what the enemy is going to do, I can both appreciate its intelligence and begin to defeat it. I may have the most intelligent AI system in the world, but if there is no way for a player to perceive what the AI is trying to do, it will end up looking stupid.”
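The door-then-window behavior amounts to a prioritized fallback list – something like the following Python sketch, where the route list and the `can_reach` API are assumptions made for the example, not FEAR’s implementation:

```python
# A minimal fallback selector in the spirit of the FEAR example: try the
# door, then the window, so a blocked plan produces a visible, predictable
# alternative rather than paralysis. Routes and API are hypothetical.

ENTRY_ROUTES = ["door", "window", "vault_railing"]

def choose_entry(agent, room):
    for route in ENTRY_ROUTES:
        if agent.can_reach(room, via=route):
            return route
    return None  # visibly give up rather than loop in place
```

Because the priorities are fixed, the player can learn them – which is exactly what makes the enemy feel smart rather than arbitrary.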
A large part of this is consistency – in fact, when we complain about stupid AI and toss the controller across the floor in disgust, we are more often than not referring to anomalies in its behavior rather than an actual lack of intelligence. When an eagle-eyed enemy improbably spots you while you believe yourself to be adequately hidden, or when opponents manage to track you down with the insistence of a psychic beagle – these are the things that jar with the player’s understanding of the world and drag him or her out of it.
“I think pathfinding is an area that used to cause designers a lot of problems,” says Sturtevant. “If your henchman got stuck in Neverwinter Nights or even just fell too far behind, he would just teleport to catch up. I worked on the pathfinding system for Dragon Age, and I hope and expect that there won’t be such a problem there. Last year I got to hear Quinn Dunki [senior AI programmer at Pandemic Studios] talk about the pathfinding design in Saboteur, and they have a variety of animations they will play when an AI gets stuck, culminating with one of angry frustration. If your AI does get stuck, the human player will probably be much more forgiving if they can see that the AI knows it’s stuck.”
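In sketch form, that escalating acknowledgment might be built like this – a hypothetical Python example where the thresholds, animation names and agent API are all assumptions, not Pandemic’s code:

```python
import time

# Sketch of Saboteur-style stuck handling: detect a lack of progress,
# then escalate through animations ending in visible frustration.
# Thresholds, animation names and the agent API are assumptions.

STUCK_ANIMATIONS = ["look_around", "scratch_head", "stamp_in_frustration"]

class StuckMonitor:
    def __init__(self):
        self.last_pos = None
        self.stuck_since = None
        self.stage = -1

    def update(self, agent):
        now = time.monotonic()
        moved = self.last_pos is None or agent.distance_to(self.last_pos) > 0.1
        self.last_pos = agent.position
        if moved:
            self.stuck_since, self.stage = None, -1
            return
        if self.stuck_since is None:
            self.stuck_since = now
        # Escalate one animation for every two seconds spent stuck.
        stage = min(int((now - self.stuck_since) / 2.0),
                    len(STUCK_ANIMATIONS) - 1)
        if stage > self.stage:
            agent.play_animation(STUCK_ANIMATIONS[stage])
            self.stage = stage
```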
With inconsistent or inscrutable behavior anathema, it seems that today’s design paradigms naturally limit the kind of dynamism you can squeeze from an AI. As Polge says, “AI NPCs are still not as innovative as human players. Improving in this area, with the goal of really surprising players without frustrating them, is challenging, and less straightforward than the improvements we’ve made so far.”
In fact, rather than seeing future AI research feed into the genres of today, Young foresees that it will add an entirely new branch to the games that get made: “Games generally are better if the game designer can shape and direct the experience. Many of these research directions are therefore tangential to the requirements of games for now. My view is that new types of AI will ultimately lead to new types of game rather than games using more and more of the research piecemeal.”
Polge throws out one suggestion of how emerging AI research might shape game design: “A game with a solid implementation of a robust speech recognition and synthesis system as an interface, and a compelling personality and motivation model for NPCs could have gameplay focused on determining the motivations of allies and opponents.”
Even then, credible stupidity will be key to emulating human interaction. The Turing Test, which demands that an AI be indistinguishable from a human in conversation, isn’t simply a matter of increasing an AI’s knowledge. It will only be passed when an AI can intuit which questions a human would answer, such as ‘What color is grass?’, and which they would not, such as ‘What is the square root of Pi?’ It seems an AI’s stupidity might prove to be the cleverest thing about it.