
Why can't we ever have decent AI?


Slaughter


  • 2 months later...

While I'm no AI programmer, I am studying software engineering and want to make a turn-based game sometime in the future, just for fun. Of course, that means I'll eventually have to develop an AI for it as well (right now, even the core engine is less than 10% complete). I also almost never play multiplayer myself, so I have experience with the AI of lots of games (mostly strategic ones).

 

Some AIs show promise despite not actually being all that good. One such example I've seen is the AI of Dawn of War:

  • It adapts its army composition to counter the player's. If its anti-infantry soldiers are getting decimated by my tanks, it starts to roll out anti-vehicle infantry and melee walkers.
  • It doesn't waste troops on minor probing attacks but keeps building up its army, attacking only once it outguns me or once I aggro it. If either of those conditions are met, its entire army attacks all at once in a huge wave, so no luring individual units away for piecemeal annihilation.
  • In team games versus a single human player (like most campaign missions), attacking one AI instantly aggroes its teammates as well. So if you build up your army and attack one AI, the other will counterattack your base while you're away. AI teammates also tend to gang up on the same player to ensure a kill through overwhelming numerical superiority; I've played several human-vs-AI 3v3 matches before and saw multiple instances of all three AI players attacking the same human player simultaneously.
  • It concentrates fire on low-health targets but splits firepower to take advantage of hard counters. If a mixed army of anti-infantry and anti-tank guys is fighting an infantry blob that suddenly gets armored support, the anti-tank guys immediately switch targets.
  • If it gets into a fight that has it outnumbered, the AI actively tries to kite the attackers back to its nearest armed structure to add that structure's firepower into the equation.

And the AI already pulls these behaviors off at the second difficulty level out of four (on Easy, it is practically brain-dead; Standard difficulty is where it gets unshackled and the fun starts). Now this is what I would call a pretty damn good AI... would. Unfortunately, in-depth observation of the AI's behavior reveals that it is actually rather rigidly scripted. To use the kiting behavior as an example, the AI somehow instantly knows if you give an attack order onto one of its units and withdraws that unit. The withdrawing unit immediately stops fighting and runs to the structure above all else before turning around and coming back, so you can abuse this by giving an attack order onto an enemy unit then cancelling it; the unit will still disengage and make the round trip, putting it out of the fight for several seconds.

 

Another bad example of scripting in Dawn of War is units capable of jumping or teleportation. Unarmed units and units with a ranged attack more powerful than their melee attack AND set to ranged stance are designated by the AI as priority targets. If the AI has a jump- or teleport-capable unit and a priority target comes into range, the AI instantly jumps/teleports next to that unit. Considering that most jump/teleport units are melee and attacking infantry in melee forces them to stop shooting and melee back, this would be an excellent way to tie up shooter-type units, neutralizing their firepower. Unfortunately, the AI is hardcoded to use jump/teleport whenever the above condition is met, resulting in nonsensical behavior. Behavior like fleeing units unexpectedly teleporting right into the middle of the army they're trying to flee from, then resuming fleeing (and getting gunned down from behind). Or units with two jump charges using one charge to jump next to the priority target, then promptly using their second charge to take off and land ONE METER AWAY. It is painfully evident that whoever programmed the AI didn't take a lot of variables into account, like how far away the target is or what the jump/teleport unit itself is doing right now.
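For illustration, the kind of context check that seems to be missing could look something like this Python sketch; every name and threshold here is invented, not taken from the actual game:

```python
from dataclasses import dataclass

# Hypothetical sketch: gate a jump/teleport ability on context instead of
# firing it whenever a priority target is merely in range. All names and
# thresholds are illustrative, not from any real engine.

JUMP_MIN_RANGE = 15.0   # closer than this, just walk into melee
JUMP_MAX_RANGE = 40.0   # the ability's actual reach

@dataclass
class Unit:
    x: float
    y: float
    state: str            # "idle", "attacking", "fleeing", ...
    jump_charges: int

def distance(a: Unit, b: Unit) -> float:
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

def should_jump(unit: Unit, target: Unit) -> bool:
    if unit.jump_charges == 0:
        return False
    if unit.state == "fleeing":
        # never teleport INTO the fight while retreating
        return False
    d = distance(unit, target)
    # don't waste a charge to hop one meter, and don't jump out of reach
    return JUMP_MIN_RANGE <= d <= JUMP_MAX_RANGE
```

Two extra conditionals would already prevent both failure modes described above: the fleeing-unit teleport and the one-meter double jump.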

 

 

 

So then, I had an idea. I think that the best way to force the development of a good AI is to write one that isn't embedded into the game engine. I always felt that basing the AI's behavior around data the human player has no access to is lazy and cheap. But if the AI only has access to the same information the player is also legitimately able to obtain, it becomes much more challenging for the programmer to write its behavior, plus it would curb occurrences of superhuman AIs. Ideally, the AI programming team should hire a tournament-level online player of the game's genre as a consultant on how the AI should utilize the information it does have access to for maximum effect: when to build resource gatherers and how many of them, what build order it should use, etc.

 

Related to this, another one I was thinking of is to dispense with the "if X happens, do Y" kind of AI programming. Instead of events directly triggering scripted responses, the responses all get weighted probabilities, with certain events increasing the priority of some of them. When the AI needs to act, it uses this probability table to decide what's more important to do now and bases its actions around that; in other words, base the AI around emergent behavior instead of pre-programmed responses (Black and White already used this in 2001, I think; I read one example of the AI being taught not to do something it liked doing resulting in the AI only doing it when the player wasn't watching). This would make the AI's actions seem more fluid and natural, instead of being predictable. For example, if an AI is programmed to prioritize damaged enemies but somehow doesn't have access to the HP level of enemy units, it could actively note how much damage it saw the target take and use that to "guess" the target's current health (and if it didn't see the target being healed, the AI would obviously guess wrong - exactly like a human player).
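As a rough sketch of what that weighted-response idea could look like in Python (action names, events, and weight adjustments are all invented for illustration):

```python
import random

# Minimal sketch of weight-based action selection. Events nudge weights
# instead of hard-triggering a scripted response, so behavior stays
# varied but biased toward what currently matters.

weights = {"attack": 1.0, "expand": 1.0, "fortify": 1.0}

def on_event(event: str) -> None:
    # illustrative event -> weight adjustments
    if event == "base_under_attack":
        weights["fortify"] += 3.0
    elif event == "enemy_army_destroyed":
        weights["attack"] += 2.0

def choose_action(rng: random.Random) -> str:
    # sample an action in proportion to its current weight
    actions = list(weights)
    return rng.choices(actions, weights=[weights[a] for a in actions])[0]
```

Because the choice is sampled rather than scripted, the AI under attack will *usually* fortify but can still occasionally counterattack or expand, which is exactly the unpredictability the post is after.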


Amitakartok, your thoughts in the last paragraph about responses based on weights and probabilities are essentially also pre-scripted behavior, and the technique is already in use. Again we run into the issue that we cannot define an endless set of possible scenarios and responses without messing things up or hitting serious CPU bottlenecks.

 

What we call AI in games is just scripts for stat and range checks, related conditions, and responses. No matter how many scenarios you manage to define, it remains predefined behavior that will look stupid in many cases, and a human will always find a way to exploit and crush it.

 

A little off-topic, but there is no true AI in games. It's strange that we even call it that. AI would mean full-blown algorithmic adaptive behavior, logical decision responses, and ultimately the ability to learn and self-improve. We are far away from this in games (maybe that's for the best).

IMHO, people with enough knowledge and skill to work on this fascinating topic are already assigned to robotics and military projects. When there is a breakthrough there, we will start seeing it in games. I hope we don't mess that one up... who knows what could happen.


Part of the problem with any AI in a "Pro" game is that humans have access to the "metagame"; for example, in SC2 you'll see various tactics develop and become popular as everyone counters popular strat A, then someone comes up with a counter to THAT one, and it snowballs from there.

 

I've seen several games won solely because the winner KNEW from watching previous games that their foe liked to switch tech at a precise point. This is something that you can practically never expect an AI to get close to replicating.

 

With that in mind, RTS AI will never be able to even try to compete with the big boys without cheating. FPSes are possibly a different story, but that's hard to say.


  • 6 months later...

So let's break down AI behaviour in terms of layers of intelligence.

 

You'd usually start with some utility functions: generate a path from A->B, follow a path, track other entities in the game, LOS, etc. Secondly, you'd start tagging meta-data into the game: tag areas as under cover or full of lava, tag some entities as good (healthpacks) or bad (hostile turrets). Here you find your first trade-off: how dynamic is the world? It's easy for a designer to tag a rock as 'good cover', but if you can blow up the rock, you now need to be able to re-calculate things like path-finding graphs and cover spots on the fly. Personally I'd rather see highly dynamic worlds, but that's me. Mostly games are going the other way, baking a LOT of information into largely static worlds; a lot of the middleware and engines are also becoming increasingly focused on really pretty, static worlds.
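As a concrete example of this utility layer, here's a minimal sketch in Python: breadth-first pathfinding over a toy grid, plus designer-tagged cover cells (the grid and the tags are invented for illustration):

```python
from collections import deque

# Toy world: '#' is blocked, 'c' is a cell a designer tagged as cover.
GRID = [
    "....",
    ".##.",
    ".c..",
]

def neighbors(x, y):
    """Passable 4-way neighbors of a cell."""
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= ny < len(GRID) and 0 <= nx < len(GRID[0]) and GRID[ny][nx] != "#":
            yield nx, ny

def path(start, goal):
    """Breadth-first search; returns the shortest cell path or None."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            out = []
            while cur is not None:
                out.append(cur)
                cur = came_from[cur]
            return out[::-1]
        for nxt in neighbors(*cur):
            if nxt not in came_from:
                came_from[nxt] = cur
                frontier.append(nxt)
    return None

def cover_cells():
    """The meta-data layer: cells the designer tagged as cover."""
    return [(x, y) for y, row in enumerate(GRID) for x, c in enumerate(row) if c == "c"]
```

The trade-off mentioned above shows up immediately: if the rock at some '#' cell can be destroyed, `GRID` changes and any cached paths and cover tags have to be recomputed.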

 

At that point you've got a pretty dumb AI. It can move from A->B, it can find cover, and it can fire at the player. To move it to the next level you need better domain knowledge, and you need to let the player KNOW that the AI is being 'smart'. Farcry's a good example; the enemies will chatter with each other: 'he went over there', 'I see something', 'something's moved', 'cover me', etc. Even if the other AIs were all dumb as bricks, the player would READ intelligence into the chatter; if you hear one AI shout 'cover me' while rushing you, and another AI is firing at the same time, you'll assume the AIs are cooperating. Granny Weatherwax would call it Headology. In addition to that, they had a bunch of little AI scripts that were situation-dependent: see a grenade, what do you do (run away, dive on top of it, throw it back, etc.)? It's the primary way to make AIs appear smart, because it's something the player can see and identify as smart. In a similar vein are explicit set-pieces; the most famous one is probably the one in Halflife where the AIs open the tunnel door, toss in a satchel charge, and close the door on you. It's not actual AI (just clever cutscene work), but it makes you think the AI is planning these kinds of things.

 

At this point you probably have a pretty decent AI for an FPS, but you're going to be woefully short for a real strategy game, especially a turn-based game where the various cheats don't really work. There are a couple of ways to deal with things; the 'Chess' way is essentially brute force. It generates a play-space of all possible moves, then all possible responses to those, projected forward several turns (that possibility space gets huge very quickly). For a strategy game, where the possible action set is even greater, it quickly gets out of hand. Usually you end up with a goal-based system: I want to have a huge army -> I need dragons -> I need to upgrade my citadel -> I need more gold -> I should take that city to get gold. The goals are thus your 'domain knowledge'; the better you script them, the better the game can plan. You need to balance KEEPING a goal (an AI that changes its ultimate goal every turn is no smarter than one that has no ultimate goal) with reacting to player actions (still saving up to buy dragons while the player is burning your citadel is also dumb).
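That goal chain can be sketched as a tiny backward-chaining planner in Python (the precondition table is invented for illustration):

```python
# Each goal lists the goals that must be achieved first. Expanding the
# top-level goal yields the ordered steps still needed.
PRECONDITIONS = {
    "huge_army": ["have_dragons"],
    "have_dragons": ["citadel_upgraded"],
    "citadel_upgraded": ["have_gold"],
    "have_gold": ["own_gold_city"],
}

def plan(goal, achieved):
    """Return the goals still to be achieved, deepest prerequisite first."""
    steps = []
    def expand(g):
        if g in achieved or g in steps:
            return
        for pre in PRECONDITIONS.get(g, []):
            expand(pre)
        steps.append(g)
    expand(goal)
    return steps
```

Replanning each turn against the current `achieved` set gives the reactive half of the balance: if the player razes the citadel, `citadel_upgraded` drops out of `achieved` and re-enters the plan automatically.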

 

One poster mentioned various self-learning forms of AI, usually split among Genetic Algorithms, Neural Networks, and matrix weights. The consistent issue with them is that computers are capable of incredible acts of calculation while also being incredibly dumb. You're going to need a new AI to keep track of your self-learning AI and stop it from wandering into Rainman land.

 

Genetic algorithms basically start with some possible action configurations, run them through a game, score each configuration based on some 'fitness' criteria (how well did it do), then take a few of the best-scoring configurations, modify them a little, and run them again (iterate this survival of the fittest until you get a great AI). Perhaps the best way to see genetic algorithms in action is this little Flash app using Box2D to 'grow' cars. It's one of those ideas that programmers love, but which usually ends up not going anywhere useful. A couple of racing games have used genetic algorithms to tune AI car behaviour, for example... but usually you'll lock the evolution before shipping, to make sure your prize-winning heifer doesn't get any two-headed offspring.
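A toy sketch of that loop in Python; here 'fitness' is just distance to a made-up ideal parameter tuning, whereas a real game would score configurations by actually playing matches:

```python
import random

# Invented target: an ideal aggressiveness/economy/defense mix.
IDEAL = [0.8, 0.2, 0.5]

def fitness(genome):
    # higher is better: negative squared distance to the ideal tuning
    return -sum((g - i) ** 2 for g, i in zip(genome, IDEAL))

def mutate(genome, rng, rate=0.2):
    # modify each parameter a little, clamped to [0, 1]
    return [min(1.0, max(0.0, g + rng.uniform(-rate, rate))) for g in genome]

def evolve(generations=60, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 4]          # keep the best quarter
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)
```

Note the 'lock the evolution' point: once this loop ships, you'd freeze the best genome rather than keep mutating it on players' machines.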

 

Neural networks are another one of the ideas we borrow from nature; essentially we try to model how neurons function by giving the net a big set of matching valid inputs and outputs, and slowly accumulating input in the neural net. It's pretty good at some things traditional AI is bad at (like …), but the more abstract the inputs and outputs get, the more likely it will draw wrong conclusions from them. If you have a net that *kinda* works, but is just plain broken in 10% of the cases, trying to work out WHY those 10% of the cases are wrong is a nightmare.
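A minimal example in Python: a single perceptron 'neuron' trained on an invented, linearly separable retreat-or-fight dataset. Real game inputs are far messier, which is exactly where the broken-in-10%-of-cases nightmare comes from:

```python
# Invented training set: (own_health, enemy_pressure) -> 1 = retreat, 0 = fight
TRAINING = [
    ((0.9, 0.1), 0), ((0.8, 0.2), 0), ((0.7, 0.1), 0),
    ((0.2, 0.9), 1), ((0.1, 0.8), 1), ((0.3, 0.9), 1),
]

def predict(w, b, x1, x2):
    # a single neuron: weighted sum plus bias, thresholded at zero
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

def train(epochs=50, lr=0.1):
    """Perceptron rule: nudge weights by the error on each example."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in TRAINING:
            err = target - predict(w, b, x1, x2)
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b
```

The learned weights are just a pair of floats, and even here 'why did it decide that?' is already opaque; scale that up to thousands of weights and debugging the wrong 10% becomes hopeless.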

 

Finally, matrix weighting basically takes our goals and steps and weights them based on past successes. If every time I try to build up my dragons I get rushed before I'm done, let's change the probability of taking that goal, or change WHEN I try that goal. You want some level of control over how that's applied, and how things decay back to neutral, otherwise the AI is likely to draw the wrong conclusions; but at least this strategy works, and it is open to normal debugging procedures. If the AI is acting stupid, you can crack it open and see the reason why it's always building dragons... that's incredibly hard to do with something like a neural network, where the answer is 'the seemingly random sum of 100,000 floating point values made it seem like a good idea'.
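A sketch of that weight-update-plus-decay idea in Python (goal names, step size, and decay rate are all invented):

```python
NEUTRAL = 1.0  # a weight of 1.0 means "no opinion yet"

class GoalWeights:
    def __init__(self, goals):
        self.w = {g: NEUTRAL for g in goals}

    def record(self, goal, succeeded, step=0.25):
        """Nudge a goal's weight up on success, down on failure."""
        self.w[goal] += step if succeeded else -step
        self.w[goal] = max(0.1, self.w[goal])  # never rule a goal out entirely

    def decay(self, rate=0.1):
        """Drift all weights back toward neutral so old lessons fade."""
        for g in self.w:
            self.w[g] += (NEUTRAL - self.w[g]) * rate
```

The debugging advantage is visible in the data structure itself: `self.w` is a small, named table you can print and inspect, so 'why is it always building dragons?' has a readable answer.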

 

TL;DR: AI is hard. The more complex the game, the harder it is. Most of the 'cool new technologies' that are supposed to revolutionise AI end up creating a bastard love-child of Rainman and Frankenstein, but without the charm.


  • 9 months later...
  • 1 month later...
You mention FPS games and you are right, of course. But let me throw in my two cents under the name of Civilization VI. Have you read the reviews? Oh, the glorifying articles with thrilled reviewers giving the game 8, 9, even 10. Maxed-out scores - it's all peachy. My question is: did they even play this game? I agree that the game has potential, but the AI is simply devastating and absolutely ridiculous. It starts, again, with the cheat system as the difficulty rises (more of everything for the AI instead of making it smarter), continues through ridiculous apostle and missionary spamming, and ends with erratic behavior. One turn the AI declares friendship, and two turns later it yells at the player for having, e.g., a weak military. No logic whatsoever. Civ VI suffers from a few more bugs, but the AI issue is the biggest pleasure killer. I could go on, but I think I've made my point. Apologies for the rant, but I thought this thread was a good place for my complaints.