So, let's break AI behaviour down into layers of intelligence.
You'd usually start with some utility functions: generate a path from A->B, follow a path, track other entities in the game, line-of-sight (LOS) checks, etc. Secondly, you'd start tagging meta-data into the game: tag areas as under cover or full of lava, tag some entities as good (healthpacks) or bad (hostile turrets). Here you find your first trade-off: how dynamic is the world? It's easy for a designer to tag a rock as 'good cover', but if you can blow up the rock you now need to re-calculate things like path-finding graphs and cover spots on the fly. Personally I'd rather see highly dynamic worlds, but that's me. Mostly games are going the other way, baking a LOT of information into largely static worlds; a lot of the middleware and engines are also becoming increasingly focused on really pretty, static worlds.
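To make the trade-off concrete, here's a minimal sketch of that "tag cover, then re-bake it when the world changes" idea. Everything here (the `CoverMap` class, the adjacency rule for what counts as cover) is invented for illustration; a real engine would do something far more involved:

```python
# Hypothetical sketch: cover spots derived from obstacle geometry, re-baked
# on the fly when the player blows something up. All names are invented.

class CoverMap:
    def __init__(self, obstacles):
        self.obstacles = set(obstacles)   # grid cells that block fire (rocks etc.)
        self.cover = self._bake()

    def _bake(self):
        # Toy rule: a cell offers cover if it's adjacent to an obstacle.
        cover = set()
        for (x, y) in self.obstacles:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                cover.add((x + dx, y + dy))
        return cover - self.obstacles

    def destroy(self, cell):
        # The rock just got blown up: the baked data is now stale, re-bake.
        self.obstacles.discard(cell)
        self.cover = self._bake()

cm = CoverMap(obstacles=[(5, 5)])
assert (5, 6) in cm.cover      # hiding behind the rock works...
cm.destroy((5, 5))
assert (5, 6) not in cm.cover  # ...until the rock is gone
```

In a static world `_bake()` runs once, offline; in a dynamic world you pay that cost at runtime, every time geometry changes. That's the whole trade-off in four lines.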
At that point you've got a pretty dumb AI. It can move from A->B, it can find cover, and it can fire at the player. To move it to the next level you need better domain knowledge, and you need to let the player KNOW that the AI is being 'smart'. Far Cry is a good example; the enemies chatter with each other: 'he went over there', 'I see something', 'something's moved', 'cover me', etc. Even if the other AIs were all dumb as bricks, the player would READ intelligence into the chatter; if you hear one AI shout 'cover me' while rushing you, and another AI is firing at the same time, you'll assume the AIs are cooperating. Granny Weatherwax would call it Headology. In addition to that, they had a bunch of little AI scripts that were situation-dependent: see a grenade, what do you do (run away, dive on top of it, throw it back, etc.)? That's the primary way to make AIs appear smart, because it's something the player can see and identify as smart. In a similar vein are explicit set-pieces; the most famous is probably the one in Half-Life where the AI opens the tunnel door, tosses in a satchel charge, and closes the door on you. It's not actual AI (just clever cutscene work), but it makes you think the AI is planning these kinds of things.
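Those situation-dependent scripts are usually just a context check picking from a handful of canned responses. A sketch of the grenade case, with made-up thresholds and response names (nothing here is from any actual game's code):

```python
import random

# Hypothetical "see a grenade, what do you do?" script: pick a canned
# response from simple context. Thresholds and names are invented.

def grenade_response(distance, health, rng=random):
    if distance > 5:
        return "ignore"                  # too far away to matter
    if distance < 2 and health > 75:
        # A healthy soldier right on top of it can afford heroics; picking
        # randomly between options keeps the AI from looking predictable.
        return rng.choice(["throw it back", "dive on it"])
    return "run away"

assert grenade_response(distance=10, health=50) == "ignore"
assert grenade_response(distance=4, health=20) == "run away"
```

The point is how cheap this is: a few lines of script, but the player sees a soldier throw a grenade back and reads deep intelligence into it.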
At this point you probably have a pretty decent AI for an FPS, but you're going to be woefully short for a real strategy game, especially a turn-based one where the various cheats don't really work. There are a couple of ways to deal with this. The 'chess' way is essentially brute force: generate a play-space of all possible moves, then all possible responses to those, projected forward several turns (that possibility space gets huge very quickly). For a strategy game, where the possible action set is even greater, it quickly gets out of hand. Usually you end up with a goal-based system instead: I want a huge army -> I need dragons -> I need to upgrade my citadel -> I need more gold -> I should take that city to get gold. The goals are thus your 'domain knowledge'; the better you script them, the better the game can plan. You need to balance KEEPING a goal (an AI that changes its ultimate goal every turn is no smarter than one that has no ultimate goal) with reacting to player actions (still saving up to buy dragons while the player is burning your citadel is also dumb).
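That goal chain can be sketched as a tiny backward-chaining planner: each goal names the subgoal that unlocks it, and planning walks the chain until it hits something the AI can actually do this turn. The goal graph and all names here are illustrative, not any real game's system:

```python
# Hypothetical goal-based planner sketch. Each goal maps to
# (subgoal-it-depends-on, concrete-action-if-any). Invented names.

GOALS = {
    "huge army":        ("dragons", None),
    "dragons":          ("upgraded citadel", None),
    "upgraded citadel": ("gold", None),
    "gold":             (None, "take the nearby city"),  # actionable now
}

def plan(goal, satisfied):
    """Walk the subgoal chain; return the next concrete action (or None)."""
    chain = []
    while goal is not None:
        if goal in satisfied:          # already achieved, stop descending
            break
        chain.append(goal)
        subgoal, action = GOALS[goal]
        if action is not None:
            return action, chain       # found something we can do right now
        goal = subgoal
    return None, chain

action, chain = plan("huge army", satisfied=set())
assert action == "take the nearby city"
assert chain == ["huge army", "dragons", "upgraded citadel", "gold"]
```

The "keep the goal vs. react" balance would live on top of this: you'd re-run `plan()` each turn but penalise switching the top-level goal unless the situation (citadel on fire) has changed enough to justify it.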
One poster mentioned various self-learning forms of AI, usually split into genetic algorithms, neural networks and matrix weights. The consistent issue with all of them is that computers are capable of incredible acts of calculation while also being incredibly dumb. You're going to need a new AI to keep track of your self-learning AI and stop it from wandering into Rainman land.
Genetic algorithms basically start with some possible action configurations, run them through a game, score each configuration on some 'fitness' criterion (how well did it do?), then take a few of the best-scoring configurations, modify them a little, and run them again (iterate this survival of the fittest until you get a great AI). Perhaps the best way to see genetic algorithms in action is a little Flash app that uses Box2D to 'grow' cars. It's one of those ideas that programmers love, but which usually ends up not going anywhere useful. A couple of racing games have used genetic algorithms to tune AI car behaviour, for example... but usually you'll lock the evolution before shipping, to make sure your prize-winning heifer doesn't get any two-headed offspring.
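The whole score-select-mutate loop fits in a few lines. This toy version evolves a bitstring toward all-ones; in a game, the "genome" would be AI tuning parameters and the fitness score would come from a simulated match. Everything here is an illustrative stand-in:

```python
import random

# Minimal genetic-algorithm sketch: evolve a bitstring toward all-ones.
# Fitness is just the count of ones; all parameters are arbitrary.

def evolve(length=20, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    fitness = sum                                 # "how well did it do?"
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 4]          # survival of the fittest
        pop = list(survivors)                     # elitism: keep them as-is
        while len(pop) < pop_size:
            child = list(rng.choice(survivors))   # clone a survivor...
            i = rng.randrange(length)
            child[i] ^= 1                         # ...and mutate one gene
            pop.append(child)
    return max(pop, key=fitness)

best = evolve()
assert sum(best) >= 15   # well past a random start (~14 ones out of 20)
```

Note the "lock the evolution" point from above: you'd run this loop during development, then ship the final `best` configuration frozen, so players never meet the two-headed offspring.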
Neural networks are another idea we borrow from nature: essentially we try to model how neurons function, giving the net a big set of matching valid inputs and outputs and slowly training it on them. It's pretty good at some things traditional AI is bad at (like image recognition), but the more abstract the inputs and outputs get, the more likely it is to draw wrong conclusions. If you have a net that *kinda* works, but is just plain broken in 10% of the cases, trying to work out WHY those 10% are wrong is a nightmare.
Finally, matrix weighting basically takes our goals and steps and weights them based on past successes. If every time I try to build up my dragons I get rushed before I'm done, let's change the probability of picking that goal, or change WHEN I try it. You want some level of control over how that's applied, and over how things decay back to neutral, otherwise the AI is likely to draw the wrong conclusions; but at least this strategy works, and it's open to normal debugging procedures. If the AI is acting stupid, you can crack it open and see the reason why it's always building dragons... that's incredibly hard to do with something like a neural network, where the answer is 'the seemingly random sum of 100,000 floating point values made it seem like a good idea'.
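A sketch of that weight-and-decay scheme, with invented names and constants (the learning rate, the decay factor, the 0-to-2 weight range are all arbitrary choices for illustration):

```python
# Hypothetical goal-weighting sketch: nudge weights on outcomes, decay them
# back toward neutral so one bad run doesn't poison a strategy forever.

class GoalWeights:
    def __init__(self, goals, lr=0.3, decay=0.95):
        self.w = {g: 1.0 for g in goals}   # 1.0 == neutral
        self.lr = lr
        self.decay = decay

    def record(self, goal, success):
        # Nudge toward 2.0 on success, toward 0.0 on failure.
        target = 2.0 if success else 0.0
        self.w[goal] += self.lr * (target - self.w[goal])

    def tick(self):
        # Every turn, everything drifts back toward neutral.
        for g in self.w:
            self.w[g] = 1.0 + self.decay * (self.w[g] - 1.0)

    def best(self):
        return max(self.w, key=self.w.get)

gw = GoalWeights(["build dragons", "rush early", "turtle up"])
for _ in range(3):
    gw.record("build dragons", success=False)   # kept getting rushed
    gw.record("rush early", success=True)
assert gw.best() == "rush early"
```

And this is the debuggability argument made literal: when the AI misbehaves, the 'why' is a dict of named floats you can print, not 100,000 anonymous ones.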
TL;DR: AI is hard. The more complex the game, the harder it is. Most of the 'cool new technologies' that are supposed to revolutionise AI end up creating a bastard love-child of Rainman and Frankenstein, but without the charm.