Saturday, May 16, 2009

Board game (Keltis) AI micro post-mortem.

Keltis
I finally got a copy of Keltis PC. I had to learn the game and design and write the AI all in just 6 weeks, which was obviously not enough time for polishing. Additionally, Keltis itself is pretty tricky: apart from being an incomplete-information game, it is built on a delicate balance of risk vs. reward and short-term vs. long-term gameplay, which is what its inventor Dr. Reiner Knizia's games are famous for.

AI design
Still, I managed to build a well-playing AI, based on a careful mix of statistics, heuristics, board evaluation, opponent modeling and planning.
The heuristics were useful in the start and middle phases, where planning is difficult and far too many options are possible. In contrast, the AI's planning abilities become more and more important as the game approaches its (difficult to time and predict) end phase, allowing it to take risks, catch you off guard and snatch the victory from under your nose.
The AI took both approaches into consideration throughout the whole game, ensuring no strange shifts in strategy and smooth, purposeful decisions.
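As a rough illustration of that blending, here is a minimal sketch that mixes a cheap heuristic evaluation with the planner's evaluation according to estimated game progress. The types, function names, stubs and weights are hypothetical, not the shipped Keltis code.

```cpp
#include <algorithm>

// Illustrative sketch only: blend a cheap heuristic evaluation with the
// planner's evaluation based on how far the game has progressed.
// Types, names and weights are hypothetical, not the shipped Keltis code.
struct Move { int column = 0; int card = 0; };

double heuristicScore(const Move&) { return 0.0; }  // stub: rule-of-thumb value of this move
double plannerScore(const Move&)   { return 0.0; }  // stub: value of the best plan using this move
double estimatedGameProgress()     { return 0.5; }  // stub: 0.0 = start, 1.0 = predicted end

double moveScore(const Move& m)
{
    // Early game: trust the heuristics; late game: trust the planner.
    const double t = std::clamp(estimatedGameProgress(), 0.0, 1.0);
    return (1.0 - t) * heuristicScore(m) + t * plannerScore(m);
}
```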
The opponent modeling component was used to predict the end phase of the game, decide which cards to discard or keep, and judge when to take risks. This was the one part I really wished I had more time to work on; it would definitely have improved the AI's quality.

Probabilistic model
The probabilistic model for this game happens to be a negative hypergeometric distribution. However, I started prototyping with a much simpler model, which turned out to be very inaccurate for a range of inputs, so I resorted to using the precise model. For that I unfortunately needed a so-called big-number library, because the calculations involve huge factorials which are then divided down to the final 0.0-1.0 probabilities.
After trying some of them I settled on TTMath because of its small size and because it fit my needs very well. However, when it came to porting to the NDS and mobile phones, this turned out to be a problem because the library used x86 assembly to do some of its work; in the short time I had, the best thing I could do was to reimplement those features in C++.
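To give an idea of the kind of calculation involved, here is a small sketch of a generic negative hypergeometric probability mass function, computed with exact big-integer binomial coefficients and divided down to a double only at the end. It uses Boost.Multiprecision's cpp_int as a stand-in for TTMath, and N, K, r, k are generic placeholders rather than the actual game-specific counts.

```cpp
#include <boost/multiprecision/cpp_int.hpp>

using boost::multiprecision::cpp_int;

// Exact binomial coefficient with arbitrary-precision integers.
// (cpp_int is used here as a stand-in for the TTMath types used in the game.)
cpp_int binom(unsigned n, unsigned k)
{
    if (k > n) return 0;
    cpp_int result = 1;
    for (unsigned i = 0; i < k; ++i)
        result = result * (n - i) / (i + 1);   // division is exact at every step
    return result;
}

// Generic negative hypergeometric PMF: a deck of N cards contains K "useful"
// cards, and we draw without replacement until r useless cards have appeared
// (r >= 1). Returns the probability that exactly k useful cards were drawn
// by then. The actual Keltis model plugs game-specific counts into N, K, r.
double negHypergeomPmf(unsigned N, unsigned K, unsigned r, unsigned k)
{
    if (k > K || r + k > N) return 0.0;
    cpp_int num = binom(k + r - 1, k) * binom(N - r - k, K - k);
    cpp_int den = binom(N, K);
    // The huge integers are divided only at the very end,
    // down to a 0.0-1.0 probability.
    return num.convert_to<double>() / den.convert_to<double>();
}
```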

Planning with probability
To plan based on probabilities, the planner would choose a fixed probability, for example 0.7, and with that a precise, non-probabilistic plan gets built on top of the probabilistic model. This plan can then be evaluated and scored.
In the case of Keltis this involves many things, including deciding which pieces to start or hold back (which in turn depends on different estimates, including opponent modeling) and how to use the bonus points (which are crucial to playing well), to name a few.
It might turn out, for example, that such a plan will not get the AI enough points to win the game if the player is already ahead. In that case the AI can choose to play with more risk and recursively test plans with lower probabilities until one with a good risk/reward ratio is found.
Playing it risky in this case means hoping to draw cards that have a low probability of turning up. This was the approach I used to adaptively choose the AI's readiness to take risks based on the situation (comfortably winning, close game, hopeless...).
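Here is a rough sketch of that adaptive-risk loop, written iteratively for brevity. Plan, buildPlan and pointsNeededToWin are placeholders for the real planner and scoring, and the 0.3 floor is an assumed value; only the 0.7 starting point comes from the example above.

```cpp
#include <optional>

// Illustrative sketch of the adaptive-risk loop; Plan, buildPlan and
// pointsNeededToWin are hypothetical placeholders, not the real planner API.
struct Plan { double probability = 1.0; int projectedPoints = 0; };

Plan buildPlan(double p) { return Plan{p, 0}; }  // stub: precise plan assuming every draw
                                                 // with probability >= p actually succeeds
int pointsNeededToWin()  { return 0; }           // stub: from the board and opponent model

std::optional<Plan> choosePlan()
{
    const double kMinProbability = 0.3;          // assumed floor: below this, give up on winning
    for (double p = 0.7; p >= kMinProbability - 1e-9; p -= 0.1)
    {
        Plan plan = buildPlan(p);                // plans get riskier as p drops
        if (plan.projectedPoints >= pointsNeededToWin())
            return plan;                         // good enough risk/reward: take it
    }
    return std::nullopt;                         // hopeless: fall back to damage control
}
```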

Lessons learned
In retrospect, improving the simple probabilistic model I had for better accuracy on the problematic input range, or finding a simpler approximate model, would probably have been a much better idea and would have saved me time that I could have used to improve the AI.
Another thing I learned is that when time is very short, you can go a long way with heuristics and cover the rest with special-case code... Andreas Epple, a senior programmer and ex-colleague of mine, helped me actually accept going the special-case route to fix the last problems before shipping; it worked much better than I expected.
