Sunday, May 24, 2009

Link to forum rant about my thoughts on time allocation between doing and researching

"In Reinforcement Learning / Optimization, and basically all methods that rely fully or partially on searching, there is always an important and often manually tuned balance between exploitation (doing what you know works well for now) and exploration (looking for new options that work even better).

We always find parallels between how humans (or biological systems) tick and how the artificial systems we build work, on many different levels, be it seeing the brain as a biological computer or otherwise; a simple internet search is enough to show that many have made such observations.

I present a new one; it is a problem I have, and I think others do as well.
How deep should I go???

To give an example ......."
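To make the trade-off concrete (this sketch is my own illustration, not from the linked rant): the simplest formalization is epsilon-greedy selection, where a tunable epsilon is exactly that manually set exploration/exploitation balance.

```cpp
#include <cstdlib>
#include <vector>

// Epsilon-greedy selection: with probability epsilon try a random
// option (exploration), otherwise pick the best-known one (exploitation).
// 'epsilon' is the manually tuned balance described above.
int chooseOption(const std::vector<double>& estimatedValue, double epsilon)
{
    double r = std::rand() / (double)RAND_MAX;
    if (r < epsilon)                                    // explore
        return std::rand() % (int)estimatedValue.size();
    int best = 0;                                       // exploit
    for (int i = 1; i < (int)estimatedValue.size(); ++i)
        if (estimatedValue[i] > estimatedValue[best])
            best = i;
    return best;
}
```

With epsilon at 0 this always exploits; raising it spends more time searching for something better.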

Sandbox Release #8: Collision Detection, Running/Racing Demo

Release #8 is here.

This time it was about physics integration for LOS, collision detection and distance queries, a bit of tactical cover analysis, and, mostly as usual, animation / motion planning.
I made many improvements to the motion planner and refactored it into components compatible with our MVC framework.
We have not started working with parametric animations yet, which means that the motion planner cannot reach every possible point in space without some error correction or replanning to alternative locations.
Error correction, robust handling of plan failures, and alternative planning were therefore built into two components: Locomotion and Navigation.
Additionally, a form of collision avoidance was added, and all of this was integrated into a small racing prototype.

The new release also includes other improvements and features; for the full details, head to

Sunday, May 17, 2009

johndcook: RT @aycangulez: How do startups morph from agile, can-do companies to ones that have lost their edge? A: New building.
"That’s when things went south. Let's Fix Everything that Was Broken"

The thing I like about this is that I think it also applies to individuals! So watch out :)


Saturday, May 16, 2009

Board game (Keltis) AI micro post-mortem.

I finally got a copy of Keltis PC. I had to learn the game and design and write the AI, all in just 6 weeks; obviously, not enough time for polishing. Additionally, Keltis itself is pretty tricky: apart from being an incomplete-information game, it is also based on a delicate balance of risk vs. reward and short-term vs. long-term gameplay, which is what its inventor Dr. Reiner Knizia's games are famous for.

AI design
Still, I managed a well-playing AI, based on a careful mix of statistics, heuristics, board evaluation, opponent modeling, and planning.
The heuristics were useful in the start and middle phases, where planning is difficult and far too many options are possible. In contrast, the AI's planning abilities become more and more important as the game approaches its (difficult to time and predict) end phase, allowing it to take risks, catch you off guard, and snatch victory from under your nose.
The AI took both 'approaches' into consideration during the whole game, ensuring no strange changes in strategy and smooth, purposeful decisions.
The opponent modeling component was used to predict the end phase of the game, decide which cards to discard or keep, and when to take risks. This was one part I really wish I had had more time to work on; it would definitely have improved the AI's quality.
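The kind of smooth hand-over between the two approaches can be sketched like this, assuming a hypothetical game-progress estimate in [0, 1] (the names here are illustrative, not the actual Keltis code):

```cpp
// Blend a heuristic score with a planner score based on estimated game
// progress (0.0 = start, 1.0 = end phase). Early on the heuristics
// dominate; toward the end the planner takes over, with no sudden
// strategy switch in between. Purely illustrative.
double blendedScore(double heuristicScore, double plannerScore, double progress)
{
    return (1.0 - progress) * heuristicScore + progress * plannerScore;
}
```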

Probabilistic model
The probabilistic model for this game happens to be a negative hypergeometric distribution. I started prototyping with a much simpler model, but it turned out to be very inaccurate for a range of inputs, so I resorted to the precise model. For that I unfortunately needed a so-called big-number library, because the calculations involve huge factorials which are then divided down to the final 0.0 to 1.0 probabilities.
After trying some of these libraries I settled on TTMath because of its small size and because it fit my needs very well. However, when it came to porting to the NDS and mobile phones, this turned out to be a problem, because the library uses x86 assembly for some of its work. In the short time I had, the best I could do was reimplement those features in C++.
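For reference, one way to sidestep big-number arithmetic entirely (not what the shipped code did; it used TTMath) is to move the factorials into log space with lgamma. A sketch for the closely related plain hypergeometric probability:

```cpp
#include <cmath>

// log C(n, k) computed via lgamma: the huge factorials never appear
// as actual numbers, so no big-number library is needed.
double logChoose(int n, int k)
{
    return std::lgamma(n + 1.0) - std::lgamma(k + 1.0) - std::lgamma(n - k + 1.0);
}

// Hypergeometric pmf: probability of exactly k successes in n draws
// (without replacement) from N items of which K are successes.
double hypergeomPmf(int k, int N, int K, int n)
{
    return std::exp(logChoose(K, k) + logChoose(N - K, n - k) - logChoose(N, n));
}
```

The negative hypergeometric case needed here is likewise a ratio of binomial coefficients, so it can be assembled from the same logChoose terms.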

Planning with probability
To plan based on probabilities, the planner chooses a fixed probability, for example 0.7, and with that a precise, non-probabilistic plan is built on top of the probabilistic model. This plan can then be evaluated and scored.
In the case of Keltis this involves many things, including planning which pieces to start or not (which again depends on different estimations, including opponent modeling) and how to use the bonus points (which are crucial to playing well), to state a few.
It might turn out, for example, that such a plan will not get the AI enough points to win the game if the player is already ahead; in that case the AI can choose to play with more risk and recursively test plans with lower probabilities until one with a good risk/reward ratio is found.
Playing it risky in this case means hoping to get cards that have a low probability. This is the approach I used to adaptively choose the AI's risk readiness based on the situation (comfortably winning, close game, hopeless ....)
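That adaptive loop can be sketched as follows; the types, thresholds, and the stub planner are illustrative stand-ins, not the real Keltis code:

```cpp
// A candidate plan: the probability threshold it was built with and
// the points it is expected to score if the assumed draws come true.
struct Plan { double probability; int expectedPoints; };

// Stub standing in for the real planner: here, riskier plans simply
// promise more points (purely an illustrative model).
Plan buildPlan(double threshold)
{
    return Plan{ threshold, (int)((1.0 - threshold) * 100) };
}

// Start from a safe threshold and lower it until a candidate plan
// scores enough to win; if nothing does, take the riskiest plan.
Plan chooseRiskAdjustedPlan(int pointsNeeded)
{
    for (double threshold = 0.7; threshold >= 0.3; threshold -= 0.1)
    {
        Plan plan = buildPlan(threshold);
        if (plan.expectedPoints >= pointsNeeded)
            return plan;
    }
    return buildPlan(0.3);  // hopeless situation: maximum risk
}
```

When comfortably ahead the loop stops at the first, safest plan; when far behind it falls through to the riskiest one.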

Lessons learned
In retrospect, improving the simple probabilistic model for better accuracy on the problematic input range, or finding a simpler approximate model, would probably have been a much better idea and would have saved me some time that I could have used for improving the AI.
Another thing I learned is that if time is very short, you can go a long way with heuristics and cover the rest with special-case code... Andreas Epple, my ex-colleague and senior programmer, helped me actually accept going the special-case way to fix the last problems before shipping; it worked much better than I expected.

Monday, May 11, 2009

A new (and cool) AIGamedev video + article, showing some of my work

Hard-Earned Insights from The AI Sandbox Development

You can see some of the work I have not previously mentioned here, including hierarchical pathfinding, terrain analysis, and skeleton mirroring.

Thursday, May 7, 2009

The most AMAZINGLY detailed pac-man article I have ever seen

I must admit I did not read the whole thing, but man!!! Just have a look at it. I am sure it is more detailed than some AI designs for more 'complex' games.

As an additional bonus, it seems that the author, Jamey Pittman, also suffers from Game Developer/Guitarist schizophrenia, just like me... (see the bottom of the article)
(Thanks to for twittering the link)

'In Defense of Eye Candy' Article, a must read.

Please choose one button to start your search:

The nice but still professional-looking one on the right? Choose an AI SDK for your next game, choose your next car, choose your next employee based on his portfolio... I'm afraid aesthetics will always play a role in all of this, no matter how hard we try to be objective.
My personal way of looking at things for a couple of years now is that feelings are objective as well; they are the objectivity of your subconscious. Whenever you get overwhelmed and your logic circuits overflow, fall back to the feelings, and you know what? It's absolutely the right thing to do. But I digress... The article is about the importance of aesthetics; it was written on, a web design site (I think). It is an excellent read. Throw away your coder art, there are good free models out there!

This reminds me of the GDC AI demos (which I watched in the AIGamedev GDC report videos): some companies had great-looking assets, while others had wireframe blocky shapes. Although this does not tell much about the AI, psychology again played its role in forming an opinion about them. Of course, it might not totally fool one, but it will still tend to tip the balance in the depths of your brain; this article explains why.

Framework1 / Quake3 level Bezier patches

Today I got the Bezier patches working in the renderer.
Quake3 uses the simplest patches possible, bi-quadratic, which means 9 control points (a 3x3 grid) per patch.

A face in Quake3 can be composed of multiple patches arranged in a grid; when this is the case, each patch shares a line of 3 control points with its horizontal and vertical neighbors.
To render such a face, all that needs to be taken care of is to correctly set up and share the control points. The resulting tessellated vertices, however, are neither shared nor stitched together. That they are not shared is obvious, since they tessellate to different vertices; not having to stitch them together, however (based on the references mentioned in previous articles), sounded a bit strange to me, but it seems this really is the case, since the renders look OK.
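For reference, evaluating such a patch is just two quadratic Bezier blends, one along each parameter direction. A minimal sketch for a single scalar component (a renderer would run this per position and texture-coordinate component):

```cpp
// Quadratic (3-point) Bezier blend with Bernstein weights.
double quadraticBezier(double a, double b, double c, double t)
{
    double it = 1.0 - t;
    return it * it * a + 2.0 * it * t * b + t * t * c;
}

// Evaluate a bi-quadratic patch (3x3 control points) at (u, v):
// first collapse each row of 3 control points along u, then blend
// the 3 row results along v.
double evalPatch(const double cp[3][3], double u, double v)
{
    double row[3];
    for (int i = 0; i < 3; ++i)
        row[i] = quadraticBezier(cp[i][0], cp[i][1], cp[i][2], u);
    return quadraticBezier(row[0], row[1], row[2], v);
}
```

A level-5 tessellation then simply samples (u, v) on a 6x6 grid of parameter values per patch.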

I implemented two versions of the Bezier tessellator: a simple one (as a reference for debugging and testing) that makes a mesh directly out of the control points, and the real tessellator.

The whole implementation took roughly 8 hours of work.

As usual, I took some screenshots. Some are quite dark (despite a bit of GIMP post-processing), but if you download the image you can see sufficient detail.
I grouped all the shots into one image; the ones on the left show the 'reference' control-point renderings, while the ones on the right show a tessellation of level 5 (6x6 = 36 vertices per patch).

In some places, small artifacts due to texture coordinates are visible (the arched gate). I checked the level with another Quake3 BSP renderer and observed the same artifacts, so this might be a level design problem; in any case it is not a big deal.

I really got into C# generics to make the tessellator able to tessellate any kind of vertex without needing to know its structure. I did this using generics with 'where' constraints, which was needed to be able to weight and add vertices together using functions of a required interface.
I really liked that, especially since implementing an interface is very cheap and does not mean the implemented functions are 'virtual' and therefore less performant than non-virtual functions. It is simply a compile-time constraint, nice!
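For comparison, the closest C++ equivalent of that constrained-generics pattern is a template, where the constraint is implicit: the code only compiles if the vertex type provides the required functions, and nothing is virtual. The names here are illustrative, not the actual Framework1 code:

```cpp
// Interpolate two vertices of any type that provides Weighted() and
// Add(); the calls resolve at compile time, so there is no virtual
// dispatch cost.
template <typename TVertex>
TVertex lerpVertex(const TVertex& a, const TVertex& b, double t)
{
    return a.Weighted(1.0 - t).Add(b.Weighted(t));
}

// A minimal vertex type satisfying the implicit constraint.
struct Vertex2
{
    double x, y;
    Vertex2 Weighted(double w) const { return Vertex2{ x * w, y * w }; }
    Vertex2 Add(const Vertex2& o) const { return Vertex2{ x + o.x, y + o.y }; }
};
```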

This might be premature optimization, but since the whole design is based on streaming, all loading operations (such as loading a face that just became visible and tessellating it on the fly) had better be fast enough.

Sunday, May 3, 2009

Good code review article

Article related to my work on Locomotion Planning

Alex Champandard just released the article 'Motion Planning for Fun and Profit!', which features some of my work with him at
It is an insider article, so if you do not have an insider membership yet (which is free), go and get one; it is priceless for anyone interested in game AI.