Motivation
In WOF, we usually seek perfection. For AI-assisted footballer control switching, we wanted to do better than other comparable games. This means that when a human player is controlling a footballer and the ball is shot or passed, control has to switch to the best candidate. This does not mean the first footballer in the ball's direction! Even the smallest thought effort gives us a long list of cases where that does not work: lobs, balls unreachable because of ball speed and footballer reaction time, the target footballer being busy doing other important actions... (even ball deflections from the goal bar are handled 'correctly' by WOF!)
The AI also had to correctly handle multiple human players in the same team and not allow ping-pong switching between them.
Results
A shy video showing the results (partially) can be seen here:
Non Linearity and Searching
Since we are in three dimensions, with the ball's speed being non-linear and its position even less so, the fact that the footballers' speeds can be approximated as linear is not of great help.
Therefore, the correct algorithm had to rely on a search. In its basic, unoptimized form, it should calculate the ball's complete trajectory (colliding with static obstacles only) and, for each potential footballer, find the portions of the trajectory (and yes, they are portions, because the ball's height changes) where the ball can be intercepted, then choose among those based on a mix of criteria: first to intercept, wait time after arriving at the position, effort needed to arrive at the position...
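To make the basic form concrete, here is a minimal sketch of such a brute-force intercept search. All types and names (TrajectorySample, Footballer, findBestIntercept, ...) are hypothetical illustrations, not WOF's actual code, y is assumed to be the up axis, and the mix of criteria is reduced to 'first to intercept' for brevity:

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

inline float horizontalDist(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dz = a.z - b.z;
    return std::sqrt(dx * dx + dz * dz);
}

struct TrajectorySample { float time; Vec3 ballPos; };

struct Footballer {
    Vec3 pos;
    float speed;          // approximated as linear, as noted above
    float reactionTime;   // delay before the footballer starts moving
    float maxReachHeight; // highest ball this footballer can play
};

struct Intercept {
    const Footballer* who = nullptr;
    float interceptTime = 0.0f; // when the ball reaches the intercept point
    float waitTime = 0.0f;      // time spent waiting there for the ball
};

// For each footballer, walk the whole trajectory and keep the earliest
// sample he can reach in time; then pick the earliest intercept overall.
Intercept findBestIntercept(const std::vector<TrajectorySample>& traj,
                            const std::vector<Footballer>& players) {
    Intercept best;
    for (const Footballer& p : players) {
        for (const TrajectorySample& s : traj) {
            if (s.ballPos.y > p.maxReachHeight)
                continue; // ball unreachable at this portion of the trajectory
            float arrival = p.reactionTime + horizontalDist(p.pos, s.ballPos) / p.speed;
            if (arrival <= s.time) { // arrives before (or as) the ball does
                if (!best.who || s.time < best.interceptTime) {
                    best.who = &p;
                    best.interceptTime = s.time;
                    best.waitTime = s.time - arrival;
                }
                break; // earliest reachable sample for this footballer
            }
        }
    }
    return best;
}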
As with every search algorithm, it makes a lot of sense to optimize it; we do not want this single 'basic' task to wreak havoc on the whole game's performance.
Full Ball Trajectory Calculation
The full ball trajectory is very useful, and we use it for many different purposes; that is why we already had it implemented. It is calculated using exactly the same physics engine used in the game, and is therefore very exact. Collisions are performed with static objects only; it does not make sense to include dynamic ones since they are ... dynamic.
Because of our performant physics, collision detection and spatial subdivision, we can afford to calculate the full ball trajectory every time the ball is shot.
The calculation is optimized in two ways. The first optimization is splitting the calculation across a configurable number of frames; since we never need the whole trajectory instantaneously, in practice we obtain it fast enough for all of our purposes. The second optimization is an intelligent choice of sampling technique. Instead of naive sampling, we use an error metric that lets us keep the number of samples as small as possible, while keeping the quality of the samples high enough that one can linearly interpolate the ball's position from a sample's start velocity until the next sample, with the error staying below the ball's radius.
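As an illustration of that error metric (a sketch under assumptions, not WOF's implementation): physicsBallAt() below stands in for an exact physics-engine evaluation of the ball at a given time, and a real version would also cap intervals at bounce times, where the velocity jumps and the error is no longer monotone.

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct BallState { Vec3 pos, vel; };

BallState physicsBallAt(float t); // assumed: exact physics-engine evaluation

inline float dist(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Linear extrapolation from a sample taken at t0, using its start velocity.
inline Vec3 extrapolate(const BallState& s, float t0, float t) {
    return { s.pos.x + s.vel.x * (t - t0),
             s.pos.y + s.vel.y * (t - t0),
             s.pos.z + s.vel.z * (t - t0) };
}

// Greedy sampling: keep doubling the interval while the extrapolation error
// at its end stays below the ball's radius (for smooth, gravity-only motion
// the error grows with time, so checking the interval's end is enough).
std::vector<float> chooseSampleTimes(float tEnd, float ballRadius) {
    std::vector<float> times(1, 0.0f);
    const float minDt = 1.0f / 240.0f; // finest sampling we ever take
    float t = 0.0f;
    while (t < tEnd) {
        BallState s = physicsBallAt(t);
        float dt = minDt;
        while (t + 2.0f * dt <= tEnd &&
               dist(extrapolate(s, t, t + 2.0f * dt),
                    physicsBallAt(t + 2.0f * dt).pos) < ballRadius)
            dt *= 2.0f;
        t = std::min(t + dt, tEnd);
        times.push_back(t);
    }
    return times;
}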
Search Optimizations
As with all searches, we can do much better than brute force. And since this search is executed continuously as the footballers move, it had better be performant!
One optimization was made by adding functionality to the trajectory calculation phase, allowing it to split the trajectory, on the fly, into multiple sections depending on the height of the ball (and therefore its reachability by footballers), allowing 'too high' sections to be skipped completely during the search.
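A sketch of that splitting step, reusing the hypothetical TrajectorySample type from the earlier sketch (maxReachHeight would be the highest ball any footballer can play):

// Tag contiguous runs of samples as reachable or 'too high', so the search
// can skip unreachable sections wholesale.
struct TrajectorySection { int firstSample, lastSample; bool reachable; };

std::vector<TrajectorySection>
splitByHeight(const std::vector<TrajectorySample>& traj, float maxReachHeight) {
    std::vector<TrajectorySection> sections;
    for (int i = 0; i < (int)traj.size(); ++i) {
        bool reachable = traj[i].ballPos.y <= maxReachHeight;
        if (sections.empty() || sections.back().reachable != reachable)
            sections.push_back(TrajectorySection{ i, i, reachable });
        else
            sections.back().lastSample = i;
    }
    return sections;
}

This also runs naturally 'on the fly': as each sample is produced, the last section is simply extended or a new one is opened.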
As for the search itself, we used temporal caching and a number of WOF-specific optimizations.
For temporal caching, we hold on to the last found intercept point, and only check its validity using its type (footballer will wait for the ball at the intercept, ball will wait for the footballer at the intercept, no intercept, ...) and the time elapsed since the intercept was first analyzed. This means, in one example case, that if we found a valid intercept point where the footballer has to wait 0.5 seconds for the ball after arriving at his position, then until those 0.5 seconds pass, we can 'simply' recheck the intercept's validity (while keeping an eye on where the footballer is going), which is much cheaper than starting a new search.
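In sketch form (hypothetical names again, building on the earlier intercept sketch), the cache logic might look like this, with quickRecheck() standing in for the cheap validity test just described:

enum InterceptType { FOOTBALLER_WAITS, BALL_WAITS, NO_INTERCEPT };

struct CachedIntercept {
    InterceptType type = NO_INTERCEPT;
    Intercept result;
    float analysisTime = 0.0f; // when the full search produced this result
    float validFor = 0.0f;     // how long the classification can be trusted
};

// Assumed: cheaply verifies the cached intercept, e.g. that the footballer
// is still heading toward the intercept point.
bool quickRecheck(const Intercept& cached, float now);

Intercept updateIntercept(CachedIntercept& cache, float now,
                          const std::vector<TrajectorySample>& traj,
                          const std::vector<Footballer>& players) {
    bool expired = (now - cache.analysisTime) > cache.validFor;
    if (!expired && cache.type != NO_INTERCEPT && quickRecheck(cache.result, now))
        return cache.result; // cache hit, no new search needed
    // cache miss: run the expensive search and refresh the cache
    cache.result = findBestIntercept(traj, players);
    cache.analysisTime = now;
    cache.validFor = cache.result.waitTime; // e.g. the 0.5s from the example
    cache.type = !cache.result.who ? NO_INTERCEPT
               : cache.result.waitTime > 0.0f ? FOOTBALLER_WAITS : BALL_WAITS;
    return cache.result;
}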
Of course, samples are invalidated as time passes, making the number of samples to search smaller the further time advances, but this is obvious.
The implementation itself comprises (crudely estimated) 2,500 lines of source code (excluding headers), split among multiple files in a nicely organized design that allows reusing it cleanly for a number of other purposes (like the scripted AI using it to make its decisions).
Conclusion
When considering searches, make sure they are unavoidable. If you are sure they are, understand why, implement them, optimize, profile, and enjoy the hopefully nice results.
Friday, August 15, 2008
Wednesday, August 13, 2008
Smoothing Character Collision Response using Quadratic Equations
Motivation
In WOF, we wanted characters to smoothly collide with the world (goal, advertisement banners, other characters, ...) and therefore 'slide' next to obstacles and not get blocked by them.
This might not be a necessity per se, since in a football game the footballers have no business being outside the pitch, but it would be nice to have for reuse in other games. It was also something I had not done before, so I wanted to get my hands dirty with a more general solution than just character/character collision, where bounding volumes could be used.
I decided to use the real polygon geometry of the obstacles, and not any specially prepared collision proxies. This meant the algorithm needed to handle colliding simultaneously with a number of objects/triangles while still making the best of it and sliding in the correct direction whenever that was the right thing to do.
Results Video
A video of the nice results can be seen here:
http://www.youtube.com/watch?v=zBxg_Mp3md0
Usual Approach(es)
So, as a first step, of course: research. This was some time ago, so I will not post any links (you can search for yourselves), but mostly people used nicely formed collision volumes such as spheres, capsules and cylinders, the advantages being, for example, the ability to rotate in place without the collision volume changing, and therefore not causing collisions because of rotations.
This is important because if a character is moving, as in changing its position, we know in which direction it moved, and we can therefore use that direction to make the collision response look credible. For turning, however, things are a bit more complicated: your usual collision tests help you by telling you how to translate volumes to resolve the collision, but not how to rotate them, unless we search using bisection, which is expensive, and it seems there is no way around that for now.
This is again related to continuous collision detection (CCD), an active research topic. Again, linear is much easier than angular; this makes sense and has an explanation, but it is out of our scope right now, and as usual, the net is there for those who want to know more. A first search points to "Continuous Collision Detection: Progress and Challenges" by Gino van den Bergen, who is known for his research in this area. The Bullet physics forums are probably also a good place for this topic, among many others.
Chosen Solution
Now, our math library already had collision tests for many primitives, almost all a game would ever need, except capsules. I did not like the idea of using spheres, because they would be too big, and we wanted the characters to be able to come very close before colliding. Of course, the spheres could be made smaller, which then has other disadvantages; basically, the sphere did not fit the real volume closely enough for my taste (there are many collision detection tutorials/papers that discuss the fitting properties of collision shapes). At the time, I chose axis-aligned bounding boxes (AABBs), which do not change their volume when a footballer changes direction. Additionally, OBBs were already being used for accelerating collision detection between the ball and the footballers' limbs using the physics engine, so we have two types of bounding volumes for two different purposes.
Capsules would have been an equally good choice, or even a better one, but the 'technique' I used to resolve collisions applies to both.
Plan of Attack
So what do we really want to achieve? We want the character to slide along obstacles and not be blocked by them when it makes sense.
When does it make sense? It makes sense when the obstacle does not directly oppose the direction the character is moving. So unless the obstacle directly opposes the movement direction, there is a possibility to slide; this is the basic idea. Of course, we have more than one obstacle, and since we want to use the raw polygons of the obstacles, we will have more than one triangle per obstacle. Additionally, in WOF obstacles can be volumes, so in the end we will have a number of triangles and volumes to check against, and we will need to find a sliding direction given the original movement direction and the triangles and volumes.
There is more than one way to solve this, but I will describe the final solution I came up with (and, I think, the best one in WOF's context).
'Algorithm'
For all triangles and volumes, I used swept volume collision detection to gather all the contact normals between the moving volume and the world's objects. For triangles, I used only the face normals; this prevents the character from getting stuck because of slight penetrations of polygonal objects. As for cases where we really have a collision with a triangle edge, this works as well, assuming all triangles in an object are connected: whenever we have contact with an edge, we also have contact with two or more triangle faces that meet at that edge, so we can get away with using triangle face normals and no edges (or points). This way, we happily slide along polygonal objects even if we hit edges because of slight penetrations, and we are still correct when we hit real edges, because in that case more than one face normal is taken into account. (Some pictures at this point would be nice; maybe I will add them later and update this post.)
So, as I said, I gather all contact normals as explained above, then use quadratic programming to find a direction that satisfies the following constraint: it has a non-zero negative dot product with all the gathered normals, or in other words, it is a direction along which we can move freely without penetrating any of the currently contacting obstacles. I take the resulting direction, and if it is zero, or if it has the same direction as the movement vector (solvedDir dot moveDir > 0), I block the moving object, sending it back to where it was at the last step (I use a fixed-step approach to physics, which makes this always look OK). Otherwise, I use the direction obtained from the solution as the new movement direction, optionally projecting the current movement translation onto it (thus reducing the movement speed). The new direction is not used 'after' the current movement is applied; the contacts are gathered during a 'check potential contacts using the current direction' pass, so the new direction (translation) is used instead of the original one.
Luckily, we have a convex quadratic programming problem at hand, which means it can be solved in reasonable time. I used the QuadProg++ library (which uses the Goldfarb-Idnani dual method), but I modified it to optimize its memory allocation behaviour.
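To illustrate the shape of the problem only (WOF solves the real QP with QuadProg++ as said above), here is a self-contained sketch that finds a feasible sliding direction by cyclically projecting the movement direction onto the contact planes it violates. It returns some feasible direction, not the QP-optimal (closest) one, and it assumes unit contact normals oriented from the obstacle toward the moving character, so 'not penetrating' means a non-negative dot product; flip the signs for the opposite convention.

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

inline float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Find a direction close to moveDir that does not push into any contact.
Vec3 solveSlideDir(Vec3 moveDir, const std::vector<Vec3>& contactNormals) {
    Vec3 d = moveDir;
    for (int iter = 0; iter < 16; ++iter) { // a few passes suffice in practice
        bool feasible = true;
        for (size_t i = 0; i < contactNormals.size(); ++i) {
            const Vec3& n = contactNormals[i];
            float p = dot(d, n);
            if (p < 0.0f) { // moving into this contact: slide along its plane
                d.x -= p * n.x; d.y -= p * n.y; d.z -= p * n.z;
                feasible = false;
            }
        }
        if (feasible)
            break;
    }
    return d; // possibly (near) zero; the caller blocks or slides as described
}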
There are some more details to take into consideration. In WOF, for example, in some modes (tackling, for instance) we allow footballers to penetrate each other, so we need filters to specify which triangles/objects to use for collision detection; there are also issues with making swept collision detection behave nicely in all problematic cases. All those issues are solved and work well in WOF, as one can see in the video, but the point here was using quadratic programming to solve for the 'sliding direction'. As for performance, there have been no problems so far; I will spare you the profiling numbers though, as they are only relevant for WOF.
Tuesday, August 12, 2008
It's all so easy... (Brain stack dump)
For some time now, I have had the feeling that all problems carry no difficulty for me.
So I took some time to think about it, to find out whether it is an illusion, a reality, or pure naive stupidity.
What I will begin with is not the first link in the chain of thoughts I went through while thinking about this, but I still found it a good place to start.
Of course, as with all topics, it is important to first define the used terms.
One dictionary definition of 'difficult' is "not easy to do, understand, or solve". This definition uses the word 'easy', so it does not really help much, since I am searching for a deeper definition of 'difficult'.
Looking up the definition of 'easy', I get "not difficult; simple", which is of course also not helpful, since it uses the word 'difficult'. Instead of looking up 'simple' (probably to find it defined using 'difficult' or 'easy'), I stop here and decide that the problem is really worth investigating!
So let us finally start: do you think the question 'what is the result of 1 + 1?' is a difficult question for a 2 year old? Yes? I think not...
I think there is a clear method for tackling any problem, and that there is no such thing as a difficult problem. The method is: identify the problem; search for a known method to solve it; if none is found, search for a way to solve it.
Pffff, you say: we all know that, and it does not make any problem less difficult. Sure it does.
Going back to the '1 + 1' question, let us apply our method. First, "identify the problem": the problem consists of the parts '1', '+', '1', and the answer we are thinking of is '2'. Can we explain to a 1 year old what '1' is? No. So the problem is not difficult at all: we cannot even 'feed' the problem into the 1 year old, whose brain does not yet have any way of internally representing the first symbol, '1'. The problem is non-existent from the point of view of the 1 year old, but not 'difficult'.
Now what about asking the same question to a 6 year old who is good at math? Easy, you say? I disagree as well. When we ask what '1 + 1' is, we get the answer '2', yes, but if we ask why ... we are stuck. The 6 year old did not solve the problem; he just looked up the answer in his memory, or counted on his fingers, but I do not consider the problem solved. To really solve the problem, let us ask ourselves why 1 + 1 = 2. A stupid question? Not at all: 1 + 1 could be 10, as we see printed on popular nerd T-shirts; this would be the case if we are talking binary. 1 + 1 could also be 0, if we are asking how many apples is 1 orange + 1 banana.

So again, using our method, we first have to identify the problem: what does 1 + 1 actually mean? Before we know that, it is useless to start solving. So, painful but necessary: what is '1'? 1 could be anything, but abstracted it is a number. This is still not enough: what is a number? ... It is also a number in the decimal system, not binary, not hexadecimal. So 1 is one unit of 'anything', and, very importantly, when we say 1 + 1 we do not mean 1 of anything + 1 of anything; we mean 1 of anything + 1 of the same anything. Of course, we also have to define '+' (for which I do not have a clear definition without using a concrete 'something', like 1 + 1 oranges being: having 1 unit of orange in some container and then adding one more unit of orange into the same container). And all of this is useless if we do not define what 2, 3, 4, 5, 6, 7, 8, 9, 10 are (which I also do not find easy to explain in abstract terms without using oranges), and a method for counting in decimal.

Only after all this can we say that we have identified the problem. After having identified it, and by using the definition of how the decimal system works and what its symbols and their combinatorial representations mean (99, 1054), we find that 1 + 1 is, basically by definition, 2. So the problem is not 'difficult' in itself; there is a clear method for solving it.
Now what if we try to explain this whole definition to a 15 year old for 1 whole year, and that person still does not understand it? That's an interesting point to think about. It suggests that people are like computers: they have processors with capacity and speed. The processors are analogue, but they are still processors; they are flexible and can grow and change, but they are still processors. So I think that when a problem's definition is above the capacity of the brain that is supposed to 'solve' it, it is useless to talk about problem difficulty; the problem simply can't even be 'fed' into the brain.
There is also the case where that 15 year old would understand the problem, but needs 5 seconds to come up with an answer instead of 1 second; this would simply mean the capacity to accept the definition is there, but the processing speed is slow.
It still doesn't mean the problem is 'difficult', not even relatively.
(Speaking about processors, here are some very cool mechanical ones)
Now, after identifying the problem, our second step says 'search for a known method to solve the problem'. This could mean using a method that we already know ourselves (saved in the brain), or doing research to find that people have already found solutions to our problem, or to similar ones.
This would be the case, for example, when asking someone who has just started to learn 3D graphics to 'write a skeletal animation library'. After identifying the problem, he would do some research, find that other people have done this countless times and written about it, and then use his research to write the library. Does this mean the problem is 'difficult' for a 3D beginner? Again, no. We would be tempted to say that it is difficult for the beginner and easy for an experienced 3D graphics software engineer, but no, there is no 'difficulty' involved here; there is simply search time involved that the experienced engineer need not spend, having spent it at some point in the past. Again, it is not 'difficult'.
Another example is asking someone to 'write a very robust game physics engine'. Is this difficult? Again, I say no, for the same reasons. Defining what exactly is meant by a physics engine, and what makes it robust, could fill several papers. Someone who has never written a physics engine, or used one, would need a lot of time working on step 1; does this mean that the step is difficult? No, it simply needs time. Additionally, if that someone has no math experience, more papers would be added, and there are many more topics that would justify a good amount of papers before reaching the end of the first step. But we are still following a clear, recursive method; there is no magic nor difficulty involved, it just takes time, depending on the brain's speed and assuming the capacity is there.

After that would come step 2, also consuming lots of time. And if 'robust' means more robust than the best current engines, we would finally come to step 3: if there are no known solutions to the 'very robust' problem for problematic cases like huge world dimensions, very fast movements and rotations, huge mass ratios, etc., then we would need to 'search' for solutions. And a search is really a search; it is like an A* or similar in AI. There is no way to circumvent searches when the problems are new. It could also be that the problem is NP-hard (there are many places to read about P = NP, but one I recommend is the book "Artificial Intelligence: A Modern Approach" by Stuart J. Russell and Peter Norvig). Simplified, NP-hard means that until now, nobody has managed to prove that we can do anything better than 'searching' to find a solution for such problems.
Searching would be done by using all known heuristics and theoretical information, and trying plausible solutions until one is found or not found; finding out there is no solution is also a solution. Some robustness problems are, for example, very easy to solve if we use much better data types, like huge 512-bit floating or fixed point numbers instead of our usual floats or doubles, but then we hit our current technological limits (the speed and memory of today's computers) and the real-time constraints a game physics engine must satisfy. All this still does not mean the problem is difficult; there is an obvious way to attack it. The solution might end up being a search with no time bounds, or a conclusion that the problem is not solvable given the constraints, but this does not mean 'difficult'. What is important, though, is finding out that sometimes a search is needed, and that is of course the case for all 'new' problems, which are usually generated by solutions to old problems or by required improvements, and this is how the beautiful train of technology evolution rolls by, taking all of us enthusiasts on a nice ride.
Being the brain dump it is, this post has no conclusion; the topic still needs more thinking, and maybe after a few more related posts, a structured conclusion will come out.
Monday, August 11, 2008
quote "Cargo Cult Methodology: How Agile Can Go Terribly, Terribly Wrong"
Whenever an article contains something like this:
"All of us team members were survivors of another much larger project. That project had been done with outsourcing to a CMM Level 5 organization, with great care in the methodology at our end and with careful detailed project management overall. The project consumed tens of millions of dollars and years of overtime. It failed utterly. "
I just HAVE to read it ...
http://www.cio.com/article/print/442264
related: http://www.ddj.com/architect/209600574?cid=http://www.informationweek.com/maindocs/archive.htm
related: http://brucefwebster.com/2008/06/16/anatomy-of-a-runaway-it-project/
which has a very good part about 'documenting the unnecessary just for the sake of it'
"I truly believe that in the past we have been too ambitious in describing process, in adopting too much process, and in documentation. The reality is that even if people write a lot, very few people will ever read it. Thus the trend towards light will sustain. However, it is easy to be light. The trick is to be as light as possible but not lighter. I believe you will find our work on EssUP and EssWork new and fresh. "
related: http://www.ddj.com/architect/209602001?cid=http://www.informationweek.com/maindocs/archive.htm
" what to do when your stakeholders still insist on having a "precise estimate" at the beginning of the project."
"All of us team members were survivors of another much larger project. That project had been done with outsourcing to a CMM Level 5 organization, with great care in the methodology at our end and with careful detailed project management overall. The project consumed tens of millions of dollars and years of overtime. It failed utterly. "
I just HAVE to read it ...
http://www.cio.com/article/print/442264
related: http://www.ddj.com/architect/209600574?cid=http://www.informationweek.com/maindocs/archive.htm
related: http://brucefwebster.com/2008/06/16/anatomy-of-a-runaway-it-project/
which has a very good part about 'documenting the unnecessary just for the sake of it'
"I truly believe that in the past we have been too ambitious in describing process, in adopting too much process, and in documentation. The reality is that even if people write a lot, very few people will ever read it. Thus the trend towards light will sustain. However, it is easy to be light. The trick is to be as light as possible but not lighter. I believe you will find our work on EssUP and EssWork new and fresh. "
related: http://www.ddj.com/architect/209602001?cid=http://www.informationweek.com/maindocs/archive.htm
" what to do when your stakeholders still insist on having a "precise estimate" at the beginning of the project."
Thursday, August 7, 2008
mixing php and the c++ preprocessor?
Idea
Here I go again, thinking and complaining about the well-known limits of C++ meta-programming, the best-known complaint candidate being the lack of partial template specialization for functions.
Meta-programming seems to me very close to server-side web scripting, a la PHP, for example, because in the end, this is what the 'meta' in meta-programming means: generating source code via 'meta' code. Well, what about using something like a PHP processor to meta-program C++ files?
Potential
One could do something like #include "Vector.php.hpp?dims=3"; this would include a meta-programmed header for a 3D vector. One advantage is that the resulting header would be plain C++. It would also become possible to generate functions that take 3 arguments, for example
VectorT::VectorT(Type value1, Type value2, Type value3), which is currently impossible with standard meta-programming.
Actually, the class could be generated directly as Vector3 or Vector3D, where usually it would be Vector<3>. Of course, this can be typedef'd, and I am a big fan of typedefs; they are great for writing fire-and-forget, self-refactoring code. But anyway, it would be possible...
Taking it to the extreme, we would probably be able to do something like this: #include "Vector.php.hpp?class=FunkyVector3D, dims=3", and have a resulting class called FunkyVector3D! Changing the names of classes at will; would that be a maintenance disaster, or simply more responsibility?
Issues
Planning will definitely be needed so that things don't get out of control and turn into a cryptic mess.
I suspect, though, that without some preprocessor features to help with this, issues will arise (name collisions, linker errors, classes not found...).
Time for bed, though; I will think about it some more when I have time, but this could definitely be a cool experiment.
Monday, August 4, 2008
marriage to c++ (also known as 'c++, elegant static arrays' part 2)
Motivation
I received feedback on the 'c++, elegant static arrays' post from numerous developer friends, who are all unfortunately too lazy to post a comment.
Some were scared off by the meta-programming heaviness, while others said it's overkill.
I fed their feedback into my thinking machine and came to the following conclusion: the difference between us is that I am married to C++, while the skeptics have only been dating it, and that makes a big difference, even if they have been dating for years.
Analysis
This kind of situation happens whenever you spend a lot of time in a relationship with something or somebody, be it C++ in my case, your car, your flat, your partner, your friend, or EVEN your new iPhone! The situation always evolves the same way: in the beginning, you enjoy the new benefits, you only see the cool features, you are happy it works, and you are satisfied. But eventually, you start taking all the good things for granted and start seeing only the small annoyances, which transform into big annoyances because they become all you see. It is like taking a small coin and sticking it into your eye: even though it is small, it is all you see, and it is very painful. I could turn this into a human relationship post, but I will just say it is different because people have feelings, and trying to change others is in many cases egoistic, in rarer cases the right thing, but in most cases can hurt feelings. Let's close it here and concentrate on C++, which, until now, has not developed any feelings.
Allow me to change you...
The 'issues' can be divided into two types:
First, there are the things that I don't like when done the "copy from book/tutorial/demo" way. This does not mean the book/tutorial/demo is bad, of course not! But things have to be put in context. You cannot simply take whatever code was there and stick it into a codebase, especially if it is large and complex. The culprit here, it turns out, is the programmer. A beginner programmer is automatically excused; he is happy enough that things are working, and that's fine, I am happy. But when I see experienced programmers who should know better, who know that they have to write code that has minimal dependencies, needs minimal changes, is robust and is very good at detecting problems at run time, and not just for the sake of it or because of some obscure obsession with code elegance, but because the alternative is wasted time and money in any term except the short term, I am not happy!
Most such issues can be easily solved, however, by using meta-programming, for example, and by thinking long term (how long 'long term' is, is also an important skill, by the way, but let us skip that and move on to the second type).
This is the tougher type, with issues that are impossible to solve without the compiler's help (except when killing performance or causing other big disasters is no problem). In these cases we can just hope and wait for things like C++0x, TR1 and friends to come to the rescue; one example mentioned in the previous post is the 'auto' keyword, which allows code to be made less dependent and more change-resilient, among other benefits.
But unsolved compiler-dependent issues remain (themes for future rants), seemingly meaningless to most ... except the married.
Saturday, August 2, 2008
c++, elegant static arrays
It can be better
I will show you a nicer way of 'declaring' C++ static arrays that is more elegant, a bit less error-prone, and makes life easier when changing the array size or index type.
This is fairly basic, and I have been using it for a long time, but I have never seen any code or tutorial doing it my way.
In any case, it's a really microscopic issue that does not deserve so many words; it only needs a nice small example.
The basic Tutorial
This is tutorial code from the first search engine hit for 'tutorial c++ static arrays':
PS: when viewed with IE, the source code is not horizontally scrollable; I am too lazy to find out why, sorry, use Firefox.
int my_array[10];
my_array[0] = 100;
my_array[1] = 100;
my_array[2] = 100;
my_array[3] = 100;
my_array[4] = 100;
my_array[5] = 100;
my_array[6] = 100;
my_array[7] = 100;
my_array[8] = 100;
my_array[9] = 100;
for(int i = 0; i < 10; ++i)
my_array[i] = 100;
Ok, this was your standard tutorial. From my point of view, though, it should look like this:
#include <cassert> // for the 'fire and forget' asserts used below
#include <cstddef> // for size_t

template<typename TypeT, size_t LengthT, typename PreferredIndexT = int>
struct StaticArray {

    typedef int EnumType; //this is an int, except if changed in compiler settings when available
    typedef PreferredIndexT PreferredIndex;
    typedef TypeT Type;

    enum { Length = LengthT };
    Type data[Length];

    inline StaticArray() {
        //check that Length fits into the enum data type
        numeric_cast<EnumType>(LengthT);
    }

    //note: a static member function cannot also be declared const
    static inline EnumType length() { return Length; }

    template<typename OutT, typename InT>
    static inline OutT numeric_cast(const InT& val) { assert(((InT) (OutT) val) == val); return (OutT) val; }

    template<typename T>
    static inline const T& getLength() { static const T len = numeric_cast<T>(length()); return len; }

    template<typename T>
    inline Type& safeEl(const T& i) { assert(i >= 0 && i < Length); return data[i]; }

    //the const versions duplicate the body instead of calling the non-const
    //versions, which a const member function may not do
    template<typename T>
    inline const Type& ctSafeEl(const T& i) const { assert(i >= 0 && i < Length); return data[i]; }

    template<typename T>
    inline Type& el(const T& i) { return data[i]; }

    template<typename T>
    inline const Type& ctEl(const T& i) const { return data[i]; }

    template<typename T>
    inline Type& operator[](const T& i) { return el(i); }

    template<typename T>
    inline const Type& operator[](const T& i) const { return ctEl(i); }

    typedef Type* iterator;
    inline iterator begin() { return data; }
    inline iterator end() { return data + Length; }
};

Usage, with comments explaining some of the benefits:
{
typedef StaticArray<int, 300> Array;
Array arr;
//we can do away with the typedef
//StaticArray<int, 300>arr;
//we can specify the preferred index type to be used with this array
//StaticArray<int, 300,unsigned short> arr;
{
int i = 0;
arr[i++] = 0; //using i++ saves us from typo problems; we can also copy-paste better this way
arr[i++] = 1;
arr[i++] = 2;
arr[i++] = 3;
arr[i++] = 4;
arr[i++] = 5;
arr[i++] = 6;
arr[i++] = 7;
arr[i++] = 8;
arr.safeEl(9) = 9; //check that we did not go over the array's size
}
//with arr.length(), we don't need to change this code if the array size changes
for(int i = 0; i < arr.length(); ++i)
arr[i] = i;
//this will produce an assert if 'char' cannot hold the size of the array, that's good!
for(char i = 0; i < arr.getLength<char>(); ++i)
arr[i] = i;
//only works with typedef declaration
//use the 'preferred' index
for(Array::PreferredIndex i = 0; i < arr.getLength<Array::PreferredIndex>(); ++i)
arr[i] = i;
//use an stl-style iterator: if we ever switch to an stl container, no code
//needs to change
//additionally, we do not need to worry about index types
{
Array::Type i = 0;
for(Array::iterator it = arr.begin(); it != arr.end(); ++it) {
*it = i++;
}
}
//the 'best' version works only with c++0x
//using auto allows changing the iterator (or index type if not using iterator)
//without needing to change code anywhere else
{
Array::Type i = 0;
for(auto it = arr.begin(); it != arr.end(); ++it) {
*it = i++;
}
}
}
Evolution
Agreed, this might be too heavy for a 'static arrays' tutorial, BUT the problem is that most people don't evolve and don't strive to improve their skills; this approach works, so they stick to it.
I have seen senior programmers with 10+ years of experience still sticking to it. There's definitely nothing wrong with that, but there is a certain general 'attitude' in play here; being the 'non-naive flexible' perfectionist I am, I cannot help but keep pushing the limits, and this is the latest version to date.
Benefits
One thing to note is that all this fanciness introduces zero overhead, thanks to inline functions and static/const variables.
Also notice that this really 'decouples' the array declaration from the code using it, making changes to array properties (size, data types, preferred index) propagate automatically through our code with no need to change anything; and whenever changes are needed but cannot be detected at compile time, asserts come to the rescue and save us many headaches.
I like this because I have a tendency to write 'fire and forget' code as much as C++ allows; this means using asserts to warn me whenever there is danger that things might break at run time.
The extremely simple but effective numeric_cast is an example of that: it makes sure all data types used to store indices are 'fire and forget', so that I do not need to be paranoid, allowing me to make changes with peace of mind, knowing that I'll be notified directly when there is danger lurking around the corner. There may be room for a more detailed explanation here, but I will leave it at that.
Long Term Thinking
In the end, many will probably dismiss this as pointless overkill; in my personal opinion and experience, in the long term it never is.