Post Mortem of a Google AI bot

The submission phase of this year's Google AI Challenge is over and the final bot rankings are being established. I have to admit that the competition was (to me at least) very, very tough. It looks like Heron's performance is going to be much worse than I was hoping for.

But the good news is that I learned a great deal during these past two months. Here are some of the lessons I've learned:

1. Don't converge on an 'optimal' solution too quickly. This is basically just restating Don Knuth's view that premature optimization is the root of all evil. I think more than anything this is what hurt me. I have a strong tendency to be a perfectionist in the things that I do, and sometimes this comes back to bite me. Maybe you, gentle reader, can relate. The problem is that this makes me spend too much time trying to perfect things that aren't all that important in isolation and that, when combined with the other components of the system, turn out to be very much sub-optimal anyway. So, in the future I need to give myself more time to experiment with various ideas before polishing a single one down to perfection.

2. Prefer simpler algorithms when the performance difference is not too great. Again, this pretty much follows along the lines of number 1 above. Now that other contestants are starting to talk about their bot designs, it's becoming clear to me that I seriously over-engineered a number of things, particularly in my choice of algorithms. For example, I wrote an A* implementation using a binary heap and an indexed array. This is one approach suggested by Amit Patel in his pathfinding notes, and it is a very efficient implementation, but in hindsight it might have been overkill for this problem, as I've noticed some of the top contestants used a breadth-first search to both find paths and handle task assignment. So while BFS is theoretically sub-optimal for finding a single path, the fact that one search can also handle task assignment among several ants actually appears to make it the better choice.

3. Maybe it's time to start seriously thinking about dynamic languages. OK, I admit it. I've always secretly been a bit of a language snob. I've always felt smugly superior as a C/C++ programmer to folks that use things like JavaScript and Python. But now that I really think about it, that feeling was probably due more to my own resentment about how productive these languages seem to make people. In any case, with a competition like this I really would have liked a language that more easily facilitated rapid development. And again this ties back to lesson 1: don't optimize prematurely. I really think a dynamic language would have let me try many more ideas. While I love the speed and power of C++, it's kind of bad to get one month into a contest like this and realize that you've made all these mistakes and now have to go back and redesign your class types. Apparently Python has a fairly good foreign function interface to C, so that might be a good way to optimize CPU-bound portions of the code. Also, I noticed some of the contestants using something called "PyPy", which is apparently a JIT compiler for Python that can give the language a substantial performance boost.

Lastly, lesson #4 is "Never give up!". While very little in this contest went my way, I think I can at least claim a moral victory in the fact that I kept at it for the whole two months, even though I was very, very tempted to quit so that I could start playing in the SWTOR head start. I really think that not giving up is probably the most important thing in programming, or anything else for that matter.

Oh, I almost forgot. Actually there is a lesson 5, and it is "Aim high." I admit I very often tend to be overly exuberant and unrealistic in my optimism when starting a new project. I've heard this is fairly common among programmers. While it certainly can be disappointing to have one's hopes dashed, it would be much worse to never try at all. After all, how many people enter a contest like this with the thought "Oh, I just want to be average"? Answer: nobody. So, don't be afraid to aim for the stars - otherwise you may never even get off the ground.

Let slip the ants of war!

So the Google AI challenge is back with a vengeance as of yesterday, and for better or worse I've decided to participate this year. I say "or worse" because it could end up being a bit of a personal obsession for the next few weeks, and given the calibre of the competition I'm not sure how well I'll be able to do but it should be a fun learning experience regardless. Naturally this means my current reworking of my vector and matrix math code will be on the back burner while this thing is going on.

I've already signed up, downloaded the C++ starter pack, and uploaded the default bot, which is pretty mindless, but at least it's something until I've had a chance to submit a bot with a reasonable amount of capability. I'm scared to look at my ranking at the moment since it's still the default bot, but my handle is "Heron" if anyone is interested in tracking my progress in the rankings. I'll update this blog as soon as I've had a chance to upload a revised bot.

Anyway, this year's competition involves the simulation of an ant colony. The goal for the bot is to collect food, grow the colony, and exterminate opposing ant colonies controlled by other players' bots. I've got an idea for a fairly simple bot that I'm currently researching. I thought I might find a suitable algorithm in Donald Knuth's The Art of Computer Programming, but unfortunately he hasn't published that chapter yet. I'm devastated.

Everything old is new again.

This week the world lost Dennis Ritchie. I'm not going to try to eulogize this giant of Computer Science. Many other people have already done that, and besides, the man's work speaks for itself.

But it has made me interested in taking a new look at C. Over the past year I've been using C++ on my graphics engine, and the year before that I spent exploring Haskell. I've had mixed success with these. While they're both very nice languages, it seems to me that for every problem each one solves, other problems are created. So I think I'm going to go back to C for a while and rewrite my vector and matrix math code in that language instead of C++ (or Haskell).

Much of my reasoning for this is based on Joel Spolsky's Law of Leaky Abstractions. It seems to me that while object-oriented and functional programming abstractions can be very helpful, they can also be incredibly frustrating to deal with when something doesn't work and you have to track down a bug. So I think a good case can be made for writing code that keeps as much practical information on the current line as possible - in other words, code that is self-documenting. And I think C might be a good way to do this. After all, sometimes less is more.