I ran a round-robin tournament of all the submissions for the Battlecode 2010 Qualifying tournament (which I had on hand since I calculated the Dropbox sponsor prize). I then ran a PageRank-like algorithm on the results to determine the overall "goodness" of each player, taking into account the fact that teams that could beat stronger players are probably better than teams that just beat weaker ones.
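For the curious, here's a rough sketch of what a PageRank-style ranking over round-robin results can look like. The exact algorithm and data format I used aren't spelled out above, so the input layout (`results[i][j]` = games team `i` won against team `j`) and the damping factor are assumptions for illustration; the key idea is that each team "votes" for the teams that beat it, so a win over a strong team is worth more than a win over a weak one.

```python
import numpy as np

def rank_players(results, damping=0.85, iters=100):
    """PageRank-style ratings from round-robin results.

    results[i][j] = number of games team i won against team j
    (hypothetical input format). Each team distributes its rank
    to the teams that beat it, so beating strong teams counts
    for more than beating weak ones.
    """
    n = len(results)
    # losses[i][j] = games team i lost to team j
    losses = np.array(results, dtype=float).T
    row_sums = losses.sum(axis=1, keepdims=True)
    safe = np.where(row_sums == 0, 1.0, row_sums)
    # Undefeated teams have no losses to vote with; let them
    # vote uniformly so the transition matrix stays stochastic.
    transition = np.where(row_sums > 0, losses / safe, 1.0 / n)

    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * (rank @ transition)
    return rank / rank.sum()  # normalize so all rankings sum to 1

# Tiny demo: A beats B and C, B beats C.
demo = [[0, 1, 1],
        [0, 0, 1],
        [0, 0, 0]]
ranks = rank_players(demo)  # ranks[0] > ranks[1] > ranks[2]
```

With the damping term, every team keeps a small baseline rank even if it never lost a game to anyone ranked, which keeps the power iteration well-behaved on lopsided results.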
Here are the results for the top 13 teams. They are close to the final tournament results, but I think this ranking is closer to most people's intuition than the actual results were:
| Team number | Team name (approx) | Ranking (sum of all rankings = 1) |
|---|---|---|
| team139 | You must construct additional pylons | 0.05806974626755422 |
| refplayer | (the hard version) | 0.021432089706978193 |
These rankings are calculated from the results on three arbitrary maps, similar in size to the ones used in the final tournament. Hopefully these results aren't as arbitrary as the tournament's, but then again, we have no data on how arbitrary either of them is (except anecdotally). BellmanFord did have a very commanding lead, though, and the hard version of the refplayer was surprisingly hard (only 12 teams ranked higher than it).