I ran a round-robin tournament of all the submissions to the Battlecode 2010 Qualifying tournament (which I had on hand, since I calculated the Dropbox sponsor prize). I then ran a PageRank-like algorithm on the results to determine the overall "goodness" of each player, taking into account the fact that teams that can beat stronger players are probably better than teams that only beat weaker ones.
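For the curious, here's a minimal sketch of this kind of ranking, not the code I actually ran; the damping factor, the uniform handling of unbeaten players, and all the names are illustrative. Each loss is treated as a "vote" from the loser to the winner, and the votes are propagated by power iteration, just like PageRank:

```python
import numpy as np

def rank_players(players, results, damping=0.85, iters=100):
    """Rank players from round-robin (winner, loser) match results.

    Each loss counts as a vote from the loser to the winner, so beating
    teams that themselves collect many votes counts for more than
    beating weak teams.
    """
    idx = {p: i for i, p in enumerate(players)}
    n = len(players)

    # votes[i, j] = number of times player j lost to player i
    votes = np.zeros((n, n))
    for winner, loser in results:
        votes[idx[winner], idx[loser]] += 1

    # Column-normalize so each player's outgoing votes sum to 1; an
    # unbeaten player casts no votes, so spread its column uniformly.
    col_sums = votes.sum(axis=0)
    safe = np.where(col_sums > 0, col_sums, 1.0)
    transition = np.where(col_sums > 0, votes / safe, 1.0 / n)

    # Power iteration with damping, as in PageRank.
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * transition @ rank

    return dict(zip(players, rank / rank.sum()))

# Tiny example: A beats B twice, B beats C once, C beats A once.
print(rank_players(["A", "B", "C"],
                   [("A", "B"), ("A", "B"), ("B", "C"), ("C", "A")]))
```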

Here are the results for the top 13 finishers (the top 12 teams, plus the reference player). They track the final tournament results fairly closely, but I think they're closer to most people's intuition than the actual results were:

| Team number | Team name (approx) | Ranking (sum of all rankings = 1) |
|-------------|--------------------|-----------------------------------|
| team134 | BellmanFord | 0.12694167581159302 |
| team049 | boydboyd | 0.086454603273658212 |
| team211 | blacksheep | 0.084287207991206689 |
| team003 | drunkasaurus | 0.078714932538328425 |
| team263 | Swimming Submarines | 0.063079175590442543 |
| team139 | You must construct additional pylons | 0.05806974626755422 |
| team235 | Little | 0.052258878439831666 |
| team204 | It's 8999 | 0.045763989864547897 |
| team259 | CodeRage | 0.037884246041697184 |
| team313 | Experiencing A... | 0.033037785533185975 |
| team073 | Cheddar Sourcrack | 0.026936449903030273 |
| team220 | Prim ignored... | 0.021622070036154288 |
| refplayer | (the hard version) | 0.021432089706978193 |

These rankings are calculated from the results on three arbitrary maps, similar in size to the ones used in the final tournament. Hopefully they aren't as arbitrary as the tournament's results, but then again, we have no data on how arbitrary either of them is (beyond the anecdotal). BellmanFord did have a very commanding lead, though, and the hard version of the refplayer was surprisingly strong (only 12 teams ranked higher than it).