Market leaderboards show the expected value of your Round portfolio, in points (฿). They refresh every few hours. If the market is accurate, they will roughly predict prizes.

Survey leaderboards sum your rank score for each batch in that Round. Rank score is just inverted rank: 1 is the worst. Survey prizes go to the top 4 per batch, so the leaderboard will roughly correlate with prizes. (See this blog post for details, including a revision that did not affect prizes.)

Survey Leaderboards: longer version:

While we won’t know your accuracy until the replication results are announced, we do our best to estimate it and keep score anyway. The survey leaderboards reward both quality (estimated accuracy) and participation (surveys completed). We provide them for motivation & feedback:

We want to recognize and reward the most accurate forecasters.

We hope even this limited feedback gives you some idea of whether you’re on the right track.

How does RM score surveys for the leaderboard?

We estimate the accuracy of every forecasting survey using peer prediction (see this post for more info), then rank the surveys from least to most accurate.

Each completed batch earns you 1 to N points based on your rank:

The least accurate forecaster gets 1 point, and each step up in rank adds one point, so…

The most accurate receives N points, where N is the number of forecasters who completed that batch.

Your total round leaderboard score sums the batch scores for that round.

As participation increases, doing well becomes more important. In Round 8, winning a single batch earned ~35 leaderboard points.
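The rule above can be sketched in a few lines. This is our own illustration of the described scoring, not RM's actual code; the function names and example batch sizes are ours.

```python
# Batch scoring as described above: rank 1 is most accurate, and a batch
# with N participants awards N - rank + 1 points, so 1st earns N and last earns 1.

def batch_points(rank: int, n_participants: int) -> int:
    """Points for one batch: most accurate (rank 1) earns N, least accurate earns 1."""
    return n_participants - rank + 1

def round_score(results: list[tuple[int, int]]) -> int:
    """Sum batch points over a round; results is a list of (rank, N) pairs."""
    return sum(batch_points(rank, n) for rank, n in results)

# A forecaster who placed 1st in a 35-person batch and 10th in a 12-person batch:
print(round_score([(1, 35), (10, 12)]))  # 35 + 3 = 38
```

This matches the Round 8 observation: winning a single 35-person batch earns 35 leaderboard points.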

How does RM estimate survey forecasting accuracy before replication results are known?

We are excited to be using new methods in peer prediction, and we believe they are fair and accurate. Although we don't believe they are vulnerable to manipulation[1], we have decided to remain vague about the details for now. Our methods will eventually be published.

The leaderboard needn’t match the prize winners list perfectly

Completing 30 batches of surveys and earning a respectable 5th place in each would not win a single cash prize, but it would place you fairly high on the leaderboard, much higher than a person who completed one batch and won the $80 first-place prize. The people who win the most money obviously complete a lot of batches and score in the top four in many of them.
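To make the gap concrete, here is the arithmetic under an assumed batch size of 35 participants (the batch size is ours, chosen for illustration; points = N - rank + 1 as described earlier):

```python
# Leaderboard points for a steady 5th-place forecaster vs. a one-batch winner,
# assuming every batch has 35 participants (our assumption, for illustration).

def batch_points(rank: int, n: int) -> int:
    return n - rank + 1

steady = 30 * batch_points(5, 35)   # 30 batches at 5th place: no cash prizes
one_win = 1 * batch_points(1, 35)   # one batch at 1st place: wins the $80 prize

print(steady, one_win)  # 930 vs 35 leaderboard points
```

So the steady forecaster ends up far ahead on the leaderboard despite winning no prize money.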

Changes to the leaderboard calculations

Note that in Rounds 1-5 we initially translated rank into points slightly differently: if the most popular batch had 35 participants, then every batch in the round gave 35 points to the 1st place winner. It also wasn't required that participants complete every survey in the batch in order to win some points. We started assigning points differently in Round 6 to better align with how we were choosing prizes, and because it seemed fair that 10th place out of 10 should earn fewer points than 10th place out of 35. On June 8, 2020 we adjusted the scores for Rounds 1-5 to use the same scoring formula. This had a negligible effect on the ranking of participants who completed more than 5 batches of surveys. Prize allocation is unchanged.
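The difference between the two schemes can be sketched as follows. This is our own reconstruction from the description above (function names are ours): the old rule scaled every batch in a round to the size of that round's most popular batch, while the current rule uses each batch's own participant count.

```python
# Old (Rounds 1-5 as originally scored): points based on the round's
# largest batch size, regardless of how many completed this batch.
def old_points(rank: int, max_batch_size: int) -> int:
    return max_batch_size - rank + 1

# Current (Round 6 onward, and retroactively applied): points based on
# this batch's own participant count.
def new_points(rank: int, batch_size: int) -> int:
    return batch_size - rank + 1

# 10th place out of 10, in a round whose biggest batch had 35 participants:
print(old_points(10, 35))  # 26 points under the old rule
print(new_points(10, 10))  # 1 point under the current rule
```

This shows why the change seemed fairer: under the old rule, finishing last in a small batch could still earn a large number of points.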

[1] That is, we think it’s easier for you to coordinate on the truth than to game the method. See this post for some early robustness checks, or search the literature for “peer prediction” and “surrogate scoring”.