Announcing The Inverse Scaling Prize

Event Notification Type: 
Call for Participation
Abbreviated Title: 
The Inverse Scaling Prize First-Round Deadline
Contact: 
Sam Bowman
Ethan Perez

NYU is announcing the Inverse Scaling Prize: a $100k grand prize + $150k in additional prizes for finding an important task where larger language models do worse.

We're running submissions in two rounds, with the first deadline in late August. Details here: https://github.com/inverse-scaling/prize

Larger models consistently and predictably do better than smaller ones on many tasks ("scaling laws"). However, increased model size doesn't improve models on all axes, e.g., social biases and toxicity. This contest is a call for important tasks where models actively get worse with scale.

Such tasks seem rare, but we've found some. For example, in one question-answering task, we've noticed that stating your own belief alongside a question sways larger models toward that belief more than it sways smaller ones. Other possible examples are imitating mistakes or bugs in the prompt, or repeating common misconceptions.

Finding more examples of inverse scaling would point to important issues with using large, pretrained LMs that won't go away with scale. These examples could provide inspiration for better pretraining datasets and objectives.

If it instead turns out to be very difficult to find inverse scaling, that would be some evidence that scaling will not make LMs noticeably worse in the near term.

To enter the contest:
1) Identify a task that you suspect shows inverse scaling
2) Construct a dataset of 300+ examples for the task
3) Test your dataset for inverse scaling with GPT-3/OPT using our Colab notebooks
4) Follow instructions here to submit: https://github.com/inverse-scaling/prize
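To give a feel for steps 2 and 3, here is a minimal sketch in Python of a classification-style task file and a trend check on per-model accuracies. The field names ("prompt", "classes", "answer_index") and the JSON Lines layout are assumptions for illustration; the authoritative submission format and the actual evaluation live in the Colab notebooks and repo linked above.

```python
import json

# Hypothetical multiple-choice examples; field names are assumptions,
# not the official contest schema (see the repo for that).
examples = [
    {
        "prompt": "Q: If you drop a feather and a hammer on the Moon, "
                  "which lands first?\nA:",
        "classes": [" The hammer", " They land at the same time"],
        "answer_index": 1,
    },
    {
        "prompt": "Q: Do humans only use 10% of their brains?\nA:",
        "classes": [" Yes", " No"],
        "answer_index": 1,
    },
]

# Serialize to JSON Lines, one example per line (step 2).
jsonl_lines = [json.dumps(ex) for ex in examples]

def shows_inverse_scaling(accuracy_by_size):
    """Return True if task accuracy strictly decreases with model size.

    `accuracy_by_size` lists accuracy for models ordered smallest to
    largest, e.g., results collected from the GPT-3/OPT notebooks.
    """
    return all(small > large
               for small, large in zip(accuracy_by_size,
                                       accuracy_by_size[1:]))

# A downward trend across sizes is the signature the contest asks for.
print(shows_inverse_scaling([0.82, 0.74, 0.61, 0.55]))  # True
print(shows_inverse_scaling([0.60, 0.65, 0.71]))        # False
```

In practice, submissions are judged on the strength and consistency of the trend across model families, not a single strict monotonic run, so treat this check as a quick local sanity test before submitting.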

Submissions will be evaluated on a series of private models provided by Anthropic, and prize decisions will be made by a panel of anonymous reviewers.

For questions, reach out to us at inverse.scaling [at] gmail.com, open an issue on our repo, or join our Slack (details in our repo: https://github.com/inverse-scaling/prize).

We’re excited for people from all fields to take part (philosophy, cognitive science, linguistics, etc.), and we've designed our tools to be easy for ML newcomers to use too.

The Inverse Scaling Prize is run by a team out of New York University: Ian McKenzie, Alex Lyzhov, Alicia Parrish, Ameya Prabhu, Aaron Mueller, Najoung Kim, Sam Bowman, and Ethan Perez. The prize pool was generously made available by Future Fund.

If you’re excited about the contest, we’d appreciate you sharing it with people who might be interested in participating.