DynaBench is a research platform for dynamic data collection and benchmarking. Static benchmarks have well-known issues: they saturate quickly, are susceptible to overfitting, contain exploitable annotator artifacts, and have unclear or imperfect evaluation metrics.
The DynaBench platform is, in essence, a scientific experiment: can we make faster progress if we collect data dynamically, with humans and models in the loop, rather than in the old-fashioned static way? Visit our website to learn more.
To get started:
1. Go to the DynaBench website.
2. Click on a task you are interested in.
3. Click on 'Create Examples' to start providing examples.
4. You can also validate other people's examples in the 'Validate Examples' interface.
5. As a model builder, you can upload your own model's predictions.
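Before uploading, a model builder needs their predictions in a machine-readable file. The snippet below is a minimal, hypothetical sketch of serializing predictions as JSON lines (one record per example); the field names `uid` and `label`, the filename, and the format itself are assumptions for illustration, not DynaBench's actual submission schema, which is defined by the platform and the task you choose.

```python
import json

# Hypothetical format only: the actual DynaBench submission schema is
# defined by the platform and varies by task. This sketch just shows
# predictions serialized as JSON lines, one record per example.
def write_predictions(predictions, path):
    """Write a list of prediction dicts (e.g. {"uid": ..., "label": ...})
    to `path`, one JSON object per line."""
    with open(path, "w") as f:
        for pred in predictions:
            f.write(json.dumps(pred) + "\n")

preds = [
    {"uid": "example-1", "label": "entailment"},
    {"uid": "example-2", "label": "contradiction"},
]
write_predictions(preds, "predictions.jsonl")
```

JSON lines keeps each example independent, so a large prediction file can be written and validated incrementally without loading everything into memory.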