Introducing the Temporal data set, a benchmark for recognizing actions in videos

October 31, 2019


What the research is:

A new data set to train and benchmark AI systems to better understand actions in videos, in particular actions that can’t be determined by viewing just a single frame. Current video data sets often focus on actions where a single image is enough for recognition, such as washing dishes, eating pizza, or playing guitar. To improve computer vision systems’ understanding of actions that can be recognized only from a video sequence, such as whether someone is sneezing or opening a door, we identified a set of action classes where temporal information is essential for recognition.

We’re now sharing this work, along with our methodology for determining those classes and our results from training networks on the data set, to help researchers benchmark their systems’ ability to recognize temporal actions.

How it works:

To discover which actions in video should be designated as temporal classes, we presented annotators with video clips from existing video recognition data sets, with their frames shuffled out of order. If annotators couldn’t identify a given action from the shuffled clip, we determined that temporal information was essential for recognition and added that class to our data set. We discovered 50 such temporal action classes in all, associated with a total of 35,504 publicly available videos from the Kinetics and Something-Something benchmarks. Our resulting list of classes, called the Temporal data set, doesn’t consist of video content; rather, it consists of the classes associated with specific clips in those benchmarks.
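As a rough illustration of that shuffling step, the sketch below randomly permutes a clip’s frames before it would be shown to an annotator. It assumes NumPy and a clip already decoded into an array; the loader and annotation calls in the comments are hypothetical placeholders, not code from the paper.

```python
import numpy as np

def shuffle_frames(clip, seed=None):
    """Return a copy of `clip` (shape [T, H, W, C]) with its frames in random order.

    If an annotator can still name the action after this permutation,
    temporal order is probably not essential for that class.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(clip.shape[0])  # random permutation of frame indices
    return clip[order]

# Hypothetical usage; `load_clip` and `show_to_annotator` stand in for your own
# video loader and annotation tooling:
# clip = load_clip("some_clip.mp4")        # -> np.ndarray of shape [T, H, W, C]
# shuffled = shuffle_frames(clip, seed=0)
# show_to_annotator(shuffled)              # annotator tries to name the action
```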

To evaluate the utility of our data set, we used it to benchmark current video recognition methods and found that some state-of-the-art networks capture more image information than temporal information. We also used this new Temporal data set to train existing video recognition networks and found that they became more sensitive to temporal changes and less reliant on image information, which improved their ability to generalize to new, unseen action classes.
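One way to probe this distinction, in the spirit of the evaluation described above, is to compare a trained network’s accuracy on intact clips against the same clips with their frames shuffled: a small gap suggests the network leans mostly on per-frame image cues, while a large drop suggests genuine temporal modeling. The sketch below assumes a PyTorch-style classifier and data loader; both are placeholders rather than the paper’s actual evaluation code.

```python
import torch

@torch.no_grad()
def temporal_sensitivity(model, loader, device="cpu"):
    """Compare top-1 accuracy on ordered vs. frame-shuffled clips.

    Assumes `model` maps a batch of clips shaped [B, T, C, H, W] to class
    logits, and `loader` yields (clips, labels) batches.
    """
    model.eval()
    correct_ordered = correct_shuffled = total = 0
    for clips, labels in loader:
        clips, labels = clips.to(device), labels.to(device)

        # Accuracy on clips with frames in their original order.
        pred = model(clips).argmax(dim=1)
        correct_ordered += (pred == labels).sum().item()

        # Accuracy on the same clips with frames permuted along the time axis.
        order = torch.randperm(clips.shape[1], device=device)
        pred_shuffled = model(clips[:, order]).argmax(dim=1)
        correct_shuffled += (pred_shuffled == labels).sum().item()

        total += labels.numel()

    return correct_ordered / total, correct_shuffled / total
```

If the two accuracies are close, the network is effectively treating the video as a bag of images; a large gap indicates it is actually using temporal information.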

Why it matters:

Our training results indicate that incorporating temporal data can improve the overall performance of video understanding systems. But our work also suggests that current video data sets underrepresent classes where temporal information is essential for understanding, which could bias progress toward understanding images rather than the kinds of actions that are recognizable only in videos. Our data set, which is not a separate downloadable resource but is included in the appendix of the paper below, will help researchers assess and improve their systems’ ability to use temporal information, while also pushing the field to incorporate that information into future video data sets.

Read the full paper:

Only time can tell: Discovering temporal data for temporal modeling