Welcome to VALUE Challenge 2021!


Overview


We are pleased to announce VALUE Challenge 2021! The challenge will be hosted at the Fourth Workshop on Closing the Loop Between Vision and Language (CLVL), ICCV 2021.

Please stay tuned for more information!


Important Dates


  • Challenge Launch: June 7th, 2021.
  • Results Submission Deadline: 23:59:59 (AoE), September 13th, 2021.
  • Decision to participants: September 27th, 2021.
  • The winners will be announced at the CLVL workshop, ICCV 2021, on October 17th, 2021.

Important Updates


  • [09/07/2021] We noticed an error in how our CodaLab evaluation script calculated the meta average score: it averaged 14 values (the 11 task scores across the 3 macro-tasks, plus the 3 macro-task average scores), leading to incorrect results. By definition, the meta average score is the average of the 11 task scores alone. We have updated the evaluation script to calculate the meta average score correctly. Note that new submissions from today onward will be evaluated with the corrected script; scores of earlier submissions will not be updated, so you will need to recompute their meta average yourself.
  • [08/23/2021] CodaLab evaluation portal updates: (1) the portal now supports the leaderboard feature; you will see a `Submit to Leaderboard` button that displays your results on CodaLab's leaderboard; (2) the server now evaluates only the `test` splits, instead of both `val` and `test`, to reduce evaluation time; (3) action required: please register and submit predictions to this new portal, as the old portal will be deleted soon.
  • [08/05/2021] We updated the main metric to Mean-Rank for all leaderboards (challenge phases). Mean-Rank is the average of a model's rank on each task considered in the leaderboard or challenge phase. Browse each leaderboard to see how the VALUE baselines rank under this metric.
    Q: Why did we change the main metric? A: Check out the full discussion here.
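The meta average fix described in the 09/07/2021 update above can be sketched as follows. This is only an illustration of the definition, not the official evaluation script; the function names and the scores are made up.

```python
def meta_average(task_scores):
    """Correct meta average: the mean over all 11 individual task scores."""
    return sum(task_scores) / len(task_scores)

def buggy_meta_average(macro_tasks):
    """The old (incorrect) behavior: averages the 11 task scores together
    with the 3 macro-task averages, i.e. 14 values in total."""
    scores = [s for tasks in macro_tasks.values() for s in tasks]
    macro_means = [sum(t) / len(t) for t in macro_tasks.values()]
    return sum(scores + macro_means) / (len(scores) + len(macro_means))

# Made-up scores: 4 retrieval, 4 QA, and 3 captioning tasks (11 in total).
macro = {
    "retrieval": [30.0, 25.0, 40.0, 35.0],
    "qa": [70.0, 72.0, 68.0, 66.0],
    "captioning": [50.0, 55.0, 45.0],
}
all_scores = [s for tasks in macro.values() for s in tasks]
print(round(meta_average(all_scores), 3))       # 50.545
print(round(buggy_meta_average(macro), 3))      # 50.536
```

The two values differ slightly because the buggy version double-counts each macro-task through its average, weighting tasks in smaller macro-tasks more heavily.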
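The Mean-Rank metric from the 08/05/2021 update can likewise be sketched in a few lines. This is a simplified illustration under assumed conventions (rank 1 = best, higher score = better, no tie handling), not the official leaderboard implementation.

```python
def mean_ranks(scores_by_task):
    """scores_by_task: {task: {model: score}}, higher score = better.
    Returns {model: average rank across all tasks}."""
    models = list(next(iter(scores_by_task.values())))
    ranks = {m: [] for m in models}
    for task_scores in scores_by_task.values():
        # Rank models on this task, best score first (rank 1 = best).
        ordered = sorted(task_scores, key=task_scores.get, reverse=True)
        for rank, model in enumerate(ordered, start=1):
            ranks[model].append(rank)
    return {m: sum(r) / len(r) for m, r in ranks.items()}

# Made-up example: model A wins task1, model B wins task2.
scores = {
    "task1": {"A": 0.9, "B": 0.7},
    "task2": {"A": 0.6, "B": 0.8},
}
print(mean_ranks(scores))  # both models end up with Mean-Rank 1.5
```

A lower Mean-Rank is better: a model ranked first on every task would score 1.0.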

Challenge Submission Requirements


To be eligible for awards, please fill out this Google form and then forward a copy of your response to value-benchmark@googlegroups.com so we can confirm your registration.


Guidelines


The VALUE benchmark is a collection of video-and-language datasets built on multi-channel videos (video + subtitles) across diverse video domains and genres. It contains 11 datasets covering 3 popular video-and-language tasks: text-based video retrieval, video question answering, and video captioning. Please refer to our paper for more details. The VALUE Challenge aims to benchmark progress toward general video-and-language understanding systems that can generalize across tasks and process multi-channel videos with both visual frames and subtitles as inputs.

Dataset Download

Please refer to the Data Release repo for details. Following the instructions there, you can download the textual annotations, subtitles, and the different visual features for each dataset.


Challenge Phases
  • VALUE: This phase evaluates task-agnostic algorithms on all datasets and tasks in the VALUE benchmark. A submission must include results on all 11 datasets to be considered valid.
  • Retrieval: This phase evaluates algorithms on the 4 text-to-video retrieval tasks in the VALUE benchmark: TVR, How2R, YC2R, and VATEX-EN-R. A submission must include results on all 4 datasets to be considered valid.
  • QA: This phase evaluates algorithms on the 4 video question answering tasks in the VALUE benchmark: TVQA, How2QA, VIOLIN, and VLEP. A submission must include results on all 4 datasets to be considered valid.
  • Captioning: This phase evaluates algorithms on the 3 video captioning tasks in the VALUE benchmark: TVC, YC2C, and VATEX-EN-C. A submission must include results on all 3 datasets to be considered valid.

Please see the VALUE Submission page for further instructions on how to submit.


Challenge Prizes

The top-ranked participants in each challenge phase will be awarded Microsoft Azure credits: $9,000 for the VALUE phase and $4,500 for each of the other phases.



Contact



Have any questions or suggestions? Feel free to reach us at value-benchmark@googlegroups.com!
For faster processing, please add [VALUE Challenge 2021] to the email subject line.