MultiMediate: Multi-modal Group Behaviour Analysis for Artificial Mediation
The evaluation server for the eye contact detection tasks is now available & Updates of rules

08 June 2021
updates

We would like to announce that the evaluation server for the eye contact detection tasks is now available (Link). The evaluation server for the next speaker prediction challenge will follow soon.

In addition, there are some important updates and clarifications of the rules:

(1) Certificates for 1st, 2nd, and 3rd place in each sub-challenge will only be awarded to approaches that are described in accepted papers submitted to the ACM MM Grand Challenge track.

(2) The evaluation servers will remain open until the camera-ready deadline (August 10, 2021). If participants’ evaluation results in the camera-ready version of the paper differ from those in the initial paper submission, the organisers must be notified and the reason for the difference must be explained. An improved result can only be considered for the challenge ranking if it was obtained with the method described in the accepted paper.

(3) Both challenge tasks are formulated as an online prediction scenario at test time, i.e. a prediction for a given test sample may use only information from that single sample. We are aware that the design of the evaluation server also permits offline prediction (i.e. using information from several test samples jointly). The challenge ranking will be based on online approaches only. However, we also invite submissions using an offline approach; in that case, the paper must clearly state that an offline approach is presented, and it will be ranked out of competition with the online approaches.
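To make the distinction concrete, here is a minimal sketch of the two protocols. The model, feature shapes, and the test-set normalisation are purely illustrative assumptions and not part of the challenge codebase: the point is only that the offline variant draws on statistics computed across the whole test set, which is exactly what the online rule forbids.

```python
import numpy as np

def predict_online(model, test_samples):
    """Online protocol: each prediction uses only the single test
    sample it is made for."""
    return [model(s) for s in test_samples]

def predict_offline(model, test_samples):
    """Offline protocol (out of competition): predictions may use
    information from several test samples jointly, e.g. normalising
    features with statistics of the entire test set."""
    x = np.stack(test_samples)
    # This step leaks information across test samples:
    x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    return [model(s) for s in x]

# Hypothetical stand-in model: thresholds the mean feature value.
dummy_model = lambda s: int(s.mean() > 0.5)

samples = [np.random.rand(4) for _ in range(10)]
online_preds = predict_online(dummy_model, samples)
offline_preds = predict_offline(dummy_model, samples)
```

Any submission whose pipeline resembles `predict_offline`, in the sense that test samples influence each other's predictions, should be declared as offline in the paper.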

If you have any questions concerning the challenge or the evaluation procedure, please don’t hesitate to get in contact with us!