JEG

The Joint Effort Group is a new activity of the VQEG that proposes an alternative, collaborative mode of work. Traditionally, VQEG evaluates the performance of models from individual proponents against jointly developed test plans in a competitive manner.

This competitive process requires designing a database (e.g. distorted video contents and associated MOS scores) after the models have been submitted, so that the performance of the different submitted models can be assessed. Once used, the database is useless for further competition, because fair comparison is no longer possible (one could easily submit a metric overtrained on that dataset). This closes off any opportunity to improve the assessed metrics further, beyond mere parameter training. Furthermore, knowledge exchange, and an evolving understanding of what made one model succeed and another fail, is also very limited.
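As a minimal sketch of how such an assessment is commonly carried out (assuming Python with NumPy/SciPy; the data values below are hypothetical placeholders, not real VQEG results), a candidate metric's predictions are correlated against the MOS of the evaluation database:

import numpy as np
from scipy import stats

# Hypothetical MOS values collected for a set of distorted video contents
mos = np.array([4.2, 3.1, 2.5, 4.8, 1.9, 3.6])

# Hypothetical objective scores produced by a candidate quality metric
predicted = np.array([4.0, 3.3, 2.2, 4.6, 2.1, 3.9])

# Linear correlation (prediction accuracy) and rank correlation
# (prediction monotonicity), two criteria commonly used in validation
pearson, _ = stats.pearsonr(predicted, mos)
spearman, _ = stats.spearmanr(predicted, mos)

print(f"Pearson LCC:    {pearson:.3f}")
print(f"Spearman SROCC: {spearman:.3f}")

Because the database is disclosed once this evaluation is performed, a later submission could be tuned to maximize exactly these correlations, which is why the competitive process cannot reuse it.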

The Joint Effort Group (JEG) intends to work jointly on both actions that are mandatory for validating metrics: completing subjective datasets and designing the metrics themselves. This means that any proposal for improving a metric can be inspected, and that it is not necessary to provide a complete metric to enter the process. JEG therefore clearly offers an opportunity to propose quality metrics jointly. In order to maintain fair validation of the proposed tools and to avoid wasting precious subjective quality assessment results, the metrics have to be designed in such a way that any contribution can be assessed, which requires its source code to be made available to the group.

It is also expected that, after validation, the subjective dataset will be extended for certain purposes, in order to better identify the limitations of quality metrics or to broaden their scope of application. In this respect, JEG could produce quality metrics organized into profiles/layers according to their application scope.