Affective Video Retrieval Based on Dempster-Shafer Theory
Abstract
Affective video retrieval systems are designed to efficiently find videos that match the desires and needs of Web users. These systems usually employ fusion strategies to combine information from different modalities with the aim of understanding users' affective states. However, the fusion strategies commonly used for affective video retrieval were neither designed for this task nor grounded in any theoretical foundation. To address this problem, a novel fusion method based on the Dempster-Shafer theory of evidence is proposed. This method is used to combine the audio and visual information contained in video clips. To demonstrate the effectiveness of the proposed method, experiments are performed on the video clips of the DEAP dataset using two popular machine learning algorithms, namely SVM and Naïve Bayes. The results reveal the superiority of the proposed approach over existing fusion strategies with both algorithms.
Keywords
Affective video retrieval; multimodal fusion; Dempster-Shafer theory of evidence