Mesh Visual Quality based on the combination of convolutional neural networks
Title | Mesh Visual Quality based on the combination of convolutional neural networks |
Publication Type | Conference Paper |
Year of Publication | 2019 |
Authors | Abouelaziz I, Chetouani A, El Hassouni M, Latecki LJ, Cherifi H |
Conference Name | 2019 Ninth International Conference on Image Processing Theory, Tools and Applications (IPTA) |
Publisher | EURASIP; IEEE; Yeditepe Univ; IEEE Turkey Sect; Univ Paris Saclay; IEEE France Sect; IEEE Yeditepe, KEKAM |
Publisher Address | 345 E 47TH ST, NEW YORK, NY 10017 USA |
ISBN Number | 978-1-7281-3975-3 |
Keywords | 3D mesh, CNN, Quality prediction, Regression, Saliency |
Abstract | Blind quality assessment is a challenging problem since the evaluation is performed without access to the reference mesh or any information about the distortion. In this work, we propose an objective blind method for the visual quality assessment of 3D meshes. The method estimates the perceived visual quality using only information from the distorted mesh to feed pre-trained deep convolutional neural networks. The input data are prepared by rendering 2D views from the 3D mesh and the corresponding saliency map. The views are split into small patches of fixed size, which are filtered with a saliency threshold so that only the salient patches are kept as input data. Three pre-trained deep convolutional neural networks, VGG, AlexNet and ResNet, are then fine-tuned for feature learning, and each network separately estimates a quality score. Finally, a weighted sum of the three scores gives the final objective quality score. Extensive experiments are conducted on a publicly available database, and comparisons with existing methods demonstrate the effectiveness of the proposed method in terms of correlation with subjective scores. |
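
The following is a minimal Python/PyTorch sketch of the pipeline described in the abstract, not the authors' implementation. The patch size (64), saliency threshold (0.5), equal fusion weights, and the specific torchvision backbones (vgg16, alexnet, resnet18) are illustrative assumptions; the backbones are left with default weights here, whereas the paper fine-tunes ImageNet-pretrained networks.

import torch
import torch.nn as nn
from torchvision import models


def select_salient_patches(view, saliency, patch=64, thresh=0.5):
    # Split a rendered view (C, H, W) into non-overlapping patches and keep only
    # those whose mean saliency exceeds the threshold (assumed selection rule).
    kept = []
    _, H, W = view.shape
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            if saliency[y:y + patch, x:x + patch].mean() > thresh:
                kept.append(view[:, y:y + patch, x:x + patch])
    return torch.stack(kept) if kept else torch.empty(0, view.shape[0], patch, patch)


def quality_regressor(name):
    # Backbone with its last layer replaced by a single-output regression head.
    # The paper fine-tunes ImageNet-pretrained networks; default (untrained)
    # weights are used here so the sketch runs without downloading anything.
    if name == "vgg":
        net = models.vgg16()
        net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, 1)
    elif name == "alexnet":
        net = models.alexnet()
        net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, 1)
    else:
        net = models.resnet18()
        net.fc = nn.Linear(net.fc.in_features, 1)
    return net.eval()


def predict_quality(patches, regressors, weights):
    # Average each network's patch-level predictions, then fuse the three
    # per-network scores with a weighted sum.
    with torch.no_grad():
        scores = [reg(patches).mean() for reg in regressors]
    return sum(w * s for w, s in zip(weights, scores))


if __name__ == "__main__":
    view = torch.rand(3, 256, 256)      # one rendered 2D view of the distorted mesh
    saliency = torch.rand(256, 256)     # matching saliency map in [0, 1]
    patches = select_salient_patches(view, saliency, thresh=0.4)
    regressors = [quality_regressor(n) for n in ("vgg", "alexnet", "resnet")]
    print(float(predict_quality(patches, regressors, weights=(1/3, 1/3, 1/3))))

In the paper the fusion weights and saliency threshold are learned or tuned against subjective scores; the equal weights above are placeholders for illustration only.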