
GADA: Generative Adversarial Data Augmentation for Image Quality Assessment / Bongini, Pietro; Del Chiaro, Riccardo; Bagdanov, Andrew D.; Del Bimbo, Alberto. - PRINT. - 11752:(2019), pp. 214-224. (Paper presented at the conference Image Analysis and Processing -- ICIAP 2019) [10.1007/978-3-030-30645-8_20].

GADA: Generative Adversarial Data Augmentation for Image Quality Assessment

Bongini, Pietro; Del Chiaro, Riccardo; Bagdanov, Andrew D.; Del Bimbo, Alberto
2019

Abstract

We propose a No-reference Image Quality Assessment (NR-IQA) approach based on generative adversarial networks. To address the lack of adequate amounts of labeled training data for NR-IQA, we train an Auxiliary Classifier Generative Adversarial Network (AC-GAN) to generate distorted images with various distortion types and levels of image quality at training time. The trained generative model allows us to augment the training dataset with distorted images for which no ground truth is available. We call our approach Generative Adversarial Data Augmentation (GADA). Experimental results on the LIVE and TID2013 datasets show that our approach – using a modestly sized and very shallow network – performs comparably to state-of-the-art NR-IQA methods that use significantly more complex models. Moreover, unlike other state-of-the-art techniques, our network can process images in real time at 120 images per second.
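As a rough illustration of the idea described in the abstract, the sketch below is not the authors' code: the layer sizes, the conditioning scheme, and names such as ConditionalGenerator are assumptions made for illustration. It shows how an AC-GAN-style generator, conditioned on a distortion type and a scalar quality level, could synthesize extra distorted patches to mix into an NR-IQA training batch.

# Minimal sketch (not the authors' implementation) of AC-GAN-style data
# augmentation for NR-IQA. All sizes and names below are assumptions.
import torch
import torch.nn as nn

NUM_DISTORTIONS = 5   # assumption: e.g. the five LIVE distortion types
LATENT_DIM = 100
IMG_SIZE = 64         # assumption: small patches, not full-resolution images

class ConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_DISTORTIONS, 16)  # distortion-type embedding
        # input = noise vector + type embedding + scalar quality level
        in_dim = LATENT_DIM + 16 + 1
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128 * 8 * 8),
            nn.ReLU(inplace=True),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 16x16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 32x32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 64x64
            nn.Tanh(),
        )

    def forward(self, z, distortion_type, quality_level):
        cond = torch.cat([z, self.embed(distortion_type),
                          quality_level.unsqueeze(1)], dim=1)
        return self.net(cond)

# Augmentation step: sample synthetic distorted patches from the trained
# generator and mix them into a mini-batch alongside the real labeled data.
gen = ConditionalGenerator().eval()
with torch.no_grad():
    z = torch.randn(8, LATENT_DIM)
    d_type = torch.randint(0, NUM_DISTORTIONS, (8,))
    q_level = torch.rand(8)                  # requested quality level in [0, 1]
    fake_images = gen(z, d_type, q_level)    # (8, 3, 64, 64) synthetic patches

As the abstract notes, such generated images carry no ground-truth quality score; they serve only to enlarge the training set beyond the labeled LIVE and TID2013 data.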

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1171592
Citations
  • PMC: ND
  • Scopus: 6
  • Web of Science (ISI): 4