Towards learned color representations for image splicing detection

Hadwiger B, Baracchi D, Piva A, Riess C (2019)


Publication Language: English

Publication Type: Conference contribution, Original article

Publication year: 2019

Conference Proceedings Title: 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing Proceedings

Event location: Brighton, GB

DOI: 10.1109/icassp.2019.8682246

Abstract

The detection of images that are spliced from multiple sources is one important goal of image forensics. Several methods have been proposed for this task, but particularly since the rise of social media, it is an ongoing challenge to devise forensic approaches that are highly robust to common processing operations such as strong JPEG recompression and downsampling.

In this work, we make a first step towards a novel type of cue for image splicing, which is based on the color formation of an image. We make the assumption that the color formation is a joint result of the camera hardware, the software settings, and the depicted scene, and as such can be used to locate spliced patches that originally stem from different images. To this end, we train a two-stage classifier on the full set of colors from a Macbeth color chart, and compare two patches for their color consistency. Our preliminary results on a challenging dataset of downsampled data of identical scenes indicate that the color distribution can be a useful forensic tool that is highly resistant to JPEG compression.
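The paper's method is a learned two-stage classifier; the details are not reproduced here. As a rough illustration of the underlying idea only — deciding whether two patches are color-consistent, i.e. plausibly from the same source image — the following is a minimal hand-crafted sketch. The joint RGB histogram descriptor, the chi-square distance, and the threshold are illustrative assumptions, not the paper's learned representation:

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Normalized joint RGB histogram of a patch (H x W x 3, values in [0, 256))."""
    hist, _ = np.histogramdd(
        patch.reshape(-1, 3).astype(float),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)),
    )
    hist = hist.ravel()
    return hist / hist.sum()

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def patches_consistent(patch_a, patch_b, threshold=0.5):
    """Hypothetical consistency test: a small color-distribution distance
    suggests both patches stem from the same source image."""
    d = chi2_distance(color_histogram(patch_a), color_histogram(patch_b))
    return d < threshold
```

In the paper this comparison is instead driven by a representation learned from the full set of Macbeth color chart patches, so that the decision reflects camera hardware, software settings, and scene jointly rather than a fixed histogram metric.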

How to cite

APA:

Hadwiger, B., Baracchi, D., Piva, A., & Riess, C. (2019). Towards learned color representations for image splicing detection. In 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing Proceedings. Brighton, GB.

MLA:

Hadwiger, Benjamin, et al. "Towards Learned Color Representations for Image Splicing Detection." 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing Proceedings, Brighton, 2019.
