Learning Privacy from Visual Entities
Authors: Alessio Xompero (Queen Mary University of London), Andrea Cavallaro (Idiap Research Institute / École Polytechnique Fédérale de Lausanne)
Volume: 2025
Issue: 3
Pages: 261–281
DOI: https://doi.org/10.56553/popets-2025-0098
Abstract: Subjective interpretation and content diversity make predicting whether an image is private or public a challenging task. Graph neural networks combined with convolutional neural networks (CNNs), which have between 14,000 and 500 million parameters, generate features for visual entities (e.g., scene and object types) and identify the entities that contribute to the decision. In this paper, we show that a simpler combination of transfer learning and a CNN that relates privacy to scene types optimises only 732 parameters while achieving performance comparable to that of graph-based methods. Moreover, end-to-end training of graph-based methods can mask the contribution of individual components to the classification performance. Furthermore, we show that the high-dimensional feature vector extracted with a CNN for each visual entity is unnecessary and only complicates the model. The graph component also has a negligible impact on performance, which is instead driven by fine-tuning the CNN to optimise image features for the privacy nodes.
Keywords: image privacy, image classification, deep learning, transfer learning, graph neural networks
Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution 4.0 license.
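
The 732 trainable parameters mentioned in the abstract are consistent with a single linear layer mapping 365 scene categories (as in Places365) to two privacy classes: 365 × 2 weights + 2 biases = 732. The following is a minimal sketch of such a transfer-learning setup, assuming a frozen ResNet-50 scene classifier; the backbone choice, dataset, and checkpoint path are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Illustrative assumptions: a ResNet-50 backbone pretrained as a scene
# classifier over the 365 Places365 categories. Only the parameter count
# (365 * 2 + 2 = 732) is taken from the abstract.
NUM_SCENES = 365   # Places365 scene types (assumption)
NUM_CLASSES = 2    # private vs. public

scene_cnn = models.resnet50(num_classes=NUM_SCENES)
# scene_cnn.load_state_dict(torch.load("places365_resnet50.pth"))  # hypothetical checkpoint

# Transfer learning: freeze the scene classifier entirely.
for p in scene_cnn.parameters():
    p.requires_grad = False

# The only trainable component: a linear map from scene scores to privacy classes.
privacy_head = nn.Linear(NUM_SCENES, NUM_CLASSES)  # 365 * 2 + 2 = 732 parameters

def predict_privacy(images: torch.Tensor) -> torch.Tensor:
    """Relate privacy to scene types: frozen scene scores -> privacy logits."""
    with torch.no_grad():
        scene_scores = scene_cnn(images).softmax(dim=1)
    return privacy_head(scene_scores)

trainable = sum(p.numel() for p in privacy_head.parameters())
print(f"Trainable parameters: {trainable}")  # prints 732
```

Because only the privacy head is optimised, training reduces to fitting a logistic-regression-style classifier over scene scores, which is what makes the parameter count so small relative to graph-based methods.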
