Data Isotopes for Data Provenance in DNNs

Authors: Emily Wenger (The University of Chicago), Xiuyu Li (UC Berkeley), Ben Y. Zhao (The University of Chicago), Vitaly Shmatikov (Cornell Tech)

Volume: 2024
Issue: 1
Pages: 413–429
DOI: https://doi.org/10.56553/popets-2024-0024


Abstract: Today, creators of data-hungry deep neural networks (DNNs) scour the Internet for training fodder, leaving users with little control over or knowledge of when their data, and in particular their images, are used to train models. To empower users to counteract unwanted use of their images, we design, implement, and evaluate a practical system that enables users to detect if their data was used to train a DNN model for image classification. We show how users can create special images we call isotopes, which introduce "spurious features" into DNNs during training. With only query access to a model, and with no knowledge of the model-training process or control over the data labels, a user can apply statistical hypothesis testing to detect if the model learned these spurious features by training on the user's images. Isotopes can be viewed as an application of a particular type of data poisoning. In contrast to backdoors and other poisoning attacks, our purpose is not to cause misclassification but rather to create tell-tale changes in the confidence scores output by the model that reveal the presence of isotopes in the training data. Isotopes thus turn DNNs' vulnerability to memorization and spurious correlations into a tool for data provenance. Our results confirm efficacy in multiple image classification settings, detecting and distinguishing between hundreds of isotopes with high accuracy. We further show that our system works on public ML-as-a-service platforms and on larger-scale models such as those trained on ImageNet, can use physical objects in images instead of digital marks, and remains robust against several adaptive countermeasures.
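To make the detection idea concrete, the following is a minimal sketch of the query-side hypothesis test described in the abstract: compare the model's confidence in a chosen label on probe images with and without the isotope's spurious feature, and flag the model if the marked images receive significantly higher confidence. The helper names (query_model, apply_mark), the paired t-test, and the significance threshold are illustrative assumptions, not the authors' exact procedure from the paper.

```python
from scipy import stats


def detect_isotope(query_model, apply_mark, probe_images, label, alpha=0.05):
    """Sketch of isotope detection via black-box queries.

    query_model(images) -> (N, num_classes) array of softmax confidences (assumed helper)
    apply_mark(images)  -> the same images with the isotope's spurious feature added (assumed helper)
    label               -> class whose confidence the spurious feature should inflate
    """
    plain = query_model(probe_images)                # confidences on unmodified probes
    marked = query_model(apply_mark(probe_images))   # confidences on probes carrying the mark

    plain_conf = plain[:, label]
    marked_conf = marked[:, label]

    # One-sided paired test: marked probes should receive higher confidence for `label`
    # only if the model memorized the spurious feature during training.
    _, p_value = stats.ttest_rel(marked_conf, plain_conf, alternative="greater")
    return p_value < alpha, p_value
```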

Keywords: datasets, neural networks, provenance tracking

Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution 4.0 license.