PIGEON: A High Throughput Framework for Private Inference of Neural Networks using Secure Multiparty Computation

Authors: Christopher Harth-Kitzerow (Technical University of Munich, BMW Group), Yongqin Wang (University of Southern California), Rachit Rajat (University of Southern California), Georg Carle (Technical University of Munich), Murali Annavaram (University of Southern California)

Volume: 2025
Issue: 3
Pages: 88–105
DOI: https://doi.org/10.56553/popets-2025-0090


Abstract: Privacy-Preserving Machine Learning (PPML) is one of the most relevant use cases for Secure Multiparty Computation (MPC). While private training of large neural networks such as VGG-16 or ResNet-50 on state-of-the-art datasets such as ImageNet remains out of reach given the performance overhead of MPC, GPU-based MPC frameworks are starting to achieve practical runtimes for private inference. However, we show that, unlike in plaintext machine learning, using GPU acceleration for both linear layers (e.g., convolutions) and non-linear layers (e.g., ReLU) is actually counterproductive in PPML. While GPUs effectively accelerate linear layers compared to CPU-based MPC implementations, the MPC circuits required to evaluate non-linear layers introduce memory overhead and frequent data movement between the GPU and the CPU to handle network communication. This results in slow ReLU performance and high GPU memory requirements in state-of-the-art GPU-based PPML frameworks, preventing them from scaling to an inference throughput of multiple images per second or to batches of more than eight ImageNet images. To overcome these limitations, we propose PIGEON, an open-source framework for Private Inference of Neural Networks. PIGEON employs a novel ABG programming model that offloads linear layers to the GPU and, for non-linear layers, switches between Arithmetic Vectorization and Bitslicing on the CPU depending on the MPC-specific computation required. Compared to the state-of-the-art PPML framework Piranha, PIGEON improves ReLU throughput by two orders of magnitude, reduces peak GPU memory utilization by one order of magnitude, and scales better with large batch sizes. This translates to throughput improvements of one to two orders of magnitude for large ImageNet batch sizes (e.g., 192) and saturation of more than 70% of a 25 Gbit/s network.
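
To illustrate the intuition behind the two CPU representations the ABG model switches between, the C++ sketch below is a minimal illustration; the names (add_shares, bitslice64, and_gate) are hypothetical and do not reflect PIGEON's actual API. Arithmetic vectorization keeps one ring element per word, so local share additions map directly to SIMD lanes, whereas bitslicing transposes 64 values so that a single bitwise instruction evaluates one boolean gate across 64 circuit instances at once.

    // Illustrative sketch only: function names are hypothetical,
    // not PIGEON's API.
    #include <array>
    #include <cstddef>
    #include <cstdint>

    // Arithmetic vectorization: one 64-bit ring element per word.
    // Additions over Z_{2^64} are local (no communication) and
    // auto-vectorize into SIMD lanes.
    void add_shares(const uint64_t* a, const uint64_t* b,
                    uint64_t* out, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            out[i] = a[i] + b[i];  // local share addition
    }

    // Bitslicing: transpose 64 values of 64 bits each so that word j
    // holds bit j of all 64 values. One bitwise op then acts as one
    // boolean gate applied to 64 circuit instances in parallel.
    std::array<uint64_t, 64> bitslice64(const std::array<uint64_t, 64>& vals) {
        std::array<uint64_t, 64> sliced{};
        for (int v = 0; v < 64; ++v)
            for (int b = 0; b < 64; ++b)
                sliced[b] |= ((vals[v] >> b) & 1ULL) << v;
        return sliced;
    }

    // One AND gate across 64 instances. In an actual MPC protocol this
    // would consume correlated randomness and a communication round;
    // here we only show the data-layout benefit.
    inline uint64_t and_gate(uint64_t x, uint64_t y) { return x & y; }

Keeping both layouts on the CPU means the boolean circuits for non-linear layers never need the GPU round trips described above; only the large matrix products of linear layers are shipped to the GPU.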

Keywords: PPML, MPC, Secure Inference

Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution 4.0 license.