Shielding Latent Face Representations From Privacy Attacks

Arjun Ramesh Kaushik, Bharat Chandra Yalavarthi, Arun Ross, Vishnu Boddeti and Nalini Ratha
IEEE International Conference on Automatic Face and Gesture Recognition, 2025.

Abstract

In today’s data-driven analytics landscape, deep learning has become integral to diverse applications, with latent representations (embeddings) playing a central role in downstream tasks. In the face analytics domain, embeddings are commonly extracted to perform recognition tasks such as identity verification and face identification. However, these deep embeddings can inadvertently expose sensitive soft-biometric attributes, such as age, gender, and ethnicity. Such leakage of extraneous information poses serious risks to individual privacy, civil liberties, and human rights. To address these pressing concerns, we introduce a multi-layer protection framework that applies, in sequence, Fully Homomorphic Encryption (FHE), which encrypts the embedding while still permitting computation on the ciphertext, and an irreversible feature-manifold hash. While encryption is known to provide privacy guarantees through confidentiality, encrypted embeddings are not directly amenable to downstream analytics without FHE-based computation. To reduce the computational overhead of encrypted processing, we employ embedding compression. Our proposed method shields latent representations of sensitive data from leaking soft-biometric attributes while retaining essential functional capabilities, such as identity verification. Extensive experiments conducted on several datasets using different face encoders demonstrate that our approach outperforms current state-of-the-art privacy protection methods. Leveraging FHE in a hybrid protection scheme achieves irreversibility and unlinkability without compromising accuracy. This hybrid scheme stands as a robust solution, combining privacy preservation with functional reliability and advancing the secure use of embeddings in sensitive applications.
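To make the pipeline concrete, the sketch below illustrates the three stages named in the abstract on toy data: embedding compression, matching under FHE, and an irreversible hash. It is a minimal sketch, not the authors' implementation: it assumes the TenSEAL library (CKKS scheme) for FHE, and the random-projection compression, the CKKS parameters, and the sign-bit hash (a stand-in for the feature-manifold hash) are all illustrative choices.

    import numpy as np
    import tenseal as ts  # pip install tenseal

    # Illustrative sizes: raw embedding, compressed embedding, hash bits.
    DIM, CDIM, HBITS = 512, 64, 128
    rng = np.random.default_rng(42)

    # Stage 1 -- embedding compression (here: a random projection to CDIM
    # dimensions; the paper's actual compression method may differ).
    P = rng.standard_normal((CDIM, DIM)) / np.sqrt(CDIM)
    def compress(e):
        v = P @ e
        return v / np.linalg.norm(v)

    # Stage 2 -- FHE: a CKKS context so inner products (cosine similarity
    # of unit vectors) can be computed directly on ciphertexts.
    ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                     poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
    ctx.global_scale = 2 ** 40
    ctx.generate_galois_keys()  # rotations needed for encrypted dot products

    # Stage 3 -- an irreversible hash (here: random-projection sign bits,
    # a hypothetical stand-in for the paper's feature-manifold hash).
    H = rng.standard_normal((HBITS, CDIM))
    def manifold_hash(v):
        return (H @ v > 0).astype(np.uint8)

    # Toy genuine pair: the probe is the gallery face plus small noise.
    original = rng.standard_normal(DIM)
    gallery = compress(original)
    probe = compress(original + 0.1 * rng.standard_normal(DIM))

    # Identity verification on the encrypted, compressed probe embedding.
    enc_probe = ts.ckks_vector(ctx, probe.tolist())
    score = enc_probe.dot(gallery.tolist()).decrypt()[0]

    # Comparison in the hashed (irreversible) domain via Hamming distance.
    ham = np.mean(manifold_hash(probe) != manifold_hash(gallery))
    print(f"encrypted cosine score: {score:.3f}, hash Hamming distance: {ham:.3f}")

The sketch only shows that each operation is practical in isolation; in the paper's scheme the encrypted-domain matching and the irreversible hash are composed so that soft-biometric attributes cannot be recovered while verification accuracy is preserved.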