Abstract
Adversarial representation learning aims to learn data representations for a target task while simultaneously removing unwanted sensitive information. Existing methods learn model parameters iteratively through stochastic gradient descent-ascent, which is often unstable in practice. To overcome this challenge, we adopt closed-form solvers for the adversary and target predictors by modeling them as kernel ridge regressors, resulting in a more stable one-shot optimization, dubbed OptNet-ARL. Numerical experiments on the CelebA dataset demonstrate the utility of our approach for mitigating leakage of private information from learned representations.
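The closed-form solver the abstract refers to is the standard kernel ridge regression solution, which replaces the inner gradient-ascent/descent loop for the adversary and target predictors. As a minimal sketch (this is illustrative NumPy code, not the authors' implementation; the RBF kernel choice, `gamma`, and `lam` are assumptions), the dual coefficients solve a single linear system, so each predictor is obtained in one shot rather than by iterative updates:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-3, gamma=1.0):
    # Closed-form kernel ridge regression:
    #   alpha = (K + lam * I)^{-1} y
    # One linear solve replaces the inner optimization loop.
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_test, gamma=1.0):
    # Prediction is a kernel-weighted combination of training targets.
    return rbf_kernel(X_test, X_train, gamma) @ alpha
```

Because both the target predictor and the adversary admit this closed form given the current representation, the encoder can be trained against their exact best responses, avoiding the instability of alternating stochastic gradient descent-ascent.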