Paper

Code

Face completion is a specific and more challenging application of image completion, centered on faces. For inpainted faces to look realistic, the structural integrity of the face must be maintained. Face inpainting has primarily been accomplished with end-to-end trained models. These models, usually designed as autoencoders, take an artificially masked face image as input and output the complete image, and are typically trained with a combination of photometric, adversarial, and other face-specific losses. As such, these approaches must implicitly model the geometric structure of faces. We experimented with several such approaches, including GFC, DeepFillv2, and PIC (cite), and observed that, in challenging cases, their outputs fail to produce a geometrically realistic face image.

Existing approaches typically employ an autoencoder to inpaint masked images

Face is a 3D Object

A face, however, is a 3D object whose appearance in an image is a 2D projection. This appearance depends on the face's 3D shape, pose, albedo, and illumination, as well as the camera parameters, among other factors.

Face image can be disentangled into its 3D shape, 3D pose, albedo, and illumination components
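To make the disentanglement concrete, here is a minimal sketch of the Lambertian image-formation model commonly paired with 3DMMs, where a pixel's color is the product of albedo and the irradiance from second-order spherical-harmonics (SH) illumination. This is an illustrative simplification, not the paper's code, and the SH basis below omits the usual normalization constants.

```python
import numpy as np

def render_pixel_shading(albedo_rgb, normal, sh_coeffs):
    """Lambertian shading of one surface point under 2nd-order
    spherical-harmonics illumination (simplified sketch: unnormalized
    basis, a single scalar irradiance shared by all color channels)."""
    x, y, z = normal
    # 9 SH basis functions evaluated at the surface normal
    basis = np.array([
        1.0,
        y, z, x,
        x * y, y * z,
        3.0 * z ** 2 - 1.0,
        x * z,
        x ** 2 - y ** 2,
    ])
    irradiance = float(sh_coeffs @ basis)  # light arriving at this point
    return albedo_rgb * irradiance         # appearance = albedo * illumination
```

Under purely ambient light (only the constant SH coefficient is nonzero), the rendered color reduces to the albedo itself, which is why inpainting the albedo, rather than raw pixels, preserves the lighting of the visible region.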

We observe that a face inpainting model that explicitly disentangles the face image into these factors, and then selectively fills in only the missing albedo, would better retain the geometric realism of the inpainted faces. At the same time, 3DMM-based approaches, which can disentangle a single face image into exactly these components, have seen tremendous focus and improvement in recent years. Motivated by these observations, we present 3DFaceFill, an analysis-by-synthesis approach to face completion. Below we show the architecture of 3DFaceFill.

Overview of the 3DFaceFill architecture
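The analysis-by-synthesis pipeline can be summarized as a four-step loop: disentangle, complete the albedo, re-render, and composite. The sketch below illustrates this flow with hypothetical interfaces (`encoder`, `albedo_completor`, `renderer` are placeholders, not 3DFaceFill's actual modules).

```python
import numpy as np

def inpaint_face_3d(image, mask, encoder, albedo_completor, renderer):
    """Skeleton of an analysis-by-synthesis face-completion pipeline
    (hypothetical interfaces illustrating the idea, not the paper's code).

    image : HxWx3 masked face image
    mask  : HxW boolean array, True where pixels are missing
    """
    # 1. Analysis: disentangle the visible pixels into 3D factors
    shape, pose, albedo_uv, illum = encoder(image, mask)

    # 2. Completion: only the UV-space albedo is inpainted;
    #    shape, pose and illumination are kept as estimated
    albedo_uv_full = albedo_completor(albedo_uv)

    # 3. Synthesis: re-render the face from the completed factors
    rendered = renderer(shape, pose, albedo_uv_full, illum)

    # 4. Composite: keep the original pixels wherever they were visible
    return np.where(mask[..., None], rendered, image)
```

Because only the albedo is filled in, the completed region inherits the geometry and lighting estimated from the visible pixels, which is what keeps the result geometrically consistent.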

Furthermore, facial albedo is largely symmetric, especially when represented in UV space (as shown in the image below). We leverage this symmetry in our proposed approach as an attention mechanism that copies features from the visible parts to their occluded counterparts in the opposite half, as shown in the architecture figure.

Facial albedo is largely symmetric. We leverage this in our proposed 3DFaceFill approach for inpainting masked parts using features from the visible counterparts
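The symmetry prior above can be sketched as follows: in UV space the two face halves mirror each other horizontally, so an occluded texel can borrow its value from the visible mirrored texel. This toy version does a hard copy for clarity; 3DFaceFill instead uses a learned soft attention over features.

```python
import numpy as np

def symmetric_fill(albedo_uv, visibility):
    """Toy illustration of the symmetry prior: occluded UV texels are
    filled from their horizontally mirrored, visible counterparts.

    albedo_uv  : HxWx3 UV albedo map
    visibility : HxW boolean array, True where the texel was observed
    """
    mirrored = albedo_uv[:, ::-1]        # flip left/right in UV space
    mirrored_vis = visibility[:, ::-1]
    # fill only texels occluded here but visible on the mirrored side
    fill = (~visibility) & mirrored_vis
    out = albedo_uv.copy()
    out[fill] = mirrored[fill]
    return out
```

Texels occluded on both halves are left for the inpainting network to hallucinate; the symmetry prior only propagates evidence that actually exists.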

Results

Below, we show some qualitative results of face completion by various baselines and by 3DFaceFill. One can observe the deformations and artifacts the baselines introduce into the completed faces in these examples, which make the outputs look unrealistic. In comparison, the results from 3DFaceFill are geometrically and photometrically much more realistic.

Qualitative comparison of face completion by the baselines and by 3DFaceFill

We also compared 3DFaceFill against two of the most competitive baselines, DeepFillv2 and Pluralistic Image Completion (PIC), on real occlusions, as shown in the figure below. Notice that, in the first row, the baselines create artifacts whereas 3DFaceFill does not. In the second row, the baselines change the jawline of the face, whereas 3DFaceFill retains the face geometry in the deoccluded image.

Comparison of 3DFaceFill with DeepFillv2 and PIC on real occlusions

To further demonstrate the advantage of explicit 3D reasoning for face inpainting, we performed a cross-dataset evaluation on pose- and illumination-varying images from the MultiPIE dataset, on which none of the methods were trained. We show the results in the figure below. One can see that inpainting by the baselines degrades as the pose and illumination become more challenging, whereas 3DFaceFill is much less affected by such variations.

Cross-dataset evaluation on MultiPIE images with varying pose and illumination

Finally, 3DFaceFill also quantitatively outperforms the baselines across the whole range of Mask/Face area ratios (# pixels under the mask / # pixels in the face region). Furthermore, the gap between 3DFaceFill and the baselines widens as the mask ratio increases, indicating that the improvement from 3DFaceFill matters most as the occlusion becomes more challenging.

As measured in terms of PSNR and LPIPS, 3DFaceFill performs better face completion than the baselines across mask sizes.
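For reference, here is a minimal sketch of the two simpler quantities in this evaluation: the Mask/Face area ratio and PSNR. LPIPS requires a pretrained perceptual network and is omitted; function names here are illustrative, not from the paper's code.

```python
import numpy as np

def mask_face_ratio(mask, face_region):
    """Mask/Face area ratio: fraction of face pixels covered by the mask.
    Both inputs are HxW boolean arrays."""
    return np.logical_and(mask, face_region).sum() / face_region.sum()

def psnr(reference, completed, peak=1.0):
    """Peak signal-to-noise ratio (dB) between the ground-truth face
    and its completion; higher is better."""
    mse = np.mean((reference - completed) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

Binning test images by `mask_face_ratio` and averaging PSNR per bin produces curves like the plot above, where the per-bin gap between methods shows how performance scales with occlusion size.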

Conclusion

In this work, we proposed 3DFaceFill, a 3D-aware face completion method. Our solution was driven by the hypothesis that performing face completion with explicit 3D disentanglement allows us to effectively leverage the power of 3D correspondence and leads to face completions that are geometrically and photometrically more accurate. Experimental evaluations, both quantitative and qualitative, across multiple datasets show the advantages of 3DFaceFill over the baselines, especially under large variations in pose, illumination, shape, and appearance. These results validate our primary hypothesis.

A repository of resources for Fairness in Machine Learning.

Core Concepts in Fairness

Sampling for Fairness

Encryption for Fairness

Causal Inference in Fairness

Other Code Repositories

References


  1. Hashimoto, T. B., Srivastava, M., Namkoong, H., & Liang, P. (2018). Fairness without demographics in repeated loss minimization. International Conference on Machine Learning (ICML).
  2. Liu, L. T., Dean, S., Rolf, E., Simchowitz, M., & Hardt, M. (2018). Delayed impact of fair machine learning. ArXiv:1803.04383.
  3. Kallus, N., & Zhou, A. (2018). Residual unfairness in fair machine learning from prejudiced data. ArXiv:1806.02887.
  4. Agarwal, A., Beygelzimer, A., Dudík, M., Langford, J., & Wallach, H. (2018). A reductions approach to fair classification. ArXiv:1803.02453.
  5. Komiyama, J., Takeda, A., Honda, J., & Shimao, H. (2018). Nonconvex optimization for regression with fairness constraints. International Conference on Machine Learning (ICML).
  6. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2019). A Survey on Bias and Fairness in Machine Learning. ArXiv:1908.09635.
  7. Corbett-Davies, S., & Goel, S. (2018). The measure and mismeasure of fairness: A critical review of fair machine learning. ArXiv:1808.00023.
  8. Celis, L. E., Keswani, V., Straszak, D., Deshpande, A., Kathuria, T., & Vishnoi, N. K. (2018). Fair and diverse DPP-based data summarization. ArXiv:1802.04023.
  9. Amini, A., Soleimany, A., Schwarting, W., Bhatia, S., & Rus, D. (2019). Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure.
  10. Kilbertus, N., Gascón, A., Kusner, M. J., Veale, M., Gummadi, K. P., & Weller, A. (2018). Blind justice: Fairness with encrypted sensitive attributes. ArXiv:1806.03281.
  11. Kusner, M. J., Loftus, J., Russell, C., & Silva, R. (2017). Counterfactual fairness. Advances in Neural Information Processing Systems (NeurIPS).
  12. Kilbertus, N., Carulla, M. R., Parascandolo, G., Hardt, M., Janzing, D., & Schölkopf, B. (2017). Avoiding discrimination through causal reasoning. Advances in Neural Information Processing Systems, 656–666.
  13. Zhang, J., & Bareinboim, E. (2018). Fairness in decision-making—the causal explanation formula. AAAI Conference on Artificial Intelligence.
  14. Loftus, J. R., Russell, C., Kusner, M. J., & Silva, R. (2018). Causal reasoning for algorithmic fairness. ArXiv:1805.05859.
  15. Kusner, M. J., Russell, C., Loftus, J. R., & Silva, R. (2018). Causal Interventions for Fairness. ArXiv:1806.02380.
  16. Madras, D., Creager, E., Pitassi, T., & Zemel, R. (2019). Fairness through causal awareness: Learning causal latent-variable models for biased data. Conference on Fairness, Accountability, and Transparency.

A repository of resources for privacy constraints in computer vision.

Papers

References


  1. Pittaluga, F., & Koppal, S. J. (2015). Privacy preserving optics for miniature vision sensors. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  2. Pittaluga, F., & Koppal, S. J. (2016). Pre-capture privacy for small vision sensors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(11), 2215–2226.
  3. Pittaluga, F., Zivkovic, A., & Koppal, S. J. (2016). Sensor-level privacy for thermal cameras. IEEE International Conference on Computational Photography (ICCP).
  4. Yonetani, R., Boddeti, V., Kitani, K. M., & Sato, Y. (2017). Privacy-preserving visual learning using doubly permuted homomorphic encryption. IEEE International Conference on Computer Vision (ICCV).
  5. Boddeti, V. N. (2018). Secure face matching using fully homomorphic encryption. IEEE International Conference on Biometrics Theory, Applications and Systems (BTAS).
  6. Pittaluga, F., Koppal, S. J., Kang, S. B., & Sinha, S. N. (2019). Revealing scenes by inverting structure from motion reconstructions. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  7. Pittaluga, F., Koppal, S., & Chakrabarti, A. (2019). Learning privacy preserving encodings through adversarial training. IEEE Winter Conference on Applications of Computer Vision (WACV).
  8. Wang, Z. W., Vineet, V., Pittaluga, F., Sinha, S. N., Cossairt, O., & Bing Kang, S. (2019). Privacy-Preserving Action Recognition using Coded Aperture Videos. IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
  9. Speciale, P., Schonberger, J. L., Kang, S. B., Sinha, S. N., & Pollefeys, M. (2019). Privacy preserving image-based localization. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  10. Speciale, P., Schönberger, J. L., Sinha, S. N., & Pollefeys, M. (2019). Privacy Preserving Image Queries for Camera Localization. IEEE International Conference on Computer Vision (ICCV).

A repository of resources for Representation Learning as applicable to invariance, fairness or information leakage.

Papers

Other Code Repositories

References


  1. Zemel, R., Wu, Y., Swersky, K., Pitassi, T., & Dwork, C. (2013). Learning fair representations. International Conference on Machine Learning (ICML).
  2. Edwards, H., & Storkey, A. (2015). Censoring representations with an adversary. ArXiv:1511.05897.
  3. Louizos, C., Swersky, K., Li, Y., Welling, M., & Zemel, R. (2015). The variational fair autoencoder. ArXiv:1511.00830.
  4. Xie, Q., Dai, Z., Du, Y., Hovy, E., & Neubig, G. (2017). Controllable invariance through adversarial feature learning. Advances in Neural Information Processing Systems (NeurIPS).
  5. Pérez-Suay, A., Laparra, V., Mateo-García, G., Muñoz-Marí, J., Gómez-Chova, L., & Camps-Valls, G. (2017). Fair kernel learning. European Conference on Machine Learning and Knowledge Discovery in Databases (ECMLPKDD).
  6. Madras, D., Creager, E., Pitassi, T., & Zemel, R. (2018). Learning adversarially fair and transferable representations. ArXiv:1802.06309.
  7. Moyer, D., Gao, S., Brekelmans, R., Galstyan, A., & Ver Steeg, G. (2018). Invariant Representations without Adversarial Training. Advances in Neural Information Processing Systems (NeurIPS).
  8. Madras, D., Pitassi, T., & Zemel, R. (2018). Predict responsibly: improving fairness and accuracy by learning to defer. Advances in Neural Information Processing Systems (NeurIPS).
  9. Elazar, Y., & Goldberg, Y. (2018). Adversarial removal of demographic attributes from text data. Empirical Methods in Natural Language Processing (EMNLP).
  10. Zhang, B. H., Lemoine, B., & Mitchell, M. (2018). Mitigating unwanted biases with adversarial learning. Conference on Artificial Intelligence, Ethics and Society (AIES).
  11. Roy, P., & Boddeti, V. (2019). Mitigating Information Leakage in Image Representations: A Maximum Entropy Approach. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  12. Song, J., Kalluri, P., Grover, A., Zhao, S., & Ermon, S. (2019). Learning Controllable Fair Representations. International Conference on Artificial Intelligence and Statistics (AISTATS).
  13. Sadeghi, B., Yu, R., & Boddeti, V. (2019). On the Global Optima of Kernelized Adversarial Representation Learning. International Conference on Computer Vision (ICCV).
  14. Bertran, M., Martinez, N., Papadaki, A., Qiu, Q., Rodrigues, M., Reeves, G., & Sapiro, G. (2019). Adversarially Learned Representations for Information Obfuscation and Inference. International Conference on Machine Learning (ICML).
  15. Tan, Z., Yeom, S., Fredrikson, M., & Talwalkar, A. (2019). Learning Fair Representations for Kernel Models. ArXiv:1906.11813.
  16. Creager, E., Madras, D., Jacobsen, J.-H., Weis, M. A., Swersky, K., Pitassi, T., & Zemel, R. (2019). Flexibly Fair Representation Learning by Disentanglement. International Conference on Machine Learning (ICML).
  17. Kim, B., Kim, H., Kim, K., Kim, S., & Kim, J. (2019). Learning Not to Learn: Training Deep Neural Networks with Biased Data. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  18. Sarafianos, N., Xu, X., & Kakadiaris, I. A. (2019). Adversarial Representation Learning for Text-to-Image Matching. International Conference on Computer Vision (ICCV).
  19. Moyer, D., Steeg, G. V., Tax, C. M. W., & Thompson, P. M. (2019). Scanner Invariant Representations for Diffusion MRI Harmonization. ArXiv:1904.05375.
  20. Jaiswal, A., Moyer, D., Steeg, G. V., AbdAlmageed, W., & Natarajan, P. (2020). Invariant Representations through Adversarial Forgetting. AAAI Conference on Artificial Intelligence (AAAI).