WACV 2025
Oral Presentation

Face Anonymization Made Simple

1University of Trento 2University of Oulu 3National University of Singapore
Teaser figure, two example faces. Face 1: Original, Ours, FALCO, DP2. Face 2: Original, Ours, RiDDLE, DP2.
Our face anonymization technique preserves the original facial expressions, head positioning, eye direction, and background elements, effectively masking identity while retaining other crucial details. The anonymized face blends seamlessly into its original photograph, making it ideal for diverse real-world applications.

Abstract

Current face anonymization techniques often depend on identity loss calculated by face recognition models, which can be inaccurate and unreliable. Additionally, many methods require supplementary data such as facial landmarks and masks to guide the synthesis process. In contrast, our approach uses diffusion models with only a reconstruction loss, eliminating the need for facial landmarks or masks while still producing images with intricate, fine-grained details.

We validate our results on two public benchmarks through both quantitative and qualitative evaluations. Our model achieves state-of-the-art performance in three key areas: identity anonymization, facial attribute preservation, and image quality. Beyond its primary function of anonymization, our model can also perform face swapping by taking an additional facial image as input, demonstrating its versatility and potential for diverse applications.

Method

The architecture treats anonymization as a variant of face swapping. Two ReferenceNet branches inject source and driving cues into a UNet. For anonymization, the same image is fed to both branches, while identity cues in the source stream are suppressed to yield an unknown identity.
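The two-branch conditioning above can be illustrated with a toy sketch. This is not the paper's implementation: `extract_features`, `build_condition`, and the identity/attribute split are hypothetical stand-ins for the ReferenceNet streams, shown only to make the anonymization-as-face-swapping idea concrete.

```python
# Toy sketch of the two-branch conditioning (hypothetical names, not the
# paper's code): a "source" stream carries identity, a "driving" stream
# carries attributes (pose, expression), and anonymization feeds the same
# image to both branches while zeroing the source identity cues.
import numpy as np

def extract_features(image: np.ndarray) -> dict:
    """Stand-in for a ReferenceNet branch: split an image embedding into
    an 'identity' part and an 'attribute' part."""
    flat = image.reshape(-1).astype(float)
    half = flat.size // 2
    return {"identity": flat[:half], "attributes": flat[half:]}

def build_condition(source_img, driving_img, anonymize=False):
    """Assemble the conditioning the UNet would receive."""
    src = extract_features(source_img)
    drv = extract_features(driving_img)
    if anonymize:
        # Suppress identity in the source stream -> unknown identity.
        src["identity"] = np.zeros_like(src["identity"])
    return np.concatenate([src["identity"], drv["attributes"]])

img = np.arange(16.0).reshape(4, 4)
swap_cond = build_condition(img, img, anonymize=False)  # reconstruction/swap
anon_cond = build_condition(img, img, anonymize=True)   # identity suppressed
```

Note how anonymization and face swapping share one code path: only the source image and the identity-suppression flag change.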

Network architecture diagram showing UNet, source ReferenceNet and driving ReferenceNet

Controlling the Anonymization Degree

Increasing the anonymization degree d pushes the generated identity further from the original.

Anonymization degree examples for two subjects. Columns: Original, d = 0.3, d = 0.6, d = 0.9, d = 1.2.
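One way to picture the effect of d is as a controlled shift of the identity embedding, with identity similarity to the original falling as d grows. The sketch below is purely illustrative (it is not the paper's formula): it moves a toy embedding along an orthogonal direction scaled by d, so cosine similarity to the original decays as 1/sqrt(1 + d^2).

```python
# Illustrative sketch, NOT the paper's method: push an identity embedding
# away from the original by degree d and watch similarity drop.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def shift_identity(id_embed: np.ndarray, d: float, rng) -> np.ndarray:
    """Shift the embedding along a random direction orthogonal to the
    original; the anonymization degree d sets how far we move."""
    direction = rng.standard_normal(id_embed.shape)
    direction -= (direction @ id_embed) / (id_embed @ id_embed) * id_embed
    direction /= np.linalg.norm(direction)
    return id_embed + d * np.linalg.norm(id_embed) * direction

rng = np.random.default_rng(0)
e = rng.standard_normal(128)
sims = [cosine(e, shift_identity(e, d, np.random.default_rng(1)))
        for d in (0.3, 0.6, 0.9, 1.2)]
# With an orthogonal shift, similarity equals 1/sqrt(1 + d^2) exactly.
```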

Diverse Anonymization Results

Varying the random seed also produces diverse anonymization results from the same input image, each reproducible from its seed.

Seed-varied anonymization examples for two subjects. Columns: Original, Seed 32, Seed 56, Seed 68, Seed 81.
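The mechanism behind seed-controlled diversity is standard for diffusion samplers: the initial noise latent is drawn from a seeded RNG, so different seeds give different outputs while the same seed reproduces the same one. A minimal sketch (function name hypothetical):

```python
# Minimal sketch of seed-controlled diversity: the sampler's starting
# latent comes from a seeded RNG, so each seed yields a different but
# reproducible anonymization of the same input.
import numpy as np

def sample_initial_noise(seed: int, shape=(4, 4)) -> np.ndarray:
    """Stand-in for the diffusion sampler's initial latent."""
    return np.random.default_rng(seed).standard_normal(shape)

latents = {s: sample_initial_noise(s) for s in (32, 56, 68, 81)}
```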

Qualitative Comparison

We compare against DP2 (DeepPrivacy2), FALCO, and RiDDLE on the CelebA-HQ and FFHQ benchmarks.

CelebA-HQ

CelebA-HQ comparison, four rows. Columns: Original, Ours (d = 1.2), Ours (d = 1.4), FALCO, DP2.

FFHQ

FFHQ comparison, four rows. Columns: Original, Ours (d = 1.2), Ours (d = 1.4), RiDDLE, DP2.

Face Swapping

Beyond anonymization, our model can perform face swapping tasks by incorporating an additional facial image as input (the source identity).

Face swapping comparisons on CelebA-HQ and FFHQ. Columns: Source, Driving, Ours, InSwapper, BlendFace, DiffSwap.
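In this framing, swapping is the same pipeline as anonymization with two changes: the source branch receives a different person's image, and its identity cues are kept rather than suppressed. A toy sketch under those assumptions (all names hypothetical):

```python
# Sketch only: face swapping keeps the source identity and combines it
# with the driving face's attributes; anonymization zeroes the identity.
import numpy as np

def combine(source_id: np.ndarray, driving_attrs: np.ndarray,
            anonymize: bool) -> np.ndarray:
    id_part = np.zeros_like(source_id) if anonymize else source_id
    return np.concatenate([id_part, driving_attrs])

src_id = np.ones(8)        # identity embedding of the source face
drv_attr = np.full(8, 2.0) # pose/expression features of the driving face
swap = combine(src_id, drv_attr, anonymize=False)
anon = combine(src_id, drv_attr, anonymize=True)
```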

BibTeX

@InProceedings{Kung_2025_WACV,
    author    = {Kung, Han-Wei and Varanka, Tuomas and Saha, Sanjay and Sim, Terence and Sebe, Nicu},
    title     = {Face Anonymization Made Simple},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {1040-1050}
}