The next PGR seminar is taking place this Friday at 2PM in JC 1.33a.
Below are the title and abstract for Zhongliang's talk. Please do come along if you are able.
Title: Adversarial Attack as a Defense: Preventing Unauthorized AI Generation in Computer Vision
Abstract: An adversarial attack generates adversarial examples by adding imperceptible perturbations to clean images. These perturbations, though invisible to the human eye, can cause neural networks to produce incorrect outputs, making adversarial examples a significant security concern in deep learning. While previous research has primarily focused on designing powerful attacks to expose neural network vulnerabilities or on using them as baselines for robustness evaluation, our work takes a novel perspective by leveraging adversarial examples to counter malicious uses of machine learning. In this seminar, I will present two of our recent works in this direction. First, I will introduce the Locally Adaptive Adversarial Color Attack (LAACA), which enables artists to protect their artwork from unauthorized neural style transfer by embedding imperceptible perturbations that significantly degrade the quality of style transfer results. Second, I will discuss our Posterior Collapse Attack (PCA), a grey-box attack that disrupts unauthorized image editing based on Stable Diffusion by exploiting the common VAE structure in latent diffusion models. Our research demonstrates how adversarial examples, traditionally viewed as a security threat, can be repurposed as a proactive defense mechanism against the misuse of generative AI, contributing to the responsible development and deployment of these powerful technologies.
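For anyone unfamiliar with how such perturbations are crafted, below is a minimal sketch of the classic Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015) in PyTorch. This is only a generic illustration of the adversarial-example idea described in the abstract, not the LAACA or PCA methods presented in the talk; the model, loss, and epsilon value are placeholder assumptions.

# Minimal FGSM sketch: craft an imperceptible perturbation that increases
# a classifier's loss. The model, label, and epsilon are illustrative only.
import torch

def fgsm_perturb(model, image, label, epsilon=8 / 255):
    """Return an adversarial example within an L-infinity ball of radius epsilon."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()

In the defensive setting described in the abstract, the same basic machinery is pointed at a different objective: the perturbation is chosen to degrade the output of a generative model (style transfer or diffusion-based editing) rather than to flip a classifier's prediction.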