AMCR: A Framework for Assessing and Mitigating Copyright Risks in Generative Models
Published in European Conference on Artificial Intelligence (ECAI), 2025
Generative models have achieved impressive results in text-to-image (T2I) tasks, pushing the boundaries of visual content creation. However, their reliance on training data often leads to unintended replication of copyrighted elements, raising legal and ethical concerns in real-world deployments. Although some defenses target prompts that explicitly reference copyrighted material, they often fail in more subtle cases, where seemingly generic, "non-sensitive" prompts still produce infringing content. In addition, current techniques lack systematic tools to detect partial copyright violations and to balance infringement mitigation against generation quality. To address these challenges, we propose Assessing and Mitigating Copyright Risks (AMCR), a comprehensive framework that (1) generates innocuous prompts capable of inducing copyright violations, (2) detects and evaluates partial infringements using attention-based similarity analysis, and (3) mitigates the risk posed by indirect or hard-to-detect prompt triggers. Extensive experiments validate the effectiveness of AMCR in revealing and assessing latent copyright risks, offering practical insights and benchmarks for the safer deployment of generative models.
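The abstract does not specify how the attention-based similarity analysis is computed. As a minimal illustrative sketch only, one plausible formulation compares attention-weighted pooled patch features of a generated image and a copyrighted reference; the function name, the feature/attention inputs, and the thresholding are all hypothetical, not the paper's actual method:

```python
import numpy as np

def attention_pooled_similarity(feat_gen, attn_gen, feat_ref, attn_ref, eps=1e-8):
    """Cosine similarity between attention-weighted pooled feature maps.

    feat_*: (N, D) arrays of patch features (e.g., from a vision encoder).
    attn_*: (N,) arrays of non-negative per-patch attention weights.
    A high score would flag a potential partial copyright match.
    All inputs are hypothetical placeholders for this sketch.
    """
    # Normalize attention weights so each image's patches sum to 1.
    w_gen = attn_gen / (attn_gen.sum() + eps)
    w_ref = attn_ref / (attn_ref.sum() + eps)
    # Attention-weighted pooling collapses (N, D) patches into one D-dim vector,
    # emphasizing regions the model attended to most.
    pooled_gen = (w_gen[:, None] * feat_gen).sum(axis=0)
    pooled_ref = (w_ref[:, None] * feat_ref).sum(axis=0)
    # Cosine similarity of the pooled descriptors.
    cos = pooled_gen @ pooled_ref / (
        np.linalg.norm(pooled_gen) * np.linalg.norm(pooled_ref) + eps
    )
    return float(cos)
```

In such a scheme, scores above a calibrated threshold would mark a generation as a partial infringement candidate for further review; the threshold and feature source are design choices the sketch leaves open.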