• Author(s) : Zhengyuan Jiang, Moyang Guo, Yuepeng Hu, Neil Zhenqiang Gong

With the increasing sophistication of AI-generated content, effective detection and attribution methods have become crucial. Many prominent companies, such as Google, Microsoft, and OpenAI, have recognized this and deployed watermarking techniques as a proactive measure to identify synthetic content. However, most research in this field centers on general detection rather than user-specific identification.

The ability to trace AI-generated content back to its source, specifically the user of a generative AI service who created it, is of growing importance. This process, known as attribution, aims to hold users accountable for the content they create with these powerful tools. Despite its potential impact, effective and reliable attribution methods have seen limited exploration.

This work seeks to bridge this gap by presenting the first comprehensive study on watermark-based, user-aware detection and attribution. The authors delve into the theoretical foundations of this approach, providing a rigorous probabilistic analysis of detection and attribution performance. In doing so, they offer insights into the accuracy and robustness, or lack thereof, of watermarking methods in this context.
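To make the setting concrete, here is a minimal sketch of how watermark-based detection and attribution could work, assuming each user is assigned an n-bit watermark and a decoder extracts a (possibly noisy) bitstring from a piece of content. The function names, the bitwise-accuracy score, and the threshold `tau` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def bitwise_accuracy(decoded: np.ndarray, watermark: np.ndarray) -> float:
    """Fraction of bits in the decoded string that match a user's watermark."""
    return float(np.mean(decoded == watermark))

def detect(decoded: np.ndarray, watermarks: np.ndarray, tau: float) -> bool:
    """Flag content as AI-generated if any user's watermark matches above tau."""
    return any(bitwise_accuracy(decoded, w) >= tau for w in watermarks)

def attribute(decoded: np.ndarray, watermarks: np.ndarray, tau: float):
    """Attribute content to the best-matching user, or None if no match."""
    scores = [bitwise_accuracy(decoded, w) for w in watermarks]
    best = int(np.argmax(scores))
    return best if scores[best] >= tau else None
```

Intuitively, a decoded bitstring from non-watermarked content matches a random watermark on about half its bits, so setting `tau` well above 0.5 keeps the false-positive rate low; the probabilistic analysis in the paper formalizes this kind of trade-off.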

Additionally, they develop an efficient algorithm to select watermarks for users, optimizing attribution accuracy. Their theoretical analysis and empirical evaluations demonstrate the effectiveness of the proposed approach. The results indicate that watermark-based detection and attribution inherit the accuracy and robustness properties of the underlying watermarking method, underscoring the importance of choosing that method carefully.
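One natural way such a selection algorithm could work is to keep users' watermarks far apart in Hamming distance, so that even after some bits are corrupted by post-processing, a decoded string remains closest to the correct user's watermark. The greedy rejection-sampling sketch below is an assumption for illustration, not the authors' exact algorithm; all parameter names are hypothetical.

```python
import numpy as np

def select_watermarks(num_users: int, n_bits: int, min_dist: int,
                      seed: int = 0, max_tries: int = 100_000) -> np.ndarray:
    """Greedily sample n-bit watermarks whose pairwise Hamming
    distance is at least min_dist (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    chosen: list[np.ndarray] = []
    for _ in range(max_tries):
        if len(chosen) == num_users:
            break
        cand = rng.integers(0, 2, size=n_bits)
        # Keep the candidate only if it is far from every accepted watermark.
        if all(np.sum(cand != w) >= min_dist for w in chosen):
            chosen.append(cand)
    if len(chosen) < num_users:
        raise RuntimeError("parameters too tight; could not place all users")
    return np.stack(chosen)
```

For random bitstrings, the expected pairwise distance is `n_bits / 2`, so a `min_dist` somewhat below that is usually easy to satisfy for moderate user counts; pushing it higher trades selection cost for attribution robustness.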

This study contributes to the growing field of AI content detection and attribution by offering a practical solution for identifying the source of AI-generated content. The authors believe their work can inform the development of tools and practices that promote accountability and transparency in the use of generative AI services.