Project Description

Detecting whether a text was generated by an AI or written by a human is an important problem. Watermarking is a mechanism that injects a signal into AI-generated content such that (a) watermarked content can be efficiently detected, and (b) it is infeasible to remove the watermark without substantially changing the meaning of the text.
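To make the two requirements concrete, here is a minimal toy sketch of one common style of text watermark, in which a hash of the previous token pseudorandomly partitions the vocabulary into a "green" and "red" half; a watermarking generator prefers green tokens, and the detector computes a z-score on the green-token count. All function names, the vocabulary, and the 50% green fraction are illustrative assumptions, not taken from the cited papers.

```python
import hashlib
import math
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudorandomly partition the vocabulary, seeded by the previous token.

    (Illustrative choice: SHA-256 of the previous token seeds a shuffle.)
    """
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def detect(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    """z-score of the green-token count; a large value suggests a watermark."""
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:])
        if cur in green_list(prev, vocab, fraction)
    )
    n = len(tokens) - 1  # number of (prev, cur) transitions scored
    mean = fraction * n
    var = fraction * (1 - fraction) * n
    return (hits - mean) / math.sqrt(var)

# Usage sketch: a generator that always emits a green token produces a
# z-score growing like sqrt(n), while unwatermarked text stays near 0.
```

The sketch also hints at requirement (b): an adversary who paraphrases the text token by token disturbs the (previous token, current token) pairs, so removing the signal requires changing a large fraction of the text.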

In the past few years, there has been extensive work, initiated by [1] and coming especially from the applied cryptography community, on designing watermarking schemes. However, there has been limited work (see [2], [3]) on formulating definitions and formally proving the security of watermarking schemes. Without a formal security proof, how can we trust that watermarking schemes behave the way they claim?

The goals of the project are to:
      (1) Survey existing work on watermarking schemes,
      (2) Identify and formulate different security definitions for watermarking schemes,
      (3) Propose candidate constructions of watermarking schemes.


Team Members

  • Brian Sen
  • Emerson Yu
  • Siddhi Mundhra
  • Zeel Patel


Professor and Mentors

  • Prof. Prabhanjan Ananth