Determining the Trust of AI-generated images

3 Aug 2017

Janadhi Uyanhewage and Athman Bouguettaya.

Images have become the primary medium for information exchange on digital communication platforms, ranging from widely used social media channels (e.g., Facebook, X) to highly reputed news outlets (e.g., the BBC, The New York Times). Recent research shows that at least 10% of these images have been manipulated.

In the late 2010s, advances in AI extended the threat beyond the manipulation of existing images to the generation of entirely synthetic ones. This evolution introduced the challenge of differentiating authentic images from AI-synthesised ones, particularly those created with malicious intent, commonly known as Deepfakes. It is important to note that not every AI-generated image is malicious; the mere use of AI in image creation does not, by itself, categorise it as a Deepfake.

Existing methods for detecting AI-generated images often rely on computationally intensive image-processing techniques. In our project, we present a novel approach that relies solely on the non-functional attributes of images to evaluate their trustworthiness. By leveraging these attributes, we aim to provide a cost-effective and efficient solution for promptly and accurately detecting Deepfakes.
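As an illustration of the general idea, a trust assessment based on non-functional attributes can be sketched as a weighted combination of per-attribute scores. The attribute names, values, and weights below are illustrative assumptions for the sketch, not the project's actual attributes or model:

```python
# Hypothetical sketch: scoring an image's trustworthiness from non-functional
# attributes alone (no pixel-level image processing). All attribute names and
# weights here are assumptions chosen for illustration.

def trust_score(attributes, weights=None):
    """Combine non-functional attribute scores (each in [0, 1]) into a
    single weighted trust score in [0, 1]."""
    if weights is None:
        # Default: weight every attribute equally.
        weights = {name: 1.0 for name in attributes}
    total = sum(weights[name] for name in attributes)
    return sum(attributes[name] * weights[name] for name in attributes) / total

# Example with three illustrative non-functional attributes.
image_attributes = {
    "source_reputation": 0.9,     # how reputable the publishing platform is
    "metadata_consistency": 0.6,  # do metadata fields agree with the claimed origin?
    "provenance_history": 0.4,    # is a credible sharing/editing history available?
}

score = trust_score(image_attributes)
print(f"trust score: {score:.2f}")  # prints "trust score: 0.63"
```

Because such a score avoids pixel-level analysis entirely, it is cheap to compute at scale, which is the cost-efficiency motivation described above.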