OpenAI, the AI research lab behind ChatGPT and DALL-E, is preparing to release a tool designed to detect images generated by its own image model, DALL-E 3. Mira Murati, Chief Technology Officer of OpenAI, recently said the tool is 99% reliable. It remains in internal testing, and no public release date has been announced.
Murati and OpenAI CEO Sam Altman shared the news at the Wall Street Journal's Tech Live conference in Laguna Beach, California. The tool's primary purpose is to determine whether a given image was produced with DALL-E 3, OpenAI's image generation model.
Several existing tools claim to detect AI-generated content, but they are often inaccurate. OpenAI itself released a tool for identifying AI-generated text in January, only to discontinue it in July over reliability problems. The company says it remains committed to improving this software and to developing techniques for detecting AI-generated audio and images. The need for accurate detection tools is pressing, as AI-driven manipulation and fabricated content become more common across many domains, including news reporting.
The OpenAI team also offered some insight at the conference into a forthcoming AI model intended to succeed GPT-4. The successor's official name has not been publicly disclosed, but OpenAI filed a trademark application for "GPT-5" with the US Patent and Trademark Office in July. The model is expected to continue the GPT series, which has consistently set new standards in natural language understanding and generation.
On the question of hallucinations, where models generate fictitious information, Murati was asked whether a GPT-5 model would produce fewer falsehoods. She commented:
"Maybe. Let’s see. We’ve made a ton of progress on the hallucination issue with GPT-4, but we’re not where we need to be."
Altman, for his part, addressed whether OpenAI might design and manufacture its own chips for training and running AI models rather than relying on third-party providers such as Nvidia. He said, "The default path would certainly be not to, but I would never rule it out."
One important caveat: the company notes that the forthcoming tool is designed specifically to detect DALL-E 3 images, not AI-generated images in general.