Under increasing scrutiny from activists and parents, OpenAI has announced the formation of a dedicated team focused on child safety. This move comes in response to growing concerns about the potential misuse or abuse of AI tools by children.
The newly formed Child Safety team is tasked with working alongside internal and external partners to manage processes, incidents, and reviews involving underage users. A recent job listing shows the company is also hiring a child safety enforcement specialist to apply its policies to AI-generated content, particularly content that may be sensitive or relevant to children.
The decision to form this team comes in the wake of OpenAI's collaboration with Common Sense Media on kid-friendly AI guidelines and its recent partnership with its first education customer. It reflects a recognition of the potential risks associated with children using AI tools for both educational and personal purposes.
Despite AI's promise in education, concerns have been raised about its impact on children's mental health and their susceptibility to misinformation. Reports indicate that a significant share of children have turned to AI tools like ChatGPT for help with anxiety, social issues, and family conflicts. Meanwhile, incidents of plagiarism and the spread of false information have prompted some educational institutions to ban AI tools from their classrooms.
In response to these challenges, OpenAI has published documentation for using ChatGPT in classrooms, offering guidance to educators and urging caution when exposing children to AI-generated content. Calls for government regulation of AI in education, including age limits and data protection measures, are also gaining traction, underscoring the importance of proactive steps to ensure child safety in the digital age.