Chinese Researchers Develop Military AI Tool Using Meta's Llama Model
Chinese researchers affiliated with the People’s Liberation Army (PLA) have used Meta's publicly available Llama AI model to build an AI tool named "ChatBIT" for potential military applications. The work is described in a June academic paper by six researchers from institutions connected to the PLA's Academy of Military Science (AMS).
ChatBIT is based on an early version of Meta’s Llama 2 13B large language model (LLM), which the researchers fine-tuned for military dialogue and question-answering tasks. The paper claims that ChatBIT outperformed several other AI models that were nearly as capable as OpenAI's ChatGPT-4, though it does not provide specific performance metrics, and the researchers did not say whether ChatBIT has been deployed in operational settings.
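The paper does not describe the researchers' training pipeline, and the sketch below is not taken from it. Purely as an illustration of what "fine-tuning" an openly released Llama-family checkpoint for a dialogue and question-answering task typically involves, here is a minimal parameter-efficient (LoRA) fine-tuning sketch using the Hugging Face transformers, datasets, and peft libraries; the model identifier, dataset file, and hyperparameters are placeholder assumptions, not details from the paper.

```python
# Illustrative sketch only: generic LoRA fine-tuning of a Llama-2-13B checkpoint
# for dialogue/QA-style text. Not the ChatBIT pipeline; data and settings are hypothetical.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-13b-hf"  # gated weights; require accepting Meta's license
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach small LoRA adapter matrices instead of updating all 13B base parameters.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Hypothetical instruction-style dataset: one JSON object per line with a "text" field.
dataset = load_dataset("json", data_files="dialogue_qa.jsonl")["train"]

def tokenize(batch):
    # Causal-LM objective: the labels are the input tokens themselves.
    out = tokenizer(batch["text"], truncation=True, max_length=512, padding="max_length")
    out["labels"] = [ids.copy() for ids in out["input_ids"]]
    return out

dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-sketch",
                           per_device_train_batch_size=1,
                           num_train_epochs=1,
                           learning_rate=2e-4),
    train_dataset=dataset,
)
trainer.train()
```

Because only small adapter weights are trained, this kind of setup can run on a single multi-GPU node, which is part of why publicly released base models are straightforward to repurpose once the weights are available.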
Experts have described the work as the first substantial evidence of PLA-affiliated researchers systematically studying and using open-source LLMs for military purposes. Meta restricts the use of its models for military applications, but because the models are publicly released, enforcing those restrictions is difficult.
Meta's director of public policy, Molly Montgomery, said that any use of the company's models by the PLA is unauthorized and contrary to its acceptable use policy. Meta has nonetheless stressed the importance of open innovation in AI, particularly in light of China's heavy investment in AI development.
The researchers behind ChatBIT, including Geng Guotong and Li Weiwei, intend the tool to be used not only for intelligence analysis but also for strategic planning and decision-making in military contexts. Despite these ambitions, some experts question ChatBIT's capabilities, noting that it was trained on a relatively small dataset compared with leading models.
The research feeds an ongoing debate in U.S. national security circles over the implications of publicly releasing advanced AI models. U.S. officials worry about the security risks of open-source AI, and those concerns have prompted discussion of regulatory measures to limit U.S. technology investment in China where it could pose a national security threat.