In the academic literature there is a scarcity of studies on the impact of AI leaks on national security. 'AI leak' is an emerging concept that refers to the unintentional release of, or unauthorized access to, sensitive machine learning models. A recent example is the leak of the pre-trained model LLaMA, developed by Meta (formerly Facebook). LLaMA is a large language model designed for human-like language processing in use cases such as virtual chatbots, translation, and sentiment analysis. The model's weights, which Meta had distributed to approved researchers under a non-commercial license, were leaked online shortly after release, making them available for anyone to download. Such a leak could expose Meta's proprietary language processing technology and expertise, which competitors or adversaries could use to develop applications for intelligence gathering or spreading misinformation.

AI, including large language models (LLMs), is regarded by many countries as a key enabling technology driving operational gains for both defense and commercial purposes. ASPI's recently released Critical Technology Tracker reveals that China has built a global lead over the US in 37 of 44 critical technology fields, including AI. The US Department of Defense (US DoD) is working on its own LLM effort, known as the 'Gargantua' program. The goal of the Gargantua program is to create LLMs capable of processing and understanding large amounts of unstructured text, including potentially sensitive military data. The program is still in development, but the DoD has stated that it sees significant potential for LLMs in a range of military applications, such as intelligence gathering, situational awareness, and decision making.

This article explores a hypothetical Gargantua AI leak in the context of the security dilemma. First coined by John Herz in 1950, the security dilemma describes how the actions one state takes to make itself more secure, such as the adoption of AI, tend to make other states less secure and lead them to respond in kind. I explore how such a leak would differ from previously known cyber leaks. Further, I investigate the security challenges it would pose to the US DoD and the NATO Alliance from the perspective of both state and non-state actors. Finally, I explore potential mitigation measures and emerging regulatory strategies for addressing AI leaks in general and, more specifically, leaks of LLMs used for national security.