Human Oversight in AI-Driven Defence - at what points do we need the Human in the Loop?
Introduction

Picture the battlefield of today: Artificial Intelligence (AI) enables military leaders to analyse information rapidly and direct weaponry with exceptional accuracy. This technology is not coming; it is already here. From intelligence gathering to strategic planning, AI is transforming the operations of modern armed forces. Consider, for example, an AI-powered drone recognising targets. Its accuracy is impressive, but can we really rely on it to make choices about human lives? This goes beyond precision; it involves judgement. When an AI highlights a potential threat in a busy area, a human operator must weigh civilian safety, international law, and mission objectives. These considerations go beyond data; they demand complex moral judgement and a distinctly human ethical understanding.
That is why prevailing military doctrine keeps humans "in the loop" for most activities. AI acts as a formidable tool, but crucial choices rest with human operators. Think of it as a collaboration in which each party contributes its distinct strengths: the speed and accuracy of AI, and the moral reasoning and judgement of humans.
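To make this division of labour concrete, the short Python sketch below illustrates the core of the human-in-the-loop pattern: the model produces only a recommendation, and nothing happens until a human operator explicitly approves it. All names, fields, and thresholds here are hypothetical, chosen for illustration rather than taken from any fielded system.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    APPROVE = "approve"
    REJECT = "reject"


@dataclass
class TargetAssessment:
    """Hypothetical output of an AI targeting model (illustrative only)."""
    target_id: str
    confidence: float        # model confidence in its classification
    civilians_nearby: bool   # context flag from sensors or intelligence


def ai_recommendation(assessment: TargetAssessment) -> str:
    """The AI only recommends; it never acts on its own."""
    if assessment.confidence >= 0.90 and not assessment.civilians_nearby:
        return f"recommend engagement of {assessment.target_id}"
    return f"recommend continued observation of {assessment.target_id}"


def human_in_the_loop(assessment: TargetAssessment) -> Decision:
    """Every recommendation passes through an operator before any action."""
    print(f"AI: {ai_recommendation(assessment)} "
          f"(confidence={assessment.confidence:.2f}, "
          f"civilians nearby={assessment.civilians_nearby})")
    # The operator weighs civilian safety, international law, and mission
    # objectives: considerations that lie outside the model's training data.
    answer = input("Operator decision [approve/reject]: ").strip().lower()
    return Decision.APPROVE if answer == "approve" else Decision.REJECT


if __name__ == "__main__":
    contact = TargetAssessment("contact-17", confidence=0.93,
                               civilians_nearby=True)
    decision = human_in_the_loop(contact)
    print(f"Final decision (human, accountable): {decision.value}")
```

The essential design choice in this sketch is that the approval step is not optional: there is no code path from recommendation to action that bypasses the operator, so accountability for the decision stays with a human.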
The essential questions are deceptively simple yet deeply significant: Which decisions can we entrust to AI, and where should humans retain authority? How can we integrate AI's potential into military capabilities while preserving humanitarian values and clear accountability? What are the dynamics between human judgement and artificial intelligence in critical situations? In other words, at what points do we need the Human in the Loop?