Limitations on an artificial intelligence's ability to select and employ weapons are measures that restrict autonomous decision-making in lethal force scenarios. For example, a regulation might prohibit an AI from independently initiating an attack, requiring human authorization for every target engagement even when the system's pre-programmed parameters indicate a statistically favorable outcome. A simplified sketch of such an authorization gate follows.
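To make the human-in-the-loop requirement concrete, here is a minimal, purely illustrative sketch of an engagement gate in Python. The names (`TargetAssessment`, `request_engagement`, `human_approval`) are hypothetical and do not describe any real weapons-control system; the point is only that a favorable automated assessment never substitutes for explicit human authorization.

```python
from dataclasses import dataclass
from enum import Enum, auto


class EngagementDecision(Enum):
    """Possible outcomes of an engagement request (illustrative only)."""
    WITHHELD = auto()    # no human authorization; engagement is blocked
    AUTHORIZED = auto()  # a human operator explicitly approved the engagement


@dataclass
class TargetAssessment:
    """Hypothetical output of an automated targeting model."""
    target_id: str
    predicted_success: float  # model-estimated probability of mission success


def request_engagement(assessment: TargetAssessment,
                       human_approval: bool) -> EngagementDecision:
    """Gate every engagement on explicit human authorization.

    Even a high predicted_success value never triggers engagement on its own;
    without human approval the request is always withheld.
    """
    if not human_approval:
        return EngagementDecision.WITHHELD
    return EngagementDecision.AUTHORIZED


if __name__ == "__main__":
    # A statistically favorable assessment alone does not authorize an attack.
    assessment = TargetAssessment(target_id="T-042", predicted_success=0.97)
    print(request_engagement(assessment, human_approval=False))  # WITHHELD
    print(request_engagement(assessment, human_approval=True))   # AUTHORIZED
```

The design choice this illustrates is that human authorization is a hard precondition checked before any model output is acted on, rather than one input weighed against the model's confidence.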
Such restrictions address fundamental ethical and strategic considerations. They provide a safeguard against unintended escalation, algorithmic bias that could lead to disproportionate harm, and potential violations of international humanitarian law. These limitations are rooted in the desire to maintain human control over decisions concerning life and death, a principle that many stakeholders worldwide deem essential and that has been debated for decades.