Autonomous weapons systems, colloquially referred to as “killer robots,” are becoming less a science fiction fantasy and more a reality on the modern battlefield. They operate using Artificial Intelligence (AI), which enables a self-learning process: the machines learn new tasks from responses to previous interactions. As the AI behind these systems grows more sophisticated, the machines engage in greater autonomous decision-making.
As these systems function with increasing autonomy, some weapons experts and governments have suggested extending legal personhood to them on the ground that their decision-making resembles that of the human brain. In theory, this extension would parallel the way corporations, though mere business entities, are treated as legal persons. Although the comparison reveals notable similarities on the surface, it ultimately fails. Because autonomous systems are weapons, their use must conform to International Humanitarian Law (IHL), which does not support such an extension of corporate personhood. IHL doctrine has extended the idea of personhood only restrictively, and no express declaration from either the United States Congress or the United Nations supports a broad expansion.
Furthermore, fundamental principles of corporate law, such as the doctrine of piercing the corporate veil, cannot apply to autonomous weapons systems. Because weapons systems are machines, they can neither hold financial assets nor deliberately abuse a limited liability shield. Ultimately, the analogy's failure under IHL, combined with the inapplicability of key corporate law principles, indicates that extending legal personhood to autonomous weapons systems cannot succeed.
"From Siri to Scifi: Are Lethal Robots People Too?," Penn State Law Review, Vol. 124, Article 5. Available at: https://elibrary.law.psu.edu/pslr/vol124/iss2/5