The Future of Killing: Ethical and Legal Implications of Fully Autonomous Weapon Systems
2017
Abstract
Warfare is moving towards full weapon autonomy. Weapons are already in service that replace a human at the point of engagement, and the remote pilot must adhere to the law and consider the moral and ethical implications of using lethal force. Future fully autonomous weapons will be able to search for, identify, and engage targets without human intervention, raising the question of who is responsible for the moral and ethical considerations of using such weapons. In the chaos of war, people are fallible, but they can apply judgement and discretion and detect subtle signals: for example, humans can recognise when an enemy wants to surrender, is burying its dead, or is assisting non-combatants. An autonomous weapon may not be so discerning; it may not be capable of being programmed to apply discretion, compassion, or mercy, nor can it adapt to commanders' intent or apply initiative. Before fully autonomous weapons use lethal force, it is argued, there need to be assurances that the ethical implications are understood and that control mechanisms are in place so that oversight of the system can prevent incidents that could amount to breaches of the laws of armed conflict.
| Reference Key | lark2017thesalus |
|---|---|
| Authors | Lark, Martin |
| Journal | Salus Journal |
| Year | 2017 |
| DOI | Not found |
| URL | Not found |
| Keywords | Not found |