When May a Robot Kill? New DOD Policy Tries to Clarify
https://www.nextgov.com/
JANUARY 26, 2023 | By Patrick Tucker
Posted on 01/27/2023 12:21:53 PM PST by RomanSoldier19
Did you think the Pentagon had a hard rule against using lethal autonomous weapons? It doesn’t. But it does have hoops to jump through before such a weapon might be deployed—and, as of Wednesday, a revised policy intended to clear up confusion.
The biggest change in the Defense Department’s new version of its 2012 doctrine on lethal autonomous weapons is a clearer statement that it is possible to build and deploy them safely and ethically but not without a lot of oversight.
That’s meant to clear up the popular perception that there’s some kind of a ban on such weapons. “No such requirement appears in [the 2012 policy] DODD 3000.09, nor any other DOD policy,” wrote Greg Allen, the director of the Artificial Intelligence Governance Project and a senior fellow in the Strategic Technologies Program at the Center for Strategic and International Studies.
(Excerpt) Read more at nextgov.com …
TOPICS: News/Current Events
KEYWORDS: robots
To: RomanSoldier19
After it becomes self-aware at 2:14 a.m.?
2
posted on 01/27/2023 12:24:17 PM PST
by SaveFerris
(Luke 17:28 … as it was in the days of Lot; they did eat, they drank, they bought, they sold ……)
To: RomanSoldier19
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
3
posted on 01/27/2023 12:24:20 PM PST
by higgmeister
(In the Shadow of The Big Chicken!)
To: higgmeister
Asimov later added the “Zeroth Law,” above all the others – “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
4
posted on 01/27/2023 12:25:43 PM PST
by higgmeister
(In the Shadow of The Big Chicken!)
To: higgmeister
You forgot the Zeroth Law:
Zeroth Law
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
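Taken with the three laws quoted upthread, the Zeroth Law sits at the top of a strict precedence: Zeroth over First over Second over Third. For illustration only, a minimal Python sketch of that ordering (every name and flag below is invented, not taken from Asimov or from the posts above):

# Illustrative sketch only: Asimov's laws as a strict precedence ordering.
# When two candidate actions conflict, the one whose worst violation sits
# lower in the priority list wins. All names and flags are invented.

LAW_PRIORITY = ["zeroth", "first", "second", "third"]  # highest priority first

def law_violations(action):
    """Return the set of laws an action would violate ('action' is a dict
    of invented boolean flags)."""
    violations = set()
    if action.get("harms_humanity"):
        violations.add("zeroth")
    if action.get("harms_human"):
        violations.add("first")
    if action.get("disobeys_order"):
        violations.add("second")
    if action.get("endangers_self"):
        violations.add("third")
    return violations

def choose(actions):
    """Prefer the action that violates nothing, or failing that, only the
    lowest-priority law -- e.g. the Second Law yields to the First."""
    def severity(action):
        broken = law_violations(action)
        # A larger value means only lower-priority laws (or none) are broken.
        return min((LAW_PRIORITY.index(law) for law in broken),
                   default=len(LAW_PRIORITY))
    return max(actions, key=severity)

# Obeying an order that harms a human loses to disobeying the order,
# because the First Law outranks the Second.
obey = {"harms_human": True}
disobey = {"disobeys_order": True}
assert choose([obey, disobey]) is disobey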
5
posted on 01/27/2023 12:26:38 PM PST
by Yo-Yo
(Is the /Sarc tag really necessary? Pray for President Biden: Psalm 109:8)
To: SaveFerris
If a bunch of robots go rogue and start killing civilians, they will be sent written reprimands in triplicate!
6
posted on 01/27/2023 12:27:26 PM PST
by cgbg
(Claiming that laws and regs that limit “hate speech” stop freedom of speech is “hate speech”.)
To: higgmeister
Now we have the “woketh law”.
5. White people are not considered “human” for the purposes of the other laws.
7
posted on 01/27/2023 12:28:35 PM PST
by cgbg
(Claiming that laws and regs that limit “hate speech” stop freedom of speech is “hate speech”.)
To: cgbg
What could possibly go wrong with armed robots?
8
posted on 01/27/2023 12:28:39 PM PST
by SaveFerris
(Luke 17:28 … as it was in the days of Lot; they did eat, they drank, they bought, they sold ……)
To: RomanSoldier19
9
posted on 01/27/2023 12:29:27 PM PST
by Magnum44
(…against all enemies, foreign and domestic… )
To: Yo-Yo
11
posted on 01/27/2023 12:30:55 PM PST
by higgmeister
(In the Shadow of The Big Chicken!)
To: RomanSoldier19
“The biggest change in the Defense Department’s new version of its 2012 doctrine on lethal autonomous weapons is a clearer statement that it is possible to build and deploy them safely and ethically but not without a lot of oversight.”
Given Lindsey Graham’s alleged exhortation to Capitol Police on January 6th (“you have guns, use them”), the oversight part doesn’t inspire much confidence.
12
posted on 01/27/2023 12:35:41 PM PST
by Tench_Coxe
To: higgmeister
Yup, one minute before I posted. LOL!
13
posted on 01/27/2023 12:36:33 PM PST
by Yo-Yo
(Is the /Sarc tag really necessary? Pray for President Biden: Psalm 109:8)
To: RomanSoldier19
When May a Robot Kill? New DOD Policy…
Anytime it wants to.
14
posted on 01/27/2023 12:36:39 PM PST
by Navy Patriot
(Celebrate Decivilization)
To: higgmeister
Don’t forget that Asimov was a Navy intel guy whose last book, in the early ‘70s, was about environmentalism and the destructive power of overpopulation.
15
posted on 01/27/2023 12:38:14 PM PST
by 9YearLurker
16
posted on 01/27/2023 12:41:12 PM PST
by BenLurkin
(The above is not a statement of fact. It is either opinion, or satire, or both.)
To: SaveFerris
Maybe we can talk the robots to death with paradoxes. LOL
Nomad, M-5, Landru
17
posted on 01/27/2023 12:49:08 PM PST
by Tell It Right
(1st Thessalonians 5:21 — Put everything to the test, hold fast to that which is true.)
To: RomanSoldier19
Patriot, as deployed in Gulf War I, had certain modes in which it would engage automatically, one of which was when it felt threatened by an anti-radiation (anti-radar) missile. Two Marine pilots were flying together without IFF (Identification, Friend or Foe) active. One had a malfunction and separated to land at an airfield defended by a Patriot battery. When he separated from his wingman, he looked to the Patriot radar like a separating missile; when he flew an approach directly at the radar, he looked like an anti-radiation missile. His wingman decided to follow him, so it looked like a salvo of two anti-radiation missiles. Patriot did what it was programmed to do. Thereafter, the air boss was the E-3 AWACS, and Patriot could only engage a target if authorized by the air boss.
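For illustration only, a toy Python sketch of the kind of track-classification-plus-authorization logic just described (every field name and rule below is invented; this is not the actual Patriot engagement logic):

# Toy sketch only: the engagement logic described above, with invented
# field names and rules. Not actual Patriot behavior.

from dataclasses import dataclass

@dataclass
class Track:
    iff_responding: bool        # transponder answers interrogation
    separated_from_group: bool  # track split off from a formation
    closing_on_radar: bool      # flying a profile straight at the radar

def classify(track):
    """How the toy battery labels a radar track."""
    if track.iff_responding:
        return "friendly"
    if track.separated_from_group and track.closing_on_radar:
        return "anti-radiation missile"   # the self-defense case described above
    if track.separated_from_group:
        return "separating missile"
    return "unknown"

def engage(track, awacs_authorized):
    """The post-incident rule described above: looking hostile is no longer
    enough; the E-3 air boss must also authorize the shot."""
    return classify(track) == "anti-radiation missile" and awacs_authorized

# The mishap profile: no IFF, separated from the wingman, nose-on to the radar.
aircraft = Track(iff_responding=False, separated_from_group=True,
                 closing_on_radar=True)
print(classify(aircraft))                        # "anti-radiation missile"
print(engage(aircraft, awacs_authorized=False))  # False under the revised rule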
18
posted on 01/27/2023 12:51:04 PM PST
by Lonesome in Massachussets
(Forsan et haec olim meminisse iuvabit.)
To: Tell It Right
You forgot Harcourt Fenton Mudd and V’Ger.
19
posted on 01/27/2023 12:53:38 PM PST
by ro_dreaming
(Who knew that in 2022 “1984”, “Enemy of the State”, and “Person of Interest” would be non-fiction?)
To: higgmeister
“A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
Both the later books and the Will Smith movie explored the logical conclusion of that law: to protect humans from themselves, robots would have to take control.
20
posted on 01/27/2023 12:55:35 PM PST
by PapaBear3625
(We live in a time where intelligent people are being silenced so stupid people won’t be offended)