|Secretary of Defense Chuck Hagel considers new "moral" robot|
Well, apparently a full moral agent is a being who is capable of acting with reference to right and wrong. And that's exactly where Wallach fails to make his case. A "being" is understood to have a living soul, a spirit, an essence, and a heart. Robots have none of these things. They are wires and metal and computer chips. Now, can they be programmed to act out directives based on existing conditions? Absolutely. But can they be programmed with feelings, and values, and a code of ethical standards -- all of which come from our inner psyche and determine one's ability to define morality? I do not believe so.
But The Blaze reports that the Department of Defense, through the Office of Naval Research, has set aside $7.5 million in grant money over the next five years for university researchers to build a robot with moral reasoning capabilities. And here's their truly chilling rationalization for such a plan: Proponents argue a "sense of moral consequence" could allow robotic systems to operate as one part of a more efficient -- and truly autonomous -- defense infrastructure. And some of those advocates think pre-programmed machines would make better decisions than humans, since they could only follow strict rules of engagement and calculate potential outcomes for multiple different scenarios.
First of all, I cannot easily dismiss the language they use to describe this possible scenario: the robot "will need to bring 'some form' of ethical reasoning to bear" ... a "sense" of moral consequence ... "truly autonomous" systems ... Does anyone else think that there's a huge risk of a) the robot's sense and form of morality being different from a human's, or b) the possibility that someone of inferior (or even evil) morals could corrupt the programming of a robot that is going to be involved in our defense infrastructure? And do you want these robots with unprovable morals being truly autonomous, i.e., self-ruling and self-determining? That would be a big, fat NO from me!
Artificial intelligence researcher Steven Omohundro says it all makes sense to him. "With drones, missile defenses, autonomous vehicles, etc., the military is rapidly creating systems that will need to make moral decisions." Again, I ask, WHY?!? Why will they need to make moral decisions ... who decided that? Just because you say it does not give it credibility.
I am so sick and tired of all these researchers and scientists who are so gung-ho to explore the world of human/robotic integration. I am weary of them acting and speaking as if it is the most natural and obvious development, the indisputable next step and course of action. We should take that leap ... oh, just because ... we can! Where is the morality in that?
Has anybody stopped to ask these geniuses if they would be willing to trust a robot to decide whether they should be rescued in an IED attack? Would they be happy to rely on that robot deciding that the moral thing to do would be to leave them, while a more seriously injured comrade was rescued first? What if the robot decided they were "acceptable collateral damage" to the mission? Humans will always care more about humans than machines can! We have God-embedded DNA! Hasn't anybody seen the movie Terminator???
And before I end my tirade, I'd like to address the military minds that are behind this debatable and controversial plan. How dare you suggest that $7.5 million be spent on the development of so-called moral robots, when the genuinely moral thing to do would be to spend that money on caring for the real-life, flesh-and-blood veterans who have been languishing and dying in our VA hospitals! Shame on you! God have mercy on this foolish and misguided generation!
Luke 12:57 "And why do you not judge for yourselves what is right?"