Have you ever given more than a cursory glance at some of the "news" articles featured on Yahoo.com? Most of them are not of the "serious" variety, and usually capture some cultural item that Yahoo hopes will attract you to their site. But one headline caught my attention ... Scientists Are Afraid To Talk About The Robot Apocalypse, And That's A Problem ... so I decided to check it out.
First of all, I found it curious that this article was from the Finance Section of Yahoo, but that didn't deter me from seeing what all the fuss was about. The author of the article, Dylan Love, obviously has a fascination with sci-fi and the whole robot phenomenon. He is honest in his reasons for writing the article: "I thought it'd be a cool story to interview academics and robotics professionals about the popular notion of a robot takeover." He then goes on to reveal that the first four professionals he contacted didn't want to talk to him. A fifth expert disclosed that most robotics engineers are fearful that talking about their advancement could hurt their credibility and careers.
And that's where Dylan Love and I come to a somewhat skeptical agreement ... if these roboticists truly believe in their inventions, then they should be eager to indulge the public's curiosity about them. After all, our popular culture is rife with the man vs. machine dynamic. You would think they would want their life's calling represented accurately.
He spoke to author and physicist Louis Del Monte, who told Dylan that the robot uprising "won't be the 'Terminator' scenario, not a war. In the early part of the post-singularity world — after robots become smarter than humans — one scenario is that the machines will seek to turn humans into cyborgs. This is nearly happening now, replacing faulty limbs with artificial parts. We'll see the machines as a useful tool." I have certainly seen that in my many trips to Fisher House at Fort Sam Houston. So many of our soldiers are able to live normal lives due to the advent of artificial limbs. That's been a good thing.
Mr. Love goes on to write, "But according to Del Monte, the real danger occurs when self-aware machines realize they share the planet with humans. They 'might view us the same way we view harmful insects' because humans are a species that 'is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses.' " Furthermore, Love reports that Frank Tobe, editor and publisher of the business-focused Robot Report (there is actually such a publication), agrees with Google futurist Ray Kurzweil that "we're close to developing machines that can outperform the human mind, perhaps by 2045. He says we shouldn't take this lightly."
It seems that Mr. Tobe has some real concerns that the integration of man and robot could lead to some serious conflicts. Are we going to endow the robots with self-evolving powers, or limit them to man's control? And what of governments who see the potential to boost their military defense systems? In that case, being under man's control would not be such a good thing!
Then Mr. Love talked to Ryan Calo, who is an assistant professor of law at the University of Washington "with an eye on robot ethics and policy." While Professor Calo doesn't really think there will ever be a robot uprising, he still warns that "we should watch for warnings leading up to a potential Singularity moment. If we see robots become more multipurpose and contextually aware, then they may be on their way to strong AI (Artificial Intelligence). That will be a tip that they're advancing to the point of danger for humans." Hmm, seems like a good possibility for an uprising to me!
Then there was Jorge Heraud, CEO of agricultural robotics company Blue River Technology. Mr. Heraud actually had the audacity to say that one day robots would surpass human intelligence, but he wasn't really concerned about a "Terminator-like" occurrence. "It will be more subtle. Think C-3PO. We don't have anything to worry [about] for a long while." Wait! Do we have anything to worry about, or not?
I found it interesting that Northwestern Law professor John O. McGinnis felt the "greatest problem is that such artificial intelligence may be indifferent to human welfare." He expressed the thought that, while this indifference might cause some harm to humans by solving problems in potentially injurious ways, the robots "could be programmed to weigh human values in their decision making." But here was his alarming conclusion: "The key will be to assure such programming." Doesn't sound very convincing to me!
So, while I was fascinated by the theories and speculations of these roboticists, I find little comfort in their evaluations. It seems that it all comes down to the ethics, principles and morals of those designing these artificially intelligent systems. And there certainly seems to be a shortage of those values in today's world. While I heard skepticism about an actual "robot revolution," I heard much more conjecture about probable risks. We must tread warily!
Proverbs 14:12 "There is a way that seems right to a man, but its end is the way to death."