Are We Heading Past The Point Of No Return In Thinking Robots?

Can robots know right from wrong?

The U.S. military is willing to spend millions of dollars to find out. The Office of Naval Research will award $7.5 million in grant money to researchers from several universities: Brown, Yale, Georgetown, Tufts, and Rensselaer Polytechnic Institute. The mission? To explore the possibility of building a sense of right and wrong into autonomous robot systems.

[Image: teaching robots]

Robot That Knows Right And Wrong
In other words, over the next five years, university researchers are to build a robot that recognizes right from wrong. With this sense of moral consequence, autonomous systems could operate more efficiently and independently. The idea may have stemmed from the belief that machines can make better decisions because they follow the rules of engagement to a T and can calculate the outcomes of many different scenarios. The world has grown past the notion of robots as pieces of metal bolted together around brushless DC motors, drive belts, and gears.

Preprogrammed Moral Code
Can you imagine a future where autonomous robots are making life or death decisions based on a preprogrammed moral code? The following robot systems may be created with moral and operational functionality:

  • missile defense systems
  • autonomous military vehicles
  • drones

[Image: TALON robot]

Capability To Select And Engage
Lethal, fully autonomous robots are prohibited by the U.S. military, which means only semi-autonomous robots are developed: systems with no capability to “select and engage” individual targets or specific target groups without intervention from an authorized human operator. That makes them fully dependent on decisions made by the human operator. Hence, it is extremely important for that operator to have full knowledge of the capabilities and limitations of the system and a full understanding of the rules of war.
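The "human in the loop" constraint can be pictured as a simple authorization gate. This is only an illustrative sketch – the `Target` and `may_engage` names are invented for this example and do not come from any real military system:

```python
# Hypothetical sketch of a human-in-the-loop engagement gate.
# All names here (Target, may_engage) are invented for illustration.
from dataclasses import dataclass

@dataclass
class Target:
    identifier: str
    operator_approved: bool = False  # set only by an authorized human

def may_engage(target: Target) -> bool:
    """A semi-autonomous system never selects and engages on its own:
    every engagement decision must carry explicit operator approval."""
    return target.operator_approved

# The robot can track and report a contact, but engagement stays gated.
contact = Target("contact-01")
print(may_engage(contact))        # False: no human decision yet
contact.operator_approved = True  # an authorized operator signs off
print(may_engage(contact))        # True: the human remains in the loop
```

The design point is that the approval flag is written only by the human operator, never by the robot's own logic, so autonomy stops exactly where the "select and engage" decision begins.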

Need For Moral And Ethical Reasoning
Robot systems need not be armed to require the ability to make moral decisions. Imagine a disaster scenario in which a robot has to decide who will be evacuated or treated first – a situation that calls for some sense of moral or ethical reasoning. Robots of this kind will be useful and valuable in first-response, search-and-rescue, and medical operations. And with robots being put to uses where it is difficult to predict their actions or the situations they will face, a capability for ethical reasoning will help them sort through the options and arrive at the best decision.
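One way such "sorting through options" might look in practice is a rule-based priority score. The sketch below is a toy illustration only – the weights and field names are invented and do not represent any real medical triage protocol:

```python
# Toy sketch of rule-based triage reasoning. The scoring rule and
# all field names are invented for illustration, not a real protocol.

def triage_order(casualties):
    """Rank casualties so the most urgent, most survivable cases
    come first (a higher score means treated earlier)."""
    def score(c):
        severity = c["severity"]            # 0 (minor) .. 10 (critical)
        survivability = c["survivability"]  # 0.0 .. 1.0, estimated
        return severity * survivability
    return sorted(casualties, key=score, reverse=True)

casualties = [
    {"name": "A", "severity": 9, "survivability": 0.2},
    {"name": "B", "severity": 6, "survivability": 0.9},
    {"name": "C", "severity": 3, "survivability": 1.0},
]
print([c["name"] for c in triage_order(casualties)])  # ['B', 'C', 'A']
```

Even this trivial rule already embeds a moral judgment – that scarce treatment should favor the badly hurt who can still be saved – which is exactly the kind of choice the researchers would have to make explicit and defend.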

[Image: fire department robot]

Need For Standard Moral Code
While the purposes for which robots with moral or ethical reasoning would be used are noble, the debate is over whether a single moral code can be agreed upon by the many parties involved – military, rescue teams, first responders, and so on. Sure, computer processing may be able to handle, say, triage at a field hospital in a disaster scene, but what about pointing missiles at people?

Need For A Moral Agency
Will this require the creation of a “moral agency”? Moral agency implies understanding other people and knowing what it means to suffer. Even if some rules of ethics are installed in a robot, you cannot expect it to care; it will simply follow the idea of ethics held by its human operator or designer.

The Kind Of Robot In Five Years
In the meantime, the debate will go on. And while it does, the university researchers will be deep into their mission – and in five years, they may surprise the world with a very different kind of robot.

Do you think robots can be programmed to be ethical and moral?

Article Sources:
http://www.theatlantic.com
http://www.gizmodo.com.au

Lisa Myers

A blogger with an interest in all things mechanical. She is a full-time mom with three active boys who loves encouraging them to explore the world of science and engineering. They spend a lot of time together playing with Legos.

2 comments
Harold

This storyline sounds like a plot right out of an “I, Robot” sequel, or prequel, depending on how you look at it. It sounds like we are living at the dawn of a technology that will eventually lead to an age when we all have a mascot / companion / servant droid in our homes. There are moral implications to what DARPA and the U.S. Department of Defense are pushing towards, and we’ll soon witness those implications.

Barry

I wonder if Lisa isn’t raising and grooming the next generation of engineers. They’re already developing their structural creativity by exploring design possibilities with Lego sets. I suppose that is what that product was intended to do – stimulate the imaginations of kids within the parameters of practical functionality. In today’s digital gamer age, that seems to be a lost art. I recall when I was growing up, my generation and the ones before mine used to find joy in Popular Science publications and ordering from a medley of kits that were advertised in the classifieds on the back of each periodical.
