Can robots know right from wrong?

The U.S. military is willing to spend millions of dollars to find out. The Office of Naval Research is awarding $7.5 million in grant money to researchers from several universities: Brown, Yale, Georgetown, Tufts, and Rensselaer Polytechnic Institute. The mission? To explore the possibility of building a sense of right and wrong into autonomous robot systems.


Robot That Knows Right And Wrong
In other words, over the next five years, the university researchers are to build a robot that can recognize right from wrong. With this sense of moral consequence, autonomous systems could operate more efficiently and independently. The thinking goes that machines might make better decisions in some situations because they can follow the rules of engagement to a T and calculate the outcomes of many different scenarios. The world has moved past the idea of robots as pieces of metal bolted together around brushless DC motors, drive belts, and gears.
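To make the idea concrete, here is a minimal sketch of what "strictly following the rules of engagement while weighing scenarios" could look like in code. Everything here is an invented illustration: the rule set, the scenario fields, and the scoring are assumptions, not details of any actual military system or of the funded research.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    action: str
    target_is_combatant: bool
    civilians_at_risk: int        # estimated noncombatants endangered
    expected_mission_value: float # crude 0..1 measure of mission benefit

def violates_rules(s: Scenario) -> bool:
    """Hard constraints: the preprogrammed 'moral code' may never be broken."""
    if not s.target_is_combatant:
        return True   # never engage a noncombatant
    if s.civilians_at_risk > 0:
        return True   # never accept collateral harm
    return False

def choose_action(scenarios: list[Scenario]) -> Scenario | None:
    """Among rule-compliant options, pick the one with the highest value."""
    permitted = [s for s in scenarios if not violates_rules(s)]
    return max(permitted, key=lambda s: s.expected_mission_value, default=None)

options = [
    Scenario("engage target A", True, civilians_at_risk=2, expected_mission_value=0.9),
    Scenario("engage target B", True, civilians_at_risk=0, expected_mission_value=0.6),
]
best = choose_action(options)
print(best.action if best else "no permissible action; stand down")
# -> "engage target B": the higher-value option A is forbidden by the rules
```

Note the design choice: the rules act as hard filters before any optimization happens, which is exactly what makes this "following the rules to a T" rather than trading rules off against mission value.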

Preprogrammed Moral Code
Can you imagine a future where autonomous robots make life-or-death decisions based on a preprogrammed moral code? Robot systems of the following kinds may be built with both moral and operational functionality:

  • missile defense systems
  • autonomous military vehicles
  • drones


Capability To Select And Engage
The U.S. military prohibits lethal, fully autonomous robots, which means only semi-autonomous robots are being developed. These have no capability to “select and engage” individual targets or specific target groups without intervention from an authorized human operator, so every engagement depends on a decision made by that operator. Hence, it’s extremely important that operators have full knowledge of the capabilities and limitations of these systems and a full understanding of the rules of war.
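A minimal sketch of that human-in-the-loop constraint follows: the robot may nominate a target, but the “select and engage” step is gated on explicit operator authorization, and the default is always to stand down. The function names and console prompt are illustrative assumptions, not a real control interface.

```python
def propose_target(sensor_tracks: list[str]) -> str | None:
    """Autonomous part: nominate a track for human review (here, just the first)."""
    return sensor_tracks[0] if sensor_tracks else None

def operator_authorizes(target: str) -> bool:
    """Human part: engagement proceeds only on an explicit 'yes'."""
    answer = input(f"Authorize engagement of {target}? [yes/no] ")
    return answer.strip().lower() == "yes"

def engage_if_authorized(sensor_tracks: list[str]) -> None:
    target = propose_target(sensor_tracks)
    if target is None:
        print("No target proposed.")
    elif operator_authorizes(target):
        print(f"Engaging {target} under operator authority.")
    else:
        print("Engagement denied; robot stands down.")  # fail-safe default

engage_if_authorized(["track-017"])
```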

Need For Moral And Ethical Reasoning
Robot systems need not be armed to require the ability to make moral decisions. Imagine a disaster scenario where a robot has to decide who will be evacuated or treated first – a situation that calls for some sense of moral or ethical reasoning. Robots of this kind will be useful in first-response, search-and-rescue, and medical operations. And with robots being put to uses where it’s difficult to predict the situations they’ll face, the capability for ethical reasoning will help them sort through their options and arrive at the best decision.
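Here is a minimal sketch of the triage example: a first-response robot ranking casualties by an explicitly encoded policy (most severe survivable injuries first). The scoring scheme is an invented stand-in for whatever ethical-reasoning model the researchers might actually build.

```python
from dataclasses import dataclass

@dataclass
class Casualty:
    name: str
    severity: int           # 1 (minor) .. 5 (critical)
    survival_chance: float  # 0.0 .. 1.0 with prompt treatment

def triage_order(casualties: list[Casualty]) -> list[Casualty]:
    """Prioritize severe injuries, breaking ties toward better odds of survival."""
    return sorted(casualties, key=lambda c: (-c.severity, -c.survival_chance))

victims = [
    Casualty("victim-1", severity=3, survival_chance=0.9),
    Casualty("victim-2", severity=5, survival_chance=0.4),
    Casualty("victim-3", severity=5, survival_chance=0.7),
]
for c in triage_order(victims):
    print(c.name)  # victim-3, victim-2, victim-1
```

Even this toy policy shows why the debate matters: swapping the sort key (say, treating the most survivable first instead) encodes a different moral judgment, and someone has to decide which one the robot gets.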


Need For Standard Moral Code
While the purposes for which robots with moral or ethical reasoning would be built are noble enough, the debate is over whether a single moral code can be agreed upon by the many parties involved – military, rescue teams, first responders, and so on. Sure, computer processing may be able to handle, say, triage at a field hospital in a disaster zone, but what about when the robot is pointing missiles at people?

Need For A Moral Agency
Will this require the creation of “moral agency”? That would mean a machine that understands others and knows what it means to suffer. Even if some rules of ethics are installed in a robot, you can’t expect it to care; it will simply follow the idea of ethics held by its human operator or designer.

The Kind Of Robot In Five Years
In the meantime, the debate will go on. And while it does, the university researchers will be deep into their mission – and in five years, they may surprise the world with a very different kind of robot.

Do you think robots can be programmed to be ethical and moral?

