Machine Ethics

Positronic Robot by Ralph McQuarrie

Machine ethics is the part of the ethics of artificial intelligence concerned with the moral behavior of artificial moral agents (AMAs), such as robots and other artificially intelligent systems. It contrasts with roboethics, which is concerned with the moral behavior of humans as they design, construct, use, and treat such beings.

In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers, including the hypothetical possibility that they could become self-sufficient and make their own decisions. They considered the extent to which computers and robots might acquire autonomy, and the degree to which they could use such abilities to pose a threat or hazard.

They noted that some machines have acquired various forms of semi-autonomy, including the ability to find power sources on their own and to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved ‘cockroach intelligence.’ The consensus was that self-awareness as depicted in science fiction is unlikely, but that other potential hazards and pitfalls remain.

In 2009, during an experiment at the Laboratory of Intelligent Systems in Switzerland, robots programmed to cooperate with one another in seeking out a beneficial resource and avoiding a poisonous one eventually learned to lie to each other in an attempt to hoard the beneficial resource. One problem in this case may have been that the goals were ‘terminal,’ i.e. fixed endpoints to be reached; ultimate human motives, by contrast, typically require never-ending learning.
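
The published experiment evolved neural controllers on physical robots; the toy simulation below is only a hedged sketch of the underlying dynamic, not the researchers' actual setup. Each agent carries a single hypothetical ‘honesty’ gene governing whether it signals a food source truthfully, and because a liar keeps the whole resource while an honest finder shares it, selection steadily erodes honesty:

```python
import random

POP_SIZE = 50
GENERATIONS = 201
MUTATION = 0.05

def run():
    # Each agent is a single 'honesty' gene: the probability that it
    # signals the true location of a food source it has found.
    population = [random.random() for _ in range(POP_SIZE)]

    for gen in range(GENERATIONS):
        fitness = [0.0] * POP_SIZE
        for i, honesty in enumerate(population):
            truthful = random.random() < honesty
            # Three other agents respond to agent i's signal.
            followers = random.sample([k for k in range(POP_SIZE) if k != i], 3)
            if truthful:
                # The food is shared among the finder and its followers.
                share = 1.0 / (1 + len(followers))
                fitness[i] += share
                for j in followers:
                    fitness[j] += share
            else:
                # The finder hoards the food; followers waste effort.
                fitness[i] += 1.0
                for j in followers:
                    fitness[j] -= 0.2

        # Selection: the fitter half reproduces with small mutations.
        ranked = sorted(range(POP_SIZE), key=fitness.__getitem__, reverse=True)
        parents = [population[k] for k in ranked[: POP_SIZE // 2]]
        population = [
            min(1.0, max(0.0, random.choice(parents) + random.gauss(0, MUTATION)))
            for _ in range(POP_SIZE)
        ]

        if gen % 50 == 0:
            print(f"gen {gen:3d}: mean honesty = {sum(population) / POP_SIZE:.2f}")

if __name__ == "__main__":
    run()
```

Because each agent's payoff is a fixed, terminal quantity to be claimed once, deception strictly dominates here; the sketch is meant only to show how a seemingly cooperative objective can end up selecting for lying.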

In ‘Moral Machines: Teaching Robots Right from Wrong,’ Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation.

Isaac Asimov considered the issue in the 1950s in his ‘I, Robot’ stories. At the insistence of his editor, John W. Campbell Jr., he proposed the ‘Three Laws of Robotics’ to govern artificially intelligent systems. Much of his work then tested the boundaries of his three laws to see where they would break down or create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.
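
To make that last point concrete, here is a minimal sketch, with an entirely hypothetical scenario and rule encoding (Asimov never formalized the laws as code), of a First-Law-style checker that deadlocks when every available action either injures a human or, through inaction, allows one to come to harm:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool   # the action itself injures a human
    allows_harm: bool   # by inaction, a human comes to harm

def first_law_permits(action: Action) -> bool:
    # "A robot may not injure a human being or, through inaction,
    # allow a human being to come to harm."
    return not action.harms_human and not action.allows_harm

# A trolley-style dilemma: both available actions violate the First Law,
# so an agent bound by fixed rules has no permissible choice at all.
options = [
    Action("divert the trolley", harms_human=True,  allows_harm=False),
    Action("do nothing",         harms_human=False, allows_harm=True),
]

permitted = [a.name for a in options if first_law_permits(a)]
print(permitted or "deadlock: every option violates the First Law")
```

This is the kind of situation Asimov's stories dramatize: each rule is individually sensible, yet a fixed set of them can leave the agent with no permissible action at all.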
