Archive for December 5th, 2012

December 5, 2012

Three Laws of Robotics

The Three Laws of Robotics are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story ‘Runaround,’ although they had been foreshadowed in a few earlier stories.

The Three Laws are: ‘A robot may not injure a human being or, through inaction, allow a human being to come to harm; A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.’ These laws form an organizing principle and unifying theme for Asimov’s robot-based fiction, appearing in his ‘Robot’ series, the stories linked to it, and his ‘Lucky Starr’ series of young-adult fiction.


December 5, 2012

Roboethics

The term ‘roboethics’ was coined in 2002 by roboticist Gianmarco Veruggio, who also chaired an Atelier (workshop) funded by the European Robotics Research Network to outline areas where research may be needed. The resulting road map effectively divided the ethics of artificial intelligence into two sub-fields to accommodate researchers’ differing interests:

Machine ethics is concerned with the behavior of artificial moral agents (AMAs); and Roboethics is concerned with the behavior of humans: how they design, construct, use, and treat robots and other artificially intelligent beings.


December 5, 2012

Machine Ethics

Positronic Robot by Ralph McQuarrie

Machine Ethics is the part of the ethics of artificial intelligence concerned with the moral behavior of Artificial Moral Agents (AMAs), such as robots and other artificially intelligent beings. It contrasts with roboethics, which is concerned with the moral behavior of humans as they design, construct, use, and treat such beings.

In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers, including the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They considered the extent to which computers and robots might acquire autonomy, and to what degree such abilities might pose a threat or hazard.


December 5, 2012

AI Ethics

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is typically divided into Roboethics, which is concerned with the moral behavior of humans as they design, construct, use, and treat artificially intelligent beings, and Machine Ethics, which is concerned with the moral behavior of artificial moral agents (AMAs).

The term ‘roboethics’ was coined by roboticist Gianmarco Veruggio in 2002. It considers both how artificially intelligent beings may be used to harm humans and how they may be used to benefit humans. ‘Robot rights’ are the moral obligations of society towards its machines, similar to human rights or animal rights. These may include the right to life and liberty, freedom of thought and expression, and equality before the law.
