{"id":24249,"date":"2010-11-08T10:22:11","date_gmt":"2010-11-08T14:22:11","guid":{"rendered":"https:\/\/today.uconn.edu\/?p=24249"},"modified":"2011-08-18T16:19:52","modified_gmt":"2011-08-18T20:19:52","slug":"the-ethical-robot","status":"publish","type":"post","link":"https:\/\/today.uconn.edu\/2010\/11\/the-ethical-robot\/","title":{"rendered":"The Ethical Robot"},"content":{"rendered":"<p style=\"padding: 10px\"><em>[yframe url=&#8217;http:\/\/www.youtube.com\/watch?v=pajCoSTGvas&#8217;]<\/em><\/p>\n<p>Professor emerita Susan Anderson and her research partner, husband <a href=\"http:\/\/new.hartford.edu\/stories\/robot.aspx\" target=\"_blank\">Michael Anderson<\/a> of the University of Hartford, a UConn alumnus, at first seem to have little in common when it comes to their academic lives: she\u2019s a philosopher, he\u2019s a computer scientist.<\/p>\n<p>But these two seemingly opposite fields have come together in the Andersons\u2019 collaborative work in machine ethics, a new field of research that\u2019s only about 10 years old.<\/p>\n<p>Using their expertise in different areas, the Andersons have recently accomplished something that\u2019s never been done before: They\u2019ve programmed a robot to behave ethically.<\/p>\n<figure id=\"attachment_24282\" aria-describedby=\"caption-attachment-24282\" style=\"width: 324px\" class=\"wp-caption alignleft\"><a href=\"https:\/\/today.uconn.edu\/wp-content\/uploads\/2010\/11\/Robot1_lg.jpg\"><img decoding=\"async\" class=\"size-medium wp-image-24282 img-responsive lazyload\" data-src=\"https:\/\/today.uconn.edu\/wp-content\/uploads\/2010\/11\/Robot1_lg-300x198.jpg\" alt=\"Robot1_lg\" width=\"324\" height=\"214\" data-srcset=\"https:\/\/today.uconn.edu\/wp-content\/uploads\/2010\/11\/Robot1_lg-300x198.jpg 300w, https:\/\/today.uconn.edu\/wp-content\/uploads\/2010\/11\/Robot1_lg.jpg 700w\" data-sizes=\"(max-width: 324px) 100vw, 324px\" 
src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 324px; --smush-placeholder-aspect-ratio: 324\/214;\" \/><\/a><figcaption id=\"caption-attachment-24282\" class=\"wp-caption-text\">Susan and Michael Anderson have programmed a robot to behave ethically. Image by Bret Eckhardt<\/figcaption><\/figure>\n<p>\u201cThere are machines out there that are already doing things that have ethical import, such as automatic cash withdrawal machines, and many others in the development stages, such as cars that can drive themselves and eldercare robots,\u201d says Susan, professor emerita of philosophy in the <a href=\"http:\/\/www.clas.uconn.edu\/\" target=\"_blank\">College of Liberal Arts and Sciences<\/a>, who taught at UConn&#8217;s <a href=\"http:\/\/www.stamford.uconn.edu\/\" target=\"_blank\">Stamford campus<\/a>. \u201cDon\u2019t we want to make sure they behave ethically?\u201d<\/p>\n<p>The field of machine ethics combines artificial intelligence techniques with ethical theory, a branch of philosophy, to determine how to program machines to behave in an ethical manner. 
But there is currently no agreement, says Susan, as to which ethical principles should be programmed into machines.<\/p>\n<p>In 1930, <a href=\"http:\/\/en.wikipedia.org\/wiki\/W._D._Ross\" target=\"_blank\">Scottish philosopher David Ross<\/a> introduced a new approach to ethics, she says, called the <em>prima facie<\/em> duty approach, in which a person must balance many different obligations when deciding how to act in a moral way \u2013 obligations like being just, doing good, not causing harm, keeping one\u2019s promises, and showing gratitude.<\/p>\n<p>However, this approach was never developed far enough to give people a satisfactory decision principle for weighing these different obligations: one that would tell them how to behave when several of the <em>prima facie<\/em> duties pull in different directions.<\/p>\n<p>\u201cThere isn\u2019t a decision principle within this theory, so it wasn\u2019t widely adopted,\u201d says Susan.<\/p>\n<p>That\u2019s where the Andersons come in. 
Using information about specific ethical dilemmas supplied by ethicists, computers can effectively \u201clearn\u201d ethical principles in a process called machine learning.<\/p>\n<figure id=\"attachment_24284\" aria-describedby=\"caption-attachment-24284\" style=\"width: 300px\" class=\"wp-caption alignright\"><a href=\"https:\/\/today.uconn.edu\/wp-content\/uploads\/2010\/11\/Robot2_lgjpg.jpg\"><img decoding=\"async\" class=\"size-medium wp-image-24284 img-responsive lazyload\" data-src=\"https:\/\/today.uconn.edu\/wp-content\/uploads\/2010\/11\/Robot2_lgjpg-300x206.jpg\" alt=\"Robot2_lg.jpg\" width=\"300\" height=\"206\" data-srcset=\"https:\/\/today.uconn.edu\/wp-content\/uploads\/2010\/11\/Robot2_lgjpg-300x206.jpg 300w, https:\/\/today.uconn.edu\/wp-content\/uploads\/2010\/11\/Robot2_lgjpg.jpg 700w\" data-sizes=\"(max-width: 300px) 100vw, 300px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 300px; --smush-placeholder-aspect-ratio: 300\/206;\" \/><\/a><figcaption id=\"caption-attachment-24284\" class=\"wp-caption-text\">The robot the Andersons use in their research has been programmed with an ethical principle. Image by Bret Eckhardt<\/figcaption><\/figure>\n<p>The toddler-sized robot they have been using in their research, <a href=\"http:\/\/en.wikipedia.org\/wiki\/Nao_%28robot%29\" target=\"_blank\">called Nao<\/a>, has been programmed with an ethical principle that was discovered by a computer. This learned principle allows their robot to determine how often to remind people to take their medicine and when to notify an overseer, such as a doctor, if they don\u2019t comply.<\/p>\n<p>Reminding someone to take their medicine may seem relatively trivial, but the field of biomedical ethics has grown in relevance and importance since the 1960s. 
And robots are currently being designed to assist the elderly, so the Andersons&#8217; research has very practical implications.<\/p>\n<p>Susan points out that there are several <em>prima facie<\/em> duties the robot must weigh in their scenario: enabling the patient to receive potential benefits from taking the medicine, preventing harm to the patient that might result from not taking the medication, and respecting the person\u2019s right of autonomy. These <em>prima facie<\/em> duties must be correctly balanced to help the robot decide when to remind the patient to take medication and whether to leave the person alone or to inform a caregiver, such as a doctor, if the person has refused to take the medicine.<\/p>\n<p>Michael says that although their research is in its early stages, it\u2019s important to think about ethics alongside developing artificial intelligence. Above all, he and Susan want to refute the science fiction portrayal of robots harming human beings.<\/p>\n<figure id=\"attachment_24286\" aria-describedby=\"caption-attachment-24286\" style=\"width: 404px\" class=\"wp-caption alignleft\"><a href=\"https:\/\/today.uconn.edu\/wp-content\/uploads\/2010\/11\/Robot4_lg.jpg\"><img decoding=\"async\" class=\"size-medium wp-image-24286  img-responsive lazyload\" data-src=\"https:\/\/today.uconn.edu\/wp-content\/uploads\/2010\/11\/Robot4_lg-300x188.jpg\" alt=\"Robot4_lg\" width=\"404\" height=\"253\" data-srcset=\"https:\/\/today.uconn.edu\/wp-content\/uploads\/2010\/11\/Robot4_lg-300x188.jpg 300w, https:\/\/today.uconn.edu\/wp-content\/uploads\/2010\/11\/Robot4_lg.jpg 700w\" data-sizes=\"(max-width: 404px) 100vw, 404px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 404px; --smush-placeholder-aspect-ratio: 404\/253;\" \/><\/a><figcaption id=\"caption-attachment-24286\" class=\"wp-caption-text\">Philosopher Susan Anderson believes artificial 
intelligence has changed the field of ethics. Image by Bret Eckhardt<\/figcaption><\/figure>\n<p>\u201cWe should think about the things that robots could do for us if they had ethics inside them,\u201d Michael says. \u201cWe\u2019d allow them to do more things for us, and we\u2019d trust them more.\u201d<\/p>\n<p>The Andersons organized the first international conference on machine ethics in 2005, and they have a book on machine ethics being published by Cambridge University Press. In the future, they envision computers continuing to engage in machine learning of ethics through dialogues with ethicists concerning real ethical dilemmas that machines might face in particular environments.<\/p>\n<p>\u201cMachines would effectively learn the ethically relevant features, <em>prima facie<\/em> duties, and ultimately the decision principles that should govern their behavior in those domains,\u201d says Susan.<\/p>\n<p>Although this is a vision of the future of machine ethics research, Susan thinks that artificial intelligence has already changed her chosen field in major ways.<\/p>\n<p>She thinks that working in machine ethics, which forces philosophers who are used to thinking abstractly to be more precise in applying ethics to specific, real-life cases, might actually advance the study of ethics.<\/p>\n<p>And she believes that robots could be good for humanity: interacting with robots that have been programmed to behave ethically could even inspire humans to behave more ethically themselves.<\/p>\n<p>The Andersons&#8217; work was featured in <a href=\"http:\/\/www.scientificamerican.com\/article.cfm?id=robot-be-good\" target=\"_blank\"><em>Scientific American<\/em> magazine<\/a> on Oct. 
14.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Philosopher Susan Anderson is teaching machines how to behave ethically.<\/p>\n","protected":false},"author":37,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_crdt_document":"","wds_primary_category":0,"wds_primary_series":0,"wds_primary_attribution":0,"footnotes":""},"categories":[1,70],"tags":[],"magazine-issues":[],"coauthors":[63,72],"class_list":["post-24249","post","type-post","status-publish","format-standard","hentry","category-uncategorized","category-video"],"pp_statuses_selecting_workflow":false,"pp_workflow_action":"current","pp_status_selection":"publish","acf":[],"publishpress_future_action":{"enabled":false,"date":"2026-04-10 22:44:40","action":"change-status","newStatus":"draft","terms":[],"taxonomy":"category","extraData":[]},"publishpress_future_workflow_manual_trigger":{"enabledWorkflows":[]},"_links":{"self":[{"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/posts\/24249","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/users\/37"}],"replies":[{"embeddable":true,"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/comments?post=24249"}],"version-history":[{"count":5,"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/posts\/24249\/revisions"}],"predecessor-version":[{"id":24311,"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/posts\/24249\/revisions\/24311"}],"wp:attachment":[{"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/media?parent=24249"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/categories?post=24249"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/tags?po
st=24249"},{"taxonomy":"magazine-issue","embeddable":true,"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/magazine-issues?post=24249"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/today.uconn.edu\/wp-rest\/wp\/v2\/coauthors?post=24249"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}