Three questions on "robot ethics"
Machine ethics: should robots be programmed to behave in ethical ways and if so based on which ethical system/guidelines/theory (e.g., religion, Asimov's three laws, utilitarianism)?
Machines (artificial systems) should be designed, built, programmed and trained so that they can render useful services; while they may require limited autonomy, they must always remain under human control. Machines – no matter how sophisticated – are only as "ethical" as the people who design, build, program, train and use them.
We should educate people to view machines – no matter how sophisticated – as useful tools at best. Machines should empower people, not take power away from them or be used to take power away from them. Human-robot interactions (HRIs) should be designed so that people are not tempted to regard machines/robots as fully autonomous and capable of making decisions that only people can make. Man-made machines – no matter how sophisticated – have no rights and cannot be punished; they can be switched off, taken offline or, ultimately, destroyed.
We should fund research whose expected results enable us to create better living conditions for everyone on this planet, and research that helps us better understand ourselves and the world we live in. (In fact, the latter is a prerequisite for the former.) We – concerned citizens, and scientists and engineers in particular – should make every effort to ensure that research results cannot be abused to increase the suffering of people. Past experience shows that this is difficult. Open access to knowledge helps but is not sufficient. We should not give up, though.
Questions asked at HRI 2008, 13-15 March 2008
Hans-Georg Stork, March 2008