01 June 2012

Robot ethics: Morals and the machine | The Economist

Three laws for the laws of robotics
First, laws are needed to determine whether the designer, the programmer, the manufacturer or the operator is at fault if an autonomous drone strike goes wrong or a driverless car has an accident. In order to allocate responsibility, autonomous systems must keep detailed logs so that they can explain the reasoning behind their decisions when necessary. This has implications for system design: it may, for instance, rule out the use of artificial neural networks, decision-making systems that learn from example rather than obeying predefined rules.
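What such a decision log might look like in practice can be sketched briefly. The Python below is a minimal, hypothetical illustration rather than anything the article specifies: each record ties the inputs a system saw to the predefined rule that produced its action, so that responsibility can be traced afterwards. All of the names here (DecisionRecord, DecisionLog, the braking rule, the JSONL file) are invented for the example.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """One auditable entry: what the system saw, which rule fired, and why."""
    timestamp: float
    sensor_inputs: dict
    rule_applied: str   # the predefined rule that produced the action
    action: str
    rationale: str      # human-readable explanation for later review

class DecisionLog:
    """Append-only log so an autonomous system can later explain its reasoning."""
    def __init__(self, path: str):
        self.path = path

    def record(self, inputs: dict, rule: str, action: str, rationale: str) -> None:
        entry = DecisionRecord(time.time(), inputs, rule, action, rationale)
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(entry)) + "\n")

# Example: a driverless car logging a braking decision (values are illustrative).
log = DecisionLog("decisions.jsonl")
log.record(
    inputs={"obstacle_distance_m": 4.2, "speed_kmh": 30},
    rule="brake_if_obstacle_within_5m",
    action="emergency_brake",
    rationale="Obstacle at 4.2 m is inside the 5 m braking threshold.",
)
```

An append-only, one-record-per-line format of this sort is one plausible design: nothing is overwritten, so the log can serve as evidence when allocating fault. Note that the pattern presumes rule-based decisions; as the article observes, systems that learn from example may not be able to produce such explanations at all.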

Second, where ethical systems are embedded into robots, the judgments they make need to be ones that seem right to most people. The techniques of experimental philosophy, which studies how people respond to ethical dilemmas, should be able to help.

Last, and most important, more collaboration is required between engineers, ethicists, lawyers and policymakers, all of whom would draw up very different types of rules if they were left to their own devices. Both ethicists and engineers stand to benefit from working together: ethicists may gain a greater understanding of their field by trying to teach ethics to machines, and engineers need to reassure society that they are not taking any ethical short-cuts.
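Returning to the second point, one crude way experimental philosophy could feed into a machine is by aggregating survey responses to a dilemma and embedding the majority judgment. The toy sketch below assumes exactly that; the dilemma name, the options and the vote data are all hypothetical, and real systems would need something far subtler than a majority vote.

```python
from collections import Counter

# Hypothetical survey data: how respondents resolved a dilemma a
# driverless car might face. Names and votes are illustrative only.
responses = {
    "swerve_to_avoid_pedestrian": ["swerve", "swerve", "brake", "swerve", "brake"],
}

def majority_judgment(dilemma: str) -> str:
    """Return the option most respondents endorsed for a given dilemma."""
    votes = Counter(responses[dilemma])
    choice, _count = votes.most_common(1)[0]
    return choice

print(majority_judgment("swerve_to_avoid_pedestrian"))  # -> "swerve"
```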