Knowing what you know about AI, what do you believe happens when a computer can learn on the job?
Wendell Wallach (2014) points out that popular media images of intelligent robots with a wide array of skills conflict with robots' current capabilities, and that the holy grail of perfect artificial intelligence remains a long way off; still, I believe significant strides are already being made toward that end. As machines' technical capabilities come to approximate those of humans more and more closely, particular care will need to be taken in their design as it relates to ethics. Machines will increasingly face situations that require them to weigh moral choices. As machines become more competent at basic tasks, they will likely be assigned increasing levels of responsibility, meaning that some amount of autonomy, a degree of 'sensitivity to ethical considerations', and the ability to 'factor those considerations into the choices they make' (Wallach, 2014, p. 370) will become necessary parts of their make-up. Robots will have to be specially trained to 'evaluate appropriateness or legality of various courses of action' (Wallach, 2014, p. 370).

One characteristic that distinguishes humanity is that we are moral agents, and the implications are far-reaching when robots begin to assume that role. I have serious doubts about whether they should. Ultimately, I believe the responsibility for the decisions made should rest with a human.
Reference
Wallach, W. (2014). Ethics, law and governance in the development of robots. In R. Sandler (Ed.), Ethics and emerging technologies (1st ed., pp. 363-379).
