
There is a strong possibility that in the not-too-distant future, artificial intelligences (AIs), perhaps in the form of robots, will become capable of sentient thought. Whatever form it takes, this dawning of machine consciousness is likely to have a substantial impact on human society.
Microsoft co-founder Bill Gates and physicist Stephen Hawking have in recent months warned of the dangers of intelligent robots becoming too powerful for humans to control. The ethical conundrum of intelligent machines and how they relate to humans has long been a theme of science fiction, and has been vividly portrayed in films such as 1982's Blade Runner and this year's Ex Machina.
Academic and fictional analyses of AIs tend to focus on human–robot interactions, asking questions such as: would robots make our lives easier? Would they be dangerous? And could they ever pose a threat to humankind?
These questions overlook one crucial point: we must also consider how intelligent robots will interact with one another, and the effect that these exchanges may have on their human creators. For example, if we were to allow sentient machines to commit injustices against one another — even if these 'crimes' had no direct impact on human welfare — this might reflect poorly on our own humanity. Such philosophical deliberations have paved the way for the concept of 'machine rights'.
Most discussions on robot development draw on the Three Laws of Robotics devised by science-fiction writer Isaac Asimov: robots may not injure humans (or through inaction allow them to come to harm); robots must obey human orders; and robots must protect their own existence. But these rules say nothing about how robots should treat each other. It would be unreasonable for a robot to uphold human rights and yet ignore the rights of another sentient thinking machine.
Animals that exhibit thinking behaviour are already afforded rights and protection, and civilized society shows contempt for animal fights that are set up for human entertainment. It follows that sentient machines that are potentially much more intelligent than animals should not be made to fight for entertainment.