Humans need rest, but robots can work continuously without stopping, unlike humans.
Explanation:
"Can Machines Learn Morality?" by Randy Rieland discusses what equipping machines with morality would look like and the effect it could have on the world. Machines are becoming increasingly intelligent and independent, but would they be able to know the difference between right and wrong? It will not be long before the essential aspect of their transition is the ability to learn morality.

And what about the military and robotic weapons? Would a drone learn not to fire on a house if it knew innocent people were inside? Can computers be taught to comply with the international rules of war? Ronald Arkin, a professor of computer science at Georgia Tech and a robotics expert, believes they can. He has created software, known as an "ethical governor," meant to make machines capable of determining when it is appropriate to fire. Arkin recognizes that this may be decades away, but he believes robots may one day be superior to human soldiers both physically and ethically, since they are not vulnerable to the emotional trauma of war or the urge to take revenge. He does not envision an all-robot army, just one in which robots support people and carry out high-risk jobs that demand fast decisions, such as clearing buildings.

Yet some say it is time for this kind of thinking to be stopped before it goes too far. A study released late last year by Human Rights Watch and the Harvard Human Rights Clinic called on governments to outlaw all armed autonomous weapons because they "increase the risk of civilian death or injury in armed conflicts." At around the same time, a group of Cambridge University professors announced plans to establish what they call the Centre for the Study of Existential Risk. When it opens, it will push for serious scientific research into what could happen if and when machines become smarter than humans. The danger, says Huw Price, one of the Centre's founders, is that one day we might have to deal with "machines which are not malevolent, but machines whose interests do not include us."

Arkin, however, says his work is all in the interest "of creating machines that do not threaten people, but are an advantage, particularly in the messy chaos of war." He says it is important to start now on setting guidelines for correct robot behaviour: once this Pandora's box is opened, something must be done with the new capability. He thinks there is a possibility that these intelligent robots could lower non-combatant losses, but we should be careful about how they are used and not simply release them onto the battlefield without proper oversight.