The scientist and science fiction writer Isaac Asimov stated the following laws in his 1942 short story "Runaround":
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later added a fourth law, the Zeroth Law, which takes precedence over the other three:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Should robots be programmed this way?