Let's talk about the ethics of AI
This is a repost from @Kimberly Davis, and we both think it is worthy of a thread of its own. Here was her original question:

----

I want to pose a question, going back to Asimov's three laws of robotics. Restating the three laws below so you don't have to scroll back:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

How do we define "robot"? Is a self-driving car a robot? I, for example, have begun thinking of my (NOT self-driving) car as my "mech." It is so sophisticated now, with cameras and navigation systems I can control with voice commands. I read in the Houston Chronicle last week about a Cybertruck accident in which the driver allegedly set the vehicle to auto by accident and then could not regain control in time to prevent a crash. (It was a court ruling, and I know I have the details muddled... please don't quote me!)

I also want to point out how deeply ingrained these three laws are in our thinking about the future. They clearly shaped Data's behavior in Star Trek: The Next Generation. I'm trying to think of other explicit examples, but they are everywhere.

So at what point do we start calling our helpful devices "robots"? The cars are thinking for themselves, or trying to...