Artificial intelligence is the focus of many forums, blogs, and discussions these days. Everyone is talking about it, from Elon Musk to college first-year students, whether they understand it or not. The truth is, AI is so complex that even tech experts such as Elon Musk admit we still have “a lot to learn.”
Musk’s ambivalence about whether we should even proceed with further AI development raises the question, “Are we going too far?” as superintelligences and their technical applications are developed at breakneck speed.
A famous quote from the now classic movie “Jurassic Park” comes to mind whenever a discussion of AI begins today. Jeff Goldblum, playing the mathematician and chaos theorist Ian Malcolm, responds to John Hammond, who has just boasted that his scientists have achieved something no one ever has: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
Opening Pandora’s Box
AI seems to be yet another Pandora’s box (similar to nuclear weaponry) that, once invented, can never be “uninvented.” So AI is a part of the world now.
Whatever the case, one of the biggest concerns about artificial intelligence is that it does not come with ethics. Apple co-founder Steve Wozniak recently stated, “The problem with AI is that it does good things for us, but it can make horrible mistakes because it does not know what it’s like to be human.”
Inherent in this statement is the fact that AI, however human it appears, will never have the frame of reference of human experience: it will never feel emotion, never weigh circumstances before making a judgment, and cannot know right from wrong.
In short, with no moral compass, it has no way to develop the conscience to “do the right thing” that we humans are born with.
Can You Teach AI Ethics?
The short answer is, “No.” As Steve Wozniak suggested, since AI has had no experience as a human, it has no reference point (other than what it is taught from its interconnected databases) for what human experience involves. While AI does a fair job of emulating humans, it does not understand the responsibilities, accountability, or ethical standards of a human world, where solutions affect many other people and outcomes may ripple forward for generations.
AI operates in a world of immediate resolution, where ideas and solutions are decided at a moment’s notice to solve the problem at hand. It does not consider the larger-scale results of its solutions and mistakes, or how they may negatively impact people and the world in general.
7 Important Considerations in AI Technologies
1. Safety Concerns
One of the first elements that must be considered is how safe AI is for human beings. Since AI cannot weigh circumstances and can only guess how its decisions will ultimately affect the people around it, it lacks the moral and ethical experience to base decisions on anything other than what is pragmatic. This leaves room for a wide array of mistakes that may harm those who stand in the way of its goals.
2. Privacy and Transparency Concerns
As Steve Wozniak observed, AI has never had the experience of being human. It therefore cannot base its decisions on ethical or moral grounds. With no frame of reference for what is right, just, and fair, it can only operate within the environment and situation it finds itself in. That leaves too much to “chance” and takes no account of privacy concerns or the need to be transparent about its decisions.
3. The Wizard Behind the Curtain
One of the key concerns about AI is that how it operates, and what its “motives” are, depends greatly on who is operating the programs and the database at its core. In short, since AI is neither moral nor responsible for its choices, when it does something “bad,” we can only blame the scientists or creators behind its development, much like Frankenstein’s monster, who turned to his creator and asked, “Why did you make me this way?” With no independent recourse to hold AI legally or ethically responsible for its actions, we have a runaway train that may barrel full throttle through a scenario without considering the results of its journey.
4. Incomplete or Untrustworthy Data
AI would be nothing without its massive database. The data is the guts and the brains of artificial intelligence; it is its very existence. Its ability to compute, analyze, and “think through” a solution to any problem depends on the data it has access to, coupled with how it was programmed to learn from its environment. It follows that if the database includes data that is faulty, incorrect, or untruthful, because of the sources of the data and their motives, the AI will produce faulty results.
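The “garbage in, garbage out” point can be illustrated with a toy sketch in plain Python. The data, labels, and functions below are invented for illustration only: a trivial “classifier” that merely memorizes which label each word appears with. Flip a single label in its training data, and its judgment flips with it.

```python
from collections import Counter

def train(examples):
    """Count how often each word appears with each label."""
    counts = {}
    for text, label in examples:
        for word in text.split():
            counts.setdefault(word, Counter())[label] += 1
    return counts

def predict(counts, text):
    """Pick the label most associated with the words seen in training."""
    votes = Counter()
    for word in text.split():
        votes.update(counts.get(word, {}))
    return votes.most_common(1)[0][0] if votes else "unknown"

# Clean data: rudeness is labeled "bad".
clean = [("service was helpful", "good"), ("service was rude", "bad")]
# The same data with one label flipped: rudeness is now labeled "good".
corrupted = [("service was helpful", "good"), ("service was rude", "good")]

print(predict(train(clean), "rude staff"))      # -> bad
print(predict(train(corrupted), "rude staff"))  # -> good
```

The model’s output is nothing more than a reflection of its training data; corrupt the data and you corrupt every judgment downstream, with no conscience to notice the difference.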
In essence, all a malicious person would have to do is exclude from the machine’s learning anything drawn from such standards as the Bible, the US Constitution, or other sources considered essential to a humane, civilized society, and remove them from the AI’s reach. A mindless, inexperienced machine lacking inference, situational ethics, or conscience, entrusted with deciding the fate of the world? This is indeed a scary thought.
5. Threat to Livelihood
On top of their other concerns about AI, people also fear that their jobs will be threatened, or that they will be removed from the picture entirely once AI systems are established in their former roles. According to many scientists and researchers, this won’t happen: even though a lot of manual-labor jobs will indeed be eliminated, a whole host of other jobs will be created by AI. While that remains to be seen, given the scale at which automation is already replacing human work across the globe, the reassurance is less than comforting.
6. Sociopathic Machines?
Can machines be sociopathic? Not technically, no. Since they have no conscience or morality and never will, they can never be held responsible for any mistakes they make, whether malicious or accidental. Only the creator of such machines and their learning behaviors can be legally and ethically blamed for those errors. Imagine putting someone in charge of people’s safety or well-being who has no interest in whether they are safe. And while machines have a kind of simulated “thinking” process that helps them reach solutions, “doing the right thing” will never enter the picture. The result is a machine carrying out its duties, enacting rules and ideas, without facing any consequences whatsoever. The scenario is nothing short of nightmarish.
7. Automating Conscience
One of the best aspects of AI is that it can run on automation indefinitely. Business owners like the amount of time that can be saved when all is left to an automated process that will spend time analyzing and making decisions while people can focus on everything else.
But if we automate something like policy enforcement and hand it to a being that neither cares about nor understands conscience, the result will likely reflect no conscience, no empathy, and no concern for human beings and their situations.
The simple truth is that we cannot delegate “doing the right thing” to a being that has no stake in it. The machine will always choose the most efficient route to solve the problem, even if that means using or destroying humans to carry out its mission.
In short, we still have a long way to go to create a world that works for both humans and machines. And while AI can serve as a useful tool, it is imperative that we keep humans in control in every case.
Just recently, a well-known broadcast personality received a call from someone who he thought was a good friend, another well-known news personality, until he heard him using foul language and lewd innuendos that would never come from that person’s mouth. How was it done? The callers cloned the friend’s voice from recordings and scripted what it would say, fooling the listener for a few seconds. This is just the beginning of what may prove to be a long line of identity-theft incidents and defamation games that can destroy a person’s reputation or livelihood at the push of a button.
Steven Spielberg’s visionary movie “Minority Report” is a good example of where AI could lead us if left without human guidance and direction. In the film, people are arrested for crimes they have not yet committed. Thought police reading minds through chips implanted in the brain, and many other wild realities, could follow if this technology is allowed to run free.
Are there some out there who imagine AI as a future without humans? You had better believe it! So our challenge as human beings is to never turn over important duties involving justice, morality, or ethics to these machines. They may be fascinating. They may be useful. But a conscience is one thing they simply do not have.
To stay more important than AI, you need specific skills that AI cannot dominate, and there will still be a need for coders in the future. To go deeper into this topic and apply your tech skills in a way that will pay off big, discover the best way to learn Python and define your future.