Much of our current understanding of artificial intelligence is shaped by popular culture. Thanks to those portrayals, and to a general unfamiliarity with how the technology actually works, many people fear machines capable of performing complex tasks. There is a persistent belief that artificial intelligence could act in ways counter to humanity's best interests, conjuring images of The Terminator and similar films.
As consumer-facing technology such as self-driving cars, and machine-learning systems such as recommendation engines, push the boundaries of what is possible, the general public is learning more and more about how these systems work. But will we reach a point where people accept A.I.?
WHAT IS ARTIFICIAL INTELLIGENCE?
A.I. refers to software modelled on human behaviour, mimicking rational and logical decision-making processes. Machines cannot conceive of anything outside their programming or make irrational, emotional decisions, so they can only operate on logic and utility. When people hear a phrase like "machine learning," they imagine it in the context of a movie where a machine eventually learns that it is a machine and decides to do something about it. In reality, a machine can only "learn" within the boundaries defined by whoever designed it.
Additionally, unlike humans, A.I. does not recognize or quantify emotional states, and it cannot make purely emotional decisions. A.I. refers to rational intelligence: a display of a computer's "rational thought," even though the computer does not technically have any thoughts of its own.
Machine learning, one of the many subsets of A.I., refers to how a computer reacts and changes based on the new circumstances it encounters when making decisions. Advanced recommendation systems push these limits, allowing A.I. to identify complex patterns and trends in the data it is fed. Traditionally, the more data an algorithm has access to, the better results it can provide. It is an extreme version of what we see in people: the more experience someone has, the better they can predict what will happen next.
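The "more data, better results" intuition can be shown with a toy sketch. Everything here is invented for illustration (the coin example and the `estimate_bias` name are assumptions, not part of any real system): an estimate of a hidden pattern gets sharper as the sample grows.

```python
import random

random.seed(42)  # fixed seed so the example is repeatable

def estimate_bias(flips):
    """Estimate a coin's heads probability from observed flips (1 = heads)."""
    return sum(flips) / len(flips)

true_p = 0.7  # the hidden "pattern" the algorithm is trying to recover
small_sample = [1 if random.random() < true_p else 0 for _ in range(10)]
large_sample = [1 if random.random() < true_p else 0 for _ in range(10_000)]

# With more data, the estimate reliably lands close to the true value.
small_error = abs(estimate_bias(small_sample) - true_p)
large_error = abs(estimate_bias(large_sample) - true_p)
```

With only ten observations the estimate can swing wildly; with ten thousand it settles very close to the true probability, which is the statistical core of "more data gives a better result."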
COMMON FEARS AND MISCONCEPTIONS OF ARTIFICIAL INTELLIGENCE
Some of the most successful movies and television series of all time have focused on "evil robots": the T-800/T-850 and T-1000 Terminators from the Terminator series, HAL 9000 from 2001: A Space Odyssey, and the Sentinels from the X-Men series.
These fears are largely misinformed and stem from a fundamental misunderstanding of A.I. A machine's decision-making process is only as sophisticated as its programming allows and is controlled by humans every step of the way. Notice, too, that in these stories the destructive behaviour of the machines is either directed by a human or driven by the machine adopting an emotional state much like a human's. Our real fear is that machines will adopt the same fears and motivations we have.
Machines cannot "rise up" or "enslave" the human race once they "gain consciousness," because machines are programmed for specific tasks, however contextual and complex those tasks may be. Just as there are limits to our own consciousness and ability to learn (the human mind can perceive only certain aspects of the universe and struggles to comprehend spans of time far beyond our own lifespans), a machine is bounded by its programming and cannot reach beyond it.
HOW ARTIFICIAL INTELLIGENCE PREDICTS BETTER THAN HUMANS
The main difference between A.I. and human decision-making is that the former is completely logic-based, while the latter can be decisively emotional. Logic, to put it very simply, is the mathematics of reasoning. A study by neuroscientist Antonio Damasio found that individuals who were emotionally impaired struggled to make even the simplest decisions. Even when these individuals knew the logical choice, without an emotional frame of reference they were inept at making good decisions. Returning to the fears listed above: emotional decisions are often seen as definitively human, so any decision made outside this completely subjective matrix is seen as "not human," and thereby "against our unknowable interests."
Machines, on the other hand, do not suffer from this sometimes-fatal emotional flaw. A machine will always analyze a dataset without bringing emotion into the mix, and it can parse far larger volumes of data at far greater speed. It finds patterns in the data, draws conclusions, and arrives at a logical solution. Machines operate on logic and will therefore make the most logical decisions their programming allows. With machine learning and semantic technologies, A.I. can refine this process over time and become more effective at it.
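As a small illustration of "find patterns, draw conclusions" (the data and function below are invented for the example, not drawn from any real system), a least-squares line fit applies a fixed logical rule to a dataset and recovers the underlying trend with no judgment or emotion involved:

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b: a purely mechanical rule applied to data."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    return a, b

# Data generated by the rule y = 2x + 1; the fit recovers that pattern.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
slope, intercept = fit_line(data)
```

Given the same data, this procedure returns the same answer every time, which is the predictability the article argues we should find reassuring rather than frightening.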
From a simple perspective, machine learning is predictable and therefore should not be feared. It is also safer. Chris Urmson, leader of the Google Self-Driving Car project, stated in a 2013 article for technologyreview.com that "[Google's] car is driving more smoothly and more safely than [their] trained professional drivers." Two years later, this is still true.
For humans, emotional decisions fold our biases, stray and unrelated thoughts, grudges, and personal beliefs into the task at hand. We are not always logical, and when emotional we do not always make good, utilitarian decisions. Machines, however, can be counted on to make purely logical decisions. An important distinction emerges when we talk this way: is a prediction not just another form of prejudice? Are these semantic technologies not simply developing prejudices over time? And if so, is that necessarily a bad thing, or is it the ultimate goal?
ARTIFICIAL INTELLIGENCE CAN BE PROGRAMMED TO EXCEL AT DECISION MAKING
Just like Google's Self-Driving Car, A.I. can be synonymous with safety and good decision-making. It reacts to human direction, not in spite of it. A machine cannot start, execute, or change its own programming; it requires a human programmer to do that and to redefine its boundaries. However, machine learning can evolve fluid decision-making while still operating within those parameters. This is where recommendation systems excel: as they continue to analyze data and detect patterns, they find new ways to entice your consumers to purchase products and services from your business.
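Real recommendation engines are far more sophisticated, but the core pattern-finding idea can be sketched in a few lines (the product names and purchase histories below are invented for illustration): count which products are bought together, then recommend the strongest co-occurrence.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories; each set is one customer's basket.
baskets = [
    {"laptop", "mouse", "keyboard"},
    {"laptop", "mouse"},
    {"laptop", "keyboard"},
    {"phone", "charger"},
]

# Count how often each pair of products appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def recommend(product):
    """Suggest the item most often co-purchased with `product`, if any."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if a == product:
            scores[b] += count
        elif b == product:
            scores[a] += count
    return scores.most_common(1)[0][0] if scores else None
```

For example, `recommend("phone")` returns `"charger"`, because that is the only product ever bought alongside a phone in this data. The system stays strictly inside its parameters: feed it more baskets and its suggestions shift, but it can never do anything other than count co-occurrences.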
New research from MIT found that machines can be better than humans at building predictions. A machine competed against human teams to write predictive algorithms that would identify patterns and choose the variables most relevant to them. The machine beat the human teams in three different trials, and it wrote its algorithms far more quickly: a matter of hours versus months.
In summation, we must remember that what we fear most about machines is not that they will become less human; it is that they will become more human: more emotional, more deceitful, more illogical. We can train A.I. to beat a human at chess, but chess was invented by humans. It is not a universal constant; if you showed it to an alien, they would not know what it was. As long as those strict, narrowly defined parameters are in place, a chess program cannot jump around and take over a satellite. Our fears of A.I. are tethered to the unknown, and we fill in the unknown with familiar stories. Once we, as a culture and society, understand A.I. better, we will be telling new stories.