Yes, AI is taking over the world, and whether that is a good thing depends on who you ask! After decades of slow and often disappointing results, Artificial Intelligence is going through an extraordinary revival and has stirred quite a buzz in recent years. Hardly a day goes by without a major journal publishing something about AI. That is not surprising. Today, from home automation to autonomous driving and from mobile devices to life science, the presence of AI is ubiquitous, not to mention its extensive use in the financial sector, surveillance, and the military. Despite all the hoopla, the fact is we have only begun to tap into the immense potential of AI and are at present merely scratching its surface. As AI promises to make navigating the intricate world of complex decision making easier for us, interest in AI and its use will almost certainly grow dramatically in the foreseeable future.
Artificial General Intelligence (Strong AI)
Although AI has gone mainstream, much confusion about it remains. The goal of classical AI, which scientists have pursued since the early 1950s, is to create a thinking machine with intelligence on par with or better than that of any human. The research area focused on achieving this goal is called Artificial General Intelligence, or Strong AI. Strong AI is an exciting area that amalgamates ideas from many fields, including social science disciplines such as philosophy, economics, sociology, and psychology, on top of mathematics, neuroscience, statistics, physics, electronics and programming. The dream of strong AI is still far from fruition. Although some progress has been made in mimicking a number of human behaviors, with present technology it is still impossible to build a machine capable of emulating a rat's intelligence, let alone a human's. Unfortunately, ever since the inception of the notion of strong AI, its reputation has been tarnished multiple times by unsubstantiated hyperbole and unfounded claims, made both by overzealous scientists and by unscrupulous media eager to promote their publications with sensations.
Will it ever be possible to develop a strong AI with the cognitive ability of humans? The jury is still out. Some scientists and philosophers think it is possible within a century or even earlier. Others disagree, contending that an understanding of many of the fundamental characteristics that make us human, such as wisdom and consciousness, remains elusive. Despite some advances, we still do not know whether these elements are computable or attainable by any AI at all.
When imagining a cognizant machine, the paragon we compare against is human intelligence. There is a reason for this: human intelligence is the highest form of intelligence known to us, and our understanding of intelligence is based on it. Although we still lack a universally accepted definition of intelligence, we do have a somewhat plausible understanding of the concept. One illustrative definition is that intelligence is "the thinking and computational ability to achieve goals in changing environments based on contextual and prior knowledge." Intelligence, per this definition, depends on factors such as perceptiveness, the ability to learn, the ability to understand and interpret environmental signals, reasoning, and problem-solving skills. Contextual knowledge here is the information garnered from reflecting on environmental scanning and modeling. The knowledge required to make a decision stems from the convergence and divergence of available knowledge through deduction and induction. Informed decision making plays a key role in the actions any intelligent entity takes to achieve a goal.
What is Machine Intelligence?
However, this definition relies entirely on our perception of human intelligence.
The definition of machine intelligence, with its own idiosyncrasies, will probably differ from the prevailing anthropomorphic view for many reasons. First, human intelligence receives and responds to analog signals through sense perception, and the process is slow. In comparison, AI works with digital input, and that process is nearly instantaneous. Second, in the course of learning, humans focus on one object at a time, and awareness of the rest of the context and surroundings becomes secondary or subsidiary. AI need not divide its attention this way, as it can grasp the entire environment at once. At the present stage of AI's growth, though, attention mechanisms – for example, recurrent attention models built on recurrent neural networks (RNNs) – are used to focus on one region at a time during visual environmental scanning and object recognition, in order to reduce computational cost. Third, human memory relies on abstract representation; it is far from perfect at recalling an exact match of an object, concept, or experience. Computer-based AI memory can capture every detail and store the representation in dynamic repositories; an AI machine has access to exact information, and no data loss occurs in the process of recall. Finally, and importantly, the human brain is resourceful and often becomes creative in unexpected ways. Although we have many conjectures, we still do not fully understand how the phenomenon of creativity works at the cognitive, physical and functional levels. As far as machines are concerned, we are still at an early stage in endowing them with creativity, intuition, and resourcefulness. Hypothesizing about what machine intelligence would look like is also difficult because many computational capabilities that we used to consider intelligent have, over the years, become rudimentary for programs to perform, thanks to ever-improving hardware and software technologies.
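To make the attention idea above concrete, here is a minimal sketch of soft attention: the model scores each region of an input for relevance to a query, converts the scores into a probability distribution, and forms a weighted summary. This is an illustrative toy, not the recurrent attention models mentioned above; the function and variable names are hypothetical.

```python
import numpy as np

def soft_attention(regions, query):
    # regions: (n, d) array of feature vectors, one per image region
    # query: (d,) vector encoding what the model is "looking for"
    scores = regions @ query                 # relevance score per region
    weights = np.exp(scores - scores.max())  # softmax, shifted for stability
    weights /= weights.sum()                 # attention distribution (sums to 1)
    glimpse = weights @ regions              # weighted summary of all regions
    return weights, glimpse

# Three toy regions; the query points toward the first region's features
regions = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
query = np.array([2.0, 0.0])
weights, glimpse = soft_attention(regions, query)
```

In a full recurrent attention model, the query itself would be produced step by step by an RNN, letting the system "look" at one region after another instead of processing the whole image at full resolution.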
It is also entirely possible that imitating nature is not the right approach to achieving similar or better capabilities in machines. Aviation offers an example: until the principles of flight and the laws of aerodynamics became evident, the aspiration to fly was for centuries confined to emulating birds and flying insects. Nature can and should serve as inspiration, but the outcome and the ideas surrounding machine intelligence, like the aircraft, will most probably deviate from the instances found in nature.
Narrow AI
Since its inception in the mid-1950s, the field of AI has gone through various growth trajectories. With the advancement of technology, we have started to realize that AI can solve many real-life problems that would require extensive intellectual capability and a sophisticated knowledge base for a human to perform. Because of this, much AI research today is devoted to developing applications and agents aimed at performing tasks that, if done by humans, would be considered intelligent. This field is called narrow AI. Narrow AI, the focus of most AI researchers, is meant to automate complex processes, releasing people from tedious, complicated and repetitive work and helping humans make smart decisions. From this perspective, AI is a group of techniques, algorithms, and technologies that allow agents and apps to understand and model solutions to real-life problems, either as autonomous tools or as supportive tools that enhance human capabilities.
Emulating human intelligence is a complex and futuristic goal with many tough issues to surmount. Understanding this, scientists in the early days of AI research set more realistic initial objectives. One such target was to develop an AI program capable of defeating the best human chess players. Playing chess requires intellectual capability, and the best chess players manifest cognitive wit and ingenuity that have fascinated the public for centuries. It is no surprise that the AI field undertook this challenge.
Although the first chess-playing algorithm emerged in 1951, hardware constraints made developing a master-level chess program an exceedingly difficult problem. Despite the claim made in 1958 by the prominent AI scholars Simon and Newell that a computer would be able to trounce a chess world champion within ten years, in reality it took four decades to achieve this goal. The slow growth of AI, its consistent underperformance and its broken, overly optimistic promises meanwhile prompted many naysayers to ridicule the field. Among them, the philosopher Dreyfus was one of the most prominent: in the early 1970s he declared that AI would never be able to compete with the best human chess players. In 1997, the day of judgment finally arrived. Kasparov, the then reigning world champion, lost to a computer named Deep Blue, developed by IBM. However, the success owed more to vast gains in processors' computational capability – brute force – than to the deployment of any new algorithms. Deep Blue still used the same minimax and alpha-beta pruning algorithms, along with heuristic rules, that had been deployed decades earlier. Humans, on the other hand, use strategic and tactical knowledge to achieve similar results with far fewer computations. This issue prompted one area of AI research to emphasize reducing the number of calculations and using human-like intelligence in problem solving. Nevertheless, it was a turning point for AI research, followed by further triumphs: the wins in Jeopardy! in 2011 and Go in 2015. Deep Blue's achievement in chess, the IBM Watson program's win over the best Jeopardy! players and Google's AlphaGo's victory over the reigning Go champion are remarkable milestones for the AI discipline.
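The minimax search with alpha-beta pruning mentioned above can be sketched in a few lines. This is a minimal illustration over a toy game tree (nested lists with numeric leaf scores), not Deep Blue's actual implementation; the pruning step is what lets the search skip branches the opponent would never allow.

```python
def alpha_beta(node, depth, alpha, beta, maximizing):
    # node: a number (leaf score) or a list of child nodes
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alpha_beta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the minimizer will never let play reach here
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alpha_beta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break  # prune: the maximizer already has a better option
        return value

# Maximizer chooses among three moves; the minimizer replies at the leaves.
tree = [[3, 5], [6, 9], [1, 2]]
best = alpha_beta(tree, 2, float("-inf"), float("inf"), True)  # -> 6
```

In the last subtree, the search cuts off after seeing the leaf 1: the maximizer already has a guaranteed 6, so the rest of that branch is never evaluated. Deep Blue combined this kind of pruning with hand-tuned evaluation heuristics and enormous raw search speed.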
While many skeptics do not see much value in AI programs' ability to play these games better than humans, the fact is these challenges make AI programs goal-centric problem solvers, a capability that can be extrapolated to other real-life areas. For example, although IBM Watson started with the Jeopardy! game, today it is quite successful in discovering new solutions in life science and serves as an advanced real-time companion in disease diagnostics.
The Rise of the AI Era
Today, we are firmly ensconced in a new era often dubbed the Knowledge Era, the Second Machine Age, or the Fourth Industrial Revolution. The key features of this epoch are the emergence, rapid advancement and confluence of multiple disruptive technologies: Quantum Computing, Blockchain technology, 3D printing, the Internet of Things (IoT), Genetic Engineering, Nanotechnology, Cognitive Computing, Big Data Analytics, Machine Learning, Robotics, Bioscience, and Semantic Technologies, among others. AI in this new age is capable of bolstering, augmenting, combining and reinventing most of these fields, modifying and shaping the world's technological future. The consequences are far-reaching. For example, a recent survey conducted by Infosys shows that 64% of the businesses that use AI to some degree consider mass-scale deployment of AI necessary for their future growth.
In fact, the job market in the coming years will change so dramatically that 65 percent of the children enrolling in school today will work in areas that do not yet exist. An even more pressing problem is that within the next five years we will see substantial changes in the skill sets required for most jobs, and more than a third of present jobs, such as truck drivers, office cleaners, warehouse assistants and agricultural helpers, will disappear.
In a survey of technology professionals conducted by Pew Research, a little less than half of the respondents feared that by 2025 technology would kill more jobs than it would create. These forecasts and rapid technological advances have brought forth a sense of urgency in understanding the new technologies, especially AI, their possible positive and negative impacts, and the best ways average people can prepare themselves for the upcoming period of sweeping and radical change. For entrepreneurs, businesspersons, and executives, the question is how to harness the power of AI in their existing fields, reap its benefits and create competitive advantage.
This is the first of a series of five articles on Artificial Intelligence, Machine Learning, IoT and Technology future.