My college studies in information systems taught me that ideas about Artificial Intelligence (AI) have been around since the 1950s, when the term was coined. In recent years, however, the term has come to be used so often and for so many different things that I became unsure what people meant when they talked about AI. Moreover, the volume of articles published on AI was overwhelming. I craved a framework that would simplify and clarify what AI is all about today and how to think about taking advantage of the technologies.
To find an actionable framework, I have been on a journey over the last several months to get educated about AI. I attended technology conferences, talked to people in my network, took courses at Stanford, and reviewed a wide range of literature. From the outset, I wanted to share what I have learned with other senior Human Resources (HR) leaders. That said, I hope others who are early in this journey can also benefit from some of this information. This will be the first of two articles. In this article, I have grouped what I have learned about AI into two parts: (1) Defining AI and (2) Understanding how AI is being used today.
AI Defined – A Simple Framework
The definition of AI that I found most satisfying came from Dr. Sohila Zadran in a course on AI at Stanford University. She described AI as a machine's ability to display intelligence equivalent to that of humans. She further defined intelligence as the ability to learn from experience, solve problems, and use the accumulated knowledge to adapt to new situations. She had a great diagram to explain this:
In her definition, artificial intelligence is the output that demonstrates intelligence equivalent to humans. The technologies in the middle (such as machine learning) are simply a means to that end.
It seems there is a lot of debate about the basic definition of AI. For example, in his book Artificial Intelligence: What Everyone Needs to Know, Jerry Kaplan views the essence of AI – and of intelligence in general – as making generalizations in a timely fashion based on limited data. The more complicated the problem, and the more quickly conclusions can be drawn from minimal data, the more intelligent the behavior. Kaplan also points out that machines already perform some tasks much better than humans – meaning that some machines could easily meet and surpass Dr. Zadran's definition. However, his more nuanced definition requires delving deeper into several aspects of AI that are beyond the scope of this article. (Kaplan's book is an excellent primer for those who want to learn more about AI.) I remain satisfied with Dr. Zadran's definition as a good, simple starting point.
Dr. Zadran's definition reminds me of the Turing Test. In recent years it became more widely known as part of the storyline of the movie The Imitation Game. A simplified version of the Turing Test is this: if you are interacting with both a human and a machine, you can't see either, and you can't tell the difference between the two, then the machine is exhibiting human-equivalent intelligence. This is clearly a very high bar!
Movies often paint a very scary picture of AI, depicting it as an existential threat: a "superintelligence" that might pursue goals that put humankind at risk. Examples include the Terminator, Ava in Ex Machina, and the Borg in Star Trek. Such fears relate to the concept of "artificial general intelligence" (AGI), which would be the equivalent of human-level awareness and capability. But the reality is that AGI does not yet exist, and most forecast it will be a very long time before it does (World Economic Forum Global Risks Report 2017). My observation is that many things people are calling AI would not pass the Turing Test or come anywhere near the level of AGI. So, what are all of these things that people are referring to as AI?
One possible framework for thinking about the proliferation of AI references is to think about it like a field of study and research. The field of study of AI is broad and has many components. You can see this perspective in the attached diagram, which is also from Dr. Zadran in her course at Stanford:
Here's an example of applying this framework. Personal assistants like Siri rely heavily on technologies such as speech recognition and natural language processing. I would argue that Siri doesn't yet produce outputs that regularly pass the test of matching human intelligence. But if you view it simply from the perspective of whether Siri uses technologies from these fields of study, then you could easily describe it as a form of AI. Regardless of whether this framework is appropriate, my conclusion is that any application of any of the technologies in the diagram is frequently being described as AI – both because each is a field of study under the AI umbrella and because the label lends cachet to the products and services applying these technologies.
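To make this concrete, here is a deliberately tiny, hypothetical sketch of one narrow natural-language task: matching a typed request to an "intent" by keyword overlap. This is nowhere close to how Siri actually works – real assistants use far more sophisticated models – but it illustrates how a product can apply a technology from the AI field of study without coming anywhere near human-equivalent intelligence.

```python
# Toy intent matcher: maps a typed request to a predefined "intent".
# Purely illustrative; the intents and keywords are invented for this example.

INTENTS = {
    "weather": {"weather", "forecast", "rain", "sunny"},
    "timer": {"timer", "alarm", "remind"},
    "music": {"play", "song", "music"},
}

def detect_intent(utterance: str) -> str:
    """Return the intent whose keyword set best overlaps the utterance."""
    words = set(utterance.lower().split())
    best_intent, best_score = "unknown", 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)  # count shared words
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(detect_intent("what is the weather forecast today"))  # weather
```

A system like this "understands" nothing; it only counts word overlaps. Yet because keyword matching sits inside the natural language processing field of study, a product built on it could plausibly be marketed as AI – which is exactly the labeling dynamic described above.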
An even simpler way of looking at all of this is to consider AI technologies as one of many waves of advances in automation of tasks. No doubt it is moving faster and has broader implications than many other waves of automation – but the outcome is similar.
I am skipping over many aspects of AI to arrive at a simple framework that makes sense of things. I am not attempting to create an academically pure definition, explore the probability of creating sentient beings with these technologies, or dive into ethical considerations. I want a framework that allows me to separate reality from the hype and proceed in a practical manner in the near term with real business challenges and opportunities. Ultimately, I see AI as the broad field of several technologies that are combined to create new opportunities to automate tasks and gain new insights.
AI Technologies in Use Today – It’s Here and Growing Rapidly
Using this broader, portfolio-of-technologies definition of AI, it is not surprising that the application of these technologies in many parts of our lives is well underway. AI technologies have proliferated partly because of the dramatic increase in computing power, coupled with the explosion of data available to train these systems. Applications that were only theoretical in the past have become reality.
Current AI applications are forms of “narrow” AI or “artificial specialized intelligence” (ASI); they are directed at solving specific problems or taking actions within a limited set of parameters, some of which may be unknown and must be discovered and learned. Tasks such as trading stocks, writing sports summaries, flying military planes and keeping a car within its lane on the highway are now all within the domain of ASI (World Economic Forum Global Risks Report 2017). Google’s search algorithms and Facebook’s facial recognition tools already use AI technologies to “learn” about us so that we can more quickly achieve what we set out to do on their services.
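The defining trait of narrow AI described above – learning from experience within a limited set of parameters – can be sketched in a few lines. The following is a minimal nearest-neighbour classifier on invented data (the feature names and labels are hypothetical); real systems use far richer models and vastly more data, but the core idea of generalizing from labelled examples is the same.

```python
# Minimal nearest-neighbour classifier: a toy example of "narrow" AI that
# learns from labelled experience and applies it to new, unseen cases.
# The data below is invented purely for illustration.

from math import dist  # Euclidean distance between two points (Python 3.8+)

# "Experience": (features, label) pairs, e.g. [vibration_level, error_rate]
training = [
    ([1.0, 0.9], "faulty"),
    ([1.2, 0.8], "faulty"),
    ([5.0, 0.1], "healthy"),
    ([4.8, 0.2], "healthy"),
]

def classify(features):
    """Label a new example with the label of its closest training example."""
    _, label = min(training, key=lambda ex: dist(ex[0], features))
    return label

print(classify([4.5, 0.15]))  # closest to the "healthy" examples
```

Everything this program "knows" is confined to two numbers and two labels – it could never drift beyond that scope. That boundedness is what separates today's ASI applications from the general intelligence depicted in the movies.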
While the narrow AI assessment makes sense, some software has long been able to outperform humans. This has happened in areas such as processing financial services transactions or determining the optimum delivery routes for companies such as FedEx. Computer vision is advancing to the point where it’s often superior to human quality inspectors on manufacturing lines at companies like Flex.
Both personal experience and expert forecasts indicate that the application of AI technologies will continue to grow at a rapid pace. One example is the growing adoption of personal assistants like Alexa and Siri. Gartner recently forecast that by 2020, 85% of customer service interactions will be managed by AI technologies such as "chatbots." Large amounts of venture capital continue to flow into this space. There is a wide variety of material currently being published for those who want to dig deeper.
It's easy to imagine that, given recent successes with these use cases and continued increases in computing power and data, we'll see the application of AI technologies to more and more business needs. Some compare the current state of AI adoption to where "mobile" stood a couple of years after the first iPhone was released – when it seemed everything would go mobile, but we were still just past the very beginning.
In this article I won't dive deep into the implications for job disruption from all these new applications of AI – but suffice it to say that this wave of automation will have broad-ranging effects on many processes and jobs. In the second article in this series, I will share a few key points that senior HR leaders should consider related to the advancement of AI technologies.