Article

Demystifying artificial intelligence

Separate the hype from real value and business opportunities

Artificial intelligence is receiving a lot of attention, as it has the potential to not only transform value chains but also disrupt entire industries. This poses a real threat to companies that hesitate to investigate the potential of AI. Unfortunately, the actual use cases of AI are far too often clouded by hype and misunderstandings. In this article, we examine what AI is to help decision-makers separate hype from actual use cases.

Published

February 2018

Author

Mark Jensen

In recent years, the topics of artificial intelligence (AI), machine learning (ML) and deep learning (DL) have generated a lot of excitement (and hype), but they have also caused some public concern, for example, over whether AI will lead to widespread unemployment. Regardless of how you feel, AI is already being used in a wide range of applications such as self-driving cars, sales and marketing, fraud detection and healthcare. The scope of these applications is increasing as start-ups and scientists look at either how to improve existing AI solutions or how to discover new use cases. That we are able to discover more use cases is in large part due to big data and fast GPU processing power; without these drivers, many use cases would not be feasible.

The increasing interest has also raised questions as to whether the AI hype will soon pass its glory days, since part of the AI community fears that it cannot live up to expectations. Concerns such as these are legitimate, but AI is very unlikely to disappear, especially when you take into consideration that huge companies such as IBM and Microsoft are investing heavily in AI enterprise solutions, and that Google is rumoured to have paid approximately £400 million for DeepMind, an AI start-up, in 2014.

While start-ups and big tech giants are already focusing on and developing their AI strategies, others seem to be postponing theirs. Andrew Ng, Stanford professor and founder of Google Brain, states, “I think five years from now there will be a number of S&P 500 CEOs that will wish they’d started thinking earlier about their AI strategy” (Parloff, 2016).

In order to separate the hype from real value, it is important to have some basic understanding of artificial intelligence, machine learning and deep learning. This article highlights different concepts and exemplifies what machine learning entails.

Does artificial intelligence exist?

Over the last few decades, the movie industry has often illustrated AIs as robots that act and look like humans. It is therefore understandable if some people think that this is what is referred to when hearing about AI. However, while the portrayal in many of these science fiction films (for example, Blade Runner, Ex Machina) is fascinating, the current state of AI is less exhilarating. Understanding what AI is can be somewhat complicated, but we can think of it as either strong or narrow AI. According to John R. Searle (2009), strong AI is where the correct simulation really is the mind, and narrow AI is where the correct simulation is a model of the mind. That is to say, strong AI is understood as a machine that can perform all the same tasks as a human, while narrow AI refers to a machine that can perform only very specific tasks that humans can perform, for example, driving a car or playing chess. If you want to do multiple tasks that are different from each other, you would need multiple narrow AIs.

Since the birth of AI in the 1950s, there have been multiple approaches to developing narrow AI. One approach is expert systems, which try to mimic a human expert's decision-making process through a series of IF-THEN rules: IF certain conditions are true, THEN perform an action or draw a conclusion. While this approach was very popular in the 1980s and 1990s, interest has since faded. Another approach is machine learning, which today is equated with (narrow) AI, and as such we will not dive deeper into expert systems.
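As a minimal sketch of what such an expert system might look like, consider the hypothetical loan-approval rules below; the rules and thresholds are invented for illustration and are not taken from any real system:

```python
# A minimal, purely illustrative expert-system sketch: decisions follow
# hard-coded IF-THEN rules written by a human expert, not learnt from data.

def loan_decision(income: float, debt: float, years_employed: int) -> str:
    # Hypothetical rules a credit expert might articulate.
    if debt > income * 0.5:
        return "Reject: debt exceeds half of income"
    if years_employed < 2:
        return "Refer to manual review: short employment history"
    if income > 40_000:
        return "Approve"
    return "Reject: income below threshold"

print(loan_decision(income=50_000, debt=10_000, years_employed=5))  # Approve
```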

... strong AI is understood as a machine that can perform all the same tasks as a human, while narrow AI refers to a machine that can perform only very specific tasks that humans can perform ...

Machine learning as a form of intelligence

Machine learning is an area of study that is closely related to computational statistics, which is why there is an overlap between machine learning and statistics (Tibshirani, 2012). In recent years the gap between machine learning and statistics has narrowed further, and this convergence will most likely continue. Machine learning can therefore be thought of as a fancy way of doing statistics.

Computers are generally much faster than humans at performing calculations. It can therefore be debated whether being fast at performing statistical calculations qualifies as a form of intelligence. Nevertheless, this does not diminish the usefulness of machine learning algorithms as a data analysis tool. In fact, it is the ability to learn without being explicitly programmed (Samuel, 1959) that makes them quite powerful.

Traditional algorithms are usually applied to a problem where a set of well-defined rules exists. A few examples of this could be the way you calculate your tax return or how you find the shortest route to your destination. However, there are problems where well-defined rules do not exist. In some of those cases, we try to look for patterns that can be used to predict the outcome. With the vast amount of data that is collected every day, this easily becomes a daunting and time-consuming task for any human to solve.
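To make the contrast concrete, the following toy sketch shows the first kind of problem: a progressive tax calculation where every rule is known in advance. The brackets and rates are invented for illustration:

```python
# A traditional algorithm: the rules (tax brackets) are fully specified up
# front, so no learning is involved. Brackets are invented for illustration.

BRACKETS = [(10_000, 0.0), (50_000, 0.2), (float("inf"), 0.4)]  # (upper bound, rate)

def tax_due(income: float) -> float:
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        taxable = max(0.0, min(income, upper) - lower)
        tax += taxable * rate
        lower = upper
    return tax

print(tax_due(60_000.0))  # 0.0*10k + 0.2*40k + 0.4*10k = 12000.0
```

No such rule set exists for the second kind of problem, which is where machine learning comes in.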

Machine learning is a toolbox for problem-solving

Machine learning is a great toolbox for finding “hidden” patterns that can be used to predict the outcome of a given scenario. As mentioned, humans do not explicitly program machine learning algorithms. This means that we do not use predefined rules but rather program the machine to learn from an incomplete data set of examples. The machine learning algorithm will then try to infer its own rules for mapping input to output from those examples. An example could be to predict the date of a heart attack for a given person. Here we might know the input, for example, genetics, lifestyle, living environment and date, as well as the output, which is whether a given person with the above-mentioned input had a heart attack. For any human, deriving well-defined rules for a problem like this is not trivial. However, a machine would (hopefully) see a pattern in the data set that can be used to predict a possible outcome.
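A hedged sketch of how a simplified version of this example might be set up, assuming scikit-learn and entirely synthetic data (and, for simplicity, predicting whether rather than when a heart attack occurs):

```python
# Sketch only: random numbers stand in for genetics, lifestyle, living
# environment and date; a real clinical model would need far more care.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))  # four hypothetical input features
# Synthetic labels: 1 = had a heart attack, 0 = did not.
y = (X @ np.array([0.8, 0.5, 0.3, 1.0]) + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The model has inferred its own input-to-output mapping from the examples.
print("held-out accuracy:", model.score(X_test, y_test))
```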

One area of machine learning, which is the cause of the newfound interest in artificial intelligence, is deep learning, also referred to as neural networks. The reason is that over the last few years numerous experiments have demonstrated that deep learning is particularly good at mapping input to output, enabled by the huge increase in processing power and big data. And according to Ng (2017), the performance of these networks seems to scale a lot better than that of more traditional machine learning algorithms.

Deep learning scales better than other machine learning algorithms for certain types of problems where input is mapped to output (Ng, 2017)

This has encouraged many companies to build deep learning into their products. One such example is Tesla, which uses deep learning as part of teaching its cars to recognise objects (Shapiro, 2016). Another example is Apple, which uses it to improve interaction between Siri and humans (Levy, 2016).

While there is a range of machine learning algorithms to choose from, we must carefully select which one to apply to a given data set. The reason for this is that performance depends on the problem you are trying to solve as well as the available data.
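One common way to make this selection is to compare candidate algorithms with cross-validation. The sketch below assumes scikit-learn and uses one of its bundled data sets purely for illustration:

```python
# Comparing candidate algorithms with 5-fold cross-validation (a sketch).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "decision tree": DecisionTreeClassifier(random_state=0),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

Which candidate wins depends on the data set; the point is that the choice is made empirically rather than by default.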

In certain situations, a given algorithm may also require that you spend time enriching the data with additional information. This implies that human intelligence is still required, since we need to validate not only the actual model performance but also the data and its quality.

Machine intelligence does not equal an artificial brain

The idea of developing an artificial brain seems captivating, and it is often said that deep learning is inspired by the brain when trying to explain how it works. This idea has been around since as early as the 1950s, when it was predicted that the development of artificial brains was just a few years away (Yadav et al., 2015). Time has since passed, and almost 70 years later we still have not developed an artificial brain. While the brain metaphor is an intriguing notion, it also seems to make people believe that we are developing something within the realm of strong AI. The reality is that we have very little understanding of how the brain works, and therefore, as a result, we understand even less of what is required to build a machine that works like the human brain (Ng, 2017).

The reality is that we have very little understanding of how the brain works, and therefore, as a result, we understand even less of what is required to build a machine that works like the human brain (Ng, 2017).

How do machines actually learn?

With the aim of understanding how machines are able to learn without a human brain, let us first consider what it means to learn. According to Ambrose et al. (2010), learning can be defined as a process that leads to change, which occurs as a result of experience, and increases the potential for improved performance and future learning. From a conceptual point of view, it can be argued that machine learning algorithms learn in a similar fashion. They gain “experience” by inferring information from a given input, and as new data is received, they try to improve their understanding to make better predictions.

To illustrate the learning process, let us consider a simplified example. A car manufacturer wants to offer better customer service by predicting when a car needs a service check instead of performing it at fixed time intervals. Let us further assume that over the years, the manufacturer has collected information on motor vibrations and kilometres driven. From these examples, a labelled training data set is created, with each data point labelled as either “No service” or “Service”. The following is a visualisation of the training data set:

Illustrating the whole data set that will be used for training the model

The above figure illustrates when an actual service check is needed, given kilometres driven and the level of vibration in the motor. The circles represent data points for when a given car actually needed service, and the X’s for when no service was required. The objective of the algorithm is to figure out how to separate the data points with what is called a decision boundary. After the first few iterations, it may look something like this:

The model after having gone through a few iterations

As more data gets processed, the algorithm continuously updates the decision boundary until it has worked through the training data set. Once the algorithm is finished, it has generalised its experience into a model that we assume will be able to predict the need for maintenance. The following shows how the model could be visualised:

The final model for predicting when service is required

The above example illustrates a type of learning called supervised learning (mapping input to output), which means that we teach the machine what the correct answers are. While the given example seems simple, real models are usually a lot more complex, since they are trained on data with many more variables.
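As an illustration, here is a minimal sketch of how the service-check example could be trained as a supervised classifier; it assumes scikit-learn, and the mileage and vibration numbers are synthetic stand-ins for the manufacturer's data:

```python
# Synthetic stand-in for the manufacturer's labelled data:
# inputs are kilometres driven and motor vibration; label 1 = "Service".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
km = rng.uniform(0, 200_000, size=500)      # kilometres driven
vibration = rng.uniform(0, 10, size=500)    # motor vibration level
# Invented labelling rule, with noise, standing in for real service records.
needs_service = (0.00002 * km + 0.4 * vibration
                 + rng.normal(scale=0.5, size=500) > 4).astype(int)

X = np.column_stack([km, vibration])
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, needs_service)

# The fitted coefficients define a decision boundary in the
# (kilometres, vibration) plane, much like the figures above.
print(model.predict([[150_000, 6.0]]))  # expected: array([1]) -> "Service"
```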

Limitations/challenges

Machine learning has shown it can solve a range of problems better than humans; however, such results do not necessarily come easily. Furthermore, there are also areas where humans clearly outperform machines. It is thus important to understand some of the challenges and limitations that exist within the field of machine learning, particularly when applying it to a given business problem. This section highlights a few of them.

Generalisation is a fundamental feature of the human brain: it recognises patterns in past experiences and transfers that insight to new situations. If generalisation did not occur, each response would have to be learnt in every specific situation (Walker et al., 2014). In machine learning, models must likewise be able to generalise well, or else they will make poor predictions. There are two challenges in achieving good generalisation. The first, a frequent problem in machine learning (Grus, 2015), is called overfitting. This occurs when the model learns the training data set too well, which means that it picks up on noise and learns it as if it were a real concept. This is unfortunate, since that noise will not be present in new data. The opposite of overfitting is called underfitting, which is when the model is too simple to learn the underlying trend in the data. The following is a simple illustration of what overfitting and underfitting could look like:

The line is our model and the x’s are the data points we would like to predict

When looking at the data points, it seems that they follow a curve, but the model in the first example is too simple and therefore does not pick up on this trend during training. In the last example, the model is very complex, and the line ends up lying very close to the data points while over-interpolating between them. Using either the first or the last model on new data will lead to disappointment and, in some cases, devastating results. As an example, Silver (2012) points to the Fukushima nuclear disaster in 2011 as a potential case of overfitting. When the nuclear plant was designed, it was decided that it should be able to withstand a magnitude 8.6 earthquake. Unfortunately, the earthquake that occurred in 2011 was a magnitude 9.0. Silver (2012) illustrates a potential cause of the design decision using an earthquake prediction model: when his model is overfitted, it predicts a significantly lower probability of a magnitude 9.0 or greater earthquake. Furthermore, Silver (2012) states that some seismologists had concluded that anything larger than 8.6 was impossible, which implies that the actual model used for the design decision could have been overfitted. Cases like this should remind us of the famous quote by George Box: “All models are wrong, but some are useful.” Make sure that your models are useful.
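The underfitting/overfitting behaviour from the illustration can be made concrete with a small numerical experiment: fit polynomials of increasing degree to noisy samples from a smooth curve and compare the error on the training data with the error on unseen data (all numbers here are invented):

```python
# Toy demonstration of underfitting vs overfitting: a degree-1 polynomial
# misses the trend, while a very high degree fits the training noise and
# does worse on unseen data.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.sort(rng.uniform(0, 1, 20))
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.2, size=20)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)  # the true underlying curve

for degree in (1, 3, 15):  # too simple, about right, too flexible
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```

The training error keeps shrinking as the degree grows, but the test error reveals which model actually generalises.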

For most humans, common sense reasoning is something that comes naturally, but it is something that machines handle poorly.

For most humans, common sense reasoning is something that comes naturally, but it is something that machines handle poorly. This was demonstrated in the first annual Winograd Schema Challenge in 2016, which tested a machine’s ability to answer a specific type of common sense question called a pronoun disambiguation problem (PDP). The following example is taken from Commonsensereasoning.org and illustrates the challenge:

 

The trophy would not fit in the brown suitcase because it was too big.
The trophy would not fit in the brown suitcase because it was too small.

What was too big/small?

 

The challenge here is to understand what “it” refers to. For humans such questions are relatively easy: human participants scored 90%. For a machine, however, they pose too much ambiguity; the best machine scored 58%. What is more interesting is that random guessing scored 44% in the same test, illustrating that even the best-trained machine was only slightly better than random guessing. The reason humans are so much better at answering such problems is that we are able to use our common sense, while contextualising is difficult for AI machines.

Understanding how AI machines make predictions is another challenge, particularly within deep learning. The reason is that deep learning models are generally considered to be black boxes (Géron, 2017). Just like a human brain, dissecting one would not reveal how it works. Contrary to deep learning models, however, humans can explain their line of reasoning, so that others can understand how a certain conclusion was reached. Making tough business decisions requires transparency, particularly if we are to base those decisions on predictions. Satya Nadella, CEO of Microsoft, underlines the need for transparency by stating: “We want not just intelligent machines but intelligible machines.”

We want not just intelligent machines but intelligible machines.

Data quality is the Achilles heel of most, if not all, machine learning algorithms. A study (Das et al., 2016) that sought to understand the inner workings of deep learning ended up highlighting the problem. Even though the machine gave correct predictions, it derived them in a way that would seem illogical to humans. For example, when the AI model was asked what was covering the windows in a given picture, it answered “Blinds” by looking at a bed at the bottom of the picture instead of looking at the windows. The researchers pointed out afterwards that the data might have been biased (Vincent, 2016), meaning that there might have been an over-representation of pictures with both beds and blinds in them compared to what you would find in the real world. The given example may seem innocent, but bias can have large societal consequences if not dealt with properly. This highlights that humans still need to evaluate whether the answers we get from deep learning make sense, and it underscores the need to understand how a machine reaches a given answer.

Besides data quality, the amount of data required for machine learning is another challenge. It is difficult to state how much data is required for a given solution, but it can be anything from 10,000 samples and upwards. Currently, Baidu, a Chinese web service company, is using 200 million images to train an image recognition application (Ng, 2017). Acquiring that amount of data is not trivial.

The challenges listed above are just a few examples of the many considerations that go into building a machine learning model.

Wrapping it all up

The boundaries of what is possible with artificial intelligence are constantly being pushed, at a faster pace than we have seen before. It will therefore be imperative for companies and organisations to reflect on how this will affect them, because those who do not are at risk of being overtaken by their competitors. According to a recent study, 30% of large companies have already developed an AI strategy (Ransbotham et al., 2017).

With this newly gained intuition, we have taken a key step towards a better understanding of what artificial intelligence is capable of. That level of intuition can help us see past the marketing hype and prevent us from chasing the wrong business opportunities. The current level of artificial intelligence cannot solve all problems, but it can help us gain deeper insight. To gain insight, we first need to consider which questions are important to answer; an example could be asking which internal processes could be optimised. Furthermore, do not be afraid to experiment. Artificial intelligence is still a relatively new technology, which means that some attempts will most likely fail. That is to be expected; it is how humans learn to master any discipline. Every attempt, small and large, will create organisational learning that helps you move forward with artificial intelligence.

References

Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching. John Wiley & Sons.

Das, A., Agrawal, H., Zitnick, C. L., Parikh, D., & Batra, D. (2016). Human attention in visual question answering: Do humans and deep networks look at the same regions? arXiv preprint arXiv:1606.03556.

Géron, A. (2017). Hands-on machine learning with Scikit-Learn and TensorFlow: Concepts, tools, and techniques to build intelligent systems. O’Reilly Media.

Grus, J. (2015). Data science from scratch: First principles with Python. O’Reilly Media.

Levy, S. (2016, 24 August). The iBrain is here – and it’s already inside your phone. Wired. https://www.wired.com/2016/08/an-exclusive-look-at-how-ai-and-machine-learning-work-at-apple/

Ng, A., (2017, 2 Feb). Artificial intelligence is the new electricity. Stanford Graduate School of Business. https://www.youtube.com/watch?v=21EiKfQYZXc

Parloff, R. (2016, 28 September). Why deep learning is suddenly changing your life. Fortune. http://fortune.com/ai-artificial-intelligence-deep-machine-learning/

Ransbotham, S., Kiron, D., Gerbert, P., & Reeves, M. (2017). Reshaping business with artificial intelligence: Closing the gap between ambition and action. MIT Sloan Management Review.

Searle, J. (2009). Chinese room argument. Scholarpedia, 4(8), 3100, revision #66188. http://dx.doi.org/10.4249/scholarpedia.3100

Shapiro, D. (2016, 20 October). Tesla Motors’ self-driving car “Supercomputer” powered by NVIDIA DRIVE PX 2 technology. https://blogs.nvidia.com/blog/2016/10/20/tesla-motors-self-driving/

Silver, N. (2012). The signal and the noise: Why so many predictions fail – but some don’t. Penguin Books.

Tang, J., Liu, R., Zhang, Y. L., Liu, M. Z., Hu, Y. F., Shao, ... & Zhang, W. (2017). Application of machine-learning models to predict tacrolimus stable dose in renal transplant recipients. Scientific Reports, 7.

Tibshirani, R. (2012). Statistics 315a – Glossary: machine learning vs statistics. Stanford University. https://statweb.stanford.edu/~tibs/stat315a/glossary.pdf

Vincent, J., (2016, 12 July). First Click: Deep learning is creating computer systems we don’t fully understand. The Verge. https://www.theverge.com/2016/7/12/12158238/first-click-deep-learning-algorithmic-black-boxes

Walker, J. E., Shea, T. M., & Bauer, A. M. (2014, May 5). Generalization and the effects of consequences. Education.com. https://www.education.com/reference/article/generalization-effects-consequences/

Yadav, N., Yadav, A., & Kumar, M. (2015). An introduction to neural network methods for differential equations. Springer.