You can enter this question in an internet search engine and check what answers you get. Rather than doing that I will answer this question simply by “rewriting” history, so I can say it is “based on a true story”.
During the Second World War, Alan Turing and his colleagues at Bletchley Park succeeded in building a machine that used electromechanical components to decipher the communications to and from German submarines operating in the Atlantic. The German messages were encrypted using the famous Enigma machine. Arguably, the success of Turing and his colleagues was a decisive factor that helped Britain win the war. In fact, what Turing and his colleagues built was a special-purpose computer designed to solve one particular problem: a problem that was supposed to be solvable only by an intelligent human being. Although the computer that Turing built remained a secret for many years after the war, the idea that general-purpose computers, as we know them today, could solve problems requiring intelligence took hold and became an active area of research. This was the genesis of Artificial Intelligence (AI).
After that, researchers started asking the important related question: “What is intelligence?” The answer included something like “the ability to learn”. Hence the term Machine Learning (ML) was introduced and popularized. It meant that the computer would be able to learn, on its own, how to solve certain problems from data presented to it. Another question faced by researchers in the new field of ML was: “How can we make the machine learn, not only from data, but also from a human being who knows the subject well (an expert)?” This led to many years of research and development in the (now almost forgotten) field of Expert Systems – which we’re not including in the title of this article!
While ML was an active research topic, the following question was raised: “What exactly is the machine trying to learn?” The answer, which was driven by commercial interests, was “to extract trends and patterns from data”, with applications including predicting loan defaults (credit risk) and responses to marketing offers. Then someone thought: “But statisticians have been doing just that for years. What is new?” The answer was something like: “Yes, but this is not simply statistics; this is Data Mining (DM), because we are looking for the gold nuggets in the data!”
Years after implementing DM in business and other fields, someone asked the question: “What is Data Mining trying to do anyway?” The answer was “mainly to predict customer behaviour with respect to risk and purchasing patterns”. So it is really Predictive Analytics, because predicting is what it is actually trying to do.
In the last decade or so, with Predictive Analytics having been more widely accepted and deployed in businesses, the (same) analysts who have been doing the work all this time – who used to be called data miners, then predictive analytics specialists – started saying something like: “But it’s much more than just making predictions. The work involves applying many techniques to the data until a predictive model is developed and deployed. And it is not an art. It is a science. It’s Data Science (DS).”
The above “history” is based on my own experience and work in the field over the last 25 years. I admit I took the liberty of being a bit creative, but I did that on purpose to explain the relationship between these supposedly different fields.
The reader will find many articles and blogs on the internet that try to link specific techniques to some of these titles, and then use that to devise classifications of, and relationships between, them.
So, if you, like me, work for a vendor of platforms and tools used by “data scientists” and “predictive analytics specialists”, then the next time someone asks you whether your software supports the latest “Machine Learning” or “Artificial Intelligence” techniques/methods that he/she heard about at the latest conference, you will know how to answer!