
What's the Difference Between Data Science, Machine Learning, and Artificial Intelligence?

The data science, machine learning, and artificial intelligence fields do have a great deal of overlap, but they are not interchangeable.


When I introduce myself as a data scientist, I often get questions like “What’s the difference between that and machine learning?” or “Does that mean you work on artificial intelligence?” I have responded enough times that my answer easily qualifies for my “rule of three”:

[Image: screenshot of the author’s tweet describing the “rule of three”]

The fields do have a great deal of overlap, and there is enough hype around each of them that the choice can feel like a matter of marketing. But they are not interchangeable: most professionals in these fields have an intuitive understanding of how particular work could be classified as data science, machine learning (ML), or artificial intelligence (AI), even if it is difficult to put into words.

So in this post, I’m proposing an oversimplified definition of the difference between the three fields:

  • Data science produces insights
  • Machine learning produces predictions
  • Artificial intelligence produces actions

To be clear, this isn’t a sufficient qualification: not everything that fits each definition is a part of that field. (A fortune teller makes predictions, but we would never say that they are doing machine learning.) These also aren’t a good way of determining someone’s role or job title (“Am I a data scientist?”), which is a matter of focus and experience. (This is true of any job description: I write as part of my job, but I’m not a professional writer.)
But I think this definition is a useful way to distinguish the three types of work, and to avoid sounding silly when you are talking about it. It is worth noting that I’m taking a descriptivist rather than a prescriptivist approach: I’m not interested in what these terms “should mean,” but rather how people in the field typically use them.

Data Science Produces Insights

Data science is distinguished from the other two fields because its goal is an especially human one: to gain insight and understanding. Again, not everything that produces insights qualifies as data science (the classic definition of data science is that it involves a combination of statistics, software engineering, and domain expertise). But we can use this definition to distinguish it from ML and AI.

The main distinction is that in data science there is always a human in the loop: someone is understanding the insight, seeing the figure, or benefitting from the conclusion. It would make no sense to say “Our chess-playing algorithm uses data science to choose its next move,” or “Google Maps uses data science to recommend driving directions.”

This definition of data science thus emphasizes:

  • Statistical inference
  • Data visualization
  • Experiment design
  • Domain knowledge
  • Communication

Data scientists might use simple tools: they could report percentages and make line graphs based on SQL queries. They could also use very complex methods: they might work with distributed data stores to analyze trillions of records, develop cutting-edge statistical techniques, and build interactive visualizations. Whatever they use, the goal is to gain a better understanding of their data.
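As a minimal sketch of the “simple tools” end of that range (the signups table and its columns here are hypothetical, purely for illustration):

```python
import sqlite3
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical example: a small in-memory table of product signups
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE signups (month TEXT, channel TEXT, converted INTEGER)")
con.executemany(
    "INSERT INTO signups VALUES (?, ?, ?)",
    [("2018-01", "ads", 1), ("2018-01", "organic", 0),
     ("2018-02", "ads", 0), ("2018-02", "organic", 1),
     ("2018-03", "ads", 1), ("2018-03", "organic", 1)],
)

# A simple SQL aggregation: the percentage of signups that converted, by month...
monthly = pd.read_sql_query(
    "SELECT month, 100.0 * AVG(converted) AS pct_converted "
    "FROM signups GROUP BY month ORDER BY month",
    con,
)

# ...and a line graph to communicate the trend to a human
monthly.plot(x="month", y="pct_converted", marker="o", legend=False)
plt.ylabel("% of signups converted")
plt.show()
```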

Machine Learning Produces Predictions

I think of machine learning as the field of prediction: of “Given instance X with particular features, predict Y about it.” These predictions could be about the future, but they also could be about qualities that aren’t immediately obvious to a computer. Almost all Kaggle competitions qualify as machine learning problems: they offer some training data, and then see if competitors can make accurate predictions about new examples.
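In code, that “given features, predict a label” framing is the standard supervised-learning workflow. Here is a minimal sketch on synthetic data, standing in for a Kaggle-style competition where a model is fit on training examples and judged on unseen ones:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a competition dataset: features X, labels y
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

# Competitors are given training data...
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# ...fit a model to it...
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# ...and are scored on how well they predict new, unseen examples
predictions = model.predict(X_test)
print("held-out accuracy:", accuracy_score(y_test, predictions))
```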

There’s plenty of overlap between data science and machine learning. For example, logistic regression can be used to draw insights about relationships (“the richer a user is, the more likely they are to buy our product, so we should change our marketing strategy”) and to make predictions (“this user has a 53% chance of buying our product, so we should suggest it to them”).
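A rough sketch of that dual use, on simulated data where the income effect is built in deliberately:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated users: standardized income, and whether they bought the product,
# with richer users deliberately made more likely to buy
income = rng.normal(size=2000)
bought = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * income - 0.5))))

model = LogisticRegression().fit(income.reshape(-1, 1), bought)

# Insight (data science): the sign and size of the coefficient describe
# how income relates to purchasing across users
print("income coefficient:", model.coef_[0][0])

# Prediction (machine learning): the chance that one particular user buys,
# which the product could act on directly
print("P(buy) for one user:", model.predict_proba([[1.2]])[0, 1])
```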

Models like random forests have slightly less interpretability and are more likely to fit the “machine learning” description, and methods such as deep learning are notoriously challenging to explain. This could get in the way if your goal is to extract insights rather than make predictions. We could thus imagine a “spectrum” of data science and machine learning, with more interpretable models leaning toward the data science side and more “black box” models on the machine learning side.

Most practitioners will switch back and forth between the two tasks very comfortably. I use both machine learning and data science in my work: I might fit a model on Stack Overflow traffic data to determine which users are likely to be looking for a job (machine learning), but then construct summaries and visualizations that examine why the model works (data science). This is an important way to discover flaws in your model, and to combat algorithmic bias. This is one reason that data scientists are often responsible for developing machine learning components of a product.
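As a sketch of that back-and-forth (the traffic data below is entirely fabricated; the real features and models are more involved):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Fabricated traffic data: visits to job listings and to Q&A pages, plus
# whether the user later showed signs of job searching
visits = pd.DataFrame({
    "job_listing_visits": rng.poisson(2, size=5000),
    "qa_visits": rng.poisson(10, size=5000),
})
logit = -2 + 0.6 * visits["job_listing_visits"]
visits["job_seeker"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Machine learning: predict which users are likely to be looking for a job
X = visits[["job_listing_visits", "qa_visits"]]
model = LogisticRegression().fit(X, visits["job_seeker"])
visits["predicted_prob"] = model.predict_proba(X)[:, 1]

# Data science: summarize *why* the model behaves as it does, e.g. how the
# predicted probability varies with visits to job listings
print(visits.groupby("job_listing_visits")["predicted_prob"].mean())
```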

Artificial Intelligence Produces Actions

Artificial intelligence is by far the oldest and the most widely recognized of these three designations, and as a result it’s the most challenging to define. The term is surrounded by a great deal of hype, thanks to researchers, journalists, and startups who are looking for money or attention. This has led to a backlash that strikes me as unfortunate, since it means some work that probably should be called AI isn’t described as such.

Some researchers have even complained about the AI effect: “AI is whatever we can’t do yet.” (It doesn’t help that AI is often conflated with general AI, capable of performing tasks across many different domains, or even superintelligent AI, which surpasses human intelligence. This sets unrealistic expectations for any system described as “AI.”)

So what work can we fairly describe as AI?

One common thread in definitions of AI is that an autonomous agent executes or recommends actions (e.g., Poole, Mackworth, and Goebel 1998; Russell and Norvig 2003). Some systems I think should be described as AI include:

  • Game-playing algorithms (Deep Blue, AlphaGo)
  • Robotics and control theory (motion planning, walking a bipedal robot)
  • Optimization (Google Maps choosing a route)
  • Natural language processing, in particular bots: by “bots” here I mean systems meant to interpret natural language and then respond in kind. This can be distinguished from text mining, where the goal is to extract insights (data science), and from text classification, where the goal is to categorize documents (machine learning)
  • Reinforcement learning
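To make the “produces actions” framing concrete, here is a toy reinforcement learning sketch: tabular Q-learning on a made-up five-state corridor, where an agent learns which action to take in each state to reach a rewarded goal:

```python
import random

# Toy environment: states 0..4 in a corridor; reaching state 4 earns reward 1
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]          # step left or step right
alpha, gamma, epsilon = 0.5, 0.9, 0.3

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):
    state = 0
    while state != GOAL:
        # Choose an action (epsilon-greedy) and act on the environment
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0

        # Update the value estimate for that state-action pair
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned policy: the action the agent would take in each state
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```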

Again, we can see a lot of overlap with the other fields. But there are also distinctions. If I analyze some sales data and discover that clients from particular industries renew more than others (extracting an insight), the output is some numbers and graphs, not a particular action. (Executives might use those conclusions to change our sales strategy, but that action isn’t autonomous.) This means I would describe my work as data science: it would be cringeworthy to say that I’m “using AI to improve our sales.”
The difference between AI and ML is a bit more subtle, and historically ML has often been considered a subfield of AI (computer vision in particular was a classic AI problem). But I think the ML field has largely “broken off” from AI, partly because of the backlash described above: most people who choose to work on problems of prediction don’t like to describe themselves as AI researchers. (It helped that many important ML breakthroughs came from statistics, which had less of a presence in the rest of the AI field). This means that if you can describe a problem as “predict X from Y,” I’d recommend avoiding the term AI completely.

Case Study: How Would the Three Be Used Together?

Suppose we were building a self-driving car, and were working on the specific problem of stopping at stop signs. We would need skills drawn from all three of these fields.

ML: The car has to recognize a stop sign using its cameras. We construct a dataset of millions of photos of streetside objects, and train an algorithm to predict which have stop signs in them.
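A heavily simplified sketch of that step (a real system would train a convolutional neural network on millions of labeled photos; here the “images” are tiny random arrays with an artificial signal, just to show the shape of the workflow):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for labeled street photos: flattened 32x32 "images" as pixel arrays
# (label 1 = the photo contains a stop sign); real data would be millions of photos
images = rng.random((2000, 32 * 32))
labels = rng.integers(0, 2, size=2000)
# Inject a crude artificial signal so the toy classifier has something to learn
images[labels == 1, :50] += 0.5

X_train, X_test, y_train, y_test = train_test_split(images, labels, random_state=0)
classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Prediction: does a new photo contain a stop sign?
print("held-out accuracy:", classifier.score(X_test, y_test))
```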

AI: Once our car can recognize stop signs, it needs to decide when to take the action of applying the brakes. It’s dangerous to apply them too early or too late, and we need it to handle varying road conditions (for example, to recognize on a slippery road that it is not slowing down quickly enough), which is a problem of control theory.
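A toy sketch of that action side, with invented numbers: a controller that applies the brakes once the deceleration needed to stop at the sign approaches what the road surface can actually deliver:

```python
def drive_to_stop_sign(speed, distance, max_decel, dt=0.1):
    """Toy braking loop: speed in m/s, distance to the sign in m, max_decel in
    m/s^2 (lower on a slippery road). Returns how far before the sign we stopped."""
    while distance > 0 and speed > 0:
        # Deceleration needed to stop exactly at the sign: v^2 / (2 * d)
        required_decel = speed ** 2 / (2 * distance)

        # Action: apply the brakes once the required deceleration approaches
        # what the road surface allows (with a safety margin)
        braking = required_decel > 0.7 * max_decel
        accel = -max_decel if braking else 0.0

        speed = max(speed + accel * dt, 0.0)
        distance -= speed * dt
    return distance

# Same decision logic, different road conditions
print("dry road: stopped %.1f m before the sign" % drive_to_stop_sign(15, 60, max_decel=6))
print("slippery: stopped %.1f m before the sign" % drive_to_stop_sign(15, 60, max_decel=2))
```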

Data science: In street tests we find that the car’s performance isn’t good enough, with some false negatives in which it drives right by a stop sign. After analyzing the street test data, we gain the insight that the rate of false negatives depends on the time of day: it is more likely to miss a stop sign before sunrise or after sunset. We realize that most of our training data included only objects in full daylight, so we construct a better dataset including nighttime images and go back to the machine learning step.
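A sketch of that analysis step, using a fabricated street-test log:

```python
import pandas as pd

# Fabricated street-test log: each row is one encounter with a stop sign,
# with the hour of day and whether the car detected it
log = pd.DataFrame({
    "hour":     [3, 5, 7, 9, 12, 15, 18, 20, 22, 23] * 30,
    "detected": [0, 1, 1, 1, 1, 1, 1, 0, 0, 1] * 30,
})
log["daylight"] = log["hour"].between(7, 18)

# Insight: the false-negative rate is far higher outside daylight hours,
# pointing back at a training set with too few nighttime images
print(1 - log.groupby("daylight")["detected"].mean())
```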


David Robinson is a data scientist at Stack Overflow, and works in R and Python. The full version of the article was originally published by the author on varianceexplained.org and is republished under the website’s Creative Commons Attribution-ShareAlike 4.0 International License.