Uncovering the mystery of deep learning

by David Tenenbaum  |  January 31, 2024


Everyone is talking about AI these days: is it good? Is it bad? Will it add or cost jobs? Will everything be different?

AI is surrounded by an air of mystery: how can it do what it does, and how does it know how to do these things?

But AI isn’t mysterious. It works much the way a five-year-old learns things.

Let’s look at deep learning, the flavor of AI used most these days.

Understanding deep learning

By the time a child is five years old, they have absorbed a lot of information. But how does a five-year-old (we will call him little Johnny) develop an understanding of what a car looks like?

Odds are that someone read him a children’s book containing a picture of a car. Johnny has probably been driven to see relatives, or to the doctor, in the family car. Maybe he saw a children’s TV show with a car in it.

All these data points formed a pathway in his brain that equates a car with that thing with four wheels. Each of those points was part of that five-year-old’s training set.

Child learning with labeled images of a dog, car, airplane, and house over a green background.

This learning process is not without errors! For example, let’s say you had a set of cards with the picture of an object (dog, car, airplane, house) on one side, and the word for it on the other. The first time through the deck, Johnny may say “cat!” when shown a photo of a dog. Once Johnny is corrected, the next time through the deck, he is likely to get it right.

This is almost exactly how an AI model learns things.

A biological neural network illustration of a man seeing and recognizing a car over a green background.

The human brain is a neural network made of biological neurons. An AI model has a neural network too, and it works in much the same way.

How AI neural networks work

We can illustrate how an AI neural network works with some rows and columns of dots (the inner columns are called hidden layers), where each dot is connected to the dots around it and has a little math behind it: it can multiply each input by a number and add or subtract another number. You can think of the dots as neurons made of software.

A digital neural network illustration of the input, hidden layers, and output neurons over a green background.
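
If you like to see ideas in code, here is a minimal sketch of one of those software neurons in Python. The inputs, weights, and bias below are made-up numbers purely for illustration: the neuron multiplies each input by a weight, adds a bias, and squashes the result into a value between 0 and 1.

```python
# A minimal sketch of one software "neuron" (illustrative numbers only):
# multiply each input by a weight, add a bias, squash the result to 0..1.
import math

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid "activation"

# Three pixel-like inputs flowing into a single neuron
print(neuron([0.2, 0.9, 0.4], weights=[0.5, -1.2, 0.3], bias=0.1))
```

A deep network is just many of these wired together in layers, which is all those connected dots represent.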

Now, let’s ask the AI to identify an object like an airplane.

Inputting an airplane photo into an AI model for training.

At this point, the AI neural network model knows nothing. The math at each node is just a bunch of random numbers, kind of like a newborn baby that has seen no inputs yet. Given those random numbers, the model is almost sure to answer incorrectly.

An untrained AI model getting the wrong answer during training with an airplane photo over a green background.
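
In code, that untrained state looks something like this sketch: the “math” is just random numbers, so the guess over our four categories is essentially a coin flip (the categories and scores here are invented for illustration).

```python
# A sketch of an untrained model: random numbers in, a random guess out.
import random

classes = ["dog", "car", "airplane", "house"]
random_scores = [random.random() for _ in classes]        # untrained "math"
guess = classes[random_scores.index(max(random_scores))]
print(f"Untrained guess for an airplane photo: {guess}")  # usually wrong
```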

Remember when little Johnny said “cat” when the right answer was dog? We corrected Johnny, so let’s do the same thing with the AI and send a correction back through its network. The correction slightly nudges the math at each node toward the right answer and away from the wrong answer (a process called backpropagation):

An illustration of an AI model learning the right answer by correcting the math on a few nodes and sending it back through over a green background.
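
If you want to see the nudge itself, here is a toy sketch in Python. It is not real backpropagation through a deep network, just a single made-up neuron whose weights get pushed a little toward the right answer after every wrong one, but that push is the core idea.

```python
# A toy illustration of the "nudge" behind backpropagation: after a wrong
# answer, shift each weight slightly in the direction that reduces the error.
# One made-up linear neuron and a squared error, purely as a sketch.
def train_step(inputs, weights, target, learning_rate=0.1):
    prediction = sum(x * w for x, w in zip(inputs, weights))
    error = prediction - target
    # For a squared error, the gradient for each weight is error * input
    return [w - learning_rate * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]                 # the neuron "knows nothing" yet
for _ in range(20):                  # repeat the correction a few times
    weights = train_step([1.0, 2.0], weights, target=1.0)
print(weights)                       # nudged toward values that give the right answer
```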

Now, to train our AI model in this example, we would need to send lots of photos of dogs, cars, airplanes, and houses through the network, millions of times over. That collection of photos, each correctly labeled as a dog, airplane, or whatever else we wish the AI to be trained on, is our massive training set.

Without a great training set, we have no chance of getting a useful AI model.

Given a great training set and modern hardware (chips called graphics processing units, or GPUs), we can quickly end up with an AI model that can correctly classify a picture it has never seen before as containing a dog, car, airplane, or house!

And do it as well as a human can.
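
Putting the pieces together, here is a heavily simplified training loop in Python. The “photos” are tiny invented lists of numbers and the model is a single layer of weights rather than a true deep network, but the recipe is the same: show the model labeled examples many times over, nudge the weights after every mistake, then ask it about a photo it has never seen.

```python
# A compressed, illustrative sketch of the training recipe (invented data).
import random

classes = ["dog", "car", "airplane", "house"]
training_set = [
    ([0.9, 0.1, 0.2], "dog"),
    ([0.2, 0.9, 0.1], "car"),
    ([0.1, 0.2, 0.9], "airplane"),
    ([0.5, 0.5, 0.5], "house"),
]

# One small weight vector per class, started at random ("knows nothing")
weights = {c: [random.uniform(-0.1, 0.1) for _ in range(3)] for c in classes}

def score(features, w):
    return sum(x * wi for x, wi in zip(features, w))

for _ in range(1000):                              # many passes over the data
    for features, label in training_set:
        guess = max(classes, key=lambda c: score(features, weights[c]))
        if guess != label:                         # wrong? nudge the weights
            weights[label] = [wi + 0.1 * x for wi, x in zip(weights[label], features)]
            weights[guess] = [wi - 0.1 * x for wi, x in zip(weights[guess], features)]

new_photo = [0.15, 0.25, 0.85]        # an airplane-like photo never seen in training
print(max(classes, key=lambda c: score(new_photo, weights[c])))  # should print "airplane"
```

Real models do the same thing with millions of photos and millions of weights, which is where those GPUs come in.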

What can AI help us with?

Of course, we could train a new model to identify types of cancer, troubleshoot automobile issues, suggest new Amazon purchases, or solve lots of other problems we might care about.

While the neural networks will improve over time, the reality is that AI only works with a huge training set. For example, if you want to train a model to understand written English, you had better use a training set the size of all of Wikipedia!

AI is good at recognizing (sometimes very subtle) patterns inside training sets. For example, there is an AI model trained on slides of lung cancer that can identify whether lung tissue is cancerous and, if it is, what type of cancer is involved. It gets the answer right about 96% of the time, roughly the same as the recognition rate of top oncologists. In some cases, it picks up on slide patterns so subtle that humans cannot see them.

Bottom line: AI models are only as good as the training set you use to develop them.

And that is how deep learning works. Not much of a mystery, is it?