Friday, June 14, 2024

Summary of 7 Deep Learning Algorithms – Explaining the Differences from Machine Learning

Deep Learning

As the demand for deep learning grows, deep learning algorithms have been introduced in many fields and now play an important role.

Against this backdrop, deep learning and machine learning are often confused, and many people may not understand the decisive difference between their algorithms.

For such readers, this article explains what deep learning algorithms are, how they differ from machine learning algorithms, and introduces typical deep learning algorithms.

Table of contents

  • What is a deep learning algorithm?
    • Algorithm Importance
    • Algorithm features
  • Definitive difference from machine learning algorithms
  • 7 Deep Learning Algorithms
    • ① Convolutional Neural Network (CNN)
    • ② Recurrent Neural Network (RNN)
    • ③ LSTM (Long Short Term Memory)
    • ④ GAN (Generative Adversarial Network)
    • ⑤ Dropout
    • ⑥ Stochastic Gradient Descent (SGD)
    • ⑦ Activation Function (ReLU)
  • Summary

What is a deep learning algorithm?

The complex processing and analysis performed by AI are built from an enormous accumulation of simple calculations. In AI, an "algorithm" is the term for such a "procedure of calculation."

Representative examples of deep learning algorithms include “CNN (Convolutional Neural Network)” and “GAN (Generative Adversarial Network).”

If you want to know a little more about the algorithm, please refer to the following article, which introduces the algorithm with a specific example.

Algorithm Importance

In order for AI to perform complex processing and analysis, it is necessary to find features and patterns in vast amounts of data.

For example, typical AI capabilities such as image recognition, speech recognition, and natural language processing are realized by learning the features and patterns of image, voice, and language data.

Therefore, it is no exaggeration to say that the algorithms that find the features and patterns necessary for AI's complex processing and analysis are extremely important.

Algorithm features

A defining feature of deep learning algorithms is the existence of "hidden layers"; complex processing is realized by stacking these "hidden layers."

A "hidden layer" is any layer that sits between the input layer and the output layer when a neural network, which mimics human neural circuits, is stacked in layers.

The very name "deep learning" derives from the fact that many of these "hidden layers" are stacked on top of one another, which is expressed as "deep."

From this, we can see that the “hidden layer” is a feature of deep learning algorithms.

Definitive difference from machine learning algorithms

The decisive difference is that deep learning algorithms can discover the characteristics and patterns of data without humans specifying them.

With machine learning algorithms, AI could not learn unless humans specified the features and patterns in the data.

However, deep learning algorithms can find patterns and features in the data they are given, so they can learn features and patterns that humans are not aware of.

For example, if you want AI to distinguish dogs from cats, a machine learning algorithm must be told which features to use, such as the shapes of the ears and eyes.

A deep learning algorithm, however, automatically finds and learns the features that distinguish dogs from cats.

Therefore, deep learning algorithms that can automatically find and learn important features may perform processing with higher accuracy than machine learning algorithms.

7 Deep Learning Algorithms

So far, I have explained deep learning algorithms with attention to how they differ from machine learning algorithms.

Finally, we introduce the following seven deep learning algorithms.

  1. Convolutional Neural Network (CNN)
  2. Recurrent Neural Network (RNN)
  3. LSTM (Long Short Term Memory)
  4. GAN (Generative Adversarial Network)
  5. Dropout
  6. Stochastic Gradient Descent (SGD)
  7. Activation Function (ReLU)

I will explain each.

① Convolutional Neural Network (CNN)

CNN is an algorithm in which many "convolutional layers" and "pooling layers" are incorporated alternately into the neural network described above. So what happens in these two layers?

Here, we use image recognition as an easy-to-understand example.

First, in the "convolutional layer," a small filter slides over the input image, and a calculation on each covered region identifies and extracts local features.

Then, in the "pooling layer," the features extracted locally by the convolutional layer are compressed, discarding irrelevant detail.
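The two layers can be sketched in plain Python. This is a minimal illustration, not a production implementation: a hand-written 2x2 filter (chosen here only for the example) plays the role of a learned convolutional kernel.

```python
def conv2d(image, kernel):
    """Valid convolution, stride 1: slide the filter over the image
    and take a weighted sum of the covered region at each position."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

def max_pool(fmap, size=2):
    """Keep only the strongest activation in each size-by-size block,
    compressing the feature map."""
    return [[max(fmap[i * size + a][j * size + b]
                 for a in range(size) for b in range(size))
             for j in range(len(fmap[0]) // size)]
            for i in range(len(fmap) // size)]

image = [[float(6 * i + j) for j in range(6)] for i in range(6)]
kernel = [[1.0, -1.0], [1.0, -1.0]]   # crude vertical-edge detector
features = conv2d(image, kernel)      # 5x5 feature map
pooled = max_pool(features, size=2)   # 2x2 map after pooling
```

In a real CNN the kernel values are learned during training, and many such filter/pool pairs are stacked.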

CNN is mainly applied in the field of image recognition mentioned above, and is widely used for image recognition in unmanned cash registers such as Amazon Go, object detection using drive recorders, and image diagnosis in medical care.

If you want to know more about CNN, please refer to the article below.

② Recurrent Neural Network (RNN)

In conventional neural networks, it was assumed that “outputs in each layer are independent of each other”.

In an RNN, however, the output of the hidden layer is also fed back in as input at the next time step. This makes it possible to train AI on time-series data, where each item is related to the information before and after it.
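The feedback loop can be illustrated with a single recurrent step. The weights `w_x` and `w_h` below are arbitrary values chosen for this sketch; in a real RNN they are learned.

```python
import math

def rnn_step(x_t, h_prev, w_x=0.5, w_h=0.8):
    # The new hidden state mixes the current input with the previous
    # hidden state, so earlier inputs keep influencing later steps.
    return math.tanh(w_x * x_t + w_h * h_prev)

h = 0.0
for x_t in [1.0, 0.0, 0.0]:   # a signal at t=0, then silence
    h = rnn_step(x_t, h)
# h is still nonzero: the network "remembers" the first input.
```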

For example, it can learn relationships between words and between sentences, enabling natural-sounding language processing.

In this way, RNNs, which can handle time-series data with dependencies on preceding and following information, have achieved results in natural language processing tasks such as machine translation, as well as in forecasting.

③ LSTM (Long Short Term Memory)

LSTM is an evolution of the RNN mentioned above.

RNNs have the disadvantage that they cannot distinguish whether a dependency in the data is long-term or short-term; LSTM is an algorithm that overcomes this disadvantage.

Specifically, the structure of the "hidden layer" was developed so that important data is retained and unnecessary data is forgotten, eliminating the disadvantage mentioned above.

In addition, long-term learning became possible by adding a "memory cell," which stores the learning state, and a "forget gate layer" between the "input gate layer" and the "output gate layer."
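A single LSTM step can be sketched as follows. For brevity this sketch shares one pre-activation across all gates and uses fixed scalar weights; a real LSTM learns a separate weight matrix for every gate.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev):
    """One simplified LSTM step (scalar state, illustrative weights)."""
    z = x_t + h_prev          # shared pre-activation, for brevity
    f = sigmoid(z)            # forget gate: how much old memory to keep
    i = sigmoid(z)            # input gate: how much new info to write
    g = math.tanh(z)          # candidate content for the memory cell
    o = sigmoid(z)            # output gate: how much memory to expose
    c = f * c_prev + i * g    # memory cell carries long-term state
    h = o * math.tanh(c)      # hidden state passed to the next step
    return h, c

h, c = 0.0, 0.0
for x_t in [1.0, 0.0, 0.0, 0.0]:
    h, c = lstm_step(h_prev=h, c_prev=c, x_t=x_t)
```

The key design point is the memory cell `c`: it is updated additively, so information can survive many steps, and the forget gate decides when to discard it.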

Like RNNs, LSTMs are also active in fields such as natural language processing and prediction of time series data.

④ GAN (Generative Adversarial Network)

GAN is an "unsupervised learning" algorithm that learns from data without being given correct answers, and it is widely used to generate images and videos that do not exist.

The algorithm is divided into a “generator” and a “discriminator”, which work by competing against each other.

Taking image generation as an example, the "generator" produces realistic images that do not exist, while the "discriminator" judges whether each image it sees is genuine or generated.

By repeating this competition between the "generator" and the "discriminator," the system gradually becomes able to generate realistic images.
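The adversarial loop can be sketched with a deliberately tiny, hypothetical setup: the "real" data is just the number 4.0, the generator's sample is a single parameter `theta`, and the discriminator is a one-variable logistic function. The alternating updates follow the standard GAN objective in miniature.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

real = 4.0          # the "real" data: a single number
theta = 0.0         # generator parameter; its "fake" sample is theta itself
a, b = 0.0, 0.0     # discriminator parameters: D(x) = sigmoid(a*x + b)
lr = 0.05

for _ in range(500):
    d_real = sigmoid(a * real + b)
    d_fake = sigmoid(a * theta + b)
    # Discriminator step: ascend log D(real) + log(1 - D(fake)),
    # i.e. learn to score real data high and fake data low.
    a += lr * ((1 - d_real) * real - d_fake * theta)
    b += lr * ((1 - d_real) - d_fake)
    # Generator step: ascend log D(fake), nudging theta so its
    # sample looks more "real" to the current discriminator.
    d_fake = sigmoid(a * theta + b)
    theta += lr * (1 - d_fake) * a
```

After training, `theta` has moved from 0 toward the real data, driven only by the discriminator's feedback; neither network ever sees a "correct answer" label.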

The name "generative adversarial network" (GAN) itself evokes this mechanism: generating something that does not exist through the adversarial contest between generator and discriminator.


⑤ Dropout

Dropout is an algorithm that prevents AI from overfitting. Overfitting means learning the training data too closely, so the model may fail to handle the unknown data it encounters in the real world.

The mechanism is simple: by applying Dropout to part of the "hidden layer," some of its units are suppressed during training, which prevents overfitting.

Suppressing part of the network in this way is called "deactivation," and you can tune the AI's learning by specifying the proportion of units Dropout deactivates.
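Here is a minimal sketch of "inverted dropout," the common variant in which the surviving activations are scaled up so their expected total is unchanged:

```python
import random

def dropout(values, rate, training=True, rng=random):
    """Randomly zero out a fraction `rate` of the activations during
    training; survivors are scaled by 1/(1-rate) so the expected sum
    stays the same. At inference time, pass everything through."""
    if not training:
        return list(values)
    keep = 1.0 - rate
    return [v / keep if rng.random() < keep else 0.0 for v in values]

random.seed(0)
activations = [0.5] * 10
dropped = dropout(activations, rate=0.3)   # each value is 0.0 or 0.5/0.7
```

Because a different random subset is silenced on every pass, no single unit can be relied on too heavily, which is what curbs overfitting.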

Dropout, which suppresses overfitting, is highly versatile and is used in a wide range of fields such as speech recognition, image processing, and natural language processing.

⑥ Stochastic Gradient Descent (SGD)

SGD is one of the methods for solving optimization problems. An optimization problem is a problem of maximizing or minimizing an objective function under certain constraints.

For example, in economics, the happiness an individual derives from consuming and saving goods is represented by a "utility function," and maximizing it within the budget determined by the individual's salary is a classic optimization problem.

In fact, AI also learns by solving various optimization problems.

For example, when AI learns natural language processing, it solves the problem of minimizing the divergence between the training language data and the model (the objective function), under the constraint that the language model must not become too complex.

This makes the sentences AI recognizes and generates more realistic and natural.

SGD is one of the algorithms for AI to solve the aforementioned optimization problem, and is widely used for AI learning.
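As a minimal sketch, here is SGD fitting the single weight of y ≈ w·x by taking a gradient step on one randomly chosen sample at a time. The dataset and learning rate are made up purely for illustration.

```python
import random

# Toy dataset generated from y = 2x; SGD should recover w ≈ 2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

def sgd(samples, lr=0.05, epochs=100, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        rng.shuffle(samples)              # "stochastic": random sample order
        for x, y in samples:
            grad = 2 * (w * x - y) * x    # d/dw of the squared error (w*x - y)^2
            w -= lr * grad                # step against the gradient
    return w

w = sgd(list(data))
```

Unlike full (batch) gradient descent, each update uses only one sample, so SGD scales to datasets far too large to process at once.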

⑦ Activation Function (ReLU)

ReLU (Rectified Linear Unit) is a type of activation function; an activation function is an algorithm that determines how an input value is transformed before being output.

Activation functions are always built into neural networks, and they are what enable AI's complex calculations and advanced processing.

Among them, the ReLU function is an activation function that "outputs 0 when the input value is negative, and outputs the input value unchanged when it is 0 or greater."

For example, if you want to exclude the negative values in your data as outliers, incorporating ReLU lets you output only non-negative values.
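The rule above is a single line of code:

```python
def relu(x):
    # Negative inputs become 0; non-negative inputs pass through unchanged.
    return max(0.0, x)

inputs = [-2.0, -0.5, 0.0, 1.5]
outputs = [relu(x) for x in inputs]   # [0.0, 0.0, 0.0, 1.5]
```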

Thus, activation functions, including ReLU functions, are essential algorithms embedded in all neural networks.


Summary

This time, I explained the difference between deep learning and machine learning, focusing on their algorithms.

I also hope that you have gained a better understanding of the structures and functions of the seven deep learning algorithms.

Deep learning and machine learning are almost always mentioned when researching AI.

For now, simply knowing the key difference, whether or not humans must specify the data's features and patterns, will deepen your understanding of AI.


