Do neural networks work like the human brain?

Neural networks are mainstream in artificial intelligence. One fundamental question that seems to come up frequently is: do neural networks really work like the human brain?

Short answer: no. While artificial neural networks are conceptually inspired by the human brain, they are not implemented to work like it. Let me explain why.

Human brain

The human brain has around 100 billion neurons that communicate with each other through synapses. Each neuron has around 7,000 synapses, which means that, on average, each neuron communicates directly with about 7,000 other neurons. Synapses are the tiny gaps where neurotransmitters are passed from the axon of one neuron to the dendrite of another.

When a neuron receives neurotransmitters from another neuron, the resulting signals are accumulated in the soma, the cell body of the neuron. Based on this accumulated signal, the neuron decides whether to fire a spike. Here is a scheme of a neuron:

Scheme of a human neuron.

Artificial neural networks

Artificial neural networks are similar to the human brain in the sense that they also accumulate incoming signals and then produce an output, a kind of spike, that is propagated to the neighboring neurons. Here is a scheme of an artificial neural network:

Scheme of a simple feed-forward neural network.

In the artificial neural network above, the arrows show where the signals are propagated. From the first layer (the input layer) they are propagated to the second layer (hidden layer 1), from the second layer to the third layer, and so on. The total signal arriving at a neuron is a linear (weighted) sum of the signals sent by the neurons in the previous layer. The output the neuron produces from that total signal, however, is usually not a linear function of it, and this is precisely why neural networks work. Common functions used to calculate the output signal are ReLU, Sigmoid, Tanh, Leaky ReLU, and so on; these are called activation functions. If you used a linear function as your activation function, your neural network would be no different from a simple linear regression (or a multivariable linear regression).
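To make this concrete, here is a minimal sketch in Python (using NumPy) of how a layer combines a linear weighted sum with a nonlinear activation. The layer sizes, weights, and function names are made up for illustration; they are not taken from the diagram above.

```python
import numpy as np

def relu(z):
    # Nonlinear activation function: keeps positive values, zeroes out the rest.
    return np.maximum(0.0, z)

def dense_layer(x, W, b, activation=relu):
    # Total input to each neuron: a linear (weighted) sum of the previous layer's outputs.
    z = W @ x + b
    # The nonlinear activation is what makes the network more than a linear regression.
    return activation(z)

# Toy forward pass: 3 inputs -> 4 hidden neurons -> 2 outputs (sizes chosen arbitrarily).
rng = np.random.default_rng(0)
x = rng.normal(size=3)                                      # input layer
h = dense_layer(x, rng.normal(size=(4, 3)), np.zeros(4))    # hidden layer 1
out = dense_layer(h, rng.normal(size=(2, 4)), np.zeros(2))  # output layer
print(out)
```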

Alright, but what I just described above is very similar to how real neurons work, so why are artificial neural networks different from the human brain? The main and most important difference is how they learn. The human brain is a powerful machine that learns in a way we cannot fully explain. We do know that when you learn something new, connections between your neurons are created, others are lost, and others become stronger. Artificial neural networks learn in a loosely similar way, but while we know exactly why their connections are made stronger or weaker (it follows from backpropagation), we do not know which connections the human brain creates or why it creates them. In other words, we use backpropagation to make a neural network learn, but we do not know which algorithm the human brain uses.
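To give a rough idea of what "making connections stronger or weaker" means for an artificial network, here is a hedged sketch of training a tiny two-layer network with backpropagation and gradient descent. The XOR toy problem, the layer sizes, and the learning rate are my own illustrative choices; this is not a description of how the brain learns.

```python
import numpy as np

# Toy data: learn XOR, a classic problem a purely linear model cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # connections: input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # connections: hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for step in range(20000):
    # Forward pass: weighted sums followed by nonlinear activations.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the chain rule tells us how much each connection
    # contributed to the squared error (this is backpropagation).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update: strengthen or weaken each connection.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]] (may vary with the seed)
```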

To give you an idea, using backpropagation and a basic convolutional neural network with 4,500 neurons (convolutional neural networks are a kind of artificial neural network commonly used for image classification), you would need thousands of training samples and a few hours of training on a GPU to make it capable of distinguishing photos of dogs from photos of cats. So, to learn how to classify photos of cats and dogs, the network needs far more time and energy than the human brain. The human brain consumes around 10 watts; my GPU consumes around 279 watts. With a human brain, you need a few seconds and just a few examples to learn the difference between two animals you have never seen before. With backpropagation and one of the best GPUs on the market, you need a few hours, and you will also be spending around 28 times more energy per second than the human brain.
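For scale, a small convolutional classifier along those lines might look like the following PyTorch sketch. The layer sizes, the 64x64 input resolution, and the class setup are assumptions I am making for illustration; this is not the exact 4,500-neuron network mentioned above.

```python
import torch
import torch.nn as nn

class TinyCatDogNet(nn.Module):
    """A small, hypothetical CNN for a two-class cat-vs-dog problem."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),  # assumes 64x64 input images
            nn.Linear(64, 2),                        # two output classes: cat, dog
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinyCatDogNet()
dummy_batch = torch.randn(8, 3, 64, 64)  # 8 fake 64x64 RGB images
print(model(dummy_batch).shape)          # torch.Size([8, 2])
```

Training a network like this on thousands of labeled photos with backpropagation (for example via torch.optim.SGD and a cross-entropy loss) is what consumes the GPU hours and the watts compared above.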

Besides this impressive learning algorithm that the human brain uses, there are also differences between a human neuron and a neuron in an artificial neural network. The overall architecture of the network of neurons in the brain is much more complex than that of most ANNs, especially when compared with feed-forward networks, where each layer is connected only to the previous and next layers, as you saw in the image above. Even when compared to multi-layered RNNs or residual networks, the network of neurons in the brain is ridiculously complex, with tens of thousands of dendrites crossing "layers" and regions in numerous directions. On the other hand, it is very unlikely that the brain uses methods like backpropagation, which leverages the chain rule over partial derivatives of an error function in order to learn.

Feed-forward neural network learning with back-propagation.
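In symbols, the chain-rule step that backpropagation relies on can be written as follows for a single weight $w_{ij}$ feeding neuron $j$ (the notation is mine, chosen for illustration):

$$\frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial a_j}\,\frac{\partial a_j}{\partial z_j}\,\frac{\partial z_j}{\partial w_{ij}}, \qquad z_j = \sum_i w_{ij}\,a_i + b_j, \quad a_j = \sigma(z_j),$$

where $E$ is the error function, $z_j$ is the total input to the neuron, $a_j$ its activation, and $\sigma$ the activation function. It is considered unlikely that biological neurons compute anything like these partial derivatives directly.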

Thanks for reading my article until the end 🙂 If you want to get notified about new updates on the blog, subscribe to my newsletter below. If you have any questions, leave them in the comments section. I will do my best to clarify your doubts.

 

Daniel

 
