# Deep Learning A-Z™: Hands-On Artificial Neural Networks


Learn to create Deep Learning Algorithms in Python from two Machine Learning & Data Science experts. Templates included.

22:15:26 of on-demand video • Updated April 2021

A free video tutorial from

### Kirill Eremenko

Data Scientist


### Course summary

- Understand the intuition behind Artificial Neural Networks
- Apply Artificial Neural Networks in practice
- Understand the intuition behind Convolutional Neural Networks
- Apply Convolutional Neural Networks in practice
- Understand the intuition behind Recurrent Neural Networks
- Apply Recurrent Neural Networks in practice
- Understand the intuition behind Self-Organizing Maps
- Apply Self-Organizing Maps in practice
- Understand the intuition behind Boltzmann Machines
- Apply Boltzmann Machines in practice
- Understand the intuition behind AutoEncoders
- Apply AutoEncoders in practice

### Udemy Course

Alright, exciting tutorial ahead. Welcome back to the course on deep learning. Today we're talking about how neural networks work. We've laid a lot of groundwork: we've talked about how neural networks are structured, what elements they consist of, and even their functionality. Today we're going to look at a real example of how a neural network can be applied, and we're going to work step by step through the process of its application so we know what's going on. So what example are we going to be talking about? Property valuation: we're going to look at a neural network that takes in some parameters about a property and values it. There's a small caveat for today's tutorial: we're not actually going to train the network. Training is a very important part of working with neural networks, and we're going to look at it in the next tutorials in this section. For now we're going to focus on the actual application, so we're going to work with a neural network that we'll pretend is already trained. That will allow us to focus on the application side of things and not get bogged down in the training aspect; we'll cover the training once we already know the end goal we're working towards. Sound good? Alright, let's jump straight into it. Say we have four input parameters about the property: the area in square feet, the number of bedrooms, the distance to the city in miles, and the age of the property. All four of those comprise our input layer. Of course, there are probably many more parameters that define the price of a property, but for simplicity's sake we're going to look at just these four.
Now, in its most basic form, a neural network has only an input layer and an output layer, with no hidden layers, and here our output layer is the price we're predicting. In this form, the input variables are simply weighted by the synapses, and then the output, the price, is calculated from them. For instance, the price could be calculated as something as simple as the weighted sum of all of the inputs. And you could use pretty much any function here: the weighted sum we're using now, any of the activation functions we saw previously, a logistic regression, a squared function, almost anything. The point is that you get a certain output. Moreover, most of the machine learning algorithms that exist can be represented in this format; it's basically a diagrammatic representation of how you deal with the variables. By changing the weights and the formulas, you can express quite a lot of the machine learning algorithms we've talked about before in this form. And that just goes to show how powerful neural networks are: even without hidden layers, we already have a representation that covers most other machine learning algorithms. But neural networks have an extra advantage that gives us lots of flexibility and power, and that's where the increase in accuracy comes from: the hidden layers. And there we go, we've added in our hidden layer, and now we're going to understand how that hidden layer gives us that extra power. In fact, to do that, we're going to walk through an example.
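To make the weighted-sum idea concrete, here is a minimal sketch in Python. This is not the course's code, and the feature values and weights below are invented purely for illustration:

```python
# Simplest possible "network": inputs wired straight to the output,
# with no hidden layer. The output is just a weighted sum of the inputs.

def predict_price(features, weights):
    """Weighted sum of the inputs (identity activation on the output)."""
    return sum(w * x for w, x in zip(weights, features))

# x1: area (sq ft), x2: bedrooms, x3: distance to city (miles), x4: age (years)
features = [2000.0, 3.0, 10.0, 25.0]

# Hypothetical "trained" weights: price rises with area and bedrooms,
# falls with distance and age.
weights = [150.0, 10000.0, -2000.0, -500.0]

price = predict_price(features, weights)
print(price)  # 297500.0
```

Swapping a different function in for the plain sum, a logistic sigmoid, a squared term, and so on, is exactly how many classical machine learning models fit into this same diagram.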
So, as we agreed, this neural network has already been trained. Now we're going to imagine plugging in a property, and we'll walk step by step through how the neural network deals with the input variables, calculates the hidden layer, and then calculates the output layer. So let's go through this; this is going to be exciting. We've got all four variables on the left, and we're going to start with the top neuron of the hidden layer. As we saw in the previous tutorials, each neuron in the input layer has a synapse connecting it to the top neuron in the hidden layer, and those synapses have weights. Now, let's agree that some weights will have a non-zero value and some will be zero, because not every input will be important for every single neuron. Here we can see an example: X1 and X3, the area and the distance to the city in miles, are important for this neuron, whereas bedrooms and age are not. Let's think about this for a second. Why would a certain neuron be linked to the area and the distance? What could that mean? Well, normally, the further you get from the city, the cheaper real estate becomes, and therefore the square footage you can get for the same price becomes larger. That's normal, right? That makes sense. And what this neuron is probably doing is looking, like a sniper, for properties that are not so far from the city but still have a large area: properties that, for their distance from the city, have a larger-than-average square footage. Something abnormal, higher than average.
So they're quite close to the city, but they're still large compared to the other properties at the same distance. Again, we're speculating here, but that neuron might be laser-picking out those specific properties, and it will activate, hence the activation function, it will fire up only when a certain criterion is met. It takes the distance to the city and the area of the property, performs its own calculation inside itself, combines those two, and as soon as the criterion is met it fires up, and that contributes to the price in the output layer. Therefore this neuron doesn't really care about the bedrooms or the age of the property, because it's focused on that one specific thing. That's where the power of the neural network comes from, because you have many of these neurons, and we'll see just now how the other ones work. But one thing I wanted to agree on here: let's not draw the lines for the synapses that are not in play, so we don't clutter up our image; that's the only reason we're not going to draw them. Getting rid of those two lines lets us see at a glance that this neuron is focused on area and distance to the city. As long as we agree on that, let's move on to the next one, the neuron in the middle. Here we've got three parameters feeding into this neuron: the area, the bedrooms, and the age of the property. So what could be the reason here? Again, let's try to understand the intuition, the thinking, of this neuron. Why is it picking these three parameters? What could it have found in the data? Remember, we've already established that this is a trained network.
The training happened some time ago, maybe a day ago; somebody has already trained this network, and now we're just applying it. And we know that this neuron, through thousands of example properties, has found that the combination of area, bedrooms, and age is important. Why could that be the case? For instance, maybe in the specific city and suburbs this neural network was trained on, there are a lot of families with two or more children who are looking for large properties with lots of bedrooms, but new ones, not old properties. Maybe in that area most of the big properties are old, but there has been a socio-demographic shift, or a lot of growth in employment and jobs for the younger part of the population, so that now younger couples and families are looking for properties, and they prefer newer ones; they want the age of the property to be lower. And hence, from the training this neural network has undergone, it knows that a new property with a large area and lots of bedrooms, at least three, for the parents, the first child, and the second child, maybe plus a guest room, is valuable in that market.
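The idea that each hidden neuron attends to only some of the inputs can be sketched as a weight vector with zeros for the ignored features. The following is a hypothetical illustration, not the course's code; every weight and bias value here is made up:

```python
def relu(z):
    """Rectifier activation: zero for negative input, identity otherwise."""
    return max(0.0, z)

def hidden_neuron(features, weights, bias):
    """Weighted sum of the inputs plus a bias, passed through the rectifier."""
    return relu(sum(w * x for w, x in zip(weights, features)) + bias)

# x = [area (sq ft), bedrooms, distance to city (miles), age (years)]
x = [1800.0, 3.0, 5.0, 20.0]

# Top neuron: cares only about area and distance (zero weight elsewhere).
w_top = [1.0, 0.0, -150.0, 0.0]
# Middle neuron: cares about area, bedrooms, and age, but not distance.
w_mid = [0.5, 300.0, 0.0, -40.0]

a_top = hidden_neuron(x, w_top, bias=-500.0)  # "large area, close to city"
a_mid = hidden_neuron(x, w_mid, bias=-800.0)  # "big, many bedrooms, new"
print(a_top, a_mid)  # 550.0 200.0
```

A neuron whose weighted sum plus bias stays negative outputs zero, i.e. it simply does not fire for that property.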
So that neuron has picked that up. It knows what it's looking for; it doesn't care about the distance to the city, wherever the property is. As long as it has a high area, lots of bedrooms, and a low age, as soon as that criterion is met, the neuron fires up. And this, again, is where the power of the neural network comes from: it combines these three parameters into a brand-new attribute that helps with the valuation of the property, and therefore it's more precise. So there we go, that's how that neuron works. Now let's look at another one, the one at the very bottom. This neuron could have picked up just one parameter: age, and none of the others. How could that be the case? Well, this is a classic example of what age could mean. As we all know, the older a property is, the less valuable it usually is: it's worn out, the building is old, things are falling apart, more maintenance is required, so the price drops, whereas a brand-new building would be more expensive precisely because it's brand new. But perhaps if a property is over a certain age, that could indicate that it's a historic property. For instance, if a property is under 100 years old, then the older it is, the less valuable it is. But as soon as it passes 100 years old, all of a sudden it becomes a historic property: a place where people lived hundreds of years ago, one that tells a story, with all that history behind it. Some people like that; some people value it. In fact, quite a lot of people, especially in the higher socioeconomic classes, would be proud to live in such a property and show it off to their friends.
And therefore, properties over 100 years old could be deemed historic, and as soon as this neuron sees a property over 100 years old, it will fire up and contribute to the overall price; if the property is under 100 years old, it won't contribute at all. This is a good example of the rectifier function being applied: the output is zero up to a certain point, say 100 years, and after that, the older the property gets, the higher the contribution of this neuron to the overall price. A wonderful, very simple example of the rectifier function in action. So there we go, that could be this neuron. Moreover, the neural network could have picked up things we wouldn't have thought of ourselves, for instance bedrooms plus distance to the city. Maybe that combination somehow contributes to the price; maybe it's not as strong as the other neurons, but it still contributes, or maybe it detracts from the price, that could also be the case. Or other things like that. And maybe a neuron picked up a combination of all four of these parameters. As you can see, this whole hidden-layer situation increases the flexibility of your neural network: it allows the network to look for very specific things, and then, in combination, that's where the power comes from. It's like the example with the ants: an ant by itself cannot build an anthill, but 1,000 or 100,000 ants together can. And that's the situation here. Each one of these neurons by itself cannot predict the price, but together they have superpowers: they predict the price, and they can do quite an accurate job if trained and set up properly. And that's what this whole course is about: understanding how to utilize them.
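The "historic property" neuron, and the way the output layer then combines all the hidden neurons' signals into one price, can be sketched like this. Again, this is a hypothetical illustration rather than the course's code; the 100-year threshold comes from the example above, but the weights are invented:

```python
def relu(z):
    """Rectifier activation: zero for negative input, identity otherwise."""
    return max(0.0, z)

def historic_neuron(age, weight=500.0, threshold=100.0):
    """Contributes nothing below the age threshold, then grows linearly
    with age past it: a direct instance of the rectifier described above."""
    return relu(weight * (age - threshold))

def output_layer(hidden_activations, output_weights):
    """The predicted price is a weighted sum of the hidden neurons' outputs."""
    return sum(w * a for w, a in zip(output_weights, hidden_activations))

print(historic_neuron(40.0))   # 0.0     -> a 40-year-old house adds nothing
print(historic_neuron(150.0))  # 25000.0 -> 50 years past the threshold

# Three hidden neurons firing with different strengths, each weighted into
# the final price; no single neuron predicts the price on its own.
price = output_layer([550.0, 200.0, 25000.0], [100.0, 300.0, 1.0])
print(price)  # 140000.0
```

This mirrors the ant-colony point: each neuron is a narrow detector, and the output layer's weighted sum is where their individual signals combine into one prediction.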
There we go, so that is a step-by-step example and walkthrough of how neural networks actually work. I hope you enjoyed today's tutorial, and I can't wait to see you next time. Until then, enjoy deep learning.
