
How do neural network models learn different weights for each of the neurons in a single layer?

I have gained an overview of how neural networks work and have come up with some interconnected questions to which I am unable to find answers.

Consider a one-hidden-layer feedforward neural network in which every hidden neuron computes the same function of the inputs:

a1 = relu(w1*x1 + w2*x2), a2 = relu(w3*x1 + w4*x2), ...

How do we make the model learn different values for the weights?
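To make the setup concrete, here is a minimal NumPy sketch of the hidden layer described above (the layer size, variable names, and initialization choices are my own, not from any particular framework):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

x = np.array([1.0, 2.0])          # inputs x1, x2

# Every hidden neuron applies the SAME function, relu(w*x1 + w'*x2);
# only the weight values differ. If all neurons started with identical
# weights, their activations would be identical too:
W_same = np.ones((3, 2))          # 3 hidden neurons, 2 weights each
print(relu(W_same @ x))           # all three activations equal: [3. 3. 3.]

# With random initialization, the neurons start out different:
rng = np.random.default_rng(0)
W_rand = rng.normal(size=(3, 2))
print(relu(W_rand @ x))           # generally three different values
```

This is the symmetry at the heart of the question: the function form is shared, so only differing weight values can make the neurons behave differently.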

I do understand the point of manually established connections between neurons. As shown in the picture (manually established connections between neurons), that way we define which compositions of functions are possible (e.g., house size and number of bedrooms taken together might represent the family size the house could accommodate). But a fully connected network doesn't make sense to me.

I get the point that a fully connected neural network should somehow automatically determine which compositions of functions make sense, but how does it do that?

Not being able to answer this question, I also don't understand why increasing the number of neurons should increase the accuracy of the model's predictions.

