Neural networks in models and in reality

I have recently read a modern book on neural systems in biology and noticed a number of discrepancies between current artificial models and real biological systems.

First, real neurons exert both inhibitory (negative, -) and excitatory (positive, +) actions, which corresponds to connection weights that can be both negative and positive (between -1 and 1). Yet it seems that in many models neurons use only excitatory actions, with weights in the range (0, 1).
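To make the contrast concrete, here is a minimal NumPy sketch (the sizes and variable names are mine, purely for illustration): a unit with signed weights in [-1, 1] can be both driven up and suppressed by its inputs, while a unit restricted to weights in (0, 1) can only ever be driven up.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(8)                          # hypothetical non-negative input activity

# Signed weights in [-1, 1]: the unit receives both excitatory (+)
# and inhibitory (-) inputs, as real neurons do.
w_signed = rng.uniform(-1.0, 1.0, size=8)
out_signed = np.maximum(0.0, w_signed @ x)

# Positive-only weights in (0, 1): every input can only increase
# the unit's activity, never suppress it.
w_positive = rng.uniform(0.0, 1.0, size=8)
out_positive = np.maximum(0.0, w_positive @ x)

print(out_signed, out_positive)
```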

Second, real neural systems appear to be largely predefined by genetics. For example, all sensory data (auditory, visual and somatosensory) follow the same kind of pathway: first to the thalamus, then to the primary cortical areas (such as V1 for vision), using the same connection pattern across the standard six cortical layers. In this thalamus-cortex pattern, the thalamus always sends excitatory (+) input to layer 4; layer 4 passes it on to the processing layers 1-3; layers 1-3 send excitatory (+) input to layer 5, which relays it (+) to layer 6; and layer 6 in turn can send both excitatory and inhibitory (+/-) signals back to layer 4 and back to the thalamus (a regulating feedback loop). A toy sketch of this wiring is given below.

This pattern is common to all sensory modalities and to all people; it is part of nature's design.
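Here is a toy NumPy sketch of that wiring, just to make the flow of signals explicit. The layer sizes, weight scales and number of settling iterations are arbitrary assumptions of mine, not biological values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16                                   # hypothetical number of units per layer

def proj(excitatory_only=False):
    """Random n x n projection; excitatory_only forces all-positive weights."""
    w = rng.uniform(-1.0, 1.0, size=(n, n)) / n
    return np.abs(w) if excitatory_only else w

relu = lambda v: np.maximum(0.0, v)

W_thal_L4 = proj(True)    # thalamus -> layer 4    (+)
W_L4_L123 = proj(True)    # layer 4 -> layers 1-3  (+)
W_L123_L5 = proj(True)    # layers 1-3 -> layer 5  (+)
W_L5_L6   = proj(True)    # layer 5 -> layer 6     (+)
W_L6_L4   = proj()        # layer 6 -> layer 4     (+/-), feedback
W_L6_thal = proj()        # layer 6 -> thalamus    (+/-), feedback

thal = rng.random(n)      # sensory drive arriving at the thalamus
L4 = L123 = L5 = L6 = np.zeros(n)

for _ in range(5):        # iterate so the feedback loop can settle
    L4   = relu(W_thal_L4 @ thal + W_L6_L4 @ L6)
    L123 = relu(W_L4_L123 @ L4)
    L5   = relu(W_L123_L5 @ L123)
    L6   = relu(W_L5_L6 @ L5)
    thal = relu(thal + W_L6_thal @ L6)   # feedback modulates the thalamic relay
```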

So it looks like nature uses no abstract backpropagation, and the general design schema is largely predefined (although the number of neurons per layer and the connections may differ). There are also many cases of "lateral" interactions between neurons within a single layer in nature, but this too seems to be missing from common models.
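As a rough illustration of what a lateral interaction could look like in code, here is a minimal sketch of uniform lateral inhibition within a single layer. The particular inhibition strength and update rule are my own assumptions, not a claim about how cortex actually does it.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10

ff = np.maximum(0.0, rng.normal(size=n))   # hypothetical feed-forward drive to one layer

# Lateral weights: each unit inhibits every other unit in the same layer,
# a simple "winner sharpening" pattern.
W_lat = -0.2 * (np.ones((n, n)) - np.eye(n))

a = ff.copy()
for _ in range(10):                        # let the within-layer dynamics settle
    a = np.maximum(0.0, ff + W_lat @ a)

print(np.round(ff, 2))                     # raw feed-forward drive
print(np.round(a, 2))                      # sharpened: weaker units suppressed
```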

So the question is: why do models use only positive weights, purely feed-forward layer-by-layer processing, no lateral processing and no regulating feedback at all? Doesn't this reduce the "quality" of the models, and are there any features in existing models that mimic these properties of real neuronal systems?
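As a partial starting point for that discussion: common frameworks do in fact allow signed weights, and recurrent layers offer at least a crude analogue of a regulating feedback loop. A short PyTorch sketch of both (nothing here implements the specific thalamus-cortex pattern above):

```python
import torch
import torch.nn as nn

# A standard dense layer is initialised with weights of both signs,
# so "inhibitory" (negative) connections already exist in common models.
dense = nn.Linear(16, 8)
print(dense.weight.min().item() < 0 < dense.weight.max().item())   # True

# A recurrent layer feeds its own state back into the next step,
# which is at least a rough analogue of a regulating feedback loop.
rnn = nn.RNN(input_size=16, hidden_size=8, batch_first=True)
x = torch.randn(1, 5, 16)        # one sequence of five hypothetical inputs
out, h = rnn(x)                  # out: activity at each step, h: final hidden state
```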

Feel free to discuss this topic in detail in our ai.stackexchange post on artificial intelligence.
