Introduction to ANN | Set 4 (Network Architectures)

An Artificial Neural Network (ANN) is an information processing paradigm inspired by the brain. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning largely involves adjustments to the synaptic connections that exist between the neurons.

The model of an artificial neural network can be specified by three entities:

  • Interconnections
  • Activation functions
  • Learning rules

Interconnections:
Interconnection can be defined as the way processing elements (neurons) in an ANN are connected to each other. Hence, the arrangement of these processing elements and the geometry of their interconnections are essential in an ANN.
These arrangements always have two layers that are common to all network architectures: the input layer, which buffers the input signal, and the output layer, which generates the output of the network. The third layer is the hidden layer, whose neurons belong to neither the input layer nor the output layer. These neurons are hidden from the people interfacing with the system and act as a black box to them. Increasing the number of hidden layers and neurons increases the system’s computational and processing power, but it also makes training more complex.

There exist five basic types of neuron connection architectures:

  1. Single-layer feed forward network
  2. Multilayer feed forward network
  3. Single node with its own feedback
  4. Single-layer recurrent network
  5. Multilayer recurrent network

1. Single-layer feed forward network

 

In this type of network, we have only two layers, the input layer and the output layer, but the input layer does not count because no computation is performed in this layer. The output layer is formed when different weights are applied to the input nodes and the cumulative effect per node is taken. The neurons of the output layer then collectively compute the output signals, as sketched below.
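
The following is a minimal sketch of this computation in NumPy. The layer sizes (3 inputs, 2 output neurons), the random weights, and the threshold activation are illustrative assumptions, not fixed parts of the architecture.

```python
import numpy as np

def step(x):
    """Threshold activation; an illustrative choice, other activations work too."""
    return (x >= 0).astype(float)

# Hypothetical dimensions: 3 input nodes, 2 output neurons.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))   # one weight per (output neuron, input node) connection
b = np.zeros(2)               # one bias per output neuron

x = np.array([0.5, -1.0, 2.0])   # an example input signal
y = step(W @ x + b)              # weighted sum per output node, then activation
print(y)
```

Note that the input layer only passes the signal through; all computation happens at the output neurons.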

2. Multilayer feed forward network

 

This network also has a hidden layer, which is internal to the network and has no direct contact with the external environment. The existence of one or more hidden layers makes the network computationally stronger. It is a feed-forward network because information flows from the input, through the intermediate computations, to the output Z. There are no feedback connections in which outputs of the model are fed back into itself.
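
A minimal sketch of this forward flow is shown below. The sizes (4 inputs, 5 hidden neurons, 1 output Z), the random weights, and the sigmoid activation are assumptions made for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)

# Hypothetical sizes: 4 inputs -> 5 hidden neurons -> 1 output Z.
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)   # input -> hidden weights
W2, b2 = rng.normal(size=(1, 5)), np.zeros(1)   # hidden -> output weights

def forward(x):
    h = sigmoid(W1 @ x + b1)   # intermediate (hidden) computation
    z = sigmoid(W2 @ h + b2)   # output Z; information only flows forward
    return z

print(forward(np.array([1.0, 0.0, -0.5, 2.0])))
```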

3. Single node with its own feedback

 

 

When outputs can be directed back as inputs to the same layer or to preceding-layer nodes, the result is a feedback network. Recurrent networks are feedback networks with a closed loop. The figure above shows a single recurrent network: a single neuron with feedback to itself, which can be sketched as follows.
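
A minimal sketch of one neuron feeding its own output back into itself over time; the weight values and the tanh activation are arbitrary assumptions for illustration.

```python
import numpy as np

# Hypothetical parameters: one input weight, one self-feedback weight, one bias.
w_in, w_fb, b = 0.8, 0.5, 0.0

def run(inputs):
    y = 0.0                   # previous output, fed back into the neuron
    outputs = []
    for x in inputs:
        y = np.tanh(w_in * x + w_fb * y + b)   # output depends on its own last value
        outputs.append(y)
    return outputs

print(run([1.0, 0.0, 0.0, 0.0]))  # the effect of the first input lingers via feedback
```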

4. Single-layer recurrent network

 

The network above is a single-layer network with feedback connections, in which a processing element’s output can be directed back to itself, to other processing elements, or both. A recurrent neural network is a class of artificial neural network where connections between nodes form a directed graph along a sequence. This allows it to exhibit dynamic temporal behavior for a time sequence. Unlike feed-forward neural networks, RNNs can use their internal state (memory) to process sequences of inputs, as in the sketch below.
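
The following is a minimal sketch of a single recurrent layer that maintains an internal state while consuming a sequence. The sizes (3 input features, 4 state units), the random weights, and the tanh activation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sizes: 3 input features, 4 recurrent (hidden-state) units.
W_x = rng.normal(size=(4, 3)) * 0.1   # input -> state weights
W_h = rng.normal(size=(4, 4)) * 0.1   # state -> state (feedback) weights
b   = np.zeros(4)

def rnn(sequence):
    h = np.zeros(4)                          # internal state (memory)
    for x in sequence:
        h = np.tanh(W_x @ x + W_h @ h + b)   # new state depends on input and previous state
    return h                                 # summary of the whole sequence

sequence = [rng.normal(size=3) for _ in range(5)]
print(rnn(sequence))
```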

5. Multilayer recurrent network

 

In this type of network, a processing element’s output can be directed to processing elements in the same layer and in the preceding layer, forming a multilayer recurrent network. They perform the same task for every element of a sequence, with the output depending on the previous computations. Inputs are not needed at each time step. The main feature of a Recurrent Neural Network is its hidden state, which captures some information about a sequence; a stacked variant is sketched below.
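
The sketch below shows a stacked (multilayer) recurrent network in which each layer keeps its own hidden state and feeds it back to itself at the next time step, while passing its output forward to the next layer. Feedback to preceding layers is omitted for brevity, and the sizes and random weights are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_layer(n_in, n_hidden):
    """One recurrent layer: input weights, feedback weights, bias, and its own state."""
    return {
        "W_x": rng.normal(size=(n_hidden, n_in)) * 0.1,
        "W_h": rng.normal(size=(n_hidden, n_hidden)) * 0.1,
        "b":   np.zeros(n_hidden),
        "h":   np.zeros(n_hidden),
    }

# Hypothetical stack: 3 input features -> 4 recurrent units -> 4 recurrent units.
layers = [make_layer(3, 4), make_layer(4, 4)]

def step(x):
    out = x
    for layer in layers:
        # Each layer's state feeds back into itself at the next time step.
        layer["h"] = np.tanh(layer["W_x"] @ out + layer["W_h"] @ layer["h"] + layer["b"])
        out = layer["h"]          # output of one layer is the input to the next
    return out

for x in [rng.normal(size=3) for _ in range(5)]:
    y = step(x)
print(y)                          # top layer's hidden state after the whole sequence
```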
