It has been shown in Activities 14 and 15 that we can simulate how humans classify objects. Features were extracted from images of objects belonging to different classes. Half of the objects were used as the training set, whose features were analyzed, and half as the test set, whose features were compared to those of the training set. Classification based on Minimum Distance and on Linear Discriminant Analysis was successful, albeit with some errors. In this activity, we model how the brain works, as we look into Artificial Neural Network classification.
The first step is to train our neural network. For the training set, I used 5 coins, 6 flowers, 8 rectangular leaves, and 5 long leaves. The features are fed into the input layer and then "processed" by the hidden layer. At this point, the "brain" has learned how one class differs from another. We then feed the network the test images, which it classifies based on the features it has learned.
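The training loop just described can be sketched in plain Python. This is not the original code (which I believe used a neural network toolbox); it is a minimal single-hidden-layer network trained by backpropagation on toy stand-in features, with the target array built as one-hot rows, one column per class:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the real coin/flower/leaf features:
# four well-separated 2-D clusters, one per class.
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((10, 2)) for c in centers])
X = (X - X.mean(axis=0)) / X.std(axis=0)   # normalize the features
labels = np.repeat(np.arange(4), 10)

# Target array: one-hot row per sample, one column per class.
T = np.eye(4)[labels]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden, n_out = 2, 8, 4
W1 = 0.1 * rng.standard_normal((n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal((n_hidden, n_out))
b2 = np.zeros(n_out)

eta = 1.0  # learning rate -- the knob varied in this activity
for _ in range(5000):
    H = sigmoid(X @ W1 + b1)        # hidden-layer activations
    Y = sigmoid(H @ W2 + b2)        # output-layer activations
    dY = (Y - T) * Y * (1 - Y)      # backpropagate the squared error
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= eta * H.T @ dY / len(X)
    b2 -= eta * dY.mean(axis=0)
    W1 -= eta * X.T @ dH / len(X)
    b1 -= eta * dH.mean(axis=0)

pred = Y.argmax(axis=1)             # winning output column = predicted class
accuracy = (pred == labels).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Classification of a test sample is the same forward pass: whichever output column fires strongest names the class, mirroring the one-hot target array.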
For the test set, I used 5 coins, 7 flowers, 8 rectangular leaves, and 5 long leaves. Below are the classification results for different learning rates.
For a low learning rate, the neural network failed to correctly classify the coins, all of which were placed in the flower category. When the learning rate was increased, the network's performance improved, yielding higher success rates. The summary is shown below.

Increasing the learning rate of the neural network increases the accuracy of prediction, and all of the accuracies are at least 3 times the Chance Proportion Criterion of 26.08%. However, the result is lower than the accuracy of the Linear Discriminant Analysis. The accuracy might reach 100% if the learning rate were increased further.
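The 26.08% figure appears to be the proportional chance criterion, i.e. the accuracy expected from guessing classes in proportion to their frequencies. Assuming it was computed from the test-set class sizes, it can be reproduced as:

```python
# Proportional chance criterion from the test-set class sizes:
# 5 coins, 7 flowers, 8 rectangular leaves, 5 long leaves.
counts = [5, 7, 8, 5]
total = sum(counts)
c_pro = sum((n / total) ** 2 for n in counts)
print(f"chance criterion: {100 * c_pro:.2f}%")  # 163/625 = 26.08%
```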
I give myself 10 points for this activity as I was able to use Neural Network classification. Thanks to Jeric Tugaff and Cole Fabros for the code and guide. Thanks to Master for explaining how the target array works.

