Machine Vision Inspection Systems, Machine Learning-Based Approaches. Group of authors
set {X_n}, one image for each character category. The Siamese network is fed with (X, X_n) pairs and predicts their similarity. The predicted category n* is selected as the category with the maximum similarity, as in Equation (2.3), where the argmax function returns the index n that maximizes the similarity function F.
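The selection rule in Equation (2.3) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `f` is a hypothetical stand-in for the trained Siamese network's similarity output, and cosine similarity on toy vectors replaces real image embeddings.

```python
import numpy as np

def one_shot_classify(f, x, reference_set):
    """Pick the reference category n* with the highest similarity to x.

    `f` is a placeholder similarity function standing in for the trained
    Siamese network; `reference_set` holds one image per category.
    """
    scores = [f(x, x_n) for x_n in reference_set]
    return int(np.argmax(scores))  # n* as in Equation (2.3)

# Toy stand-in for the network's similarity score: cosine similarity.
def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

refs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(one_shot_classify(cosine, np.array([0.1, 0.9]), refs))  # → 1
```

In the actual model, `f` would be the Siamese network evaluated on a pair of character images rather than a cosine over toy vectors.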
The model is evaluated by N-way classification, with N varying in the range [1, 40]; the results are depicted in Figure 2.2.
Figure 2.2 Omniglot one-shot learning performance of Siamese networks.
According to Figure 2.2, the capsule layer-based Siamese network proposed in this study performs on par with Koch et al.’s convolutional Siamese network. However, our model has 2.4 million parameters, 40% fewer than the 4 million parameters of Koch et al.’s model. Moreover, although the overall performance of the two models is comparable, there are certain cases in which our model shows superior performance; for instance, it has a superior capability of identifying minor changes in characters.
For the n-way classification task, random guessing provides a statistical baseline: if there are n options and only one is correct, the chance of a correct prediction is 1/n, so over repeated experiments the expected accuracy is that probability expressed as a percentage. The classification accuracy drops as the reference set grows, because the solution space of the classification task becomes larger. Nearest neighbor degrades exponentially, whereas the Siamese networks show a much smaller reduction while maintaining a similar level of performance.
Figure 2.3 shows sample test images and the classification results obtained by the different models: the 20-way classification task (top), the capsule Siamese network (middle), and the convolutional Siamese network (bottom). As the middle row shows, the capsule-based architecture was able to identify small changes in image structure.
Figure 2.3 illustrates a few 20-way classification problems in which the proposed capsule layer-based Siamese network model outperforms the convolutional Siamese network. In most of these cases, the convolutional network fails to identify minor changes in the image, such as small line segments and curves, whereas the detailed features extracted through the capsules make such decisions easier for the proposed capsule network model.
Figure 2.3 Sample 1 classification results.
Figure 2.4 depicts a few samples where the proposed capsule network model fails to classify characters correctly. For certain characters, there is a vast difference in writing style between two people; in such cases, the proposed capsule layer-based Siamese network underperforms compared to the convolutional model, which identifies the character successfully.
As a solution to the decrease in n-way classification accuracy, we propose n-shot learning instead of one-shot learning. In one-shot learning, only one image from each class is used in the reference set; in n-shot learning, we use n images for each category and select the category with the highest summed similarity, as in Equation (2.4), where argmax returns the argument maximizing the summation, X denotes the test image, and F(X, X_{i,n}) is the similarity score against the i-th reference image of category n.
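The n-shot selection rule of Equation (2.4) can be sketched as below. Again this is an illustrative sketch, not the authors' code: `f` is a hypothetical similarity function standing in for the Siamese network, and a negative L1 distance on scalars replaces real image similarities.

```python
import numpy as np

def n_shot_classify(f, x, reference_sets):
    """Equation (2.4): pick the category whose n reference images have
    the highest total similarity to x.

    `reference_sets[n]` holds the n reference images for category n;
    `f` is a placeholder for the trained Siamese similarity function.
    """
    totals = [sum(f(x, x_i) for x_i in category) for category in reference_sets]
    return int(np.argmax(totals))

# Toy similarity: the closer two scalars are, the higher the score.
def neg_l1(a, b):
    return -abs(a - b)

refs = [[0.0, 0.2], [1.0, 1.1]]          # two categories, two shots each
print(n_shot_classify(neg_l1, 0.95, refs))  # → 1
```

Summing over several reference images per category averages out writing-style outliers, which is why the gain is largest when the classification set is big and a single reference image is unreliable.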
Figure 2.4 Sample 2 classification results.
Accuracies obtained with n-shot learning for 2-, 6-, 20-, and 28-way classification are illustrated in Figure 2.5. There is no significant improvement for test cases with a small classification set; however, when the classification set is large, n-shot learning can significantly improve performance. For instance, 28-way classification accuracy improves from 78% to 90% when 20 images per class are used in the reference set. In general, classification accuracy improves as the number of samples used for comparison increases: for n-way classification with small n, 100% accuracy is achieved with few samples, while more complex tasks need a greater number of samples.
2.4.2 Within Language Classification
In n-way testing, we use characters from different languages, but the accuracy obtained for an individual language is the main determinant for this research. Language-wise classification accuracy was evaluated by preparing one-shot tasks with characters taken from a single alphabet, and the results are shown in Table 2.3. These results are based on nearest neighbor and 1-shot capsule network classification within individual alphabets. We selected the nearest neighbor method because it is a simple classification baseline that uses raw pixel values. The results suggest that language-level classification accuracy depends on the number of characters in the language; another critical factor that influences accuracy is the structural similarity between its characters.
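The nearest-neighbor baseline used in Table 2.3 can be sketched as follows; this is a minimal illustration under the assumption, stated above, that the baseline compares raw pixel values, with Euclidean distance as an assumed metric.

```python
import numpy as np

def nearest_neighbor_classify(x, reference_images):
    """One-shot nearest-neighbor baseline on raw pixel values: assign x
    to the category whose single reference image is closest in
    Euclidean distance. No learned features are involved, which is why
    this baseline degrades badly as alphabets grow.
    """
    dists = [np.linalg.norm(x.ravel() - r.ravel()) for r in reference_images]
    return int(np.argmin(dists))

# Toy 2x2 "images": an all-dark and an all-bright reference.
refs = [np.zeros((2, 2)), np.ones((2, 2))]
print(nearest_neighbor_classify(np.full((2, 2), 0.9), refs))  # → 1
```

Because raw pixels carry no invariance to stroke thickness or position, this baseline is far below the capsule network in every alphabet of Table 2.3.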
Figure 2.5 Omniglot n-shot n-way learning performance.
Table 2.3 Classification accuracies within individual alphabets.
Alphabet | Characters | Nearest neighbor | 1-shot capsule network |
---|---|---|---|
Aurek-Besk | 25 | 6.40% | 84.40% |
Angelic | 19 | 6.32% | 76.84% |
Keble | 25 | 2.00% | 71.20% |
Atemayar Qelisayer | | | |