FUNCTIONS
- Select image: read the input image.
- Add selected image to database: the input image is added to the database and will be used for training.
- Iris Recognition: iris matching. The selected input image is processed using a pre-computed filter.
- GA Optimization: GA optimization for feature extraction.
- Delete Database: remove the database from the current directory.
This paper presents a detailed analysis of implementation issues that arose during the development of a novel iris recognition system. First, we briefly describe the currently available acquisition systems and the databases of iris images used in our tests. Next, we concentrate on feature extraction and coding, together with an execution-time analysis. Results for the average execution time of image loading, segmentation, normalization, and feature encoding are presented. Finally, DET plots illustrate the recognition accuracy for the IrisBath database.
Generally, iris acquisition should be implemented in accordance with the relevant standards. The application interface has to be built following the ANSI INCITS 358-2002 (known as BioAPI v1.1) recommendations. Additionally, the iris image should comply with the ISO/IEC 19794-6 standard [1].
Experimental studies were carried out using databases of iris photographs prepared by scientific institutions working in this field. Two publicly available databases were used in our experiments, as described in Section 4. The first was CASIA [2], from the Chinese Academy of Sciences, Institute of Automation; the second, IrisBath [24], was developed at the University of Bath. We also obtained access to the UBIRIS v2.0 database [22] and to the database prepared by Michael Dobeš and Libor Machala [7].
The CASIA database is available in three versions; all photographs were taken in the near infrared. We used the first and third versions in our experimental research. Version 1.0 contains 756 iris images of 320 × 280 pixels, taken from 108 different eyes. The pictures were captured with a specialized camera and saved in BMP format. Seven photos were taken of each eye: 3 in the first session and 4 in the second. The pupil area was uniformly covered with a dark color, eliminating the reflections that occur during the acquisition process.
CASIA-Iris-Distance contains iris images captured with a self-developed long-range multi-modal biometric image acquisition and recognition system; its advanced biometric sensor can recognize users from 3 m away. CASIA-Iris-Thousand contains 20,000 iris images from 1,000 subjects. CASIA-Iris-Syn contains 10,000 synthesized iris images of 1,000 classes; the iris textures of these images are synthesized automatically from a subset of CASIA-IrisV1.
The IrisBath database was created by the Signal and Image Processing Group (SIPG) at the University of Bath in the UK [24]. The project aimed to collect 20 high-resolution images from each of 800 subjects. Most of the photos show the irises of students from over one hundred countries, who form a representative group. The photos were taken at a resolution of 1,280 × 960 pixels in 8-bit BMP, using a system built around an ISG LightWise camera. Thousands of free-of-charge images are available, compressed into the JPEG2000 format at a rate of 0.5 bits per pixel.
The first iris localization technique was proposed by the pioneer of iris recognition, Daugman [5]. It uses the so-called integro-differential operator, which acts directly on the iris image: it searches for the maximum, over increasing circle radius, of the blurred partial derivative of the normalized contour integral of the image taken along a circular path. The operator therefore behaves like a circular edge detector acting in the three-dimensional parameter space (x, y, r), i.e., it seeks the center coordinates and the radius of the circle that determine the edge of the iris. The algorithm first detects the outer edge of the iris and then, restricted to the detected iris area, looks for its inner edge. Using the same operator, but changing the contour from a circle to an arc path, one can also look for the edges of the eyelids, which may partly overlap the photographed iris.
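As a rough illustration, the search can be sketched in MATLAB as an exhaustive scan over candidate centers and radii; the grid step, contour sampling density, and smoothing kernel below are illustrative choices, not the parameters of any particular implementation.

% Minimal sketch of Daugman's integro-differential operator on a
% grayscale image I (double); grid step and sampling are illustrative.
function [x0, y0, r0] = daugman_operator(I, rMin, rMax)
    [h, w] = size(I);
    theta = linspace(0, 2*pi, 128);            % contour sampling angles
    g = exp(-(-2:2).^2 / 2);  g = g / sum(g);  % small Gaussian for blurring
    best = -Inf;  x0 = 0;  y0 = 0;  r0 = 0;
    for cy = rMax+2 : 4 : h-rMax-1             % coarse grid of centre rows
        for cx = rMax+2 : 4 : w-rMax-1         % coarse grid of centre columns
            radii = rMin:rMax;
            lineInt = zeros(size(radii));
            for k = 1:numel(radii)
                r  = radii(k);
                xs = round(cx + r*cos(theta)); % points on the circle
                ys = round(cy + r*sin(theta));
                lineInt(k) = mean(I(sub2ind([h w], ys, xs)));  % normalized contour integral
            end
            d = conv(diff(lineInt), g, 'same');  % blurred derivative in r
            [val, k] = max(abs(d));
            if val > best
                best = val;  x0 = cx;  y0 = cy;  r0 = radii(k);
            end
        end
    end
end

In practice the operator is run twice, exactly as described above: once over the whole image for the outer boundary, and once inside the detected iris region for the pupil boundary.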
The last stage of feature extraction, which encodes the characteristics, aims to extract the distinctive features of the individual from the normalized iris and transform them into a binary code. Various types of filtering can be applied to extract these individual characteristics. Daugman coded each point of the iris with two bits, using two-dimensional Gabor filters and phase-quadrant quantization.
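Since the experiments below use a 1D log-Gabor filter, a minimal Masek-style sketch of that encoding step is given here; the wavelength and bandwidth parameters are illustrative, not the values used in the reported experiments.

function code = encode_row(row, wavelength, sigmaOnF)
    % Hedged sketch: 1-D log-Gabor phase-quadrant encoding of one row
    % of a normalized iris; each sample yields two bits of iris code.
    n  = numel(row);
    f  = (0:n-1) / n;                    % normalized frequency axis
    f0 = 1 / wavelength;                 % filter centre frequency
    G  = exp(-log(f/f0).^2 / (2*log(sigmaOnF)^2));
    G(1) = 0;                            % remove the DC component
    G(floor(n/2)+2:end) = 0;             % keep positive frequencies only
    resp = ifft(fft(row) .* G);          % complex filter response
    code = [real(resp) > 0; imag(resp) > 0];   % 2 bits per sample
end

A typical call on one row of the unwrapped iris might look like bits = encode_row(normIris(1,:), 18, 0.5), where the wavelength of 18 pixels and bandwidth ratio of 0.5 are common illustrative defaults.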
The best results were obtained on the IrisBath database using the 1D log-Gabor filter: EER = 0.0031% for an angular span of iris normalization between 120° and 180°. It can be inferred from our experiments that increasing this parameter beyond 120° does not improve identification accuracy.
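For reference, given vectors of genuine and impostor matching scores (e.g., Hamming distances), an EER like the one reported above can be estimated as the operating point where the false accept and false reject rates cross; the variable names below are illustrative.

% Hedged sketch: estimating the EER from genuine and impostor
% Hamming-distance score vectors (variable names are illustrative).
thr = linspace(0, 1, 1000);                         % candidate thresholds
FAR = arrayfun(@(t) mean(impostorHD <= t), thr);    % false accept rate
FRR = arrayfun(@(t) mean(genuineHD  >  t), thr);    % false reject rate
[~, k] = min(abs(FAR - FRR));                       % crossing point
EER = (FAR(k) + FRR(k)) / 2;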
A broad patent on iris recognition expired in 2005, which opened the technology to a larger market. Today, iris recognition is used as a form of biometric identification that can verify the identity of an individual with exceptional accuracy.
Each stored iris image should be compressed using the JPEG 2000 format. This format preserves image quality and minimizes the artifacts (image distortions) that result from other compression methods.
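In MATLAB this can be done directly with imwrite; for an 8-bit grayscale image, a compression ratio of 16 corresponds to roughly the 0.5 bits per pixel mentioned for the IrisBath images (the file name is illustrative).

% Hedged sketch: JPEG 2000 compression of an 8-bit iris image at ~0.5 bpp
% (8 bpp / 16 = 0.5 bpp).
imwrite(irisImg, 'iris.jp2', 'CompressionRatio', 16);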
Recently, deep learning-based iris recognition approaches have been increasingly studied. A deep CNN is usually used as a feature extractor that encodes the iris image into a set of feature vectors, whose distances are then measured as in the classic methods described above. Gangwar et al. [42] proposed a deep CNN model with a lower risk of overfitting for extracting iris features. Nguyen et al. [43] explored the encoding ability of pre-trained CNN architectures, showing that networks such as AlexNet and VGG, trained on other large-scale image databases, can be effectively transferred to the task of iris texture feature extraction. Raja et al. [44] extracted robust multi-patch iris features using a CNN with sparse filters. More recently, Wang et al. [26] and Liu et al. [45] extracted iris features using a dilated residual network and a capsule network, respectively.

In addition, a deep CNN can also be used directly as a classifier. In this approach, a pairwise training dataset is generated from all possible combinations of training samples. In the testing phase, paired images are fed into the CNN, whose output indicates whether the pair belongs to the intra-class or inter-class case; only few training samples are then needed for the deep neural network. This type of method was first discussed in detail by Zagoruyko et al. [46], who constructed different network types, including siamese, pseudo-siamese, and 2-channel (2-ch) deep networks, for image patch comparison. Their experiments showed that the 2-ch network outperformed the others, at the cost of computational complexity. Some efforts have also focused on iris verification with 2-ch networks: Liu et al. [47] proposed a 2-ch CNN architecture named DeepIris for heterogeneous iris verification, in which six forward propagations were required to compensate for rotation differences, leading to a heavy computational burden; Špetlík et al. [48] modified the 2-ch CNN with a unit-circle layer for iris verification; and Proença et al. [49] integrated an iris segmentation deep learning model with a 2-ch iris classification CNN for segmentation-less iris verification.
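To make the 2-ch idea concrete, a minimal sketch of such a network in MATLAB's Deep Learning Toolbox is shown below; the input size, filter counts, and depth are illustrative and do not reproduce any of the cited architectures.

% Hedged sketch of a 2-channel ("2-ch") verification CNN: the two
% normalized irises are stacked channel-wise into one input, and the
% network classifies the pair as intra-class or inter-class.
layers = [
    imageInputLayer([64 256 2])              % two irises stacked as channels
    convolution2dLayer(3, 16, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    convolution2dLayer(3, 32, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(2)                   % {same eye, different eye}
    softmaxLayer
    classificationLayer];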
Although the existing deep learning-based methods have proven effective for automatic end-to-end iris feature extraction and classification, several issues remain to be addressed. For example, due to their high computational complexity, 2-ch methods have so far only been applied successfully to the iris verification scenario. In addition, deep learning models are sensitive to image contamination and to the scale of the training data, which poses a challenge for real-time iris recognition. Furthermore, the hyperparameters of the CNN architecture, such as the number of layers and kernels, have not been fully optimized.
(2) Horizontal shift: To compensate for the varying rotation across subjects, we translate the normalized iris by a random offset, as sketched below. Figure 3c depicts a sample horizontal shift to the right. By the definition of the normalized iris image, a horizontal shift of the normalized iris corresponds to a rotation of the original iris image.
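A hedged two-liner for this augmentation in MATLAB, assuming the unwrapped iris normIris stores the angular coordinate along the columns; the offset range is illustrative.

% Horizontal-shift augmentation: a circular shift along the angular
% axis of the unwrapped iris is equivalent to rotating the original iris.
offset    = randi([-20, 20]);                  % random angular offset
augmented = circshift(normIris, [0, offset]);  % circular shift of columns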
where L denotes the loss function of the network and B the mini-batch size in the training phase. The radial attention layer weights the different regions along the radial direction of the corresponding original iris image. It has been shown that this layer provides better recognition performance and helps to prune the model.
Additionally, the branch-pruned network can be further condensed by channel pruning. For this purpose, we calculate the accumulated L1 norm of each output channel. By applying the aforementioned fixed threshold Tprune, the unimportant output channels, together with their corresponding input channels, are permanently cut off. Similarly, the corresponding weights in the batch normalization (BN) layers are pruned. Using the L1 norm preserves more of the useful kernels, which leads to a smaller performance loss. Figure 4c depicts the architecture of the final pruned network (Structure C). The whole network, especially its last two convolution blocks, has far fewer parameters than the network without channel pruning (Structure B). In fact, Structure C contains only 33,268 parameters, which is, to the best of our knowledge, far smaller than all CNN architectures employed in previous iris recognition studies. Figure 5 depicts an example of channel pruning. Figure 5a shows the 16 × 32 channel map of the 2nd convolutional layer in the branch-pruned 2-ch CNN (Structure B); this layer has a total of 512 convolution kernels. As shown in Figure 5c, the channel map is reduced to a size of 11 × 22. The output channel pruning also leads to input channel pruning in the next layer, shown in Figure 5b,d as horizontal black lines. The 3rd channel map is hence pruned from a size of 32 × 64 to 22 × 51.
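The core of this channel-pruning step can be sketched as follows; the weight layout [kH kW Cin Cout], the variable names W, Wnext, and Tprune are assumptions for illustration, not the paper's actual code.

% Hedged sketch of L1-norm channel pruning for one convolutional layer.
l1   = squeeze(sum(abs(W), [1 2 3]));   % accumulated L1 norm per output channel
keep = l1 >= Tprune;                    % fixed threshold, as in the text
W    = W(:, :, :, keep);                % drop unimportant output channels
Wnext = Wnext(:, :, keep, :);           % prune matching input channels downstream
% the corresponding BN scale/offset entries would be pruned with the same mask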