Face Duplication Identification Using an Artificial Neural Network

This work develops a basic identity-verification system based on the natural features of human faces. The study addresses duplicate-passport identification, verifying each person's identity against a sample of facial data. The data used in this study were 180 face samples for the training stage and 30 face samples for the testing stage. Each sample is a forward-facing face that is not obstructed by any object. Face recognition in this study combines the GLCM method, color moments, shape extraction, and the backpropagation algorithm. The recognition rate in the testing process is 78.83%.


I. INTRODUCTION
Humans can recognize tens or even hundreds of faces during their lives. A person can recognize another person's face even after not seeing it for some time and after changes in that person's appearance. Such changes include variations in facial expression, the use of glasses, and changes in skin color and hairstyle. The face is a complex, multidimensional, and meaningful visual stimulus. One particularly interesting biometric technique is an application that detects and identifies faces. Facial recognition by computer is currently needed to address various problems, including the identification of criminals, the development of security systems, image and film processing, and human-computer interaction.
According to the Head of Public Relations and Foreign Relations of the Ministry of Law and Human Rights, before 2012, 1,800 duplicate passports were found among 700,000 biometric passports. Of those 700,000 passports, 75 percent (525,000) were Indonesian Migrant Worker (TKI) passports. Assuming the same proportion among the duplicates, roughly 1,350 of the 1,800 duplicated passports were TKI passports.
In 2006, the Ministry of Law and Human Rights implemented a biometric passport system with high security, in accordance with the recommendations of the International Civil Aviation Organization (ICAO). The system is online at immigration headquarters and at 103 immigration offices throughout Indonesia. The verification system allows tracking of people who attempt to create multiple passports, whether the duplication is attempted within one immigration office or across offices. The biometric system also reduces the rate of passport forgery. In addition, its implementation led to a uniform computer system across all immigration offices (detikInet.com/2012).
To reduce the number of passport-duplication cases, several methods can be applied; one of them combines the backpropagation algorithm with texture, shape, and color feature extraction. The backpropagation algorithm is used because it can learn from training data: a backpropagation neural network is not explicitly programmed but is trained from the information (images) it receives. The texture feature extraction used is the Gray Level Co-occurrence Matrix (GLCM), because 4-direction GLCM extraction (0°, 45°, 90°, and 135°) with a distance of d = 1 achieves fairly good accuracy, 74.75%, in classifying gender from faces [1]. Shape extraction is used to distinguish the object to be trained from other objects; the shape parameters used are area, perimeter, metric, and eccentricity. The color extraction used in this study is the color moment, chosen because it is quite effective at representing the color distribution of a digital image [2].

II. METHODS
Identification is the process of recognizing and placing objects or individuals in a class according to certain characteristics [3] [4]: "Identification is the determination or establishment of the identity of a person or thing." According to psychoanalysts, identification is a process a person performs, unconsciously, in whole or in part, on the basis of emotional attachment to a certain character, so that he behaves or imagines himself as if he were that character.
Duplication is a technique for producing copies of original radiographs of relatively the same quality. In terms of final size, duplication is divided into three types:
− Enlargement copy: the duplicated result is larger than the original.
− Facsimile copy: the duplicated result is the same size as the original.
− Miniature copy: the duplicated result is smaller than the original.
These types of duplication can produce reduced, equal-sized, or enlarged images. Duplication serves medical needs, demonstration, education, training, and exhibition.
The face is the part of the human body that is the focus of attention in social interactions; it plays a vital role in showing one's identity. The face is mainly used for expression, appearance, and identity, and no two faces are exactly alike, not even those of identical twins. There are several definitions of the face depending on the scientific field. The Indonesian dictionary defines the face as part of the head; expression; countenance. The face is the front part of the head in humans, covering the area from the forehead to the chin; the facial region includes the hair, forehead, eyebrows, eyes, nose, cheeks, mouth, lips, teeth, skin, and chin.

In anatomy, the face is the anterior (front) structure of the head, bounded laterally by the ears (the structures furthest from the midline of the body), inferiorly by the chin, and superiorly by the hairline. The face is made up of the facial bones and the soft tissues on them (muscle, cartilage, blood vessels, nerves, lymph vessels, and glands), which together give the face its appearance and function.

The face is also a part of the human body that can be recognized by biometric technology. In biometrics, biological characteristics such as the face provide unique information: the characteristic facial pattern of each individual. Because facial patterns can be measured and analyzed, they can support detection and authentication, and the face can therefore serve as an indicator for recognizing a person [5] [6].
An Artificial Neural Network (ANN) is an information-processing system whose characteristics resemble those of biological neural networks; it is created as a generalization of mathematical models of human cognition [7]. Like the human brain, a neural network is made up of neurons with connections between them. Figure 2 shows the structure of a neuron: a neuron transforms the information it receives and passes it on through its output connections to other neurons. In a neural network these connections are known as weights, and information is stored as values in these weights. Each input sent to a neuron arrives with a certain weight; a propagation function sums the weighted inputs, and the result is compared with a threshold value through the neuron's activation function. Neurons are grouped into layers; neurons in one layer are usually connected to the previous and next layers, except in the input and output layers. Information given to the network propagates from layer to layer, from the input layer through the hidden layers to the output layer; the learning algorithm determines how this propagated information is used to adjust the weights. Figure 3 shows a simple neuron with an activation function F.
In Figure 3, a neuron processes n inputs (x1, x2, ..., xn), each with a weight (w1, w2, ..., wn), according to the formula net = Σᵢ xᵢwᵢ, with output y = F(net). The activation function F transforms the summed input into the output. For backpropagation, the activation function must be continuous, differentiable, and monotonically non-decreasing; a sigmoid curve satisfies these requirements [8].
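The weighted-sum-and-activate behaviour of a single neuron can be sketched as follows (a minimal pure-Python illustration, not the paper's implementation; the input and weight values are made up for the example):

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    """Weighted sum of inputs followed by a binary sigmoid activation:
    net = bias + sum(x_i * w_i), y = F(net) = 1 / (1 + e^-net)."""
    net = bias + sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-net))

# example: three inputs with three weights
y = neuron_output([0.5, -1.0, 0.25], [0.4, 0.3, -0.2])
```

The output always lies in (0, 1) because of the sigmoid activation.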

Bipolar Sigmoid
The bipolar sigmoid function has a range of (-1, 1) and is defined as f(x) = (1 - e^(-x)) / (1 + e^(-x)), with derivative f'(x) = ½(1 + f(x))(1 - f(x)). Figure 5 below shows a graph of the bipolar sigmoid function.
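The bipolar sigmoid and its derivative (used in the backpropagation weight update) can be written directly from the standard definitions, e.g.:

```python
import math

def bipolar_sigmoid(x):
    """Bipolar sigmoid with range (-1, 1): f(x) = (1 - e^-x) / (1 + e^-x)."""
    return (1.0 - math.exp(-x)) / (1.0 + math.exp(-x))

def bipolar_sigmoid_derivative(x):
    """f'(x) = 0.5 * (1 + f(x)) * (1 - f(x)); needed by backpropagation."""
    fx = bipolar_sigmoid(x)
    return 0.5 * (1.0 + fx) * (1.0 - fx)
```

f(0) = 0 and the derivative peaks at 0.5 there, which is why inputs are often scaled toward zero before training.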

A. Skin color segmentation.
The color the human eye perceives from an object is determined by the light the object reflects. The combination that provides the widest color range is Red (R), Green (G), and Blue (B); these three are called the primary colors and are often abbreviated as RGB. The YCbCr color model, in contrast, results from a non-linear encoding of the RGB signals: Y is the luma component, while Cb and Cr are the chroma components, the blue-difference and red-difference signals.
YCbCr is an international standard for the digital encoding of television images defined in the CCIR Recommendation. Y is the luminance component; Cb and Cr are the chrominance components. On monochrome monitors the luminance value is used to represent RGB colors; psychologically it represents the intensity of an RGB color as perceived by the eye. The YCbCr color space is used because it separates luminance, which expresses brightness (Y), from chrominance (Cb and Cr), which expresses hue and saturation. In the YCbCr color space, the Cb and Cr components are used to find skin color in the face image using:

77 ≤ Cb ≤ 127 ..... (10)
133 ≤ Cr ≤ 173 ..... (11)

These ranges come from the analysis of Chai & Ngan, obtained from histogram analysis of the YCbCr values of many images to optimize the range corresponding to human skin; the ranges apply only to Caucasoid (white) skin color [6].
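The skin test of Equations (10)-(11) can be sketched per pixel as follows (a minimal illustration, not the paper's code; the JPEG-style RGB-to-YCbCr conversion for 8-bit values is assumed, and the sample pixel values are made up):

```python
def rgb_to_ycbcr(r, g, b):
    """JPEG-style RGB -> YCbCr conversion for 8-bit channel values."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Chai & Ngan thresholds: 77 <= Cb <= 127 and 133 <= Cr <= 173."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173
```

Applying `is_skin` to every pixel yields the binary skin mask used by the later morphological steps.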

B. Color Moment Color Extraction
Color moments are a representation of color features that characterize the color of an image. Moment calculation is needed to measure the color similarity of an image; this similarity value is used to compare the training images with the test images. Color moments treat the color distribution of an image as a probability distribution. The mean, standard deviation, and skewness are the first three color moments and have been shown to represent the color distribution of an image efficiently and effectively [7] [9]. For a color image of N×M pixels they are defined as:

X̄ᵢ = (1 / MN) Σⱼ xᵢⱼ ..... (12)
σᵢ = ( (1 / MN) Σⱼ (xᵢⱼ - X̄ᵢ)² )^(1/2) ..... (13)
Sᵢ = ( (1 / MN) Σⱼ (xᵢⱼ - X̄ᵢ)³ )^(1/3) ..... (14)

where X̄ᵢ is the mean of channel i (H, S, and V), j ranges over all pixels of channel i, σᵢ is the standard deviation, and Sᵢ is the skewness of channel i [7] [9]. The RGB image is converted to the HSV color space before the moments are computed.
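The three moments can be sketched for one channel as follows (a minimal pure-Python illustration, not the paper's implementation; the sign-preserving cube root for the skewness is an implementation detail):

```python
def color_moments(channel):
    """Mean, standard deviation and skewness of one color channel,
    given as a flat list of pixel values."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((p - mean) ** 2 for p in channel) / n
    std = var ** 0.5
    third = sum((p - mean) ** 3 for p in channel) / n
    # cube root that preserves sign, since the third moment can be negative
    skew = abs(third) ** (1.0 / 3.0) * (1 if third >= 0 else -1)
    return mean, std, skew

mean, std, skew = color_moments([1, 2, 3, 4])
```

Computing these three values for each of the H, S, and V channels yields a nine-element color feature vector per image.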

C. Gray Level Co-occurrence Matrix (GLCM) Texture Extraction
The GLCM is an N×N square matrix, where N is the number of gray levels in the image. An element p(i, j, d, θ) of the GLCM represents the relative frequency with which a pixel of gray level i at location (x, y) has a neighboring pixel of gray level j at distance d and orientation θ from (x, y). The distance d is usually 1 pixel, and the angular orientations used are usually 0°, 45°, 90°, and 135°. The texture features are computed from this matrix with the mathematical formulations below [10].
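A symmetric, normalized GLCM for one offset, and the contrast feature derived from it, can be sketched as follows (a minimal pure-Python illustration on a toy 3×3 image, not the paper's implementation):

```python
def glcm(image, dx, dy, levels):
    """Symmetric co-occurrence matrix for offset (dx, dy), normalized to
    probabilities; image is a 2-D list of gray levels in [0, levels)."""
    m = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    total = 0
    for y in range(rows):
        for x in range(cols):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < rows and 0 <= x2 < cols:
                i, j = image[y][x], image[y2][x2]
                m[i][j] += 1
                m[j][i] += 1  # count both directions (symmetric GLCM)
                total += 2
    return [[v / total for v in row] for row in m]

def contrast(p):
    """Contrast feature: sum over all (i, j) of (i - j)^2 * p(i, j)."""
    return sum((i - j) ** 2 * p[i][j]
               for i in range(len(p)) for j in range(len(p)))

img = [[0, 0, 1],
       [0, 1, 1],
       [2, 2, 2]]
p = glcm(img, dx=1, dy=0, levels=3)   # orientation 0 deg, distance d = 1
```

The other GLCM features (energy, homogeneity, correlation) are computed from the same matrix `p` with different weighting terms.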

D. Shape Extraction (Morphology)
To recognize a face in an image, some features must first be extracted as characteristics of the object; these features must distinguish the object from other objects. For morphological feature extraction, several features can be calculated, such as area and perimeter. The area is the number of pixels occupied by the image object, and the perimeter is the number of pixels around the object's boundary. The major axis and minor axis can also be calculated, and from the area, perimeter, major axis, and minor axis, further morphological features can be derived. The following formulas are used to extract morphological features [11]:

Eccentricity (ECC)
Eccentricity is the ratio of the distance between the foci of an ellipse to the length of its major axis. It takes values between 0 and 1 and is a technique for describing an object as an elliptical shape.
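The two shape features used in this study, metric and eccentricity, follow directly from the standard definitions (a minimal sketch, not the paper's implementation; the axis lengths are assumed to come from an ellipse fit such as the one `regionprops`-style tools provide):

```python
import math

def metric(area, perimeter):
    """Metric (circularity): 4*pi*A / P^2; equals 1 for a perfect circle."""
    return 4.0 * math.pi * area / perimeter ** 2

def eccentricity(major_axis, minor_axis):
    """Ratio of the focal distance to the major-axis length:
    e = c / a = sqrt(1 - (b/a)^2), which lies in [0, 1)."""
    a, b = major_axis / 2.0, minor_axis / 2.0
    return math.sqrt(1.0 - (b / a) ** 2)
```

A circle gives metric 1 and eccentricity 0; elongated objects push the metric down and the eccentricity toward 1.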

E. Backpropagation Algorithm
Backpropagation is a supervised training algorithm. It is commonly used to adjust the weights connected to the neurons in the hidden layers: backpropagation uses the output error to update the weight values backward through the network [11].
Backpropagation has multiple units in one or more hidden layers. Figure 6 below shows a backpropagation architecture with n inputs (plus a bias), a hidden layer of p units (plus a bias), and m output units [11]. Here v_ji is the weight of the connection from input unit x_i to hidden unit z_j (v_j0 is the weight connecting the input-layer bias to hidden unit z_j), and w_kj is the weight from hidden unit z_j to output unit y_k (w_k0 is the weight from the hidden-layer bias to output unit y_k).
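As a rough illustration of the backward weight updates described above (not the paper's implementation: the XOR task, layer sizes, learning rate, and binary sigmoid are arbitrary choices for the sketch, and the v/w naming follows the v_ji / w_kj convention above):

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(epochs=5000, lr=0.5, hidden=4, seed=1):
    """Minimal one-hidden-layer backpropagation network trained on XOR.
    v[j] holds (bias, w_from_x1, w_from_x2) for hidden unit z_j;
    w holds (bias, w_from_z1, ..., w_from_zp) for the single output."""
    rng = random.Random(seed)
    v = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]
    w = [rng.uniform(-1, 1) for _ in range(hidden + 1)]
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    for _ in range(epochs):
        for (x1, x2), t in data:
            # forward pass
            z = [sigmoid(vj[0] + vj[1] * x1 + vj[2] * x2) for vj in v]
            y = sigmoid(w[0] + sum(wj * zj for wj, zj in zip(w[1:], z)))
            # backward pass: output error term, then hidden error terms
            dy = (t - y) * y * (1 - y)
            for j, zj in enumerate(z):
                dz = dy * w[j + 1] * zj * (1 - zj)
                w[j + 1] += lr * dy * zj
                v[j][0] += lr * dz
                v[j][1] += lr * dz * x1
                v[j][2] += lr * dz * x2
            w[0] += lr * dy
    def predict(x1, x2):
        z = [sigmoid(vj[0] + vj[1] * x1 + vj[2] * x2) for vj in v]
        return sigmoid(w[0] + sum(wj * zj for wj, zj in zip(w[1:], z)))
    return predict

predict = train_xor()
```

In the paper's setting the inputs are the extracted texture, color, and shape features rather than two bits, and the targets identify the person, but the update rule is the same.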
In the process of issuing Indonesian passports, cases of facial duplication often go undetected, because in the biometric process the interview officer can only compare the applicant's face with the face returned by the biometric system, and the system only displays captions indicating a low, medium, or high level of similarity.
Based on this problem analysis, an application is needed that can check biometric facial photos during the photo-and-fingerprint biometric process. The application must be able to detect a person's face across several facial expressions, and must display the applicant's name when faces are similar even though the personal data differ. The application implements several digital image-processing algorithms: GLCM texture feature extraction, color-moment color extraction, shape extraction, and a face-identification algorithm based on an artificial neural network, namely backpropagation. The block diagram in the figure describes the main process flow of the face-duplication identification system from start to finish. The process starts with image acquisition, in which a person's face is captured with a camera. The image then goes through several image-processing stages and is trained with the neural network. Once trained, the system can test an input image and present the result of the facial-image test to the officer.
The steps for identifying facial duplication in Indonesian passport issuance, using color-moment color extraction, Gray Level Co-occurrence Matrix texture extraction, metric and eccentricity shape extraction, and the backpropagation neural network algorithm, are as follows:

Thresholding in the YCbCr color space
Skin color in the face image is segmented by thresholding with the following rule: a pixel is kept if Y > 100, 77 < Cb < 127, and 133 < Cr < 173. (Figure: thresholding results.)

Performing morphological operations
Closing, area opening, and border-clearing operations are applied. (Figure: face images after the morphological operations.)
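The closing step can be sketched for a binary skin mask as follows (a minimal pure-Python illustration with a 3×3 structuring element; area opening and border clearing are omitted, and out-of-bounds neighbors are simply ignored):

```python
def neighborhood(mask, y, x):
    """Values of the 3x3 neighborhood of (y, x) that fall inside the mask."""
    rows, cols = len(mask), len(mask[0])
    return [mask[y + dy][x + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if 0 <= y + dy < rows and 0 <= x + dx < cols]

def dilate(mask):
    return [[max(neighborhood(mask, y, x)) for x in range(len(mask[0]))]
            for y in range(len(mask))]

def erode(mask):
    return [[min(neighborhood(mask, y, x)) for x in range(len(mask[0]))]
            for y in range(len(mask))]

def closing(mask):
    """Morphological closing: dilation followed by erosion; fills small holes."""
    return erode(dilate(mask))

# a skin mask with a one-pixel hole in the middle
mask = [[1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1],
        [1, 1, 0, 1, 1],
        [1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1]]
closed = closing(mask)
```

Closing fills small holes left inside the skin region by the thresholding step, which is why it precedes the cropping and feature-extraction stages.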

Cropping images
After the morphological operations, the image is cropped to a size of 87×67 pixels, and new RGB values are formed. For the metric and eccentricity shape extraction, only the metric and eccentricity values are calculated, using the equations given above. The resulting feature values represent one image; the same stages are repeated for every image, covering the 30 people tested and 180 images/photos in total.

IV. CONCLUSIONS
The results of texture, shape, and color extraction are used as the data set trained with the backpropagation method. In the training phase, each data item is given a target to be recognized. This study used 180 images/photos of 30 people, where each person was photographed with 6 different expressions. Of these 6 photos per person, 4 were used for training and 2 for testing. In the author's trials, the accuracy was 78.83% for the first identification test and 78.83% for the second.