
Face recognition technology has broad application prospects and can be applied to a variety of security areas. Because of its uniqueness and relative stability as a biometric feature, face recognition has gradually become a very popular research topic. Many typical face recognition algorithms and application systems are built for standard or specific face databases: they train on the faces in the database and recognize faces within that same database. However, in special applications such as software protection and computer security, identity authentication performs face recognition for only a single object, and existing face recognition methods are not well suited to such tasks. For this reason, this paper discusses the key technologies of single-object face detection and recognition based on the characteristics of single-object face recognition, and proposes a single-object face recognition algorithm on that basis. The experimental results demonstrate the effectiveness of the proposed method.
1. Characteristics of Single-Object Face Recognition
Compared with typical face recognition, single-object face recognition has the following four characteristics:
Application areas. Typical face recognition is applied in a wide range of fields, such as criminal investigation, document verification, and security monitoring, while single-object face recognition is mainly used in software protection, computer security locks, specific-object tracking, and similar fields.
Recognition target. The ultimate goal of a single-object face recognition system is a high degree of security and reliability, i.e., a recognition error rate that tends to zero. Although reducing the recognition error rate also lowers the recognition rate, the recognition rate can be improved by prompting the user to adjust his or her posture (for example, by looking directly at the camera).
Skin color model. Since single-object face recognition targets only a specific object, the skin color model used for face detection can adapt its skin color range to that object.
Classification method. Single-object face recognition has no face database, so the commonly used minimum-distance classifiers cannot correctly identify the specific object; only a threshold can serve as the decision criterion. The selection of this threshold is therefore very important: if the threshold is too large, misjudgment is likely and there is a security risk; if it is too small, the recognition rate suffers.
2. Face Detection and Normalization
Face detection is a prerequisite for face recognition. For a given image, the purpose of face detection is to determine whether the image contains a face and, if so, to return its position and spatial extent. Using skin color and facial features, face detection is divided into two phases: outer face detection and inner face positioning. Outer face detection uses skin color to perform a preliminary detection and segments the skin-color regions; inner face detection then verifies and locates the facial features within the outer face region.
2.1 Outer Face Detection
The task of outer face detection is to find and mark the possible face regions in the image under examination. The steps are as follows:
(1) According to the distribution characteristics of human skin color in color space, pixels that may belong to a face are detected. In order to make better use of the skin color characteristics, the HSI and YCbCr color spaces are combined to binarize the image. The skin color range is limited to H ∈ [0, 46], S ∈ [0.10, 0.72], Cb ∈ [98, 130], and Cr ∈ [128, 170]. Pixels satisfying these conditions are marked as skin-color pixels, and the rest as non-skin-color pixels.
(2) Denoising. The number of skin-color pixels in the 5×5 neighborhood centered on each skin-color point is counted; if more than half of the neighborhood pixels are skin-color points, the center point is kept as skin color, otherwise it is treated as non-skin color.
(3) The skin-color blocks in the binary image are merged into regions, and the proportions and structure of each candidate region are analyzed to filter out regions that cannot be human faces. The height/width ratio of a candidate region is limited to the range 0.8 to 2.0. A code sketch of these three steps is given below.
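The following is a minimal sketch of the three steps above, assuming OpenCV (cv2) and NumPy are available. The function name detect_skin_regions and the rescaling of the H and S ranges to OpenCV's 8-bit representation are illustrative choices, not part of the original method.

```python
import cv2
import numpy as np

def detect_skin_regions(bgr_image):
    """Candidate face detection: combined HSI/YCbCr skin-color thresholding,
    5x5 majority denoising, and aspect-ratio filtering of connected regions."""
    # Step 1: skin-color binarization with the ranges given in the text.
    # OpenCV stores H in [0, 179] (degrees / 2) and S in [0, 255], so
    # H in [0, 46] deg and S in [0.10, 0.72] are rescaled accordingly.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)  # channels: Y, Cr, Cb
    h, s = hsv[:, :, 0], hsv[:, :, 1]
    cr, cb = ycrcb[:, :, 1], ycrcb[:, :, 2]
    skin = ((h <= 23) &
            (s >= int(0.10 * 255)) & (s <= int(0.72 * 255)) &
            (cb >= 98) & (cb <= 130) &
            (cr >= 128) & (cr <= 170)).astype(np.uint8)

    # Step 2: 5x5 majority vote -- keep a pixel only if more than half of its
    # 5x5 neighborhood (at least 13 of 25 pixels) is skin-colored.
    counts = cv2.filter2D(skin, ddepth=cv2.CV_32F, kernel=np.ones((5, 5), np.float32))
    skin = (counts > 12).astype(np.uint8)

    # Step 3: merge skin pixels into connected regions and keep only regions
    # whose height/width ratio lies in [0.8, 2.0].
    num, labels, stats, _ = cv2.connectedComponentsWithStats(skin, connectivity=8)
    candidates = []
    for i in range(1, num):  # label 0 is the background
        w = stats[i, cv2.CC_STAT_WIDTH]
        h_box = stats[i, cv2.CC_STAT_HEIGHT]
        if w > 0 and 0.8 <= h_box / w <= 2.0:
            x, y = stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP]
            candidates.append((x, y, w, h_box))
    return skin, candidates
```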
2.2 Inner Face Detection and Positioning
The area containing the eyes, eyebrows, nose, and mouth is called the inner face region. The inner face region expresses the facial features well and is not easily disturbed by factors such as background and hair, so its detection and positioning are very important for subsequent feature extraction and recognition.
In the upper half of the outer face region, the binary image is projected horizontally and vertically, and two rectangular regions containing dark pixels are determined as the approximate areas of the two eyes. Within these two regions, the dark-pixel areas are expanded to obtain the basic outline and corners of each eye, and the average of the dark-pixel coordinates is taken as the position of the pupil. A sketch of this step follows.
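A rough sketch of the projection-based pupil localization, assuming the binarized outer-face region and the two eye rectangles obtained from the projections are already available; the helper names are hypothetical.

```python
import numpy as np

def eye_projections(binary_upper_half):
    """Row and column projections of the dark (non-skin) pixels in the upper
    half of the outer face region; the two column clusters with large counts
    give the approximate horizontal extent of the eyes."""
    dark = (binary_upper_half == 0).astype(np.int32)
    return dark.sum(axis=1), dark.sum(axis=0)   # row profile, column profile

def locate_pupils(binary_face, eye_boxes):
    """Given two rectangles (x, y, w, h) that roughly bound the eyes, return
    each pupil position as the mean coordinate of the dark pixels inside."""
    pupils = []
    for (x, y, w, h) in eye_boxes:
        roi = binary_face[y:y + h, x:x + w]
        ys, xs = np.nonzero(roi == 0)           # dark pixels inside the box
        pupils.append((x + xs.mean(), y + ys.mean()))
    return pupils                               # [(Lx, Ly), (Rx, Ry)]
```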
Let the coordinates of the left and right pupils be (Lx, Ly) and (Rx, Ry), and let the distance between the two pupils be d. According to the geometric characteristics of the human face, the inner face region is defined as: width = 1.6 × d, height = 1.8 × d, with the upper-left corner at (Lx - 0.3 × d, (Ly + Ry)/2 - 0.3 × d). Experiments show that this region expresses the facial features well.
2.3 Normalization of the Inner Face Region
Since the face size in the captured images varies considerably, the inner face region must be normalized. Face normalization means scaling the image of the inner face region to a standard image of uniform size; in the experiments the standard image size is 128×128. Normalization ensures the consistency of face size and provides scale invariance in the image plane. A sketch of the cropping and scaling step is shown below.
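A minimal sketch of inner-face cropping and normalization, following the region definition reconstructed above and assuming OpenCV for the resizing; the function name inner_face is illustrative.

```python
import cv2

def inner_face(image, left_pupil, right_pupil, size=128):
    """Crop the inner face region (width = 1.6*d, height = 1.8*d, top-left at
    (Lx - 0.3*d, (Ly+Ry)/2 - 0.3*d)) and scale it to a size x size image."""
    (lx, ly), (rx, ry) = left_pupil, right_pupil
    d = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5    # inter-pupil distance
    x0 = int(lx - 0.3 * d)
    y0 = int((ly + ry) / 2 - 0.3 * d)
    w, h = int(1.6 * d), int(1.8 * d)
    crop = image[y0:y0 + h, x0:x0 + w]
    return cv2.resize(crop, (size, size))           # normalized 128 x 128 face
```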
Figure 1 is an example of face detection and normalization, where the original image was taken from a laboratory scene.
Figure 1. Face detection and normalization
3. Face Feature Extraction and the DWT-DCT Average Face
For the normalized face image, a combination of the wavelet transform and the DCT is used to extract the face features. First, the face image is decomposed by a 3-level wavelet transform, and the low-frequency sub-image LL3 is taken as the object of feature extraction, yielding a low-frequency sub-image for each training or test sample. Then the discrete cosine transform (DCT) is applied to the low-frequency sub-image; the number of DCT coefficients equals the size of the sub-image (i.e., 256). Since the DCT concentrates the image energy in the low-frequency part, only the 136 lowest-frequency coefficients are kept as the feature vector. A sketch of this feature extraction step is given below.
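A sketch of the DWT-DCT feature extraction, assuming PyWavelets and SciPy are available. The wavelet basis ('haar') is an assumption, since the text does not specify which wavelet is used.

```python
import numpy as np
import pywt                      # PyWavelets
from scipy.fft import dctn       # 2-D discrete cosine transform

def dwt_dct_features(face128, n_coeffs=136):
    """3-level wavelet decomposition, keep the LL3 sub-image (16x16 for a
    128x128 input), apply a 2-D DCT, and keep the 136 lowest-frequency
    coefficients as the feature vector."""
    coeffs = pywt.wavedec2(face128.astype(np.float64), 'haar', level=3)
    ll3 = coeffs[0]                                  # low-frequency sub-image
    dct = dctn(ll3, norm='ortho')                    # 16 x 16 DCT coefficients
    # Collect coefficients in order of increasing frequency (i + j).
    n = ll3.shape[0]
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda ij: (ij[0] + ij[1], ij[0]))
    return np.array([dct[i, j] for i, j in order[:n_coeffs]])
```

Keeping the first 16 anti-diagonals of the 16×16 coefficient block gives 1 + 2 + ... + 16 = 136 coefficients, which matches the feature length used in the text.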
In order to make the test samples comparable with the training samples, the feature vectors of all training samples are extracted and averaged to form the DWT-DCT average face, i.e.:

mk = (1/N) Σi=1..N xk,i,  k = 1, 2, ..., 136

where N is the number of training samples, xk,i denotes the k-th component of the feature vector of the i-th training sample, and mk is the k-th component of the average face.
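A small sketch of the average-face computation, assuming the per-sample feature vectors from the previous step are stacked into a NumPy array.

```python
import numpy as np

def average_face(training_features):
    """DWT-DCT average face: element-wise mean of the N training feature
    vectors (each of length 136), m_k = (1/N) * sum_i x_{k,i}."""
    X = np.asarray(training_features)   # shape (N, 136)
    return X.mean(axis=0)               # shape (136,)
```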
4. Face Recognition
After the training process is complete and the features of the sample to be tested have been obtained, face recognition can be performed. This paper uses the Euclidean distance for classification.
4.1 Euclidean Distance Between the Sample and the Average Face
Let m and x denote the feature vectors of the average face and of the sample to be tested. The Euclidean distance between the sample and the average face is:

d(x, m) = sqrt( Σk=1..136 (xk - mk)² )

where mk denotes the k-th component of the average-face feature vector and xk the k-th component of the sample's feature vector. In identity authentication, the Euclidean distance between the sample to be tested and the average face is calculated and compared with the adaptive threshold of the specific object; a sample whose distance is smaller than the threshold is judged to be the face of that object, i.e., the authentication passes.
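A minimal sketch of the distance computation, assuming NumPy; the function name is illustrative.

```python
import numpy as np

def distance_to_average_face(x, m):
    """Euclidean distance between a sample feature vector x and the
    average face m: d = sqrt(sum_k (x_k - m_k)^2)."""
    x, m = np.asarray(x), np.asarray(m)
    return float(np.sqrt(np.sum((x - m) ** 2)))
```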
4.2 Selection of the Adaptive Threshold
Unlike typical face recognition methods, single-object face recognition has no face database, so the minimum distance cannot be used as the decision criterion; only a threshold can be used. The selection of the threshold must balance the recognition rate against the reliability of the authentication. In the experiments, the average Euclidean distance between the training samples and the average face is taken as the classification threshold, i.e.:

T = (1/N) Σi=1..N di

where N is the number of training samples (this value should not be too small) and di is the Euclidean distance between the i-th training sample and the average face.
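A sketch of the adaptive threshold and the resulting authentication decision, assuming NumPy; the function names are illustrative.

```python
import numpy as np

def adaptive_threshold(training_features, m):
    """Classification threshold: average Euclidean distance between the
    N training feature vectors and the average face, T = (1/N) * sum_i d_i."""
    X = np.asarray(training_features)          # shape (N, 136)
    return float(np.mean(np.linalg.norm(X - np.asarray(m), axis=1)))

def authenticate(x, m, threshold):
    """Accept the sample as the enrolled object only if its distance
    to the average face is below the adaptive threshold."""
    return float(np.linalg.norm(np.asarray(x) - np.asarray(m))) < threshold
```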
5. Experimental Results and Analysis
The experiments were conducted on the sub-view library of the Oriental Face Database (AI&R) of the Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University. For each subject the database contains neutral-expression images taken from 19 different viewpoint angles (in steps of 10°). The experiments include an intra-class test and an inter-class test: the intra-class test examines the recognition rate of single-object face recognition, and the inter-class test examines the false recognition rate. Five individuals were randomly selected, and for each person 7 images (-30° to +30°) were used as training samples to compute the average face, the adaptive threshold, the intra-class recognition rate, and the intra-class distance. In addition, one frontal image of each of 50 other individuals was used as an inter-class test sample and tested against the 5 trained subjects. The experimental results are shown in Table 1. From the experimental data, the following conclusions can be drawn:
(1) The intra-class recognition rate is not high because the adaptive threshold is the average Euclidean distance between the training samples and the average face, so some images among the training samples themselves cannot be recognized. In the laboratory, we improved image quality by prompting the subjects to look at the camera and adjust their posture appropriately, which significantly improved the recognition rate.
(2) In the 50-person inter-class test, the minimum distance was greater than the threshold, which means the false recognition rate is zero. The same result was obtained in the laboratory's field tests.
(3) The single-object face recognition method proposed in this paper can reliably identify the specific object and accurately exclude other objects. It can be used for identity authentication in systems such as software protection and computer security.
6. Conclusion
Based on the characteristics of single-object face recognition, the method proposed in this paper considers both the recognition rate and the reliability of authentication. It uses the average-face method to effectively reduce the intra-class distance and enlarge the inter-class distance, and takes the average Euclidean distance between the training samples and the average face as the classification threshold. Experimental results show that the method achieves effective recognition and reliable authentication, and that it is feasible for practical single-object face recognition applications.