We propose a 3D gaze-tracking method that combines 3D eye-gaze and facial-gaze vectors estimated from the Kinect v2 high-definition face model. By using accurate 3D facial and ocular feature positions, gaze positions can be calculated more accurately than with previous methods. Considering the image resolution of the face and eye regions, the two gaze vectors are combined as a weighted sum, with more weight allocated to the facial-gaze vector. Hence, the facial orientation mainly determines the gaze position, and the eye-gaze vector performs minor corrections. The 3D facial-gaze vector is defined first; the 3D rotational center of the eyeball is then estimated, which in turn defines the 3D eye-gaze vector. Finally, the intersection point between the combined 3D gaze vector and the physical display plane is calculated as the gaze position. Experimental results show that the average gaze-estimation root-mean-square error was approximately 23 pixels from the target position at a resolution of 1920×1080.
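The two core computations described above — the weighted combination of the facial- and eye-gaze vectors, and the intersection of the resulting gaze ray with the display plane — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the weight value `w_face` and the function names are assumptions, and the paper specifies only that the facial-gaze vector receives the larger weight.

```python
import numpy as np

def combine_gaze(face_vec, eye_vec, w_face=0.7):
    """Weighted sum of the facial-gaze and eye-gaze vectors.

    w_face > 0.5 so that facial orientation dominates and the
    eye-gaze vector only makes minor corrections (the specific
    weight here is a placeholder, not the paper's value).
    """
    v = w_face * np.asarray(face_vec) + (1.0 - w_face) * np.asarray(eye_vec)
    return v / np.linalg.norm(v)

def gaze_on_display(origin, gaze_dir, plane_point, plane_normal):
    """Intersect the 3D gaze ray with the physical display plane.

    origin: 3D position the gaze ray starts from (e.g. the eyeball's
            rotational center), gaze_dir: unit gaze direction.
    Returns the 3D intersection point, or None if the ray is
    parallel to the display plane.
    """
    denom = np.dot(plane_normal, gaze_dir)
    if abs(denom) < 1e-9:
        return None  # gaze ray never meets the display plane
    t = np.dot(plane_normal, np.asarray(plane_point) - np.asarray(origin)) / denom
    if t < 0:
        return None  # intersection lies behind the viewer
    return np.asarray(origin) + t * np.asarray(gaze_dir)
```

The returned 3D intersection point would still need to be mapped into display pixel coordinates (e.g. 1920×1080) via the screen's known physical geometry before comparing against a target position.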