Despite extensive research into 3D eye tracking, existing methods remain dependent on many additional factors, such as processing time, pose, illumination, image resolution, and the calibration procedure. In this paper, we propose a 3D eye-tracking method using the HD face model of Kinect v2. Because the proposed method uses accurate 3D ocular feature positions and a 3D human eye scheme, it can track an eye gaze position more accurately and promptly than previous methods. In an image captured using a Kinect v2, the two eye-corner points of one eye are obtained using the device’s high-definition face model.
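As a rough illustration of the corner-based estimation step, the sketch below approximates the eyeball's rotational center from the two eye-corner points. The midpoint-plus-depth-offset rule and the 12.2 mm eyeball radius are simplifying assumptions for this sketch, not the paper's exact procedure:

```python
import numpy as np

def estimate_rotation_center(inner_corner, outer_corner, eyeball_radius=0.0122):
    """Estimate the 3D rotational center of the eyeball (sketch).

    Assumption: take the midpoint of the two 3D eye-corner points and push
    it away from the sensor by roughly one average eyeball radius
    (~12.2 mm). Coordinates are in meters, Kinect camera space (+Z points
    away from the sensor), so the center lies behind the midpoint in +Z.
    """
    inner_corner = np.asarray(inner_corner, dtype=float)
    outer_corner = np.asarray(outer_corner, dtype=float)
    midpoint = (inner_corner + outer_corner) / 2.0
    return midpoint + np.array([0.0, 0.0, eyeball_radius])
```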
The 3D rotational center of the eyeball is estimated based on these two eye-corner points. After the center of the iris is obtained, the 3D gaze vector that passes through the rotational center and the center of the iris is defined. Finally, the intersection point between the 3D gaze vector and the actual display plane is calculated and transformed into pixel coordinates as the gaze position. Angle kappa, the gap between the actual gaze vector and the pupillary vector, is compensated for through a user-dependent calibration. Experimental results show that the average gaze estimation error was 47 pixels from the desired position.
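The gaze-vector and screen-intersection steps above can be sketched as a standard ray–plane intersection followed by a metric-to-pixel mapping. The plane parameterization, the aligned-axes pixel mapping, and all parameter names here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def gaze_pixel(center, iris, plane_point, plane_normal, origin_px, px_per_m):
    """Intersect the gaze ray with the display plane, map to pixels (sketch).

    center, iris    : 3D points (m) -- eyeball rotational center, iris center.
    plane_point     : any 3D point on the display plane; origin_px is its
                      pixel coordinate on screen.
    plane_normal    : display plane normal.
    px_per_m        : display pixel density (assumed uniform).
    Assumes the display's x/y axes align with the camera's x/y axes.
    """
    center = np.asarray(center, dtype=float)
    direction = np.asarray(iris, dtype=float) - center   # 3D gaze vector
    normal = np.asarray(plane_normal, dtype=float)
    denom = direction @ normal
    if abs(denom) < 1e-12:
        raise ValueError("gaze ray is parallel to the display plane")
    # Ray-plane intersection: center + t * direction lies on the plane.
    t = ((np.asarray(plane_point, dtype=float) - center) @ normal) / denom
    hit = center + t * direction
    # In-plane metric offset -> pixel coordinates (screen y grows downward).
    dx = hit[0] - plane_point[0]
    dy = hit[1] - plane_point[1]
    return (origin_px[0] + dx * px_per_m, origin_px[1] - dy * px_per_m)
```

In a full pipeline, the user-dependent calibration for angle kappa would apply a per-user rotational correction to `direction` before the intersection; that correction is omitted here for brevity.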