
Research Outcomes > International Journals

Supervising Professors: Kyungeun Cho, Yunsick Sung
Paper Title: Advanced Camera Image Cropping Approach for CNN-Based End-to-End Controls on Sustainable Computing
Publication Type: SCI
First Authors: Yunsick Sung, Kyungeun Cho
Corresponding Author: -
Co-authors: Yong Jin; Jeonghoon Kwak; Sang-Geol Lee
Impact Factor: 2.075
Journal: Sustainability
Keywords: -
Publication Date: March 2018
Abstract

Recent research on deep learning has been applied to a diversity of fields. In particular, numerous studies have been conducted on self-driving vehicles using end-to-end approaches based on images captured by a single camera. End-to-end controls learn the output vectors of output devices directly from the input vectors of available input devices. In other words, an end-to-end approach learns not by analyzing the meaning of input vectors, but by extracting optimal output vectors based on input vectors. Generally, when end-to-end control is applied to self-driving vehicles, the steering wheel and pedals are controlled autonomously by learning from the images captured by a camera.

However, high-resolution images captured from a car cannot be directly used as inputs to Convolutional Neural Networks (CNNs) owing to memory limitations; the image size needs to be efficiently reduced. Therefore, it is necessary to extract features from captured images automatically and to generate input images by merging the parts of the images that contain the extracted features. This paper proposes a learning method for end-to-end control that generates input images for CNNs by extracting road parts from input images, identifying the edges of the extracted road parts, and merging the parts of the images that contain the detected edges. In addition, a CNN model for end-to-end control is introduced.

Experiments involving The Open Racing Car Simulator (TORCS), a sustainable computing environment for cars, confirmed the effectiveness of the proposed method for self-driving by comparing the accumulated difference in the angle of the steering wheel in the images generated by it with those of resized images containing the entire captured area and cropped images containing only a part of the captured area. The results showed that the proposed method reduced the accumulated difference by 0.839% and 0.850% compared to those yielded by the resized images and cropped images, respectively.
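The preprocessing pipeline summarized in the abstract (extract the road part of the frame, detect edges in it, and merge the edge-containing parts into a smaller CNN input) can be sketched roughly as follows. This is an illustrative assumption, not the paper's actual implementation: the function names, the fixed crop fraction, and the simple gradient-based edge score stand in for whatever detector and merging rule the authors used.

```python
import numpy as np

def extract_road_region(frame, top_frac=0.55):
    # Assume the road occupies the lower part of the camera frame,
    # so crop away the top portion (sky, horizon).
    h = frame.shape[0]
    return frame[int(h * top_frac):, :]

def edge_map(gray):
    # Simple finite-difference gradient magnitude as a stand-in
    # for a real edge detector (e.g. Canny).
    g = gray.astype(np.float32)
    gy = np.abs(np.diff(g, axis=0, prepend=g[:1]))
    gx = np.abs(np.diff(g, axis=1, prepend=g[:, :1]))
    return gx + gy

def merge_edge_columns(gray, out_width=64, keep_frac=0.5):
    # Keep only the columns with the strongest edge response and
    # concatenate them, then stride down to the target CNN width.
    scores = edge_map(gray).sum(axis=0)
    k = max(out_width, int(gray.shape[1] * keep_frac))
    cols = np.sort(np.argsort(scores)[-k:])   # preserve left-to-right order
    merged = gray[:, cols]
    idx = np.linspace(0, merged.shape[1] - 1, out_width).astype(int)
    return merged[:, idx]

# Toy usage: a 120x160 grayscale frame with a bright "lane" stripe.
frame = np.zeros((120, 160), dtype=np.uint8)
frame[70:, 80:84] = 255
road = extract_road_region(frame)          # shape (54, 160)
cnn_input = merge_edge_columns(road)       # shape (54, 64)
print(cnn_input.shape)
```

The merged image keeps the columns that carry lane and road-boundary edges while discarding low-information regions, which is the stated goal of the paper's cropping approach: a smaller input that still preserves the features the end-to-end CNN needs to predict the steering angle.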