Intelligent-Vehicle

 

 

Introduction

Intelligent vehicles are an application area in which the words "intelligent autonomous systems" represent not only an important research topic but also a strategic solution to the mobility problems of the coming years. Vehicles that can move autonomously and navigate everyday traffic, in highway, urban, and unstructured scenarios, will become a reality in the coming decades. Besides the obvious advantages of increasing road safety and improving the quality and efficiency of the mobility of people and goods, the integration of intelligent features and autonomous functionalities into vehicles will lead to major economic benefits from reduced fuel consumption, efficient exploitation of the road network, and reduced personnel.

However, fully intelligent vehicles are not yet commercially feasible, owing to limitations such as sensor cost and the wide variety of environments that must be handled. Consequently, ADAS (Advanced Driver Assistance Systems) such as ACC (Adaptive Cruise Control), LDW (Lane Departure Warning), APAS (Advanced Parking Assistance System), BSIG (Blind-Spot Information Guidance), AVV (Around View Visualization), and Traffic Light/Sign Recognition have become more important than fully intelligent vehicles, and they are currently on the market in luxury automobiles. ADAS aim to support drivers either by providing warnings to reduce risk exposure, or by automating some of the control tasks to relieve the driver from manual control of the vehicle. ADAS functions can be achieved through an autonomous approach, with all instrumentation and intelligence on board the vehicle, or through a cooperative approach, where assistance is provided by the roadway and/or by other vehicles.

Vision-based ADAS and intelligent vehicles are our main areas of study at present, but we are also trying to converge information from other sources, such as the Internet, GPS, and IMU.

Intelligent-vehicle research performed in our laboratory can be divided into the following categories:


Advanced Around-View Visualization

The Panorama generation using rear view cameras of the vehicle

A method for panorama-view generation from images acquired by four cameras mounted on the left, right, and rear sides of a vehicle. The main idea of this method is that the four images are stitched in a virtual top-view image by the projection matrices obtained from camera calibration. The view point of the stitched image is then adjusted appropriately by a specified weight value. This work covers the camera calibration method for generating the virtual top-view image and the procedure for registering the images.

System configuration

For making the panorama image, the proposed process has four steps, as shown below (a minimal sketch of the warping-and-stitching step follows the list):
1) Image capture and rectification.
2) Transformation to the virtual top-view image using a homography matrix, and image stitching.
3) Changing the view point and making the panorama image.
4) Blending for a visually clear image.
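
As a minimal sketch of step 2, the snippet below warps each camera image into a shared top-view canvas with OpenCV; the image file names, homography storage format, and canvas size are illustrative assumptions, not the actual calibration output.

```cpp
#include <opencv2/opencv.hpp>
#include <string>

int main() {
    const cv::Size topView(800, 600);            // virtual top-view canvas (assumed)
    cv::Mat canvas = cv::Mat::zeros(topView, CV_8UC3);

    for (int i = 0; i < 4; ++i) {
        cv::Mat img = cv::imread("cam" + std::to_string(i) + ".png");
        cv::FileStorage fs("H" + std::to_string(i) + ".yml", cv::FileStorage::READ);
        cv::Mat H;
        fs["H"] >> H;                            // 3x3 projection matrix from calibration

        cv::Mat warped, gray;
        cv::warpPerspective(img, warped, H, topView);

        // Simple overlay: copy only the non-black warped pixels.
        // Blending (step 4) would replace this with a weighted merge.
        cv::cvtColor(warped, gray, cv::COLOR_BGR2GRAY);
        warped.copyTo(canvas, gray > 0);
    }
    cv::imwrite("panorama_top_view.png", canvas);
    return 0;
}
```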

Process of generating panorama image
Core Algorithm

Demo:

EXE  DB1 DB2

Related papers:

Yongji Yun, Eunsoo Park, Hyoungrae Kim, Rui Zhang, Jonghwan Lee and Hakil Kim, “The Panorama generation using rear view cameras of the vehicle”, ITC-CSCC, 2012. (PDF)

Image Registration using Rear Camera

When the vehicle is reversing, this algorithm restores the blind spot behind the vehicle. The proposed algorithm is also a more cost-effective approach than Nissan's “Around-View Monitor”.

Blind spot registration overview

This algorithm relies on two main assumptions:

  1) The road is flat.

  2) The vehicle's speed does not change rapidly, i.e., it is piecewise linear.

For blind-spot registration, the algorithm estimates the ego-motion and stitches the images into the blind spot, as shown in the following flow chart (a minimal sketch of the ego-motion step appears after the chart):

Flowchart
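
As a minimal sketch of the ego-motion step under the stated assumptions: with a flat road, the inter-frame motion of the ground plane can be modeled by a homography estimated from matched features. The detector (ORB) and RANSAC threshold below are illustrative choices, not necessarily the paper's.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Estimate the ground-plane motion between two consecutive rear-camera
// frames as a homography (valid under the flat-road assumption).
cv::Mat estimateGroundMotion(const cv::Mat& prev, const cv::Mat& curr) {
    auto orb = cv::ORB::create(500);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat d1, d2;
    orb->detectAndCompute(prev, cv::noArray(), kp1, d1);
    orb->detectAndCompute(curr, cv::noArray(), kp2, d2);

    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(d1, d2, matches);

    std::vector<cv::Point2f> p1, p2;
    for (const auto& m : matches) {
        p1.push_back(kp1[m.queryIdx].pt);
        p2.push_back(kp2[m.trainIdx].pt);
    }
    // RANSAC rejects matches that violate the planar road model;
    // accumulating these homographies gives the stitching transform.
    return cv::findHomography(p1, p2, cv::RANSAC, 3.0);
}
```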

Result images:

Input and Result Images

Related papers:

Hyoungrae Kim, Eunsoo Park, Yongji Yun, Rui Zhang, Jonghwan Lee and Hakil Kim, “Image Registration using Rear Camera of Vehicle”, IPIU, 2012. (PDF)



Lane Departure Warning System

Robust & Hybrid Lane Detection Algorithm by Converging Camera and GPS on Smart phone

A fast and robust lane detection algorithm using a smart phone for Advanced Driver Assistance Systems. The main goal of this algorithm is to make use of both the camera and the GPS sensor in a smart phone for lane detection. The position of the vehicle acquired by the GPS sensor is mapped onto a road map, and the instant curvature of the road ahead is calculated and used to confine the region of interest for detecting lanes. The proposed algorithm is implemented on an Android smart phone and demonstrates its efficiency under real driving conditions on highways.

Lane Departure Warning

The proposed lane detection algorithm consists of a navigation part and a vision part. While the navigation part calculates the instant curvature at the current location given by the GPS sensor, the vision part detects lines within a region of interest (ROI) adjusted according to that curvature. The instant curvature reduces the ROI and the processing time while increasing robustness (a minimal sketch follows).
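
The following is a minimal sketch of this split, assuming OpenCV: the curvature is taken from three consecutive map points (the Menger curvature of their circumscribed circle), and the ROI shift gain, Canny thresholds, and Hough parameters are illustrative tuning assumptions.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Navigation part: signed curvature of the circle through three
// consecutive road-map points; the sign encodes the curve direction.
double instantCurvature(const cv::Point2d& a, const cv::Point2d& b,
                        const cv::Point2d& c) {
    double cross = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
    double ab = std::hypot(b.x - a.x, b.y - a.y);
    double bc = std::hypot(c.x - b.x, c.y - b.y);
    double ca = std::hypot(a.x - c.x, a.y - c.y);
    return 2.0 * cross / (ab * bc * ca);   // Menger curvature: 4*area/(|AB||BC||CA|)
}

// Vision part: detect lines only inside a curvature-shifted ROI.
std::vector<cv::Vec4i> detectLanes(const cv::Mat& frame, double curvature) {
    int shift = static_cast<int>(3000.0 * curvature);   // gain is an assumption
    cv::Rect roi(frame.cols / 4 + shift, frame.rows / 2,
                 frame.cols / 2, frame.rows / 2);
    roi &= cv::Rect(0, 0, frame.cols, frame.rows);      // clamp to the image

    cv::Mat gray, edges;
    cv::cvtColor(frame(roi), gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 80, 160);

    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 40, 30, 10);
    return lines;   // endpoints are in ROI coordinates
}
```

Confining the line search to a small, curvature-adjusted ROI is what keeps the per-frame cost low enough for a phone.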

The performance is evaluated in terms of accuracy and elapsed time. Herein, the error rate is the sum of the false-alarm rate and the miss rate. All detection errors occurred when the car was starting from the shoulder of the road. The average processing time is about 10.5 ms/frame (640×400 pixels/frame), which is fast enough for real-time processing.

Result of Process

Result of Lane Departure Warning

Related Papers:

Jonghwan Lee and Hakil Kim, “Robust & Hybrid Lane Detection Algorithm by Converging Camera and GPS on Smart phone”, ITC-CSCC, 2012. (PDF)



Vehicle Vision for Nighttime

Robust moving object detection using beam pattern for night-time driver assistance

According to research by the U.S. Department of Transportation, the fatal crash rate for night-time driving is much higher than that of day-time driving, even though the traffic flow at night is substantially lower. Therefore, Advanced Driver Assistance Systems (ADAS) for night-time are of vital importance. In order to detect and localize objects within the low beam region, a real-time, efficient, and low-complexity method is proposed. The proposed method has three stages.

Overview of the proposed system

The low beam pattern model (LBPM) is computed by perspective transformation and nonlinear regression from the difference signal between the non-beam frame and the beam frame. Moving objects are then detected by differencing the real-time test videos with the LBPM (a minimal sketch follows the figure). Lastly, the distance and direction are computed to alert the driver. In order to complete the LBPM, nonlinear regression with a skew distribution is adopted.

Low Beam Pattern Modeling
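
A minimal sketch of the detection stage, assuming the LBPM is available as a grayscale image the same size as the frame; the threshold and minimum blob area are illustrative assumptions, not the paper's values.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Detect moving objects as deviations of the current frame from the
// precomputed low beam pattern model (LBPM).
std::vector<cv::Rect> detectAgainstLBPM(const cv::Mat& frameGray,
                                        const cv::Mat& lbpm) {
    cv::Mat diff, mask;
    cv::absdiff(frameGray, lbpm, diff);          // deviation from the beam model
    cv::threshold(diff, mask, 40, 255, cv::THRESH_BINARY);
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5)));

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> objects;
    for (const auto& c : contours)
        if (cv::contourArea(c) > 100)            // reject small noise blobs
            objects.push_back(cv::boundingRect(c));
    return objects;   // distance/direction estimation would follow
}
```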

A Sony DCR-SX83 digital camcorder is mounted in the middle of the front window of a car, aligned with the center of the two headlights. The size of the recorded image sequences is 720×480, and the system is implemented in MATLAB.

Performance evaluation

Robust Vehicle Detection in Nighttime using Color Model for Driver Assistance

Motivated by the need for an efficient mechanism to evaluate the safety of night-time driving conditions, a robust method to detect vehicles based on their lamps in night environments is proposed. The proposed framework is a four-step method that includes preprocessing, vehicle-light selection, tracking, and false-alarm reduction (a minimal sketch of the light-selection step follows the figure).

Overview of the proposed system
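
As a hedged sketch of the vehicle-light selection step: at night, headlamps appear as bright, low-saturation blobs and tail lamps as saturated red blobs, so one plausible implementation thresholds in HSV space. The ranges below are illustrative assumptions, not the paper's trained color model.

```cpp
#include <opencv2/opencv.hpp>

// Select lamp candidates by color: near-white blobs for headlights and
// red blobs for tail lights (red hue wraps around 0 in HSV).
cv::Mat selectLampCandidates(const cv::Mat& frameBGR) {
    cv::Mat hsv, white, redLo, redHi;
    cv::cvtColor(frameBGR, hsv, cv::COLOR_BGR2HSV);

    // Headlights: low saturation, very high value.
    cv::inRange(hsv, cv::Scalar(0, 0, 220), cv::Scalar(180, 60, 255), white);
    // Tail lights: test both ends of the hue circle.
    cv::inRange(hsv, cv::Scalar(0, 80, 120), cv::Scalar(10, 255, 255), redLo);
    cv::inRange(hsv, cv::Scalar(170, 80, 120), cv::Scalar(180, 255, 255), redHi);

    // Tracking and false-alarm reduction (steps 3 and 4) would operate
    // on the connected components of this mask across frames.
    return white | redLo | redHi;
}
```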

A high dynamic range camera (OV10630, OmniVision, U.S.) is mounted in the middle of the front window of a car, aligned with the center of the two headlights. The proposed system is implemented on an Intel 2.4 GHz platform in MATLAB® (R2011a, MathWorks, U.S.). The proposed vehicle detection system continuously monitors the traffic scene at driving speeds between 40 and 100 km/h. The frame rate of the vision system is 30 frames/s, and the size of each grabbed frame is 1280×720 pixels. To reduce the processing time, the input frames are resized to 720×480 pixels.

Quantitative comparison of the proposed algorithm with the state-of-the-art methods

 

Related papers:

Rui Zhang, Eunsoo Park, Yongji Yun and Hakil Kim, “Robust moving object detection using beam pattern for night-time driver assistance”, IEEE 75th VTC Spring, 2012. (PDF)



Smart Parking Assistance System

Development of Long Range Ultrasonic Sensor for Smart Parking Assistance System

The range of a general ultrasonic sensor is 4.5 m. This distance limit forces the car to move slowly during smart parking and makes perpendicular parking impossible. The objective of this project is to develop a long-range (6.5 m) ultrasonic sensor, which will enable faster smart parking in a variety of environments. In addition, classification among vehicles, pillars, and curbs, as well as of the type of ground, would become possible (a short worked example of the ranging relation follows the figure).

Smart Parking Assistance System
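
For intuition about the range extension, the basic ultrasonic ranging relation is distance = (speed of sound × time of flight) / 2; the short sketch below applies it to the conventional and target ranges. The numbers are standard physics, not project data.

```cpp
#include <cstdio>

int main() {
    const double c = 343.0;               // speed of sound in air, m/s (at ~20 °C)
    for (double range : {4.5, 6.5}) {     // conventional vs. target range
        double tof = 2.0 * range / c;     // round-trip time of flight
        std::printf("range %.1f m -> echo after %.1f ms\n", range, tof * 1e3);
    }
    return 0;                             // prints ~26.2 ms and ~37.9 ms
}
```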



Vehicle Detection and Tracking in Real time

Real Time Vehicle Detection and Tracking using Haar-like Features

A method for vehicle detection and tracking using a Haar-like classifier on images acquired from a single camera. The method has two main ideas. For fast operation, the region of interest is divided into sub-regions, and the detection process runs only in selected valid regions. In the tracking process, stepwise enhancement is adopted: a combination of resizing and blurring enhancement in a local window surrounding the detected vehicle region assists the performance of the Haar-like classifier.

Flowchart

After the calibration process, the initial ROI is set and divided into segments. A valid segment is defined as one that does not contain any vehicle, and the detection process runs only in these valid segments instead of the whole ROI. This ROI segmentation ensures real-time processing (a minimal sketch follows the figure).

ROI segmentation 
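
A minimal sketch of detection restricted to valid segments, assuming an OpenCV Haar cascade trained for vehicles; the cascade parameters and segment bookkeeping are illustrative.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Run the Haar classifier only inside segments currently marked
// vehicle-free, instead of scanning the whole ROI.
std::vector<cv::Rect> detectInValidSegments(
        const cv::Mat& frame, cv::CascadeClassifier& cascade,
        const std::vector<cv::Rect>& segments,
        const std::vector<bool>& segmentIsValid) {
    std::vector<cv::Rect> vehicles;
    for (size_t i = 0; i < segments.size(); ++i) {
        if (!segmentIsValid[i]) continue;       // skip segments holding a vehicle
        cv::Mat gray;
        cv::cvtColor(frame(segments[i]), gray, cv::COLOR_BGR2GRAY);

        std::vector<cv::Rect> found;
        cascade.detectMultiScale(gray, found, 1.1, 3, 0, cv::Size(24, 24));
        for (const cv::Rect& r : found)
            vehicles.push_back(r + segments[i].tl());  // back to frame coords
    }
    return vehicles;
}
```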

The performance of the Haar-like classifier can be affected by illumination changes and by the size of the image. For these reasons, stepwise enhancement is required for adaptive detection and tracking. Gaussian blurring and histogram equalization reduce the effects of illumination change and noise, so a combination of blurring, histogram equalization, and resizing enhancement is used when tracking a vehicle. This enhancement is performed in a local window, a small region surrounding the detected vehicle (a minimal sketch follows the figure).

 Stepwise Enhancement 
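
A minimal sketch of the stepwise enhancement applied to the local window before re-running the classifier; the window growth factor, kernel size, and upscale factor are illustrative assumptions.

```cpp
#include <opencv2/opencv.hpp>

// Enhance the local window around a detected vehicle: blur to suppress
// noise, equalize to normalize illumination, upscale to help the
// classifier with small targets.
cv::Mat enhanceLocalWindow(const cv::Mat& frame, const cv::Rect& vehicle) {
    // Grow the detection box into a search window, clamped to the frame.
    cv::Rect window(vehicle.x - vehicle.width / 4,
                    vehicle.y - vehicle.height / 4,
                    vehicle.width * 3 / 2, vehicle.height * 3 / 2);
    window &= cv::Rect(0, 0, frame.cols, frame.rows);

    cv::Mat gray, blurred, equalized, resized;
    cv::cvtColor(frame(window), gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, blurred, cv::Size(3, 3), 0);
    cv::equalizeHist(blurred, equalized);
    cv::resize(equalized, resized, cv::Size(), 2.0, 2.0, cv::INTER_LINEAR);
    return resized;   // feed this to the Haar classifier for tracking
}
```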

Performance evaluation



Pedestrian Detection for Moving Vehicle

Parallel HOG Algorithm for Pedestrian Detection Based on Multi-Core and SIMD Architecture

Techniques for pedestrian detection using HOG (Histograms of Oriented Gradients) based on multi-core and SIMD (Single Instruction, Multiple Data) architectures. The algorithm includes a method of extracting HOG features suited to SIMD by using SSE (Streaming SIMD Extensions) instructions to reduce computation time, and parallelizes the detection windows with multi-threading to run faster on video sequences.

Flowchart

In the whole procedure, the HOG feature extraction conforms to the standard HOG method, but parallel techniques are applied to each step of the extraction. In particular, SSE instructions can be used to apply SIMD operations when calculating the magnitude m(x,y) and orientation θ(x,y) of each pixel. This process is as follows.

Compute the gradients

  dx(x,y) = I(x+1, y) − I(x−1, y),   dy(x,y) = I(x, y+1) − I(x, y−1)

and from them the magnitude and orientation

  m(x,y) = sqrt(dx(x,y)^2 + dy(x,y)^2),   θ(x,y) = arctan(dy(x,y) / dx(x,y))

where SSE instructions are used to compute dx(x,y) and dy(x,y) for several pixels at once.
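
Below is a minimal sketch of the SIMD idea for the horizontal gradient, assuming a row-major float image: one SSE subtraction produces dx for four pixels at once. The function name and loop structure are illustrative, not the paper's implementation.

```cpp
#include <xmmintrin.h>   // SSE intrinsics

// dx(x) = I(x+1) - I(x-1) for one image row, four lanes per iteration.
void gradientX(const float* row, float* dx, int width) {
    int x = 1;
    for (; x + 4 <= width - 1; x += 4) {
        __m128 right = _mm_loadu_ps(row + x + 1);
        __m128 left  = _mm_loadu_ps(row + x - 1);
        _mm_storeu_ps(dx + x, _mm_sub_ps(right, left));
    }
    // Scalar tail for the remaining interior pixels.
    for (; x < width - 1; ++x) dx[x] = row[x + 1] - row[x - 1];
    // Borders: one-sided differences.
    dx[0] = row[1] - row[0];
    dx[width - 1] = row[width - 1] - row[width - 2];
}
```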

After extracting the features, the HOG features are collected per detection window, and the detection windows are evaluated over the whole image in parallel using multi-threading. The final results are shown below.

Experimental Results
