Research Areas:

Current Research

Transfer Learning as a Tool for Efficient Machine Learning
Traditionally, machine learning requires large amounts of labelled data to perform well, but annotating data takes significant effort. Even when we can collect a lot of data, most of it remains unlabeled. Transfer learning, in its different forms, is a way to exploit abundant unlabeled data or scarce labelled data.
The simplest form is unsupervised domain adaptation (UDA), where we have labelled source-domain data but only unlabeled target-domain data, with the same task across source and target. The goal is to find a transformation of the source domain such that a model trained on the transformed source domain performs well in the target domain.
The second form is hypothesis transfer learning to novel categories (HTL). Here we have a large number of source hypotheses but no access to the source-domain data. The goal is to learn a hypothesis for a new target task, which has only a few labelled samples, using the source hypotheses.
The third form is few-shot learning (FSL), where we have very few labelled samples. The goal is to find a relation between a model learnt from a small number of samples and a model learnt from a large number of samples, so that whenever only a few training samples are available we can use this relation to recover the large-sample model.
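The UDA setting above can be illustrated with a classic second-order feature-alignment baseline, CORAL (a standard technique, not necessarily the method used in our work): the source features are transformed so that their covariance matches the target's, after which any classifier trained on the transformed source features can be applied in the target domain. A minimal NumPy sketch (the function name `coral_align` is our own):

```python
import numpy as np

def coral_align(source, target, eps=1e-6):
    """Whiten the source features, then re-color them with the target
    covariance so second-order statistics match (CORAL)."""
    # Covariances with a small ridge for numerical stability.
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])
    # Inverse square root of the source covariance (whitening).
    vals, vecs = np.linalg.eigh(cs)
    cs_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    # Square root of the target covariance (re-coloring).
    vals, vecs = np.linalg.eigh(ct)
    ct_sqrt = vecs @ np.diag(vals ** 0.5) @ vecs.T
    return source @ cs_inv_sqrt @ ct_sqrt
```

After alignment, a classifier is fit on `(coral_align(src, tgt), src_labels)` and evaluated directly on the target data.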

Potential Applications
Machine vision, knowledge transfer, data categorization.
Learning Time-Variant Graph-Structured Data for Robotics
Early deep learning methods such as convolutional and recurrent neural networks mainly handled grid-structured inputs such as audio or images. Recently, new deep learning approaches have focused on representing and modeling graph-structured data. In addition, performing tasks in real time is a critical issue in robotics, which requires managing time-variant data. In this research, we aim to develop a new deep learning framework for time-variant graph-structured data.
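To make "modeling graph-structured data" concrete, here is a sketch of one standard graph-convolution layer (in the style of Kipf and Welling's GCN; this is an illustrative building block, not our framework). For time-variant graphs, such a layer could be applied to the graph snapshot at each time step:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: symmetrically normalize the
    adjacency (with self-loops), aggregate neighbor features,
    project with a weight matrix, and apply ReLU."""
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt    # D^-1/2 (A+I) D^-1/2
    return np.maximum(a_norm @ feats @ weight, 0.0)
```

Stacking several such layers lets each node's representation depend on a growing neighborhood of the graph.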

Potential Applications
3D object detection, visual mapping and autonomous driving
Deep Reinforcement Learning for Robotics
Deep reinforcement learning has achieved promising results in many physical control problems. It offers a powerful set of tools for handling rich sensory inputs and making complicated decisions, and it can learn policies in high-dimensional, continuous action spaces, which makes it possible to train real-world robot systems. In this research, we explore how deep reinforcement learning can help non-expert end-users train robot systems, focusing on incorporating non-expert human feedback into deep reinforcement learning to solve complex robotic tasks.
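One simple way human feedback can enter a reinforcement-learning loop is as an extra scalar reward blended with the environment reward. The tabular Q-learning update below is a minimal sketch of that idea under our own assumed names (`q_update`, weighting factor `lam`); it is not the specific algorithm of this research:

```python
import numpy as np

def q_update(q, s, a, r_env, r_human, s_next,
             alpha=0.1, gamma=0.99, lam=0.5):
    """One Q-learning step whose reward blends the environment
    signal with scalar human feedback; lam weights the human term."""
    r = (1 - lam) * r_env + lam * r_human          # shaped reward
    td_target = r + gamma * np.max(q[s_next])      # bootstrapped target
    q[s, a] += alpha * (td_target - q[s, a])       # TD update
    return q
```

With `lam = 0` this reduces to ordinary Q-learning; with `lam = 1` the agent learns purely from the human's evaluations.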

Potential Applications
Autonomous manipulation, locomotion and driving

Humanoid Research

Ladder Climbing Motion Generation & Control for Humanoid Robots

In this project, we developed a framework for ladder-climbing control of humanoid robots. In collaboration with Indiana University, we participated as part of the DRC-Hubo team (Track A) at the DARPA-funded DRC Trials (Dec. 2013, Miami). The ladder-climbing controller was modeled after stair climbing, minimizing the use of gripping force so that existing humanoid robots could perform ladder-climbing tasks. Indiana University developed the motion-planning framework and software for better collision avoidance, while Purdue developed the control algorithms and dynamics computations based on a whole-body model of humanoid robots.

1. "Motion Planning of Ladder Climbing for Humanoid Robots," Y. Zhang, J. Luo, K. Hauser, R. Ellenberg, P. Oh, H. A. Park, M. Paldhe, and C. S. G. Lee, in Proceedings of the IEEE Conf. on Technologies for Practical Robot Applications (TePRA), April 2013.
2. "Motion Planning of Ladder Climbing for Humanoid Robots," Jingru Luo, Yajia Zhang, Kris Hauser, H. Andy Park, Manas Paldhe, C. S. George Lee, Michael Grey, Mike Stilman, Jun Ho Oh, Jungho Lee, Inhyeok Kim, and Paul Oh, presented at the IEEE Conf. on Robotics and Automation (ICRA 2014), May 2014.

Posture-based Control Concept
In this ongoing research, a posture-based control concept is proposed for humanoid robots, which are modeled as 14 body segments connected at 15 points. A posture is defined by the 3D positions of these 15 points.
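Under this representation, a posture is simply a 15 x 3 array of point positions, and postures can be compared directly. A minimal sketch of a translation-invariant posture distance (the function `posture_distance` and the choice of RMS metric are our own illustration, not a stated part of the project):

```python
import numpy as np

def posture_distance(p1, p2):
    """RMS distance between two postures, each a (15, 3) array of
    the 3D positions of the 15 connection points. Both postures are
    centered first, so the measure is translation-invariant."""
    p1 = p1 - p1.mean(axis=0)
    p2 = p2 - p2.mean(axis=0)
    return np.sqrt(np.mean(np.sum((p1 - p2) ** 2, axis=1)))
```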

Potential Applications
Human-human interaction, fall detection, action/activity recognition, etc.
Open-source Humanoid Robot Control Software Package
We have been developing open-source software packages for controlling a humanoid robot. The packages use an architecture similar to that of the Robot Operating System (ROS) but better suited to controlling humanoid robots. Some of these software packages were used during the DARPA Robotics Challenge.

The software packages can be used for performing various tasks such as ladder climbing, valve turning, etc.
Whole-body Balancing for Humanoid Robots
The goal of this project was to develop a general framework for generating a library of balanced whole-body motions by merging upper-body motions transferred from humans with lower-body motions generated by existing walking-pattern planners. Based on our observation that humans balance through coordinated leg movements, we applied the cooperative-dual-task-space representation, originally used for coordinated motion tasks of two-arm systems, to describe and control the coordination between the legs. This representation decouples the variables describing the feet constraints from those describing the waist position, allowing us to adjust balance while maintaining the feet constraints. The proposed method achieves balance in a unified manner for both static and dynamic lower-body movements, which other methods have handled very differently.

"Cooperative-Dual-Task-Space-based Whole-body Motion Balancing for Humanoid Robots," H. Andy Park and C. S. George Lee, in Proceedings of the IEEE Conf. on Robotics and Automation (ICRA 2013), May 2013.

Uneven Walking Pattern Generation for Humanoid Robots
In this project, our goal was to generalize an existing walking-pattern planner to uneven-terrain environments such as slopes and stairs. Modifying the Convolution-Sum-based walking-pattern generator, we removed the jerkiness of the generated Center-of-Mass (CoM) motion with a low-pass filter, and generated additional ankle-joint and vertical CoM movements to adapt to the uneven terrain. The proposed method was successfully validated on the HOAP-2 robot in Webots.
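The smoothing step can be illustrated with the simplest possible low-pass filter, a first-order IIR stage applied sample by sample to a trajectory (an illustrative stand-in; the filter actually used in the project may differ):

```python
import numpy as np

def low_pass(signal, alpha=0.2):
    """First-order IIR low-pass: y[k] = y[k-1] + alpha*(x[k] - y[k-1]).
    Smaller alpha means heavier smoothing of the input trajectory,
    e.g. a jerky CoM motion sampled at a fixed rate."""
    out = np.empty(len(signal), dtype=float)
    out[0] = signal[0]
    for k in range(1, len(signal)):
        out[k] = out[k - 1] + alpha * (signal[k] - out[k - 1])
    return out
```

The trade-off is the usual one: stronger smoothing removes more jerk but introduces more lag relative to the planned motion.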

"Convolution-Sum-Based Generation of Walking Patterns for Uneven Terrains" H. Andy Park, Muhammad A. Ali and C.S. George Lee In proceedings of IEEE-RAS Conf. on Humanoid Robots (Humanoid 2010), Dec. 2010
Closed-Form Inverse Kinematic Joint Position Solutions for Humanoid Robots
In this project, we aimed to obtain closed-form joint-position solutions for most existing humanoid platforms -- Hubo KHR-4, ASIMO, HRP-2, and HOAP-2. We developed a novel "reversed-decoupling" approach that yields closed-form position solutions for upper and lower limbs with at most 6 DoFs.

1. “Closed-Form Inverse Kinematic Joint Solution for Humanoid Robots” H. Andy Park, Muhammad A. Ali and C.S. George Lee The International Journal of Humanoid Robotics, Vol. 9, No. 3 (2012)

2. “Closed-Form Inverse Kinematic Joint Solution for Humanoid Robots” Muhammad A. Ali, H. Andy Park and C.S. George Lee In proceedings of IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), October 2010
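As a much smaller illustration of what "closed-form" means here (this is the textbook planar 2-link case, not the reversed-decoupling method above), the joint angles reaching a target position can be written out algebraically instead of being found by iterative numerical search:

```python
import numpy as np

def two_link_ik(x, y, l1, l2):
    """Closed-form elbow-down joint solution for a planar 2-link arm
    with link lengths l1, l2; returns (theta1, theta2) placing the
    end-effector at (x, y)."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    c2 = np.clip(c2, -1.0, 1.0)        # guard rounding at workspace edge
    theta2 = np.arccos(c2)             # elbow-down branch
    k1 = l1 + l2 * np.cos(theta2)
    k2 = l2 * np.sin(theta2)
    theta1 = np.arctan2(y, x) - np.arctan2(k2, k1)
    return theta1, theta2
```

Closed-form solutions like this are exact and constant-time, which matters for real-time humanoid control loops.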

3D Point Cloud for Human-Pose Estimation

Using a Stationary Sensor

We extract a 3D-point-cloud feature (VISH) from the observations of a depth sensor to reduce feature/depth ambiguity, and estimate human poses using the result of action classification and a kinematic model.

1. K. C. Chan, C. K. Koh, and C. S. G. Lee, “A 3D-Point-Cloud Feature for Human-Pose Estimation,” in ICRA2013, May 2013, pp. 1615–1620.
2. K. C. Chan, C. K. Koh, and C. S. G. Lee, “Using Action Classification for Human-Pose Estimation,” in IROS2013, Nov. 2013, pp. 1176–1181.

Using a Moving Sensor
Human poses are estimated from the best viewpoint, i.e., the viewpoint whose estimates have the minimum mean-squared error over a training database.
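The selection criterion itself is straightforward: score each candidate viewpoint by the mean-squared error of its pose estimates on the training database and keep the minimizer. A sketch under our own assumed array layout (V viewpoints, N training samples, D pose dimensions):

```python
import numpy as np

def best_viewpoint(estimates, ground_truth):
    """estimates: (V, N, D) pose estimates from V candidate viewpoints
    over N training samples; ground_truth: (N, D) reference poses.
    Returns the index of the viewpoint with minimum mean-squared error."""
    mse = ((estimates - ground_truth[None]) ** 2).mean(axis=(1, 2))
    return int(np.argmin(mse))
```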

1. K. C. Chan, C. K. Koh, and C. S. G. Lee, “Selecting Best Viewpoint for Human-Pose Estimation,” in ICRA2014, May 2014, pp. 4844-4849.


Object Tracking

Track objects using motion similarity measure for pairs of objects, with an emphasis on collaborative tracking.

1. K. C. Chan, C. K. Koh, and C. S. G. Lee, “Collaborative Object Tracking with Motion Similarity Measure,” in ROBIO2013, Dec. 2013, pp. 964–969.


Multiple lane detection using a novel edge similarity metric and Bezier curves

Detect multiple lane markings on both straight and curved roads for the following applications.

  • Autonomous lane changing – on highways, knowledge of multiple lanes is a must for lane changing.

  • Lane localization – sensing which lane, among all the lanes, the ego-vehicle is traveling in.

  • Smart navigation – guidance based on the ego-lane.

KITTI Dataset from Karlsruhe Institute of Technology (KIT)

Total Frames - 355
False Positives - 22 (6.19%)
Detection Rate for Lane 1 (Immediate lane markers) - 98.31%
Detection Rate for Lane 2 - 98.31%
Detection Rate for Lane 3 - 93.8%
Suburban Bridge Dataset from University of Auckland

Total Frames - 850
False Positives - 293 (34.47%)
Detection Rate for Lane 1 (Immediate lane markers) - 93.76%
Detection Rate for Lane 2 - 93.17%
Suburban Trailer Follow from University of Auckland

Total Frames - 700
False Positives - 43 (6.14%)
Detection Rate for Lane 1 (Immediate lane markers) - 92.71%
Detection Rate for Lane 2 - 99.85%

Trajectory Generation for Autonomous Lane Change

We generate trajectories using Bezier curves for autonomous lane changing in highway scenarios. The ongoing research focuses on three important aspects required for effective lane changing:

  • Sensing using 2D Laser-scanner and processing of the scanned data.

  • Velocity estimation of the vehicles in the neighboring lanes using an Interacting Multiple Model (IMM) based Kalman filter.

  • Decision making regarding the lane change, followed by the generation of a trajectory based on sensor inputs and given constraints.
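The trajectory-generation step can be sketched with a cubic Bezier curve between the current lane and the target lane. The control-point values below (30 m longitudinal travel, 3.5 m lateral shift) are hypothetical numbers for illustration, not parameters from the project:

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample a cubic Bezier curve at n points; control points are
    (x, y) arrays, e.g. the start and end of a lane-change maneuver."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Hypothetical lane change: travel 30 m forward while shifting 3.5 m
# laterally; the inner control points keep the curve tangent to the
# lane direction at both ends, so heading is continuous.
path = cubic_bezier(np.array([0.0, 0.0]), np.array([10.0, 0.0]),
                    np.array([20.0, 3.5]), np.array([30.0, 3.5]))
```

Placing the two inner control points on the entry and exit lane directions is what makes the maneuver start and end with zero lateral velocity.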

The proposed algorithm was tested on a Pioneer 3-DX mobile robot platform.
Example Scenario

Stage-1 Video
Rviz Simulation