Sangpil Kim

Sangpil Kim is a Ph.D. student in the School of Computer Engineering at Purdue University. He works on deep learning algorithms and virtual reality; more specifically, he develops generative models, video segmentation, and hand pose estimation with a depth sensor. Currently, he is working on combining virtual reality with deep learning algorithms.
Object Synthesis by Learning Part Geometry with Surface and Volumetric Representations

Sangpil Kim, Hyung-gun Chi, Karthik Ramani
Computer-Aided Design, Volume 130, January 2021

Abstract: We propose a conditional generative model, named Part Geometry Network (PG-Net), which synthesizes realistic objects and can be used as a robust feature descriptor for object reconstruction and classification. Surface and volumetric representations capture complementary properties of three-dimensional objects, and combining these modalities is more informative than using either one alone. Therefore, PG-Net exploits the complementary properties of surface and volumetric representations by estimating curvature, surface area, and occupancy in voxel grids of objects with a single...
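
To make the multi-representation idea concrete, the following is a minimal PyTorch-style sketch of a conditional decoder with separate heads for voxel occupancy, surface area, and curvature. The module name, layer sizes, and voxel resolution are illustrative assumptions, not the PG-Net architecture.

```python
# Illustrative sketch only: a conditional decoder with multi-task heads for
# voxel occupancy, surface area, and curvature.  Names and layer sizes are
# assumptions, not the PG-Net implementation.
import torch
import torch.nn as nn

class MultiRepresentationDecoder(nn.Module):
    def __init__(self, latent_dim=128, num_parts=8):
        super().__init__()
        # Condition the latent code on a one-hot part/class label.
        self.fc = nn.Sequential(
            nn.Linear(latent_dim + num_parts, 256), nn.ReLU(),
            nn.Linear(256, 256 * 4 * 4 * 4), nn.ReLU(),
        )
        # Volumetric head: upsample a 4^3 feature grid to a 32^3 occupancy grid.
        self.occupancy_head = nn.Sequential(
            nn.ConvTranspose3d(256, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(64, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Surface heads: scalar estimates of surface area and mean curvature.
        self.surface_area_head = nn.Linear(256 * 4 * 4 * 4, 1)
        self.curvature_head = nn.Linear(256 * 4 * 4 * 4, 1)

    def forward(self, z, part_label):
        # z: (B, latent_dim); part_label: (B, num_parts) one-hot condition.
        h = self.fc(torch.cat([z, part_label], dim=1))
        occupancy = self.occupancy_head(h.view(-1, 256, 4, 4, 4))
        return occupancy, self.surface_area_head(h), self.curvature_head(h)
```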

First-Person View Hand Segmentation of Multi-Modal Hand Activity Video Dataset

Sangpil Kim, Hyung-gun Chi, Xiao Hu, Anirudh Vegesana, Karthik Ramani
In Proceedings of the 31st British Machine Vision Conference (BMVC)

Abstract: First-person-view videos of hands interacting with tools are widely used in the computer vision industry. However, creating a dataset with pixel-wise hand segmentation is challenging, since in most videos the fingertips are occluded by the hand dorsum and the grasped tools. Current methods often rely on manually segmenting hands to create annotations, which is inefficient and costly. To address this challenge, we develop a method that uses thermal information from the hands for efficient pixel-wise hand segmentation, and use it to create a multi-modal activity video dataset. Our method is...
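
As a rough illustration of how an aligned thermal channel can drive pixel-wise hand masks, the sketch below thresholds a per-pixel temperature map around skin temperature and cleans the result morphologically. The temperature band, kernel size, and function name are assumptions for illustration; the dataset's actual pipeline is more involved.

```python
# Illustrative sketch only: deriving a rough hand mask from an aligned thermal
# frame by thresholding around skin temperature.  The temperature band and the
# morphological cleanup are assumptions, not the paper's pipeline.
import numpy as np
import cv2

def thermal_hand_mask(thermal_celsius: np.ndarray,
                      low: float = 30.0, high: float = 37.5) -> np.ndarray:
    """Return a binary hand mask from a per-pixel temperature map (degrees C)."""
    mask = ((thermal_celsius >= low) & (thermal_celsius <= high)).astype(np.uint8) * 255
    # Remove small speckles and fill small holes in the mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```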

A Large-scale Annotated Mechanical Components Benchmark for Classification and Retrieval Tasks with Deep Neural Networks

Sangpil Kim*, Hyung-gun Chi*, Xiao Hu, Qixing Huang, Karthik Ramani
In Proceedings of the 16th European Conference on Computer Vision (ECCV)

Abstract: We introduce a large-scale annotated benchmark for classification and retrieval tasks on mechanical components, named the Mechanical Components Benchmark (MCB): a large-scale dataset of 3D models of mechanical components. The dataset enables data-driven feature learning for mechanical components. Exploring shape descriptors for mechanical components is essential to computer vision and manufacturing applications. However, little attention has been given to creating large-scale annotated datasets of mechanical components. This is because acquiring 3D models is challenging and annotating...

Latent transformations neural network for object view synthesis

Sangpil Kim, Nick Winovich, Hyung-Gun Chi, Guang Lin, Karthik Ramani
The Visual Computer (2019): 1-15.

Abstract: We propose a fully convolutional conditional generative neural network, the latent transformation neural network, capable of rigid and non-rigid object view synthesis using a lightweight architecture suited for real-time applications and embedded systems. In contrast to existing object view synthesis methods, which incorporate conditioning information via concatenation, we introduce a dedicated network component, the conditional transformation unit. This unit is designed to learn the latent-space transformations corresponding to specified target views. In addition, a consistency loss term is...
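
To make the idea of a conditional transformation unit concrete, here is a minimal PyTorch-style sketch in which a one-hot target-view code selects a learned channel-wise modulation of the latent feature map. This is one plausible instantiation (a FiLM-style scale and shift), not the unit used in the paper; all names and dimensions are illustrative.

```python
# Illustrative sketch only: a conditional transformation unit that modulates a
# latent feature map according to a target-view code (FiLM-style scale/shift).
# This is one possible instantiation, not the paper's implementation.
import torch
import torch.nn as nn

class ConditionalTransformationUnit(nn.Module):
    def __init__(self, channels=256, num_views=12):
        super().__init__()
        # Per-view scale and shift applied channel-wise to the latent features.
        self.scale = nn.Linear(num_views, channels)
        self.shift = nn.Linear(num_views, channels)

    def forward(self, latent, view_onehot):
        # latent: (B, C, H, W) encoder features; view_onehot: (B, num_views)
        gamma = self.scale(view_onehot).unsqueeze(-1).unsqueeze(-1)
        beta = self.shift(view_onehot).unsqueeze(-1).unsqueeze(-1)
        return latent * gamma + beta
```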
