
May 4, 2018

PhD Seminar - Wenqi Wang

Event Date: May 4, 2018
Hosted By: Dr. Vaneet Aggarwal
Time: 9:00 - 10:00 AM
Location: GRIS 302
Contact Name: Cheryl Barnhart
Contact Phone: 4-5434
Contact Email: cbarnhar@purdue.edu
Open To: all
Priority: No
School or Program: Industrial Engineering
College Calendar: Show
“Multi-dimensional data analytics and deep learning via tensor networks”

ABSTRACT

With the boom of big data and multi-sensor technology, multi-dimensional data, known as tensors, have demonstrated a promising capability to capture multi-dimensional correlations by efficiently extracting latent structures, and have drawn considerable attention in disciplines such as image processing, recommender systems, and data analytics. In addition to the multi-dimensional nature of real data, artificially designed tensors, referred to as layers in deep neural networks, have also been intensively investigated and have achieved state-of-the-art performance in image processing, speech processing, and natural language understanding. However, algorithms that operate on multi-dimensional data are generally expensive in computation and storage, which limits their application when computational resources are constrained. Although tensor factorization has been proposed to reduce dimensionality and alleviate the computational cost, the trade-off among computation, storage, and performance has never been well studied.

To this end, we first investigate an efficient dimensionality reduction method based on a novel Tensor Train (TT) factorization. Specifically, we propose Tensor Train Principal Component Analysis (TT-PCA) and Tensor Train Neighborhood Preserving Embedding (TT-NPE), which project data onto a Tensor Train Subspace (TTS) and effectively extract discriminative features from the data. Mathematical analysis and simulation demonstrate that TT-PCA and TT-NPE achieve a better trade-off among computation, storage, and performance than benchmark tensor-based dimensionality reduction approaches. We then extend TT factorization to the more general Tensor Ring (TR) factorization and propose a tensor ring completion algorithm, which can recover a gunshot video from 10% randomly observed pixels at an error rate of only 6.25%. Inspired by this trade-off between model complexity and data representation, we introduce Tensor Ring Nets (TRN) to significantly compress deep neural networks. Using the state-of-the-art 28-layer WideResNet architecture, TRN compresses the network by 243x with only 2.3% degradation in CIFAR-10 image classification accuracy.
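For reference, the sketch below (Python with NumPy, not code from the talk) illustrates the standard TT-SVD procedure that underlies Tensor Train factorization: a d-way tensor is decomposed into a chain of 3-way cores by sequential truncated SVDs. The function names and the fixed rank cap are illustrative assumptions; the abstract does not specify the algorithms behind TT-PCA, TT-NPE, or TRN.

import numpy as np

# Minimal TT-SVD sketch: decompose a d-way tensor into cores G_k of shape
# (r_{k-1}, n_k, r_k) by sequential truncated SVDs.
def tt_svd(tensor, max_rank):
    dims = tensor.shape
    cores = []
    rank_prev = 1
    unfolding = tensor.reshape(rank_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(unfolding, full_matrices=False)
        rank = min(max_rank, len(S))                       # cap the TT rank
        U, S, Vt = U[:, :rank], S[:rank], Vt[:rank, :]
        cores.append(U.reshape(rank_prev, dims[k], rank))  # k-th TT core
        # Carry the residual factor forward and fold in the next mode.
        unfolding = (np.diag(S) @ Vt).reshape(rank * dims[k + 1], -1)
        rank_prev = rank
    cores.append(unfolding.reshape(rank_prev, dims[-1], 1))
    return cores

# Contract the cores back into a full tensor to check the approximation error.
def tt_reconstruct(cores):
    result = cores[0]                                      # shape (1, n_1, r_1)
    for core in cores[1:]:
        result = np.tensordot(result, core, axes=([-1], [0]))
    return np.squeeze(result, axis=(0, -1))

# Example: a random 8x8x8x8 tensor compressed with TT ranks capped at 4.
A = np.random.rand(8, 8, 8, 8)
cores = tt_svd(A, max_rank=4)
rel_err = np.linalg.norm(tt_reconstruct(cores) - A) / np.linalg.norm(A)

Storing the cores instead of the full tensor is what drives the computation/storage savings discussed in the talk; the Tensor Ring factorization closes the chain of cores into a loop.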