My research interests lie in the areas of programming languages, compilers and systems. Specifically, I am interested in various language features, compiler techniques and run-time system support that will be necessary to unlock the potential of emerging, complex computation platforms such as multicore processors, heterogeneous architectures, sensor networks and distributed systems.

Some of my research is supported by gifts from Intel Corporation.

I am currently involved in several projects, both with my students and with collaborators at Purdue and elsewhere (completed and dormant projects are described here):

Automatically optimizing irregular applications

This project develops frameworks to automatically analyze, transform and tune irregular applications, which operate over pointer-based data structures, to improve their locality, parallelism and performance. While irregular applications seemingly have little in common, this project is premised on the insight that, at higher levels of abstraction, they exhibit common behaviors that can be exploited to develop automatic, performance-enhancing transformations.
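
To make the term concrete, the sketch below (in Python, with made-up names; it is an illustration, not code from the project) shows the kind of pointer-based, data-dependent computation meant by "irregular": a worklist-driven shortest-path relaxation whose memory access pattern is determined by the input graph rather than by the loop structure, which is what makes locality and parallelism hard for a conventional compiler to uncover.

    # Hypothetical sketch of an "irregular" computation: a worklist-based
    # shortest-path relaxation over a pointer-linked graph. The access
    # pattern depends entirely on the input data.
    from collections import deque

    class Node:
        def __init__(self, name):
            self.name = name
            self.edges = []          # list of (neighbor Node, weight) pairs
            self.dist = float("inf")

    def shortest_paths(source):
        source.dist = 0
        worklist = deque([source])
        while worklist:              # work and access order are data-dependent
            n = worklist.popleft()
            for neighbor, weight in n.edges:
                if n.dist + weight < neighbor.dist:
                    neighbor.dist = n.dist + weight
                    worklist.append(neighbor)

    # Tiny example graph: a -> b -> c, plus a direct edge a -> c.
    a, b, c = Node("a"), Node("b"), Node("c")
    a.edges = [(b, 1), (c, 5)]
    b.edges = [(c, 1)]
    shortest_paths(a)
    print(c.dist)                    # 2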

Optimizing computational science applications by exploiting semantics

This project develops techniques to optimize computational science applications, such as computational mechanics solvers, by exploiting domain semantics. Rather than building ad hoc domain-specific languages (DSLs) for each domain, the key insight of this project is to leverage the semantics already captured by the domain libraries scientists use to write their applications. We envision a generic compiler and run-time infrastructure that uses these libraries to provide, in effect, domain-specific optimizations. The main website for this project is here.
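
As a rough, hypothetical illustration of the idea (the operation names and the rewrite rule are invented, not the project's actual interface), a domain library can declare semantic facts about its operations, and a generic pass can apply them without knowing anything about the domain:

    # A "program" is a sequence of library calls applied to some field.
    program = [("scale", 2.0), ("scale", 3.0), ("shift", 1.0)]

    # Semantic knowledge supplied by the library: consecutive scale
    # operations compose by multiplying their factors, so a generic
    # optimizer may fuse them.
    def fuse_scales(prog):
        out = []
        for op, arg in prog:
            if out and op == "scale" and out[-1][0] == "scale":
                out[-1] = ("scale", out[-1][1] * arg)   # fuse the two calls
            else:
                out.append((op, arg))
        return out

    print(fuse_scales(program))   # [('scale', 6.0), ('shift', 1.0)]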

Elastic applications for distributed and cloud computing

This project aims to develop programming models that will allow programmers to deploy elastic applications to cloud and distributed systems. Elastic applications can adjust their execution to adapt to changing resources (e.g., automatically launching additional processing tasks to take advantage of available computing resources), making them ideally suited to cloud execution environments where the availability and characteristics of resources are dynamic and unpredictable. This work builds on the Mace project.
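
The following sketch illustrates elasticity in miniature (it is an assumption-laden illustration, not the project's programming model): the application re-sizes its own worker pool as the resources reported by the platform change, where available_workers() stands in for whatever resource information a real cloud environment exposes.

    from concurrent.futures import ThreadPoolExecutor
    import random

    def available_workers():
        # Stand-in for querying the environment; here it just fluctuates.
        return random.randint(1, 8)

    def process(item):
        return item * item

    def elastic_map(items, batch=16):
        results = []
        while items:
            chunk, items = items[:batch], items[batch:]
            workers = available_workers()      # adapt to current resources
            with ThreadPoolExecutor(max_workers=workers) as pool:
                results.extend(pool.map(process, chunk))
        return results

    print(sum(elastic_map(list(range(100)))))  # 328350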

Students

  • Wei-Chiu Chuang (advised by Charles Killian)

Detecting and diagnosing bugs in large-scale distributed systems

This project develops statistical techniques to detect and diagnose bugs in large-scale distributed systems. The standard approach to such detection is to use "ground truth" profiling runs that are known to be bug free to build a model of normal behavior, and then to look for bugs as deviations from that model. This approach breaks down at large scales for two reasons. First, developers may not have access to production-scale systems when building models; second, it may be impossible to collect verifiably bug-free runs at large scales. The goal of this work is to infer the scaling properties of a program's behavior and use them to build models of large-scale behavior from bug-free runs at smaller scales.
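
A minimal sketch of the flavor of this idea (the metric, the linear model and the threshold are illustrative assumptions, not the project's actual technique): fit how a behavioral metric grows with system size using small, trusted runs, extrapolate to the large scale, and flag runs that deviate too far from the prediction.

    def fit_line(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
                sum((x - mx) ** 2 for x in xs)
        return slope, my - slope * mx

    # Trusted, bug-free runs at small scales: (number of nodes, messages per node).
    scales = [4, 8, 16, 32]
    metric = [10.1, 20.3, 39.8, 80.2]          # roughly linear in scale

    slope, intercept = fit_line(scales, metric)

    def check(run_scale, observed, tolerance=0.25):
        predicted = slope * run_scale + intercept
        deviation = abs(observed - predicted) / predicted
        return "suspicious" if deviation > tolerance else "ok"

    print(check(1024, 2570.0))   # close to the extrapolated trend -> "ok"
    print(check(1024, 9000.0))   # far off the trend -> "suspicious"

In practice such a model would be built per metric and would have to capture far richer (and often non-linear) scaling behavior than a single straight line.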

Effective computation offloading

A promising strategy for writing applications intended to run on resource-limited devices such as mobile phones is to offload computation from those devices to cloud-computing services. This project investigates approaches that let programmers write their programs in a unified style, without considering offloading, and rely on compiler and run-time support to automatically decide how to partition the application between the mobile device and the cloud. In particular, we are exploring ways to enable more sophisticated offloading by considering multiple offloading sites and multiple offloading granularities.
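
To give a feel for the kind of decision involved (the cost model, speedup, bandwidth and workload numbers below are all made-up assumptions, not the project's actual policy), each candidate partition point can be scored by comparing estimated local execution time against remote compute time plus data-transfer time:

    def remote_cost(compute_ms, transfer_bytes, speedup=8.0,
                    bandwidth_bps=2e6, rtt_ms=60.0):
        transfer_ms = transfer_bytes * 8 / bandwidth_bps * 1000
        return compute_ms / speedup + transfer_ms + rtt_ms

    def decide(site):
        name, local_ms, nbytes = site
        r = remote_cost(local_ms, nbytes)
        return name, ("offload" if r < local_ms else "local"), round(r, 1)

    # Candidate offloading sites at different granularities:
    # (name, estimated local compute time in ms, bytes to ship).
    sites = [
        ("parse_frame",    5.0, 200_000),   # cheap, data-heavy -> keep local
        ("run_inference", 900.0, 50_000),   # heavy, little data -> offload
    ]
    for s in sites:
        print(decide(s))

Considering multiple sites and granularities at once makes this a partitioning problem rather than a series of independent yes/no choices, which is part of what makes the general problem interesting.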
