Purdue Computer Architecture
This is an exciting time to be a computer architect!
The decades-long transistor scaling that enabled the meteoric rise of the
microprocessor industry is slowing down and no longer delivers exponential increases in performance and efficiency "for free".
The coming decades will challenge computer architects to create innovative designs
that can execute the ever-increasing and diverse computing demands of society in the most efficient way possible.
Here at Purdue, we have a long history of strong presence in the top architecture and systems venues.
We have authored some of the earliest and most-cited papers on cache leakage,
low-power architectures, fault tolerance, and multicore cache hierarchies.
More recently, we have made cache coherence both provably verifiable
(a decades-old problem) and scalable in performance.
Our interests are also broad: On the wild side, we have defined architectures
for programmable microfluidics, where programs operate on fluids instead of values!
(This work gave rise to a start-up, Microfluidic Innovations.)
We have also contributed a breakthrough in Internet router hardware for packet classification,
another decades-old problem (EffiCuts). Embracing Cloud and datacenter-scale computing,
we have well-received papers on the datacenter network transport layer and on MapReduce.
We are now looking at architectures for machine learning and Big Data, processing near memory by leveraging 3-D stacking, architectural
options beyond Moore's Law, and architectural support for datacenter networks
and cloud computing.
These are exciting times - there are so many fun, high-impact problems to solve.
Talk to us if you wish to join us!
Tim's research is generally focused on hardware architectures and software systems that improve performance, energy efficiency, and programmer productivity. Tim is specifically interested in general-purpose hardware accelerators like GPUs, heterogeneous architectures for machine learning, memory-system performance, and energy-efficient full-stack solutions for a future where problems continue to scale at alarming rates but transistors do not.
Mithuna's research broadly spans computer architecture and (more recently) distributed systems, with a focus on interconnection networks in multicores and high-performance computers, storage performance modeling and optimization, architectures for machine learning, multicore memory hierarchies, and the storage and memory-caching tiers of distributed stores.
Vijay's current emphasis is on processing near memory, data-parallel architectures for machine learning, architectural options beyond Moore's Law, and architectural support for datacenter networks. He is also interested in coherence and consistency in geo-distributed systems.