Our Research
Advancing VLSI design, computer architecture, and hardware for AI systems
Generative AI Architectures & Efficiency
We explore computer architecture and system-level techniques to enable efficient execution of Generative AI workloads. Our research targets the dominant performance and energy bottlenecks of these models, particularly in computation and memory bandwidth. By exploiting models' resilience to approximation, specialization, decomposition, and compression, we design architectures optimized for efficiency without sacrificing application-level quality.

Accelerating AI via General-Purpose Approximated Processor (MSc)
Hardware-Aware Approximation in Systolic Arrays for High-Throughput Generative AI Inference (MSc)
Value Locality in Specialized Vision Transformers (MSc)
Architectural and Microarchitectural Innovations for Efficient Inference of Language Models (MSc)
Architectural and Microarchitectural Innovations for Efficient Inference of Small and Medium-Sized Language Models (MSc)

Hardware Reliability & Aging
Our research in hardware reliability focuses on understanding, predicting, and mitigating long-term degradation effects in modern digital systems. We study how aging mechanisms and power integrity phenomena evolve under real workloads, with an emphasis on chip architecture, runtime behavior, and in-field operation. We leverage data-driven techniques to enable proactive, adaptive, reliability-aware design.

Asymmetric Transistor Aging in FPGAs (PhD)
Dynamic Voltage Drop (MSc)
Aging Cyber Attack (MSc)

Chips-on-Plastic & Flexible Integrated Circuits

Edge AI on Flexible Electronics (MSc)
Enabling Generative AI on Extreme Edge Devices (MSc)

Join the Future of Hardware
Interested in collaborating with us or joining our lab? We welcome motivated students and researchers.