Abstract

My research focuses on building systems with strong theoretical foundations, spanning the design, analysis, and implementation of algorithms and systems.

A major emphasis of my work is on processing-near-memory (PNM) architectures. PNM offers an alternative computing paradigm that reduces data movement by integrating lightweight compute units close to memory. To support large memory capacities, practical systems often incorporate multiple memory modules, resulting in a distributed architecture with a central processor and numerous memory-side cores. While extensive research exists on hardware implementations of PNM, few studies address these systems from a theoretical perspective. To bridge this gap, we propose a computational model for PNM systems and demonstrate that significant reductions in data movement can be achieved through careful data placement and task scheduling. We have designed several indexing algorithms tailored for PNM and implemented them on UPMEM’s commercial hardware. The results confirm that our approach effectively reduces data movement, both theoretically and empirically. I am also interested in applying PNM techniques to address memory-related challenges in other systems, as well as in deriving fundamental principles for the design of PNM algorithms.
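To make the placement-and-scheduling idea concrete, here is a minimal, hypothetical Python sketch; it is not the indexing algorithms or the UPMEM implementation described above. Keys are hash-partitioned across simulated memory-side modules so that a lookup ships only the key and the result across the host-module link rather than the underlying data, and a per-module counter tracks that traffic.

    # Hypothetical sketch: hash-based data placement across memory-side modules.
    NUM_MODULES = 4

    class MemoryModule:
        """Simulated memory-side core holding its local shard of key-value pairs."""
        def __init__(self):
            self.shard = {}
            self.bytes_moved = 0  # traffic over the host<->module link

        def insert(self, key, value):
            self.shard[key] = value

        def lookup(self, key):
            # The lookup runs next to the data; only the key and the result
            # cross the link (the sizes below are assumed for illustration).
            self.bytes_moved += 8 + 8
            return self.shard.get(key)

    def owner(key):
        # Placement policy: a hash of the key decides which module owns it.
        return hash(key) % NUM_MODULES

    modules = [MemoryModule() for _ in range(NUM_MODULES)]

    def pnm_insert(key, value):
        modules[owner(key)].insert(key, value)

    def pnm_lookup(key):
        # Scheduling policy: route each request to the module that owns the key.
        return modules[owner(key)].lookup(key)

    if __name__ == "__main__":
        for k in range(1000):
            pnm_insert(k, k * k)
        assert pnm_lookup(42) == 1764
        print([m.bytes_moved for m in modules])  # link traffic per module

With the data placed this way, the host never fetches a shard; a poor placement or scheduling policy would instead force data to move to wherever the computation happens to run.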

Additionally, I work on learned indexes, an emerging research topic that uses instance-optimized models to represent datasets, thereby enabling efficient operations on data drawn from easy-to-learn distributions. My goal is to design learned indexes that are robust to skew and support incremental updates. As a first step, we have developed a new structure that greatly reduces model complexity on simple datasets by improving how noise is handled. I am open to collaborations on developing updatable learned index structures, as well as on exploring other forms of instance-optimized algorithms.
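The sketch below illustrates only the general learned-index idea, not the new structure mentioned above: a single linear model (a deliberately crude stand-in for the instance-optimized models used in practice) approximates the key-to-position mapping of a sorted array, and its maximum prediction error bounds the local search that corrects each estimate.

    # Hypothetical sketch of a learned index: a linear model predicts positions
    # in a sorted array, and a bounded search fixes up the prediction error.
    import bisect

    class LinearLearnedIndex:
        def __init__(self, sorted_keys):
            self.keys = sorted_keys
            n = len(sorted_keys)
            lo, hi = sorted_keys[0], sorted_keys[-1]
            # Fit position ~ slope * key + intercept from the endpoints; real
            # learned indexes use least squares or piecewise segments instead.
            self.slope = (n - 1) / (hi - lo) if hi != lo else 0.0
            self.intercept = -self.slope * lo
            # The worst-case prediction error bounds the correction window.
            self.err = max(abs(self._predict(k) - i) for i, k in enumerate(sorted_keys))

        def _predict(self, key):
            return int(self.slope * key + self.intercept)

        def lookup(self, key):
            # Search only within [pred - err, pred + err] instead of the whole array.
            pred = self._predict(key)
            lo = max(0, pred - self.err)
            hi = min(len(self.keys), pred + self.err + 1)
            i = bisect.bisect_left(self.keys, key, lo, hi)
            return i if i < len(self.keys) and self.keys[i] == key else None

    if __name__ == "__main__":
        keys = list(range(0, 10000, 7))  # an easy-to-learn (uniform) key distribution
        index = LinearLearnedIndex(keys)
        print(index.err, index.lookup(700), index.lookup(701))

On an easy distribution like this the error bound, and hence the model, stays small; skew or noise inflates it, which is precisely the robustness problem described above.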