I-Ting Angelina Lee
Postdoctoral Associate
Computer Science and Artificial Intelligence Laboratory (CSAIL)
"Memory Abstractions for Parallel Programming"
Tuesday, December 3, 4:00 PM
Packard Lab, Room 466

Abstract: In this talk, I will describe how we augment the virtual memory mechanism to support two kinds of "memory abstractions" that ease the task of parallel programming. A memory abstraction is an abstraction layer between the program execution and the memory that provides a different "view" of a memory location depending on the execution context in which the access is made. We augmented the virtual memory mechanism with thread-local memory mapping (TLMM), a new memory mechanism that designates a region of the process's virtual-address space as "local" to each thread: every thread occupies the same virtual-address range, but that range can be mapped independently to different physical pages. TLMM in turn enables two memory abstractions that ease the task of parallel programming.

The first is the "cactus stack" memory abstraction, an important memory abstraction for dynamic multithreaded concurrency platforms. In a concurrency platform such as Cilk, the runtime system incorporates a "cactus stack" to support multiple stack views for all the active children simultaneously. The use of cactus stacks, albeit essential, forces concurrency platforms to trade off among performance, memory consumption, and interoperability with serial code, because a cactus stack is incompatible with the linear stack. We propose a new strategy for building a cactus stack using TLMM, which allows each active child to have its own view of the linear stack. This cactus stack memory abstraction enables a concurrency platform that employs a work-stealing runtime system to satisfy all three criteria simultaneously.

The second is reducer hyperobjects (or reducers for short), a linguistic mechanism that helps avoid determinacy races in dynamic multithreaded programs. We devised a memory-mapping approach to supporting reducers using TLMM, which yields 4x faster access time compared to the existing approach to implementing reducers.

Bio: I-Ting Angelina Lee is a postdoctoral associate in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, working with Prof. Charles E. Leiserson. She aims to make parallel programming accessible to everyone, so that every programmer, expert or not, can rapidly develop high-performance software that takes advantage of commodity multicore hardware. Her prior research spans linguistics for parallel programming, runtime-system support for multithreaded software, and support for parallel programming abstractions via novel mechanisms in operating systems and hardware. In her Ph.D. thesis, she investigated several memory abstractions that help ease the task of parallel programming. She devised a lightweight hardware mechanism for location-based memory fences, which behave like ordinary memory fences but incur overhead only when synchronization is necessary. She studied ownership-aware transactions, a transactional memory design that incorporates "open nesting" into a transactional memory system in a modular and disciplined fashion, thereby providing provable safety guarantees akin to the notion of "abstract serializability" in databases. She also designed and built JCilk, which investigated exception handling in the context of dynamic multithreading. She received her Bachelor of Science in Computer Science from UC San Diego, where she worked on the Simultaneous Multithreading Simulator for the DEC Alpha under the supervision of Prof. Dean Tullsen.

© 2014-2016 Computer Science and Engineering, P.C. Rossin College of Engineering & Applied Science, Lehigh University, Bethlehem PA 18015.