Data X Seminar Series

Jeffrey Hittinger


Computational Scientist
Center for Applied Scientific Computing
Lawrence Livermore National Laboratory

"Making Every Bit Count: Variable Precision?"

Thursday, October 19, 4:00 PM
Packard Lab 466

Abstract:   Decades ago, when memory was a scarce resource, computational scientists routinely worked in single precision and were more sophisticated in dealing with the pitfalls of finite-precision arithmetic. Today, however, we typically compute and store results in 64-bit double precision by default, even when very few significant digits are required. Many of these bits represent errors – truncation, iteration, roundoff – rather than useful information about the solution. This over-allocation of resources wastes power, bandwidth, storage, and FLOPs; we communicate and compute on many meaningless bits and do not take full advantage of the computer hardware we purchase.
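As a toy illustration of this point (an assumed example, not taken from the talk): a first-order finite-difference derivative carries truncation error of order h, so only a handful of the 15-16 decimal digits stored in a 64-bit double are actually meaningful.

```python
import math

# Forward-difference derivative of sin(x): truncation error is ~ (h/2)|sin(x)|,
# so despite storing 52 significand bits, only ~4-5 decimal digits carry
# information about the true derivative.
x, h = 1.0, 1e-4
approx = (math.sin(x + h) - math.sin(x)) / h   # held in full double precision
exact = math.cos(x)
rel_err = abs(approx - exact) / abs(exact)
print(f"approx  = {approx:.16f}")
print(f"exact   = {exact:.16f}")
print(f"rel err = {rel_err:.1e}")   # roughly 1e-4: most stored bits are "error bits"
```

Every bit below the truncation-error level is exactly the kind of meaningless-but-transported data the abstract describes.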

Because of the growing disparity between FLOP rates and memory bandwidth in modern computer systems, and the rise of general-purpose GPU computing – which offers higher peak performance in single precision – there has been renewed interest in mixed-precision computing, in which tasks that can be accomplished in single precision are identified and combined with double-precision computation. Such static optimizations reduce data movement and FLOPs, but their implementations are time consuming and difficult to maintain, particularly across computing platforms. Task-based mixed precision would be more common if there were tools to simplify development, maintenance, and debugging. But why stop there? When simulating, we often adapt mesh size, order, and models to focus the greatest effort only where it is needed. Why not do the same with precision?
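A classic instance of task-based mixed precision is iterative refinement: factor and solve in cheap single precision, but compute residuals and accumulate corrections in double. The sketch below (a minimal illustration assumed for this announcement, not code from the talk) shows the idea with NumPy.

```python
import numpy as np

# Mixed-precision iterative refinement: the expensive solves run in float32,
# while residuals and the accumulated solution stay in float64.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

A32 = A.astype(np.float32)
# Initial solve entirely in single precision (~7 correct digits at best).
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

for _ in range(3):
    r = b - A @ x                                        # residual in double
    dx = np.linalg.solve(A32, r.astype(np.float32))      # correction in single
    x += dx.astype(np.float64)                           # accumulate in double

rel_resid = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
print(f"relative residual: {rel_resid:.1e}")  # near double-precision accuracy
```

Here the data movement and arithmetic of the dominant cost (the solves) happen on half-width operands, yet the final answer recovers double-precision accuracy; the maintenance burden of hand-placing such casts across a large code is precisely the tooling gap the abstract identifies.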

At LLNL, we are developing the methods and tools that will enable the routine use of dynamically adjustable precision, at a per-bit level, depending on the needs of the task at hand. Just as adaptive mesh refinement frameworks adapt spatial grid resolution to the needs of the underlying solution, our goal is to provide more or less precision as needed locally. Acceptance from the community will require that we address three concerns: ensuring accuracy, ensuring efficiency, and ensuring ease of use in development, debugging, and application. In this talk, I will discuss the benefits and the challenges of variable precision computing, highlighting aspects of our ongoing research in data representations, numerical algorithms, and testing and development tools.
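To make "per-bit" adjustable precision concrete, the hypothetical helper below (an illustration assumed for this announcement, not one of LLNL's tools) truncates a double's 52-bit significand to a chosen number of bits, trading accuracy for the possibility of storing or transmitting fewer bits.

```python
import struct

def truncate_precision(value, keep_bits):
    """Zero all but the top `keep_bits` bits of a double's 52-bit significand.

    A crude stand-in for adjustable-precision storage: fewer kept bits
    means less information retained, analogous to coarsening a mesh.
    """
    bits = struct.unpack("<Q", struct.pack("<d", value))[0]
    mask = ~((1 << (52 - keep_bits)) - 1) & 0xFFFFFFFFFFFFFFFF
    return struct.unpack("<d", struct.pack("<Q", bits & mask))[0]

pi = 3.141592653589793
for k in (8, 16, 32):
    approx = truncate_precision(pi, k)
    print(f"{k:2d} significand bits: {approx!r}  (abs err {abs(approx - pi):.1e})")
```

The error shrinks by roughly a factor of two per extra significand bit, which is the knob a variable-precision representation would tune locally against the accuracy actually required.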

Bio:   Dr. Jeffrey Hittinger is a computational scientist in the Center for Applied Scientific Computing (CASC) at Lawrence Livermore National Laboratory, where he currently serves as Acting Deputy Director of CASC and leader of the Scientific Computing Group. At Livermore, he also leads a large interdisciplinary Strategic Initiative project on Variable Precision Computing. Dr. Hittinger has been actively involved in the Department of Energy (DOE) planning for exascale computing and co-chaired the working group that produced the Applied Mathematics Research for Exascale Computing community report for the DOE Office of Science Advanced Scientific Computing Research program. His current research interests include high-order numerical methods for hyperbolic systems, computational plasma physics, high-performance parallel computing, a posteriori error estimation, and code and solution verification. Dr. Hittinger earned his Ph.D. in Aerospace Engineering and Scientific Computing from the University of Michigan, where he also earned master's degrees in Applied Mathematics and in Aerospace Engineering. He is a graduate of Lehigh University, with a bachelor's degree in Mechanical Engineering.

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
