AI-Driven Adaptive Multiresolution Molecular Simulations on Heterogeneous Computing Platforms

Argonne’s featured DOE booth speaker, Arvind Ramanathan, will present “AI-driven Adaptive Multiresolution Molecular Simulations on Heterogeneous Computing Platforms.” The talk will address how emerging hardware tailored for artificial intelligence (AI) and machine learning (ML) provides a novel means to couple AI and ML methods with traditional high performance computing (HPC) workflows involving molecular dynamics (MD).
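As a rough illustration of the coupling pattern such a workflow involves, the sketch below shows an adaptive outer loop in which lightweight ML-style analysis steers where the next batch of simulations is started. This is a minimal conceptual sketch, not the speaker's workflow: the run_md_segment and score_novelty functions are hypothetical stand-ins for a real MD engine and a learned model.

```python
# Minimal conceptual sketch of an AI-steered adaptive simulation loop.
# run_md_segment and score_novelty are placeholders, not real MD or ML code.
import numpy as np

rng = np.random.default_rng(42)

def run_md_segment(start, steps=100):
    # Stand-in for an MD segment: a random walk from the given starting "state".
    return start + np.cumsum(rng.normal(scale=0.1, size=(steps, start.size)), axis=0)

def score_novelty(frames, seen):
    # Stand-in for an ML model: score frames by their distance from states seen so far.
    return np.min(np.linalg.norm(frames[:, None] - seen[None], axis=-1), axis=1)

seen = np.zeros((1, 3))          # archive of visited states
starts = [np.zeros(3)]           # initial starting state(s)
for _ in range(5):               # adaptive outer loop
    frames = np.concatenate([run_md_segment(s) for s in starts])
    novelty = score_novelty(frames, seen)   # analyze new frames against the archive
    seen = np.concatenate([seen, frames])   # then add them to the archive
    # Restart the next batch of simulations from the most "novel" frames.
    starts = [frames[i] for i in np.argsort(novelty)[-2:]]
```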

ExaWorks: Developing Robust and Scalable Next Generation Workflows, Applications, and Systems

ExaWorks is focused on enabling scientists to take advantage of these trends via a sustainable workflows SDK. This tutorial will present the ExaWorks SDK and its constituent components: Flux, Parsl, RADICAL-Cybertools (RCT), and Swift/T. These components are widely used, highly scalable tools for developing workflow applications. We will outline today’s most common workflow motifs on
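As a point of reference for readers unfamiliar with these tools, the sketch below shows the kind of task-parallel workflow application the SDK components target, written against Parsl's python_app decorator and futures. It is a minimal, self-contained example, not material from the tutorial; the simulate and reduce_results tasks are hypothetical placeholders.

```python
# Minimal Parsl sketch: fan-out a set of independent tasks, then fan-in a reduction.
import parsl
from parsl import python_app
from parsl.configs.local_threads import config  # simple local executor for illustration

parsl.load(config)

@python_app
def simulate(seed):
    # Placeholder task; a real workflow might launch an MD run or analysis job here.
    import random
    random.seed(seed)
    return random.random()

@python_app
def reduce_results(values):
    # Aggregate the outputs of the independent tasks.
    return sum(values) / len(values)

futures = [simulate(i) for i in range(8)]             # fan-out: independent tasks
mean = reduce_results([f.result() for f in futures])  # fan-in: gather and reduce
print(mean.result())
```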

Powering HPC Discoveries through Scientific Software Ecosystems and Communities

Argonne’s Lois Curfman McInnes will present an invited talk entitled “Powering HPC Discoveries through Scientific Software Ecosystems and Communities.” The talk addresses HPC software, a cornerstone of long-term collaboration and scientific progress, and the increasing complexity required in software products as we leverage unprecedented HPC resources, face disruptive changes in computer architectures, and tackle new frontiers.

Great Edge-pectations: How Edge and Exascale Found Love

It is no secret that the world’s largest supercomputers are often quite lonely. Until recently, live-streaming sensor data into massive simulations was more exhibit booth demo than reality. Extreme-scale computational models were isolated, admiring from afar the excitement of edge computing, the Internet of Things, smart cities, autonomous cars and intelligent laboratory experiments. From the

Resilient Error-Bounded Lossy Compressor for Data Transfer

Today’s exascale scientific applications and advanced instruments produce vast volumes of data, which need to be shared or transferred over networks and devices with relatively low bandwidth (e.g., a WAN). Lossy compression is an important strategy for addressing this big-data problem; however, little work has been done to make it resilient against silent errors, which may happen during
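To make the idea of error-bounded lossy compression concrete, the toy sketch below quantizes data so that the reconstruction error stays within a user-set absolute bound, and pairs the payload with a CRC as a simple stand-in for silent-error detection. This is only an illustrative sketch under those assumptions, not the compressor presented in the talk.

```python
# Toy error-bounded lossy compressor: uniform quantization plus a checksum.
import zlib
import numpy as np

def compress(data, abs_err):
    # Quantize to the nearest multiple of 2*abs_err, so reconstruction
    # error is bounded by +/- abs_err (up to floating-point rounding).
    q = np.round(data / (2 * abs_err)).astype(np.int64)
    payload = zlib.compress(q.tobytes())
    checksum = zlib.crc32(payload)  # simple stand-in for corruption detection
    return payload, checksum

def decompress(payload, checksum, abs_err, shape):
    if zlib.crc32(payload) != checksum:
        raise ValueError("silent data corruption detected")
    q = np.frombuffer(zlib.decompress(payload), dtype=np.int64).reshape(shape)
    return q * (2 * abs_err)

data = np.random.rand(1000)
payload, crc = compress(data, abs_err=1e-3)
recon = decompress(payload, crc, abs_err=1e-3, shape=data.shape)
print(np.max(np.abs(recon - data)))  # stays within roughly 1e-3
```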

KAISA: An Adaptive Second-Order Optimizer Framework for Deep Neural Networks

Kronecker-factored Approximate Curvature (K-FAC) has recently been shown to converge faster in deep neural network (DNN) training than stochastic gradient descent (SGD); however, K-FAC’s larger memory footprint hinders its applicability to large models. We present KAISA, a K-FAC-enabled, Adaptable, Improved, and ScAlable second-order optimizer framework that adapts the memory footprint, communication, and computation given specific
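For readers unfamiliar with K-FAC, the sketch below illustrates the core approximation for a single fully connected layer: the layer's Fisher block is treated as the Kronecker product of an input-activation covariance A and an output-gradient covariance G, so the preconditioned gradient is A^{-1} dW G^{-1}. This is a minimal numerical illustration of the K-FAC idea, not the KAISA implementation; the shapes, damping value, and random data are assumptions.

```python
# Minimal numerical sketch of the K-FAC preconditioning step for one dense layer.
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out = 32, 8, 4

a = rng.normal(size=(batch, d_in))    # layer inputs (activations)
g = rng.normal(size=(batch, d_out))   # gradients w.r.t. the layer outputs
dW = a.T @ g / batch                  # loss gradient w.r.t. the weights (d_in x d_out)

damping = 1e-2                        # Tikhonov damping keeps the factors invertible
A = a.T @ a / batch + damping * np.eye(d_in)    # Kronecker factor from inputs
G = g.T @ g / batch + damping * np.eye(d_out)   # Kronecker factor from gradients

# Preconditioned (approximate natural-gradient) update: A^{-1} dW G^{-1}
precond_dW = np.linalg.solve(A, dW) @ np.linalg.inv(G)
print(precond_dW.shape)
```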