As fans of this publication will already know, AI is everywhere. We hear about it in the news, at work, and in our daily lives. It’s such a revolutionary technology that even established events focusing on HPC specifically are broadening their scope to include AI.
This is certainly the case for the 85th HPC/AI User Forum, held at Argonne National Laboratory and hosted by Hyperion Research. This twice-yearly event used to be called simply the HPC User Forum, but it has expanded its purview to reflect the enormous contributions AI has made to scientific research.
While it would be impossible to summarize everything that happened at this exciting event, I jotted down notes during the presentations and put together a quick wrap-up showcasing some of my favorite talks.
Rick Stevens
As you might imagine if you follow Argonne’s computing research, Rick Stevens was the headliner for the HPC/AI User Forum. He started out with a talk that summarized Argonne’s history in scientific research, specifically mentioning that the laboratory began with the goal of developing peaceful uses for nuclear power.
After a quick history lesson, Stevens expanded his talk into the global techno-economic landscape surrounding AI. He stated that AI is rapidly becoming the “dominant driver/signal of techno-economic progress and competition in the next decade.” He mentioned AI’s pervasiveness and the intense competition in AI among Western democracies, semi-aligned states, and adversaries.
A slide from Stevens’s presentation discussing Six Conceptual Clusters. Credit: Argonne National Laboratory
He pointed out that the U.S. no longer leads in all science metrics, displaying a graph showing that the U.S. accounted for 16.54% of the global share of science and engineering articles published in peer-reviewed journals in 2018. China, on the other hand, took the top spot with 20.67%.
Stevens also underscored just how much AI has improved. He played a hip-hop song made by MusicLM 18 months ago and one made by Suno recently. As a rap fan myself, I can tell you the advancement made in just those 18 months was absolutely astounding. What’s more, Stevens stated that he uses AI in his daily life.
“I do almost no coding these days without using AI as a helper,” Stevens said.
Stevens also made a point to mention the areas that Argonne will be focusing on in AI development in the years to come:
AI for science
Autonomous discovery
Coherent x-ray science
Imaging and detection of signatures
Climate actions
Carbon management
Circular economy
In his second talk at the conference, Stevens described the work planned for the Trillion Parameter Consortium (TPC). This will be an international collaboration of scientists from federal laboratories, research institutes, academia, and industry, formed with the goal of creating large-scale generative AI models for scientific discovery. The collaboration hopes to pool these resources to build trillion-parameter AI models.
As of now, the TPC has three major goals:
Build an open community of researchers interested in creating state-of-the-art large-scale generative AI models, aimed broadly at advancing progress on scientific and engineering problems by sharing methods, approaches, tools, insights, and workflows.
Incubate, launch, and coordinate projects to build specific models at specific sites, avoiding unnecessary duplication of effort and maximizing the projects’ impact in the broader AI and scientific community. Where possible, the TPC will work out what its members can do together for maximum leverage versus what needs to be done in smaller groups.
Create a global network of resources and expertise that can help facilitate teaming and training the next generation of AI and related researchers interested in the development and use of large-scale AI in advancing science and engineering.
Stevens was also quick to mention that this organization isn’t about sitting around and talking about the AI revolution. He expects collaborative work to get done.
“The TPC itself is not building a model,” Stevens said. “What it’s trying to do… is build an open community and incubate projects… It’s not a spectator organization, it’s a participant organization.”
Arvind Ramanathan
Another fantastic presentation came from Arvind Ramanathan, a computational biologist and Computational Science Leader at Argonne whom we’ve had our eye on for quite some time. His presentation at the forum discussed the opportunities AI presents within the field of medicine.
As he discussed, the current paradigm for discovery is inefficient, time-consuming, labor-intensive, and costly. There is a reproducibility crisis within medical research, compounded by data sets that are clustered, sparse, hard to interpret, and incomplete.
Therefore, Argonne has set its sights on using AI and robotics to enable autonomous discovery. The vision now is a system that starts with a high-level description of a hypothesis and autonomously carries out computational and experimental workflows to confirm or reject that hypothesis.
The AI for Medicine work performed in the GenSLMs project won the ACM Gordon Bell Special Prize at SC22. Credit: Argonne National Laboratory
AI, in combination with robotic equipment, can help close the loop on planning, executing, and analyzing experiments.
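To make that loop concrete, here is a minimal sketch of what a plan-execute-analyze cycle can look like in code. To be clear, this is my own illustration, not Argonne’s software: the names (propose, run_experiment, closed_loop) are hypothetical placeholders, and a real system would drive actual robotic instruments with a far more sophisticated AI planner.

```python
import random

def run_experiment(candidate: float) -> float:
    # Stand-in for a robotic lab measurement: a noisy score that
    # peaks at an unknown optimum (0.7 here).
    return -(candidate - 0.7) ** 2 + random.gauss(0, 0.01)

def propose(history: list[tuple[float, float]]) -> float:
    # Stand-in planner: perturb the best candidate seen so far.
    # A real system would plan with an AI surrogate model instead.
    if not history:
        return random.random()
    best, _ = max(history, key=lambda h: h[1])
    return min(max(best + random.gauss(0, 0.1), 0.0), 1.0)

def closed_loop(budget: int = 50) -> tuple[float, float]:
    history: list[tuple[float, float]] = []
    for _ in range(budget):
        candidate = propose(history)        # plan
        score = run_experiment(candidate)   # execute
        history.append((candidate, score))  # analyze / update
    return max(history, key=lambda h: h[1])

print(closed_loop())  # best (candidate, score) found within the budget
```

The point of the pattern is that each round’s results feed the next round’s plan, so the system spends its experiment budget where the model expects the most payoff.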
Specifically, Ramanathan mentioned work Argonne scientists are performing on antimicrobial peptides (AMPs). AMPs are short chains of amino acids, typically ranging from 10 to 50 amino acids in length, that can target and kill viruses, bacteria, fungi, and other pathogens.
He pointed out that a traditional, exhaustive approach to protein synthesis is simply infeasible. There are 20 possible amino acids, so even a peptide just 20 residues long has 20^20 possible sequences – an astronomically large space that human researchers could never explore exhaustively in a lab. As such, scientists must turn to AI if we hope to solve these problems.
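A quick sanity check of that arithmetic (my numbers, not Ramanathan’s slide) shows just how hopeless brute force is, even for a single peptide length:

```python
# 20 possible amino acids at each of 20 positions:
sequences = 20 ** 20
print(f"{sequences:.2e} sequences")  # ~1.05e+26

# Even screening a billion candidates per second, exhaustive
# search would take billions of years:
years = sequences / 1e9 / (3600 * 24 * 365)
print(f"{years:.2e} years")  # ~3.3e+09
```

And since AMPs run up to 50 residues, the full search space is vastly larger still.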
Karthik Kashinath, NVIDIA
Stepping away from the Argonne-specific presentations, I was truly impressed with the work Nvidia is doing on its Earth-2 project. Karthik Kashinath, a Principal Scientist and Engineer at Nvidia, gave an interesting talk about the company’s work toward a kilometer-scale digital twin of the Earth for weather and climate.
Stating that “AI for Weather has exited its infancy,” Kashinath discussed where he sees the technology progressing in the near future. While the field has made major strides in the past few years, he estimated that a roughly 30,000X speedup in compute will be needed before simulations reach 1 km scale.
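Here is one back-of-envelope way to see where a number of that magnitude could come from (my own estimate, not a figure from the talk): if simulation cost scales roughly with the cube of horizontal resolution, then refining today’s roughly 30 km global models down to 1 km lands in the right ballpark.

```python
# Hedged back-of-envelope (an assumption, not Kashinath's math):
# cost ~ (resolution ratio)^3, covering two horizontal dimensions
# plus a proportionally finer time step.
current_km, target_km = 30, 1
speedup_needed = (current_km / target_km) ** 3
print(f"{speedup_needed:,.0f}X")  # 27,000X -- the same order as 30,000X
```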
With such a daunting task in mind, Nvidia CEO Jensen Huang mandated that Earth-2 “achieve three miracles” at the Berlin Summit for Earth Virtualization Engines in July 2023:
High-Resolution Climate Simulation: The goal here is to simulate the climate quickly and at a resolution fine enough to capture the scale of a square kilometer. This level of detail is necessary to predict and assess climate change impacts at a granular, local level.
AI-Powered Climate Prediction: Nvidia will endeavor to use AI, particularly GenAI, to emulate and accelerate climate predictions. This would allow researchers to explore multiple possible future scenarios quickly and with high fidelity.
Interactive Visualization: Nvidia wants to use Earth-2 to visualize and interact with the massive amounts of data generated by simulations and AI models.
To achieve these goals, the Earth-2 team is collaborating with international weather and climate science organizations such as NOAA, NASA, the Barcelona Supercomputing Center, and MeteoSwiss.
What’s more, Nvidia has the hardware to get the job done. Kashinath stated that Grace Hopper superchips accelerate these simulations 20X, with an 18X gain in energy efficiency from accelerated computing.