San Diego Supercomputer Center Launches New Diversity Program

HPC@MSI broadens base of researchers and educators who use advanced computing resources

Photo: a data engineer working with a supercomputer in a server room. Credit: seventyfour74, 123RF

By: Kimberly Mann Bruch

The San Diego Supercomputer Center (SDSC) at UC San Diego has recently announced the creation of HPC@MSI, a program aimed at facilitating the use of high-performance computing (HPC) by Minority-Serving Institutions (MSIs).

The HPC@MSI program is designed to broaden the base of researchers and educators who use advanced computing by providing an easy on-ramp to cyberinfrastructure that complements what is available at their campuses. Additional goals of the program are to seed promising computational research, facilitate collaborations between SDSC and MSIs, and help MSI researchers be successful when pursuing larger allocation requests through the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, the successor to the National Science Foundation’s Extreme Science and Engineering Discovery Environment (XSEDE) program.

Applicants from MSIs who do not have an active award through XSEDE or ACCESS, or a previous allocation via SDSC’s HPC@UC or HPC@MSI programs, can receive up to 500,000 core hours (or the GPU-hour equivalent) on Expanse. Full details of eligibility and award terms are available on the program webpage.

“I am excited about the launch of this new initiative, which is modeled on a previous successful effort focused on researchers at University of California campuses,” said Nicole Wolter, an SDSC user support specialist. “In addition, HPC@MSI furthers our goals of reaching a broader community.”

Resources offered through the HPC@MSI program include:

  • Expanse CPU resource: a ~5 PFlop/s system with 728 standard compute nodes, each with two 64-core AMD EPYC 7742 processors and 256 GB of memory, plus four large-memory nodes, each with two 64-core AMD EPYC 7742 processors and 2 TB of memory (see the sketch after this list for the kind of parallel job these nodes typically run);
  • Expanse GPU resource: 52 GPU nodes, each with four NVIDIA V100 GPUs;
  • Data Resources: over 12 PB of high-speed storage made available via Lustre parallel file systems, either as short-term Performance Storage used for temporary files or as long-term, non-purged Project Storage that persists for the life of the project. A Durable Storage resource provides a second copy of all data in the Project Storage file system;
  • Applications: A large installed base of applications for HPC and big data analytics; and
  • Expertise: SDSC staff have broad expertise in the application of advanced computing and stand ready to assist users in making the best use of these resources for research.
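
As a rough illustration of the kind of distributed workload the Expanse CPU core hours described above typically fund, the sketch below uses mpi4py to split a toy Monte Carlo estimate of pi across MPI ranks. This is a minimal, hypothetical example, not an SDSC-provided script; it assumes mpi4py and NumPy are available in the user's Python environment and that the job is launched with an MPI launcher such as mpiexec.

```python
# Minimal sketch (hypothetical, not an SDSC-provided example) of an MPI-parallel
# Python job of the sort that Expanse CPU core hours support. Assumes mpi4py and
# NumPy are installed; launch with, e.g., `mpiexec -n 4 python pi_mpi.py`.
from mpi4py import MPI
import numpy as np

def main():
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID within the job
    size = comm.Get_size()   # total number of MPI ranks in the job

    # Each rank draws its own random points for a Monte Carlo estimate of pi.
    samples_per_rank = 1_000_000
    rng = np.random.default_rng(seed=rank)
    points = rng.random((samples_per_rank, 2))
    local_hits = int(np.sum(np.sum(points**2, axis=1) <= 1.0))

    # Combine the per-rank counts on rank 0 and report the estimate.
    total_hits = comm.reduce(local_hits, op=MPI.SUM, root=0)
    if rank == 0:
        pi_estimate = 4.0 * total_hits / (samples_per_rank * size)
        print(f"{size} ranks -> pi ~ {pi_estimate:.5f}")

if __name__ == "__main__":
    main()
```

On a production system such a script would be submitted through the batch scheduler, with the number of MPI ranks matched to the cores allocated to the job.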

For more information about HPC@MSI, please visit the program webpage.
