MPI Appliance for HPC Research on Chameleon

Hi Everyone,
I'm Rohan Babbar from Delhi, India. This summer, I'm excited to be working with Argonne National Laboratory and the Chameleon Cloud community. My project focuses on developing an MPI Appliance to support reproducible High-Performance Computing (HPC) research on the Chameleon testbed.
For more details about the project and the planned work for the summer, you can read my proposal here.
Community Bonding Period
Although the project officially started on June 2, 2025, I made good use of the community bonding period beforehand.
- I began by getting access to the Chameleon testbed, familiarizing myself with its features and tools.
- I experimented with different configurations to understand the ecosystem.
- My mentor, Ken Raffenetti, and I had regular check-ins to align our vision and finalize our milestones, many of which were laid out in my proposal.
June 2 – June 14, 2025
Our first milestone was to build a base image with MPI pre-installed. For this:
- We decided to use Spack, a flexible package manager tailored for HPC environments.
- The image includes multiple MPI implementations, allowing users to choose the one that best suits their needs and switch between them using simple Lmod module commands.
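As a rough sketch of this workflow (the package specs and module names here are illustrative, not the appliance's actual configuration), installing two MPI implementations with Spack and swapping between them with Lmod might look like:

```shell
# Install two MPI implementations with Spack (specs are illustrative)
spack install mpich
spack install openmpi

# Regenerate Lmod module files for the installed packages
spack module lmod refresh -y

# Load one implementation, then swap to the other
module load mpich
module swap mpich openmpi
```

The `module swap` step is what lets users move between implementations without rebuilding anything, since each Spack-generated module adjusts the environment (compiler wrappers, library paths) for its own MPI.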
That's all for now! Stay tuned for more updates in the next blog.
Thanks for reading!