Spack in a Multi-User HPC Environment on Azure
Spack is a package management tool designed for HPC environments. In this article we demonstrate how to install and configure Spack in a multi-user HPC environment on Azure with shared and user repositories. Such an environment can be created easily with Azure HPC On-Demand Platform (Az-HOP) or Azure CycleCloud. This setup allows the system administrator to provide a collection of shared packages to all users, while individual users build packages on top of them in their own repositories without rebuilding shared packages or interfering with one another.
Initial setup
Create a user to store shared packages, for example spackuser. Log on as spackuser. Download spack from GitHub:
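The clone step could look like the following; the target path under the spackuser home directory is an assumption:

```shell
# Clone Spack into the spackuser home directory
export SPACK_ROOT=$HOME/spack
git clone https://github.com/spack/spack.git "$SPACK_ROOT"
```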
Use the development branch, as it contains the latest Intel packages. To load the environment:
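A sketch of switching branches and loading the environment (the guard on the last line only makes the snippet safe to run before Spack is cloned):

```shell
# Switch to the development branch, then load the Spack shell environment
SPACK_ROOT=${SPACK_ROOT:-$HOME/spack}
cd "$SPACK_ROOT" && git checkout develop
[ -f "$SPACK_ROOT/share/spack/setup-env.sh" ] && . "$SPACK_ROOT/share/spack/setup-env.sh"
```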
Changing the number of processors to build with:
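One way to do this is to set `build_jobs` in the instance-wide config.yaml; the value 16 is just an example:

```shell
# Set the default number of parallel build jobs for this Spack instance
SPACK_ROOT=${SPACK_ROOT:-$HOME/spack}
mkdir -p "$SPACK_ROOT/etc/spack"
cat >> "$SPACK_ROOT/etc/spack/config.yaml" <<'EOF'
config:
  build_jobs: 16
EOF
```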
Fixes for Azure
Spack detects the CPU using CPUID flags. The Azure platform exposes a subset of the flags supported by the underlying CPUs, which in some cases causes Spack to detect the architecture incorrectly. In particular, the "clzero" flag is not exposed on Azure HB VMs and the "pku" flag is not exposed on Azure HBv3. Spack uses these flags to detect the "zen" and "zen3" microarchitectures. To correct Spack detection on Azure, these flags need to be removed from the microarchitecture definition:
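A sketch of the edit, assuming the vendored archspec JSON lives at the path below (it may differ between Spack versions), and that removing the flag from the feature list is sufficient:

```shell
# Location of the vendored archspec CPU definitions (may vary by Spack version)
SPACK_ROOT=${SPACK_ROOT:-$HOME/spack}
CPU_JSON=$SPACK_ROOT/lib/spack/external/archspec/json/cpu/microarchitectures.json
# Drop "clzero" (breaks zen detection on HB) and "pku" (breaks zen3 detection on HBv3)
[ -f "$CPU_JSON" ] && sed -i \
    -e 's/"clzero",[[:space:]]*//g' \
    -e 's/"pku",[[:space:]]*//g' "$CPU_JSON"
```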
Installing Compilers
Using System Compilers
At the time of writing, gcc-9.2.0 is included in the CentOS HPC image, and the following command will register it with spack:
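Assuming the compiler lives under /opt/gcc-9.2.0 (the exact path on the image may differ):

```shell
# Register the preinstalled GCC in the site scope so all users see it
spack compiler find --scope site /opt/gcc-9.2.0
```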
However, when Spack is first set up it will also find any compiler already in the PATH. You can check the registered compilers using:
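The standard command for this is:

```shell
spack compilers
```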
Installing latest GCC
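The three commands described below could look like this, with GCC 12.1.0 as an example version:

```shell
spack install gcc@12.1.0          # download and build GCC and its dependencies
spack load gcc@12.1.0             # set up environment variables for the new compiler
spack compiler find --scope site  # register the newly built compiler
```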
The first command downloads the sources for GCC and its dependencies and builds them. The second sets up environment variables, and the last registers the newly built compiler. The parameter "--scope site" tells Spack to store the compiler configuration in the $SPACK_ROOT/etc/spack/ directory, so it is local to this instance of Spack and shared between all users of it. The default, "--scope user", stores the settings in $HOME/.spack.
To check what versions of GCC are available to download from spack repositories, run the following command:
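Either of the following will do; `spack info` also reports the preferred version:

```shell
spack versions gcc   # versions available in the package recipe
spack info gcc       # package details, including the preferred version
```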
The output will show the preferred version that will be used by default. However, you can select a version using the @ syntax:
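For example (the version number here is illustrative):

```shell
# Request a specific, non-default version with @
spack install gcc@11.3.0
```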
To use the specific compiler when installing other packages, for example to install intel-mpi-benchmarks compiled with gcc-12.1.0, the syntax is:
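The `%` sigil selects the compiler:

```shell
spack install intel-mpi-benchmarks %gcc@12.1.0
```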
Installing MPI
MPI libraries can be installed from the Spack repository using spack install. Alternatively, existing MPI packages already installed on the system can be configured as externals. In this example we install Intel oneAPI MPI using Spack and set up the preinstalled Mellanox HPC-X MPI.
Intel MPI
The latest Intel MPI can be installed from Spack, and it will correctly use InfiniBand. For example:
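Using the oneAPI MPI package from the built-in Spack repository:

```shell
spack install intel-oneapi-mpi
```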
HPCX MPI
HPC-X MPI, together with other components (UCX, HCOLL), is preinstalled in the HPC image in /opt/hpcx-&lt;version&gt;. To add it to Spack, add the following lines (note: the version may differ depending on the Azure HPC image version) to the packages section of $SPACK_ROOT/etc/spack/packages.yaml:
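A sketch of the entry, written via a heredoc; the package name (`hpcx`), version and prefix below are assumptions — match them to the HPC-X directory actually present under /opt on your image, and note that a package definition with that name must be visible to Spack:

```shell
# Register HPC-X as an external (non-buildable) package
SPACK_ROOT=${SPACK_ROOT:-$HOME/spack}
mkdir -p "$SPACK_ROOT/etc/spack"
cat >> "$SPACK_ROOT/etc/spack/packages.yaml" <<'EOF'
packages:
  hpcx:
    externals:
    - spec: hpcx@2.11
      prefix: /opt/hpcx-v2.11-gcc-MLNX_OFED_LINUX-5-redhat8-cuda11-gdrcopy2-nccl2.11-x86_64
    buildable: false
EOF
```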
The following code may be used to find and install all installed versions of hpcx on an Azure HPC image:
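A minimal sketch, assuming matching external entries exist in packages.yaml and that the version can be parsed out of the directory name:

```shell
# Find every HPC-X installation under /opt and register it with Spack.
# The version is parsed from the directory name,
# e.g. /opt/hpcx-v2.11-gcc-MLNX_OFED_LINUX-5-... -> 2.11
for dir in /opt/hpcx-v*; do
    [ -d "$dir" ] || continue
    ver=$(basename "$dir" | sed -e 's/^hpcx-v//' -e 's/-gcc.*$//')
    echo "Found HPC-X $ver in $dir"
    spack install "hpcx@$ver"   # registers the external version
done
```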
After packages.yaml has been updated, you need to install (register) HPC-X in Spack:
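Since the package is external, this registers it without building anything; the version shown must match the spec added to packages.yaml:

```shell
spack install hpcx@2.11
```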
Install and run Intel MPI Benchmarks (IMB)
The Intel MPI Benchmarks package can be used to benchmark the cluster and verify that communication is working. Use the spack install command to build the package with different compilers and MPI versions, for example:
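For example (compiler and MPI choices here are illustrative):

```shell
# The compiler is selected with %, the MPI dependency with ^
spack install intel-mpi-benchmarks %gcc@12.1.0 ^intel-oneapi-mpi
```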
Load the package and run:
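A minimal run might look like this (in practice you would launch across nodes with a hostfile or through the scheduler):

```shell
spack load intel-mpi-benchmarks
mpirun -np 2 IMB-MPI1 pingpong
```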
Chaining Spack Installation for Users
For users to be able to use the shared packages provided by the system administrators and to create and install their own, one approach is a chained Spack installation. Each user installs a private copy of Spack in their home directory and connects it to the shared ("upstream") installation.
Install local copy of spack
Log on to the cluster as a normal user and clone the repository:
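Cloning the same branch as the shared install:

```shell
git clone -b develop https://github.com/spack/spack.git $HOME/spack
```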
Load the local environment and set your preferred “make -j” value:
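A sketch of both steps; the guard on the first line only makes it safe to run before the clone, and 8 is just an example job count:

```shell
# Load the private Spack copy
[ -f $HOME/spack/share/spack/setup-env.sh ] && . $HOME/spack/share/spack/setup-env.sh
# Set your preferred "make -j" value in the per-user scope
mkdir -p $HOME/.spack
cat >> $HOME/.spack/config.yaml <<'EOF'
config:
  build_jobs: 8
EOF
```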
Note the location of the shared install, for example by running spack location -r in the shared environment (make sure the directory is readable by all users):
Copy Azure CPU definition file from the shared install:
Copy compiler configuration:
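The three steps above could be sketched as follows; the shared install path is illustrative, and the archspec JSON location may differ between Spack versions:

```shell
# Location of the shared install (e.g. output of "spack location -r" there)
SHARED_SPACK=/shared/home/spackuser/spack
# Copy the Azure CPU definition fix
cp $SHARED_SPACK/lib/spack/external/archspec/json/cpu/microarchitectures.json \
   $HOME/spack/lib/spack/external/archspec/json/cpu/
# Copy the compiler configuration
cp $SHARED_SPACK/etc/spack/compilers.yaml $HOME/spack/etc/spack/
```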
Create the file spack/etc/spack/upstreams.yaml with the following contents:
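The upstreams.yaml format below is the documented Spack one; the upstream name and install tree path are illustrative:

```shell
mkdir -p $HOME/spack/etc/spack
cat > $HOME/spack/etc/spack/upstreams.yaml <<'EOF'
upstreams:
  spack-shared:
    install_tree: /shared/home/spackuser/spack/opt/spack
EOF
```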
The chained spack is now ready to use. You can check that all upstream compilers and packages are available:
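For example:

```shell
spack compilers   # upstream compilers should be listed
spack find        # upstream packages should be listed
```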
Creating a new package
Create a new package:
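The package name used here is hypothetical:

```shell
# Generates a package.py skeleton and opens it in $EDITOR
spack create --name mpi-hello-world
```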
Spack will guess the initial package setup. Update it as follows so it builds:
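A sketch of what the edited package.py might look like. The package name, source file name and use of the MPI wrapper compiler are assumptions; inside Spack the first import provides Package, depends_on, mkdirp, Executable and join_path, and the except branch exists only so the sketch can be read outside of Spack:

```python
try:
    from spack.package import *          # normal path when Spack loads the recipe
except ImportError:                      # outside Spack: minimal stand-ins
    Package = object
    def depends_on(*args, **kwargs):
        pass


class MpiHelloWorld(Package):
    """Trivial MPI hello world, compiled directly with mpicc."""

    # url/version/checksum omitted -- point these at your own source archive

    depends_on("mpi")

    def install(self, spec, prefix):
        # No build system: create bin/ and invoke the MPI compiler wrapper directly
        mkdirp(prefix.bin)
        mpicc = Executable(spec["mpi"].mpicc)
        mpicc("mpi_hello_world.c", "-o", join_path(prefix.bin, "mpi_hello_world"))
```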
The example used here does not have any build scripts associated with it and uses Spack's base Package class. The package's install step creates the bin directory and runs the mpicc command directly to build the executable. Spack supports many other build systems, which are normally autodetected from the source code. The documentation is available here.
The package can be installed, loaded and run as follows:
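Using the hypothetical package name from above:

```shell
spack install mpi-hello-world
spack load mpi-hello-world
mpirun -np 2 mpi_hello_world
```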