
Installing Software
conda/mamba, compilers, Spack,
and containers — where to look
Section 4 — 15 min


Four routes — pick by what your software needs, not in a fixed order
| Route | When to reach for it | Today? |
|---|---|---|
| mamba / conda | Package available on conda-forge or another channel | Primary |
| System modules + compilers | Compile your own code; PrgEnv-gnu / gcc-native | Stretch |
| Spack | Complex compiled projects needing system-specific flags | Follow-up |
| Podman-HPC / Singularity | Deployment wrapper; use any package manager inside | Follow-up |
Modules on Isambard 3 are bare-bones — almost no research software is
available through module avail. Their main value is loading
the system compiler (gcc-native) when you want to compile
your own code.
Containers are orthogonal to the others: a container
can wrap a conda or Spack environment and gives access to package
managers like apt or nix that are otherwise
unavailable on a shared HPC system.
Almost no research software here — but essential for the system compiler
# What is currently loaded?
module list
# Reset to system defaults --- good habit at the top of a job script
module reset
# Browse everything available
module avail
# Search for a specific tool
module avail python
module avail gcc
# Load a module (one of the few tools available this way)
module load brics/emacs
# Unload one module
module unload brics/emacs

Compiling your own code? PrgEnv-gnu
loads the GNU compiler toolchain (gcc, g++,
gfortran) and the Cray-wrapped MPI library. It is used in
later sections whenever C or Fortran code is compiled. Almost everything
else you need will come from mamba instead.
Stretch: try module avail and search for a tool you
already use.
Quick show of hands
Which of these have you used on a remote system?
- brew (Homebrew)
- mise
- pip directly into the system Python
- conda or mamba
- pixi
- modules
- Spack

There is no wrong answer here — this tells us where to spend the next few minutes.
pip install or sudo apt? Shared systems, supply chain risk, and reproducibility

Safer routes

Avoid on shared systems

- sudo apt install / system-level installs (you do not have sudo)
- pip install into the base Python (pollutes shared paths, breaks other jobs)
- piping downloaded scripts into bash from unknown sources (supply chain risk)
- brew, mise, etc. without understanding what they pull in

“Supply chain attack”: a malicious package masquerading as a popular one. Use trusted channels (conda-forge, PyPI with known package names) and pin versions in production.
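Version pinning in a conda environment file might look like this (a hypothetical environment.yml sketch; the name and versions are illustrative, not recommendations):

```yaml
name: myproject
channels:
  - conda-forge        # a trusted, community-audited channel
dependencies:
  - python=3.12        # pin at least major.minor
  - numpy=1.26.4       # exact pin for production reproducibility
```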
Get the bootstrap scripts onto Isambard 3
Go to https://github.com/UniExeterRSE/gw4-isambard-3-practical-workshop-2026 and click Fork
# Generate an SSH key and load the agent:
ssh-keygen -t ed25519 -C "${ISAMBARD_HOST}"
. <(ssh-agent -s)
ssh-add ~/.ssh/id_ed25519
# Install `gh` and authenticate:
bash <(curl -L https://raw.githubusercontent.com/UniExeterRSE/gw4-isambard-3-practical-workshop-2026/refs/heads/main/bootstrap/install/gh.sh) install
gh auth login --git-protocol ssh --web
# clone (replace `UniExeterRSE` with your username)
mkdir -p ~/git
cd ~/git
git clone git@github.com:UniExeterRSE/gw4-isambard-3-practical-workshop-2026.git
cd gw4-isambard-3-practical-workshop-2026
pwd
Architecture-aware shell config — skip if you already have your own
.bashrc / .zshrc
The script symlinks .bashrc, .bash_profile,
.zshrc, and .zshenv from
bootstrap/dotfiles/ into $HOME, and symlinks
~/.config to bootstrap/dotfiles/.config/. Any
existing file or directory is backed up as <name>.bak
first.
Why bother? Isambard 3 has a shared home
directory across both its x86_64 and Arm
(aarch64) login nodes. The dotfiles detect the current
architecture at login (uname -sm) and route software
installations into an arch-specific prefix:
~/.local/opt/Linux-aarch64/ ← used when logged into an Arm node
~/.local/opt/Linux-x86_64/ ← used when logged into an x86_64 node
~/.config is also symlinked so that tool configs
(e.g. pixi) live inside the repo and are version-controlled.
Skip this step if you already have a
.bashrc you are happy with.
Advanced users: You do not need to install these
dotfiles. Instead, open bootstrap/dotfiles/ and cherry-pick
the parts you want — the arch-dispatch logic (uname -sm)
and the MAMBA_ROOT_PREFIX / MAMBA_EXE exports
are the most useful pieces to copy into your own config.
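The arch-dispatch idea can be sketched in a few lines of bash (a minimal sketch of the pattern, not the workshop dotfiles verbatim; the prefix layout matches the one described above):

```shell
# Detect OS and CPU architecture, e.g. "Linux-aarch64" or "Linux-x86_64"
ARCH="$(uname -sm | tr ' ' '-')"

# Route software installs into an arch-specific prefix under ~/.local/opt
ARCH_PREFIX="$HOME/.local/opt/$ARCH"
mkdir -p "$ARCH_PREFIX/bin"

# Put that architecture's binaries first on PATH
export PATH="$ARCH_PREFIX/bin:$PATH"
echo "$ARCH_PREFIX"
```

Because the prefix is computed at login, the same .bashrc does the right thing on both the Arm and x86_64 login nodes.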
Run the bootstrap scripts from inside the cloned repo
All bootstrap scripts are in the bootstrap/
subdirectory:
# Install VS Code CLI (skip if already done in pre-workshop setup)
install/code.sh install
# Install miniforge (mamba + conda, using conda-forge by default)
install/mamba.sh install
# Install a curated set of command-line tools into a "system" conda env
NAME=system install/mamba-env.sh install

After mamba.sh install, open a new shell (or run
source ~/.bashrc) so that mamba is on your
path.
mamba-env.sh creates a system conda
environment with popular command-line tools: gh (GitHub
CLI), parallel, pandoc,
git-delta, ripgrep, pixi, and
direnv. Skip this step if you already set
up the environment to your own liking.
Anaconda, conda, mamba, miniforge — what is what?
What we use
What we use

- mamba: a drop-in replacement for conda (same commands; written in C++)

What we avoid

- pip install into the active env without care — can conflict with conda-managed packages

Hands-on: create a Python environment and activate it
Environments live under ~/.miniforge3/envs/. They can be
large — check $HOME quota with lfs quota if
space runs low.
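The hands-on steps might look like the following (a sketch; the environment name py312-demo and the package choices are arbitrary, not workshop requirements):

```shell
# Create a named environment with a pinned Python and a couple of packages
mamba create -n py312-demo python=3.12 numpy ipython

# Activate it; your prompt should change to show (py312-demo)
mamba activate py312-demo

# Confirm which environment's python you are now running
python -c "import sys; print(sys.prefix)"

# Leave the environment when done
mamba deactivate
```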
A newer approach: project-scoped environments
Pixi is a newer Rust-based package manager from the conda-forge ecosystem. It manages environments per project rather than globally. This workshop repo uses pixi + direnv internally.
Beginners — just run this once and you are done:
cd gw4-isambard-3-practical-workshop-2026
direnv allow   # activates the pixi env automatically every time you cd here

When you enter the repo directory, your shell will use the project environment automatically. You do not need to understand pixi to follow today’s exercises.
Advanced users — want full control over your own environment?
Ignore the direnv allow prompt. Two conda
environment*.yml files are committed at the repo root. Use
-n to give the environment a name of your choice:
# Standard environment (CPU-only):
mamba env create -f environment.yml -n isambard3-workshop -y
mamba activate isambard3-workshop
# HPC environment (Cray MPICH / MPI support --- required for Section 5 MPI examples):
mamba env create -f environment_hpc.yml -n isambard3-workshop-hpc -y
mamba activate isambard3-workshop-hpc

On Isambard 3, the dotfiles install
~/.config/pixi/config.toml with
detached-environments = true. Without this, pixi stores
each project’s env in .pixi/ inside the project
directory. Because x86_64 and aarch64 nodes share the same home
(and the same checkout), the two architectures would write incompatible
binaries into the same .pixi/ folder. Detached mode stores
envs in an arch-specific prefix
(~/.local/opt/<arch>/) instead.
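The setting itself is a one-liner (a sketch of ~/.config/pixi/config.toml; pixi also accepts a path here instead of a boolean, which is how the dotfiles point it at the arch-specific prefix):

```toml
# ~/.config/pixi/config.toml
# true = store envs outside each project's .pixi/ folder
detached-environments = true
```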
Beyond modules and conda — where to look next
Spack — for compiled software with fine-grained control over flags and dependencies:
https://docs.isambard.ac.uk/user-documentation/guides/spack/
Containers (Podman-HPC / Singularity) — for fully self-contained, portable stacks; can wrap any package manager inside:
https://docs.isambard.ac.uk/user-documentation/guides/containers/
Intro tour (official tutorials) — more worked examples on the BriCS docs site:
https://docs.isambard.ac.uk/user-documentation/tutorials/intro-tour/
If your own software stack does not fit conda, do not spend workshop time on it. Ask a helper to note it down and we will follow up after the session.
Discussion
Questions? Anything that did not work, or a tool you use that we have not mentioned?
# set up your PATH
export PATH="$PATH:${PROJECTDIR}/local/opt/Linux-aarch64/bin"
export PATH="$PATH:${PROJECTDIR}/local/opt/Linux-aarch64/system/bin"
export PATH="$PATH:${PROJECTDIR}/local/opt/Linux-aarch64/miniforge3/condabin"
# or, more aggressively, prepend instead of append
export PATH="${PROJECTDIR}/local/opt/Linux-aarch64/bin:$PATH"
export PATH="${PROJECTDIR}/local/opt/Linux-aarch64/system/bin:$PATH"
export PATH="${PROJECTDIR}/local/opt/Linux-aarch64/miniforge3/condabin:$PATH"
# hook mamba into your shell
. <(mamba shell hook --shell bash)
# if you're not using pixi, optionally activate
# the conda environment for this workshop
mamba activate ${PROJECTDIR}/local/opt/Linux-aarch64/isambard3-workshop-hpc