Logging in, managing files, and running jobs
Unity Cluster
Research Computing
The Unity Research Computing Platform…
supports research computing tasks of all scales.
makes collaborating simple.
provides access to specialized equipment, including GPUs and large-memory servers.
connects researchers with interdisciplinary support staff.
Resource | Count |
---|---|
Nodes | ~450 |
CPUs | ~25000 |
GPUs | ~1400 |
Users | Over 2000 |
PIs | Over 450 |
Faculty or senior research staff (PIs) own PI Groups.
PIs control user access to Unity and their group.
The Unity Portal is the management interface.
Sign in to the Unity Portal.
Request a PI group (PIs) or access to an existing PI group (students, research staff, collaborators).
Students must join an existing PI group to get Unity access
See the "Requesting an Account" documentation for detailed instructions.
View cluster notices
Add and manage SSH keys
Join or manage PI groups
Access Unity OnDemand or the documentation
User and group management: Unity Portal
Interactive, graphical apps: Unity OnDemand
Command line access: SSH
Features:
Log in with your school credentials rather than SSH keys.
Access a shell from your browser.
Use interactive GUI applications like JupyterLab, RStudio, and a graphical desktop.
Get Slurm templates for common job setups.
SSH stands for secure shell
Unity only allows key-based authentication
Your Unity username is based on your school email address
Hostname/Address: unity.rc.umass.edu
Username: NETID_school_edu
Example email: alovelace@umass.edu
Example username: alovelace_umass_edu
ssh -i <path to private key> NETID_school_edu@unity.rc.umass.edu
SSH keys can be generated on the Unity portal
Under Account Settings, click the plus button under the SSH Keys heading
Select Generate Key
Place the downloaded private key in the appropriate spot for your access method on your local computer (generally ~/.ssh)
Never share your private key with anyone
To add an existing SSH key:
Copy the public SSH key (often located in ~/.ssh)
Log in to the Unity Portal
Click the red "plus" button under SSH Keys
Paste your public SSH key into the box and click Add Key
See the Unity documentation for more detailed information on connecting via SSH, including with PuTTY.
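If you don't yet have a key pair, the steps above can start from one you generate locally with OpenSSH's `ssh-keygen`. A minimal sketch (the file name `unity_key` is just an example, not a required name):

```shell
# Generate an ed25519 key pair. -N "" skips the passphrase here for brevity;
# in practice, consider protecting the key with a passphrase.
ssh-keygen -t ed25519 -f ./unity_key -N "" -q

# The private key must be readable only by you
chmod 600 ./unity_key

# This is the public key you paste into the Unity Portal
cat ./unity_key.pub
```

After adding the public key in the portal, you would connect with `ssh -i ./unity_key NETID_school_edu@unity.rc.umass.edu`.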
UNITY IS NOT APPROPRIATE FOR PROTECTED DATA!
No human subjects data with identifying information, including data covered by HIPAA.
Deidentified data is usually OK, but defer to your IRB.
Four storage options on Unity:
Home directory (50 GB quota): /home/<username>
PI group work directory (1 TB quota): /work/pi_<PI username>
Project storage (Free and paid options. Free tier project quota set by home institution): /project
Scratch space (via hpc-workspace): /scratch<2,3>/workspace
See storage documentation here.
3-day snapshots for data in $HOME and the PI group work directory
There are no backups on Unity
Do not rely on Unity to store data long term
To discuss options for long term storage, contact the facilitation team at hpc@umass.edu.
THERE ARE CURRENTLY NO BACKUPS ON UNITY. However, data backup is on the product roadmap.
Command line tools like rsync and scp
Unity OnDemand file browser
Globus
For detailed information, visit our file management documentation.
CLI tools if you’re comfortable with the command line
Unity OnDemand for an intuitive interface and files smaller than 5 GB
Globus for large files to and from existing endpoints or a Globus Personal Endpoint
Slurm is a resource manager that
controls access to resources (CPUs, GPUs, memory, etc) for jobs
organizes a queue of jobs and arbitrates which jobs get resources next
See the Quickstart Guide for a crash course in Slurm
An interactive session lets you access the command line on a compute node
# This requests an interactive job with
# four CPU cores for two hours on the cpu partition
# -c number of cores
# -p partition
# -t time in the format Days-Hours:Minutes:Seconds
# --mem memory per node (defaults to MB unless you include "G")
salloc -c 4 -p cpu -t 2:00:00 --mem=8G
Requests for high-demand resources (like A100s) could wait a while!
A batch job submits a pre-determined script to the queue for non-interactive execution
#!/bin/bash
#SBATCH -N 1 # Number of nodes requested
#SBATCH -n 1 # Number of tasks requested
#SBATCH -p cpu # Partition
#SBATCH -t 2:00:00 # Time (Days-Hours:Minutes:Seconds)
#SBATCH --mem=8G # memory per node (defaults to MB without "G")
#SBATCH -o o-%j.out # Output file (%j is the job number)
echo "Hello World!"
To run: sbatch <filename>
Command | Description | Notes |
---|---|---|
srun | Runs an executable in a parallel job | |
sbatch | Submits a batch job | Slurm options are prefixed with #SBATCH |
squeue | View information about running and queued jobs | |
scancel | Cancels a running or queued job | If you’re using job arrays, this will cancel the whole array unless you specify a sub job |
sinfo | View partition, node, and cluster information | |
sacct | Slurm accounting information | |
See our Slurm cheat sheet here!
Slurm draws a distinction between tasks and CPUs.
For most applications using threading, use
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --cpus-per-task=<number of CPUS desired>
For MPI applications, use
#SBATCH -N <number of nodes desired>
#SBATCH -n <number of tasks desired>
There are situations where code needs both multiple tasks and multiple CPUs per task, but they are rare! When in doubt, contact us for help.
Load MPI with module load openmpi/<VERSION>
Compile your code with the mpicc compiler
Run with srun or mpirun
#!/bin/bash
#SBATCH -N 1
#SBATCH -n 8
#SBATCH --mem=1G
module load openmpi/5.0.3 # Or other MPI version
srun ./my-mpi-program
# OR
mpirun ./my-mpi-program
Only some nodes have InfiniBand. If you need to use MPI across nodes, you can request only those nodes with --constraint=ib
#!/bin/bash
#SBATCH -N 4
#SBATCH -n 200
#SBATCH --mem=1G
#SBATCH --constraint=ib
#SBATCH -p cpu
module load openmpi/5.0.3 # Or other MPI version
mpirun ./my-mpi-program
Unity has a variety of GPUs available! To see available GPUs by node and partition, see our documentation.
#!/bin/bash
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --gpus=1
#SBATCH -p gpu,gpu-preempt
#SBATCH --mem=8G
Using -p gpu,gpu-preempt will get you access to the most GPUs
If you need a specific GPU, you can specify that in the --gpus option in the format gpu-name:number:
#!/bin/bash
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --gpus=2080ti:1
#SBATCH -p gpu,gpu-preempt
#SBATCH --mem=8G
The GPU name will always be lowercase
If you need one of several types of GPU, but you don’t care which, you can use --constraint with --gpus=1:
#!/bin/bash
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --constraint=[a100|m40|rtx8000]
#SBATCH --gpus=1
#SBATCH -p gpu,gpu-preempt
#SBATCH --mem=8G
Constraints can be combined with | for "or" or & for "and": 'vram40&sm_80'
Unity has three types of partitions:
General use partitions (open to all): cpu, gpu
Condo partitions (open to specific groups)
Preempt partitions (open to all, but your job may be killed and requeued after 2 hours): cpu-preempt, gpu-preempt
Preempt partitions allow anyone to run on private hardware.
Constraints are a way to specify specific hardware features. Unity is a heterogeneous cluster. Unity nodes have constraints based on…
GPU type
InfiniBand
Architecture / Microarchitecture
Vectorization instructions
For a full list of available constraints, run unity-slurm-list-constraints or see our documentation.
Unity hosts a variety of centrally-installed packages for cluster-wide use, including…
MPI stacks
Compilers (gnu, intel)
Interpreters (Python, R, Julia, etc)
Scientific Libraries (hdf5, netcdf, openblas, etc)
GPU utilities (cuda, etc)
We use Lmod to manage most software
Or, via the shell…
To see currently loaded software, use module list
To see available software that is currently loadable, use module avail
To see all available software, use module spider or module spider <package name>
To load a module, use
module load <package name>/<version>
Specifying a version is required. Module versions can be tab completed!
If a module says it can’t be loaded, use module spider to find out its requirements:
module spider <package name/version>
You’re welcome to install software in your work directory! There are some situations where it’s preferable to install software in user-space:
Conda environments and packages
R packages
Very specific software
If you need assistance with software, please submit a ticket to hpc@umass.edu
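For Python specifically, a user-space install can be as simple as a virtual environment in your work directory. A minimal sketch (the environment name `myenv` and its location are illustrative; conda environments follow a similar pattern):

```shell
# Create an isolated Python environment; on Unity you might place this
# under your PI group's work directory instead of the current directory.
python3 -m venv ./myenv

# Everything this interpreter installs stays inside ./myenv
./myenv/bin/python -c "import sys; print(sys.prefix)"

# Install packages into the environment with:
#   ./myenv/bin/pip install <package>
```

Because the environment lives entirely in your own directory, no system-wide install permissions are needed.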
Keep in touch! To contact us:
Join the Unity User Community on Slack
Browse the Unity User Documentation
Submit a ticket to hpc@umass.edu
Attend our Zoom Office Hours, Tuesdays 2:30 pm to 4 pm
Get this talk at unity.unityhpc.page/education/workshops/unity-onboarding