High Performance Computing : Getting started
Created by Potthoff, Sebastian, last modified on 11. Aug 2022
CALCULATIONS ON THE LOGIN NODE ARE NOT ALLOWED!
The login node of PALMA is not a place to start any serious calculations nor is it a playground for testing purposes or compiling programs! Any user processes violating this rule will be terminated immediately without any warning!
This page gives you an overview of the most important aspects of the PALMA-II cluster, how to connect to it, and what a typical workflow could look like. Specifics can be found in the subsections for each topic separately.
PALMA-II consists of several hundred compute nodes running CentOS 7 as the operating system. Each node contains two Intel Xeon Gold 6140 CPUs with 18 CPU cores each, i.e. 36 cores per node. Users first have to connect to so-called login nodes to gain access to the compute nodes. Applications or compute jobs can then be started in an interactive session or via job/batch scripts using the slurm workload manager.
Access
To access the PALMA-II cluster, you have to connect to one of the login nodes via SSH using a terminal emulator. On Linux and macOS, you can use the terminal application that comes pre-installed with your OS. On Windows, you can use PuTTY or (since Windows 10) the command prompt, provided the Windows OpenSSH feature is enabled.
- SSH address:
username@palma.uni-muenster.de
You will need an ssh-key to access PALMA. Please find further instructions here.
Tip: If you need a short introduction on how to use a shell (terminal), we highly recommend this video tutorial series: Introduction to Linux in HPC.
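For example, a connection from a Linux or macOS terminal could look like this (the key path `~/.ssh/id_rsa_palma` is an example; use the path of your own private key):

```shell
# Connect to a PALMA-II login node, authenticating with your SSH key.
# Replace "username" with your university account and the key path
# with the location of your own private key.
ssh -i ~/.ssh/id_rsa_palma username@palma.uni-muenster.de
```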
For verification, the current SSH host keys of the login node have the following fingerprints:
- SHA256:QrEL/sQR1Au04vZm28OMc5rJKdXiZ3p8zEd2LkKDohs (DSA)
- MD5:b2:b6:6b:f9:a7:e0:0e:e8:23:2e:cb:68:5f:bb:bf:c8 (DSA)
- SHA256:teQfp9+Rdk4/QeFCF/jrsaH4+jtDLd8ywFNyvoSfB5o (ECDSA)
- MD5:51:2d:01:0a:66:10:3e:77:06:62:21:5e:8a:07:5c:5e (ECDSA)
- SHA256:CFa8LuaWz2huvbN3Ahtej+WLmUugAKmt85ZZ6eG+1lw (ED25519)
- MD5:33:27:23:aa:4b:c9:8f:d7:4d:30:0c:d9:a0:a7:46:88 (ED25519)
- SHA256:2DgT9f4Egl82qbm+eLtryg3bNfIxYuT2z8Mhqa5H4B0 (RSA)
- MD5:f9:e8:28:e6:18:ae:7b:3e:f0:41:20:38:f3:cd:b2:dc (RSA)
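To compare these published fingerprints against the server you are actually connecting to, you can, for instance, fetch the host keys with `ssh-keyscan` and print their fingerprints (both are standard OpenSSH tools; the output should match the list above):

```shell
# Fetch the public host keys of the login node and print their
# SHA256 fingerprints for comparison with the published list.
ssh-keyscan palma.uni-muenster.de > palma_hostkeys.txt
ssh-keygen -lf palma_hostkeys.txt

# For the MD5 variants of the fingerprints:
ssh-keygen -E md5 -lf palma_hostkeys.txt
```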
Filesystem
When you log in to the cluster for the first time, your personal home directory is created under /home/[a-z]/. The most important directories are:

Directory | Purpose |
---|---|
/home/[a-z]/ | Personal applications, binaries, libraries, etc. |
/scratch/tmp/ | Working directory, storage for numerical output (start your applications from here!) |
/mnt/beeond | Temporary working filesystem, provided on a per-job basis |
Note: There is NO BACKUP of the home and working directories on PALMA-II, as these are not intended as an archive. We kindly ask you to remove your data there as soon as it is no longer needed.
Further information can be found at: Data & Storage.
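Since jobs should be started from the working filesystem, a typical first step is to create a project directory under /scratch/tmp/ (the directory name `MyProject` is just a placeholder):

```shell
# Create a personal project directory on the working filesystem
# and change into it ($USER expands to your username).
mkdir -p /scratch/tmp/$USER/MyProject
cd /scratch/tmp/$USER/MyProject
```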
Data transfer
To transfer data to and from the cluster, you can use the scp command from a terminal (Linux & macOS), or a graphical client such as WinSCP (Windows) or FileZilla (all platforms).
An example using scp is given below:
Task | Command |
---|---|
Copy a local file to your home folder on PALMA-II | scp -i $HOME/.ssh/id_rsa_palma MyLocalFile username@palma.uni-muenster.de:/home/[a-z]/username/ |
Copy a local folder to your working directory on PALMA-II | scp -i $HOME/.ssh/id_rsa_palma -r MyLocalDir username@palma.uni-muenster.de:/scratch/tmp/username/ |
Copy a file on PALMA-II to your local computer | scp -i $HOME/.ssh/id_rsa_palma username@palma.uni-muenster.de:/scratch/tmp/username/MyData/MyResult.dat /PATH/ON/YOUR/LOCAL/COMPUTER/ |
Loading available software
Common applications are provided through a module system and have to be loaded into your environment before usage. You can load and unload them depending on your needs and preferences. The most important commands are:
Command | Description |
---|---|
module avail | List currently available modules (software) |
module spider <NAME> | Search for a module with a specific NAME |
module load <NAME> | Load a module into your environment |
module unload <NAME> | Unload a module from your environment |
module list | List all currently loaded modules |
module purge | Unload all modules in your environment |
Details and specifics can be found in the module system section.
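A short example session, using the module names that appear in the workflow at the end of this page:

```shell
module purge                  # start from a clean environment
module avail                  # list the available software
module load intel/2019a       # load the Intel toolchain
module load GROMACS/2019.1    # load GROMACS on top of it
module list                   # verify what is currently loaded
```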
Starting Jobs
Jobs can be started in three different ways using the slurm workload manager:
- Interactive (giving you direct access to the compute nodes)
- Non-interactive (using slurm to directly queue a job)
- Batch system (submitting a job script)
Type | Command |
---|---|
Interactive | salloc <config options> |
Non-interactive | srun <config options> <my-applications-to-run> |
Batch | sbatch my_job_script.sh |
All methods reserve a certain number of CPU cores or nodes for a given amount of time, depending on your settings. Further information about job submission, configuration options, and example job scripts can be found in the Submitting Jobs section.
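As an illustration of the batch method, a minimal job script could look like the sketch below (partition, resource values, and the module/application names are examples; adapt them to your needs):

```shell
#!/bin/bash
#SBATCH --job-name=my_job          # name shown in the queue
#SBATCH --partition=normal         # partition to run in (example)
#SBATCH --nodes=1                  # number of compute nodes
#SBATCH --ntasks=18                # number of tasks (e.g. MPI ranks)
#SBATCH --time=01:00:00            # wall-time limit (hh:mm:ss)

# load the software the job needs
module load intel/2019a

# start the application on the reserved resources
srun ./my_application
```

Submit the script with `sbatch my_job_script.sh`; slurm prints the assigned job ID and queues the job.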
Workflow
An example of a typical workflow on PALMA-II is given below:
Step | Command |
---|---|
1. Connect to the login node of palma | ssh username@palma.uni-muenster.de |
2. Navigate to your working directory | cd /scratch/tmp/username/MySimulation/ |
3. Load the needed software into your environment | module load intel/2019a && module load GROMACS/2019.1 |
4. Start your simulation / computation | srun -N 4 -n 36 --partition normal --time 12:00:00 gmx mdrun -v -deffnm NPT |
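The same resources can also be requested interactively; once the allocation is granted, commands run on the allocated compute nodes instead of the login node (the resource values below are examples):

```shell
# Request an interactive allocation: 1 node, 4 tasks, 1 hour
salloc --nodes=1 --ntasks=4 --time=01:00:00 --partition=normal

# ... work interactively on the allocation, e.g.:
srun hostname        # runs on the allocated compute node(s)

exit                 # release the allocation when done
```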