Access to the HPC cluster

  • Accounts
    • Access to the HPC cluster at the TUHH-RZ has to be applied for with the following digital form ("Benutzerantrag"). Please tick HPC as an additional permission.

      Depending on their role, students can either get a second account at their institute or have their regular student account activated directly.
  • Access hosts
    • Access to the login nodes is via the SSH protocol (Linux: ssh username@hpclogin.rz.tuhh.de, Windows: PuTTY). Data is transferred with scp or sftp (Windows: WinSCP); see the example at the end of this list.
    • The SSH host key fingerprints may vary over time and between servers, but they are signed by a certificate authority.
      Please add the content of this file to your known hosts file (/etc/ssh/ssh_known_hosts or ~/.ssh/known_hosts) to make sure that you are really connecting to our servers.
    • The login nodes are usually accessed from within the TUHH network.
      Unless you have special hardware or software requirements on the login node, you are advised to use the alias hpclogin.rz.tuhh.de.
      The login node hpc1.rz.tuhh.de is also reachable from outside the TUHH network, in particular for data transfer, but with restrictions concerning availability and performance.
    • The login nodes are for interactive use (pre- and post-processing, building software and the like); the compute nodes are accessible via the batch system.
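
For example, logging in and copying input data from a Linux workstation might look like the following sketch; the username, directory and file names are placeholders:

    # log in to a login node (replace "username" with your TUHH account name)
    ssh username@hpclogin.rz.tuhh.de

    # copy an input file from the workstation into your work directory on the cluster
    scp model.inp username@hpclogin.rz.tuhh.de:/work/username/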

Best practice for using the HPC clusters

The HPC cluster uses the free batch system SLURM for submitting and handling compute jobs.

Each user therefore has to formulate a compute job as a Bash script with special directives for SLURM. With the directives shown below, a user requests certain resources (number of CPU cores, time, memory) from the cluster. SLURM uses this information to start the compute job as soon as the requested hardware becomes available.
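
As a minimal sketch, a batch script for a small job could look like the following; the job name, the requested resources and the program call (./my_simulation) are placeholders that have to be adapted to your application:

    #!/bin/bash
    #SBATCH --job-name=my_simulation      # name shown by squeue
    #SBATCH --ntasks=1                    # number of tasks (processes)
    #SBATCH --cpus-per-task=4             # CPU cores per task
    #SBATCH --time=02:00:00               # maximum run time (hh:mm:ss)
    #SBATCH --mem-per-cpu=2G              # memory per CPU core
    #SBATCH --output=my_simulation.out    # file for the job output

    # start the actual computation
    ./my_simulation input.dat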

The typical steps of a scientific simulation are as follows:

  • Generate a model (e.g. a Matlab script or an Ansys case) and perform graphical preprocessing (if applicable) on your personal workstation.
  • Make yourself familiar with the command line handling of your software, e.g. with a short run on a Linux computer.
  • Copy the input data with scp onto the HPC cluster, e.g. to your directory below /work.
  • SSH into a login node and generate a batch script.
  • Submit the batch script with the command sbatch <scriptname>. Helpful commands include squeue, scancel and sview; see the sketch after this list.
  • Wait until the job has finished.
  • Copy the results back to your personal workstation.
  • Evaluate the results and perform graphical postprocessing (if applicable) on your local workstation.
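
Putting the last steps together, a typical session could look like the following sketch; the script name, job ID and file names are placeholders:

    # on the login node: submit the job script and note the job ID printed by sbatch
    sbatch my_simulation.sh

    # check the state of your own jobs in the queue
    squeue -u $USER

    # cancel a job if necessary (replace 12345 with the actual job ID)
    scancel 12345

    # on your local workstation, after the job has finished: copy the results back
    scp username@hpclogin.rz.tuhh.de:/work/username/results.out .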